Light travels about 300,000 kilometres per second, so 1 light-year is about 9.46 trillion kilometres, or 63,241 AU. Proxima Centauri, the nearest (albeit not naked-eye visible) star, is 4.243 light-years away. Another way of understanding the vastness of interstellar distances is by scaling: one of the closest stars to the Sun, Alpha Centauri A (a Sun-like star), can be pictured by scaling down the Earth–Sun distance to one metre; on this scale, the distance to Alpha Centauri A would be about 276 kilometres. The fastest outward-bound spacecraft yet sent, Voyager 1, has covered 1/600 of a light-year in 30 years and is currently moving at 1/18,000 the speed of light. At this rate, a journey to Proxima Centauri would take about 80,000 years.

Required energy

A significant factor contributing to the difficulty is the energy that must be supplied to obtain a reasonable travel time. A lower bound for the required energy is the kinetic energy ½mv², where m is the final mass. If deceleration on arrival is desired and cannot be achieved by any means other than the engines of the ship, then the lower bound for the required energy is doubled, to mv². The velocity for a crewed round trip of a few decades to even the nearest star is several thousand times greater than those of present space vehicles. This means that, because of the v² term in the kinetic energy formula, millions of times as much energy is required. Accelerating one ton to one-tenth of the speed of light requires at least 4.5×10¹⁷ J, or about 125 terawatt-hours (world energy consumption in 2008 was 143,851 terawatt-hours), without factoring in the efficiency of the propulsion mechanism. This energy has to be generated onboard from stored fuel, harvested from the interstellar medium, or projected over immense distances.

Interstellar medium

A knowledge of the properties of the interstellar gas and dust through which the vehicle must pass is essential for the design of any interstellar space mission.
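The kinetic-energy lower bound discussed above can be reproduced in a few lines of Python (the one-tonne, 0.1 c case is the one quoted in the text; the TWh conversion is standard):

```python
# Kinetic-energy lower bound for accelerating one tonne to 0.1 c.
c = 299_792_458.0      # speed of light, m/s
m = 1000.0             # final mass, kg
v = 0.1 * c            # target speed, m/s

ke = 0.5 * m * v**2    # (1/2) m v^2, the lower bound from the text
ke_twh = ke / 3.6e15   # 1 TWh = 3.6e15 J

print(f"{ke:.2e} J ~= {ke_twh:.0f} TWh")          # ~4.49e17 J, ~125 TWh
print(f"with braking by engine: {2 * ke:.2e} J")  # bound doubles to m v^2
```

Note that this is only a lower bound: any real propulsion mechanism adds its own (often dominant) inefficiencies on top.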
A major issue with traveling at extremely high speeds is that interstellar dust may cause considerable damage to the craft, due to the high relative speeds and large kinetic energies involved. Various shielding methods to mitigate this problem have been proposed. Larger objects (such as macroscopic dust grains) are far less common, but would be much more destructive. The risks of impacting such objects, and methods of mitigating those risks, have been discussed in the literature, but many unknowns remain and, owing to the inhomogeneous distribution of interstellar matter around the Sun, will depend on the direction travelled. Although a high-density interstellar medium may cause difficulties for many interstellar travel concepts, interstellar ramjets, and some proposed concepts for decelerating interstellar spacecraft, would actually benefit from a denser interstellar medium.

Hazards

The crew of an interstellar ship would face several significant hazards, including the psychological effects of long-term isolation, the effects of exposure to ionizing radiation, and the physiological effects of weightlessness on the muscles, joints, bones, immune system, and eyes. There is also a risk of impact by micrometeoroids and other space debris. These risks represent challenges that have yet to be overcome.

Wait calculation

The physicist Robert L. Forward has argued that an interstellar mission that cannot be completed within 50 years should not be started at all. Instead, assuming that a civilization is still on an increasing curve of propulsion system velocity and has not yet reached the limit, the resources should be invested in designing a better propulsion system. This is because a slow spacecraft would probably be passed by another mission sent later with more advanced propulsion (the incessant obsolescence postulate).
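Forward's trade-off (wait for better engines versus launch now) can be made concrete with a toy model; the growth rate below is an illustrative assumption, not a figure from the text. If achievable cruise speed grows exponentially with departure date, the total time to arrival (wait plus flight) has a clear minimum:

```python
import math

# Toy wait-calculation: a probe that waits w years before departing cruises at
# v0 * exp(g * w) (as a fraction of c) and arrives at time w + d / speed.
d  = 4.24       # distance to Proxima Centauri, light-years (from the text)
v0 = 1 / 18000  # today's speed as a fraction of c (Voyager 1, from the text)
g  = 0.02       # assumed 2%/yr growth in achievable cruise speed

def arrival(w):
    return w + d / (v0 * math.exp(g * w))

# Setting d(arrival)/dw = 0 gives the optimal wait; flight time is then 1/g.
w_star = math.log(d * g / v0) / g
print(f"launch now:   arrive in {arrival(0):,.0f} yr")
print(f"optimal wait: {w_star:.0f} yr, arrive in {arrival(w_star):.0f} yr")
```

Probes launched before the optimum are overtaken by later, faster ones; probes launched after it never catch up, matching the overtaking behaviour discussed in the text.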
On the other hand, Andrew Kennedy has shown that if one calculates the journey time to a given destination while the achievable travel speed keeps increasing through growth (even exponential growth) in propulsion technology, there is a clear minimum in the total time to that destination from now. Voyages undertaken before the minimum will be overtaken by those that leave at the minimum, whereas voyages that leave after the minimum will never overtake those that left at the minimum.

Prime targets for interstellar travel

There are 59 known stellar systems within 40 light-years of the Sun, containing 81 visible stars. The following could be considered prime targets for interstellar missions. Existing and near-term astronomical technology is capable of finding planetary systems around these objects, increasing their potential for exploration.

Proposed methods

Slow, uncrewed probes

Slow interstellar missions based on current and near-future propulsion technologies are associated with trip times ranging from about one hundred years to thousands of years. These missions consist of sending a robotic probe to a nearby star for exploration, similar to interplanetary probes like those used in the Voyager program. Carrying no crew significantly reduces the cost and complexity of the mission, although technology lifetime remains a significant issue alongside obtaining a reasonable travel speed. Proposed concepts include Project Daedalus, Project Icarus, Project Dragonfly, Project Longshot, and more recently Breakthrough Starshot.

Fast, uncrewed probes

Nanoprobes

Near-lightspeed nano-spacecraft might be possible within the near future, built on existing microchip technology with a newly developed nanoscale thruster. Researchers at the University of Michigan are developing thrusters that use nanoparticles as propellant. Their technology is called the "nanoparticle field extraction thruster", or nanoFET. These devices act like small particle accelerators shooting conductive nanoparticles out into space.
Michio Kaku, a theoretical physicist, has suggested that clouds of "smart dust" be sent to the stars, which may become possible with advances in nanotechnology. Kaku also notes that, because very small probes are easily deflected by magnetic fields, micrometeorites, and other hazards, a large number of nanoprobes would need to be sent to ensure that at least one survives the journey and reaches the destination. As a near-term solution, small, laser-propelled interstellar probes based on current CubeSat technology were proposed in the context of Project Dragonfly.

Slow, crewed missions

In crewed missions, the duration of a slow interstellar journey presents a major obstacle, and existing concepts deal with this problem in different ways. They can be distinguished by the "state" in which humans are transported aboard the spacecraft.

Generation ships

A generation ship (or world ship) is a type of interstellar ark in which the crew that arrives at the destination is descended from those who started the journey. Generation ships are not currently feasible because of the difficulty of constructing a ship of the enormous required scale and the great biological and sociological problems that life aboard such a ship raises.

Suspended animation

Scientists and writers have postulated various techniques for suspended animation. These include human hibernation and cryonic preservation. Although neither is currently practical, they offer the possibility of sleeper ships in which the passengers lie inert for the long duration of the voyage.

Frozen embryos

A robotic interstellar mission carrying some number of frozen early-stage human embryos is another theoretical possibility.
This method of space colonization requires, among other things, the development of an artificial uterus, the prior detection of a habitable terrestrial planet, and advances in the field of fully autonomous mobile robots and educational robots that would replace human parents.

Island hopping through interstellar space

Interstellar space is not completely empty; it contains trillions of icy bodies ranging from small asteroids (Oort cloud) to possible rogue planets. There may be ways to take advantage of these resources for a good part of an interstellar trip, slowly hopping from body to body or setting up waystations along the way.

Fast, crewed missions

If a spaceship could average 10 percent of light speed (and decelerate at the destination, for human crewed missions), this would be enough to reach Proxima Centauri in forty years. Several propulsion concepts have been proposed that might be eventually developed to accomplish this (see § Propulsion below), but none of them are ready for near-term (few decades) development at acceptable cost.

Time dilation

Physicists generally believe faster-than-light travel is impossible. Relativistic time dilation allows a traveler to experience time more slowly, the closer their speed is to the speed of light. This apparent slowing becomes noticeable when velocities above 80% of the speed of light are attained. Clocks aboard an interstellar ship would run slower than Earth clocks, so if a ship's engines were capable of continuously generating around 1 g of acceleration (which is comfortable for humans), the ship could reach almost anywhere in the galaxy and return to Earth within 40 years ship-time (see diagram). Upon return, there would be a difference between the time elapsed on the astronaut's ship and the time elapsed on Earth.

For example, a spaceship could travel to a star 32 light-years away, initially accelerating at a constant 1.03 g (i.e. 10.1 m/s²) for 1.32 years (ship time), then stopping its engines and coasting for the next 17.3 years (ship time) at a constant speed, then decelerating again for 1.32 ship-years, and coming to a stop at the destination. After a short visit, the astronaut could return to Earth the same way. After the full round trip, the clocks on board the ship show that 40 years have passed, but according to those on Earth, the ship comes back 76 years after launch. From the viewpoint of the astronaut, onboard clocks seem to be running normally. The star ahead seems to be approaching at a speed of 0.87 light-years per ship-year. The universe would appear contracted along the direction of travel to half the size it had when the ship was at rest; the distance between that star and the Sun would seem to be 16 light-years as measured by the astronaut. At higher speeds, the time on board will run even slower, so the astronaut could travel to the center of the Milky Way (30,000 light-years from Earth) and back in 40 years ship-time. But the speed according to Earth clocks will always be less than 1 light-year per Earth year, so, when back home, the astronaut will find that more than 60,000 years have passed on Earth.

Constant acceleration

Regardless of how it is achieved, a propulsion system that could produce acceleration continuously from departure to arrival would be the fastest method of travel. A constant-acceleration journey is one where the propulsion system accelerates the ship at a constant rate for the first half of the journey, and then decelerates for the second half, so that it arrives at the destination stationary relative to where it began. If this were performed with an acceleration similar to that experienced at the Earth's surface, it would have the added advantage of producing artificial "gravity" for the crew. Supplying the energy required, however, would be prohibitively expensive with current technology.
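The itinerary above includes a long coasting phase; for comparison, the standard relativistic-rocket formulas (textbook results, not taken from this article) give the flight times for continuous thrust, accelerating to the midpoint and decelerating from it:

```python
import math

c  = 299_792_458.0        # speed of light, m/s
yr = 365.25 * 24 * 3600   # seconds per Julian year
ly = c * yr               # metres per light-year

def flyby_times(d_ly, a=9.81):
    """Ship (proper) and Earth (coordinate) time in years for a trip that
    accelerates at proper acceleration a to the halfway point, then brakes."""
    D = d_ly * ly / 2                              # each half, metres
    tau = (c / a) * math.acosh(1 + a * D / c**2)   # proper time per half
    t   = (c / a) * math.sinh(a * tau / c)         # coordinate time per half
    return 2 * tau / yr, 2 * t / yr

for dist in (4.24, 32, 30000):
    ship, earth = flyby_times(dist)
    print(f"{dist:>7} ly: {ship:6.1f} yr ship time, {earth:10.1f} yr Earth time")
```

At a constant 1 g this gives roughly 3.5 ship-years to Proxima Centauri and about 20 ship-years to the galactic center, consistent with the 40-year round-trip figure quoted above.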
From the perspective of a planetary observer, the ship will appear to accelerate steadily at first, but then more gradually as it approaches the speed of light (which it cannot exceed). It will undergo hyperbolic motion. The ship will be close to the speed of light after about a year of accelerating and will remain at that speed until it brakes at the end of the journey. From the perspective of an onboard observer, the crew will feel a gravitational field opposite the engine's acceleration, and the universe ahead will appear to fall in that field, undergoing hyperbolic motion. As part of this, distances between objects in the direction of the ship's motion will gradually contract until the ship begins to decelerate, at which time an onboard observer's experience of the gravitational field will be reversed. When the ship reaches its destination, if it were to exchange a message with its origin planet, it would find that less time had elapsed on board than had elapsed for the planetary observer, due to time dilation and length contraction. The result is an impressively fast journey for the crew.

Propulsion

Rocket concepts

All rocket concepts are limited by the rocket equation, which sets the characteristic velocity available as a function of exhaust velocity and mass ratio, the ratio of initial (M0, including fuel) to final (M1, fuel depleted) mass. Very high specific power, the ratio of thrust to total vehicle mass, is required to reach interstellar targets within sub-century time-frames. Some heat transfer is inevitable, and a tremendous heating load must be adequately handled. Thus, for interstellar rocket concepts of all technologies, a key engineering problem (seldom explicitly discussed) is limiting the heat transfer from the exhaust stream back into the vehicle.

Ion engine

Spacecraft such as Dawn use an ion engine, a type of electric propulsion.
In an ion engine, electric power is used to create charged particles of the propellant, usually the gas xenon, and accelerate them to extremely high velocities. The exhaust velocity of conventional rockets is limited to about 5 km/s by the chemical energy stored in the fuel's molecular bonds. They produce high thrust (about 10⁶ N), but they have a low specific impulse, and that limits their top speed. By contrast, ion engines have low force, but the top speed in principle is limited only by the electrical power available on the spacecraft and on the gas ions being accelerated. The exhaust speed of the charged particles ranges from 15 km/s to 35 km/s.

Nuclear fission powered

Fission-electric

Nuclear-electric or plasma engines, operating for long periods at low thrust and powered by fission reactors, have the potential to reach speeds much greater than chemically powered vehicles or nuclear-thermal rockets. Such vehicles probably have the potential to power solar-system exploration with reasonable trip times within the current century. Because of their low-thrust propulsion, they would be limited to off-planet, deep-space operation. Electrically powered spacecraft propulsion driven by a portable power source, say a nuclear reactor, would produce only small accelerations and would take centuries to reach, for example, 15% of the velocity of light, making it unsuitable for interstellar flight during a single human lifetime.

Fission-fragment

Fission-fragment rockets use nuclear fission to create high-speed jets of fission fragments, which are ejected at speeds of up to a few percent of the speed of light. With fission, the energy output is approximately 0.1% of the total mass-energy of the reactor fuel, which limits the effective exhaust velocity to about 5% of the velocity of light. For maximum velocity, the reaction mass should optimally consist of fission products, the "ash" of the primary energy source, so that no extra reaction mass needs to be accounted for in the mass ratio.
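The rocket-equation limit mentioned under "Rocket concepts" can be quantified. The sketch below uses the classical Tsiolkovsky form (relativistic corrections are modest at these speeds) with the roughly 5%-of-c fission exhaust ceiling cited above; the chemical comparison assumes the ~5 km/s figure from the ion-engine discussion:

```python
import math

c = 299_792_458.0  # speed of light, m/s

def mass_ratio(delta_v, v_exhaust):
    """Tsiolkovsky: delta_v = v_e * ln(M0/M1)  =>  M0/M1 = exp(delta_v/v_e)."""
    return math.exp(delta_v / v_exhaust)

# Fission-fragment exhaust at ~5% of c:
print(f"reach 0.1 c:        M0/M1 = {mass_ratio(0.1 * c, 0.05 * c):.1f}")
print(f"0.1 c plus braking: M0/M1 = {mass_ratio(0.2 * c, 0.05 * c):.1f}")
# Chemical exhaust (~5 km/s): the exponent alone shows it is hopeless.
print(f"chemical, 0.1 c:    M0/M1 = e^{0.1 * c / 5000:.0f}")
```

A mass ratio near e² ≈ 7.4 is demanding but conceivable; a mass ratio of e raised to the thousands is not, which is why exhaust velocity, not energy alone, dominates interstellar rocket design.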
Nuclear pulse

Based on work from the late 1950s to the early 1960s, it has been technically possible to build spaceships with nuclear pulse propulsion engines, i.e. driven by a series of nuclear explosions. This propulsion system offers the prospect of very high specific impulse (space travel's equivalent of fuel economy) and high specific power. Project Orion team member Freeman Dyson proposed in 1968 an interstellar spacecraft using nuclear pulse propulsion that used pure deuterium fusion detonations with a very high fuel-burnup fraction. He computed an exhaust velocity of 15,000 km/s and a 100,000-tonne space vehicle able to
IGRP (Interior Gateway Routing Protocol) is considered a classful routing protocol. Because the protocol has no field for a subnet mask, the router assumes that all subnetwork addresses within the same Class A, Class B, or Class C network have the same subnet mask as the subnet mask configured for the interfaces in question. This contrasts with classless routing protocols, which can use variable-length subnet masks. Classful protocols have become less popular as they are wasteful of IP address space.

Advancement

In order to address the issues of address space and other factors, Cisco created EIGRP (Enhanced Interior Gateway Routing Protocol). EIGRP adds support for VLSM (variable-length subnet mask) and adds the Diffusing Update Algorithm (DUAL) in order to improve routing and provide a loop-free environment. EIGRP has completely replaced IGRP, making IGRP an obsolete routing protocol. In Cisco IOS versions 12.3 and greater, IGRP is
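The classful assumption described above can be sketched in a few lines; this illustrates the classful addressing rules themselves, not Cisco's actual implementation:

```python
# A classful protocol carries no mask field, so the mask must be inferred
# from the address class, which is determined by the first octet.
def classful_mask(ip: str) -> str:
    first = int(ip.split(".")[0])
    if first < 128:   # Class A: 0.x.x.x - 127.x.x.x
        return "255.0.0.0"
    if first < 192:   # Class B: 128.x.x.x - 191.x.x.x
        return "255.255.0.0"
    if first < 224:   # Class C: 192.x.x.x - 223.x.x.x
        return "255.255.255.0"
    raise ValueError("Class D/E addresses have no default mask")

print(classful_mask("10.1.2.3"))     # 255.0.0.0
print(classful_mask("172.16.0.1"))   # 255.255.0.0
print(classful_mask("192.168.5.9"))  # 255.255.255.0
```

A classless protocol instead carries the mask with each route advertisement, which is what lets variable-length subnet masks coexist inside one classful network.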
a ring name of professional wrestler Mike Rotunda
IR$, a Franco-Belgian comics series telling the adventures of Larry Max, an IRS special agent

Economics
Indian Revenue Service, of the Republic of India
Interest rate swap

Organisations
Independence Republic of Sardinia, an independentist political party in Sardinia, Italy
Institute of Regional Studies
Leibniz Institute for Research on Society and Space, a research institute in Germany and part of the Leibniz Association

Medicine
Indoor residual spraying, the spraying of insecticide to kill the mosquitoes that spread malaria
Insulin receptor substrate, a protein family involved in the insulin signaling pathway (insulin response)
Intergroup Rhabdomyosarcoma Study Group, concerning the cancer rhabdomyosarcoma
Immune reconstitution syndrome, or immune reconstitution inflammatory syndrome

Transportation
Independent rear suspension, a form of independent suspension used in automobiles
Industrial Railway Society, a UK railway society
International Railway Standard, standards produced by the International Union of Railways
Inertial reference system, more commonly known as an inertial navigation system
International Railway Systems, a Romanian freight railroad car producer
otherwise a totally unattested Balkan Indo-European language that was closely related to Illyrian and Messapic.

Anatolian, extinct by Late Antiquity, spoken in Anatolia; attested in isolated terms in Luwian/Hittite mentioned in Semitic Old Assyrian texts from the 20th and 19th centuries BC, and in Hittite texts from about 1650 BC.

Armenian, attested from the early 5th century AD.

Balto-Slavic, believed by most Indo-Europeanists to form a phylogenetic unit, while a minority ascribes the similarities to prolonged language contact.
Slavic (from Proto-Slavic), attested from the 9th century AD (possibly earlier), earliest texts in Old Church Slavonic. Slavic languages include Bulgarian, Russian, Polish, Czech, Slovak, Silesian, Kashubian, Macedonian, Serbo-Croatian (Bosnian, Croatian, Montenegrin, Serbian), Sorbian, Slovenian, Ukrainian, Belarusian, and Rusyn.
Baltic, attested from the 14th century AD; although attested relatively recently, the Baltic languages retain many archaic features attributed to Proto-Indo-European (PIE). Living examples are Lithuanian and Latvian.

Celtic (from Proto-Celtic), attested since the 6th century BC; Lepontic inscriptions date as early as the 6th century BC, Celtiberian from the 2nd century BC, Primitive Irish Ogham inscriptions from the 4th or 5th century AD, and the earliest inscriptions in Old Welsh from the 7th century AD. Modern Celtic languages include Welsh, Cornish, Breton, Scottish Gaelic, Irish and Manx.

Germanic (from Proto-Germanic), earliest attestations in runic inscriptions from around the 2nd century AD, earliest coherent texts in Gothic, 4th century AD. Old English manuscript tradition dates from about the 8th century AD. Includes English, Frisian, German, Dutch, Scots, Danish, Swedish, Norwegian, Afrikaans, Yiddish, Low German, Icelandic and Faroese.

Hellenic (from Proto-Greek; see also History of Greek); fragmentary records in Mycenaean Greek from between 1450 and 1350 BC have been found. Homeric texts date to the 8th century BC.

Indo-Iranian, attested circa 1400 BC, descended from Proto-Indo-Iranian (dated to the late 3rd millennium BC).
Indo-Aryan (including Dardic), attested from around 1400 BC in Hittite texts from Anatolia showing traces of Indo-Aryan words. Epigraphically from the 3rd century BC in the form of Prakrit (Edicts of Ashoka). The Rigveda is assumed to preserve intact records, via oral tradition, dating from about the mid-second millennium BC in the form of Vedic Sanskrit. Includes a wide range of modern languages from Northern India, Southern Pakistan and Bangladesh, including Hindustani (Hindi, Urdu), Bengali, Odia, Assamese, Punjabi, Kashmiri, Gujarati, Marathi, Sindhi and Nepali, as well as Sinhala of Sri Lanka and Dhivehi of the Maldives and Minicoy.
Iranian or Iranic, attested from roughly 1000 BC in the form of Avestan. Epigraphically from 520 BC in the form of Old Persian (Behistun inscription). Includes Persian, Ossetian, Pashto and Kurdish.
Nuristani (includes Kamkata-vari, Vasi-vari, Askunu, Waigali, Tregami, and Zemiaki).

Italic (from Proto-Italic), attested from the 7th century BC. Includes the ancient Osco-Umbrian languages and Faliscan, as well as Latin and its descendants, the Romance languages, such as Italian, Venetian, Galician, Sardinian, Neapolitan, Sicilian, Spanish, Asturleonese, French, Romansh, Occitan, Portuguese, Romanian, and Catalan.

Tocharian, with proposed links to the Afanasevo culture of Southern Siberia. Extant in two dialects (Turfanian and Kuchean, or Tocharian A and B), attested from roughly the 6th to the 9th century AD. Marginalized by the Old Turkic Uyghur Khaganate and probably extinct by the 10th century.

In addition to the classical ten branches listed above, several extinct and little-known languages and language groups have existed or are proposed to have existed:

Ancient Belgian: hypothetical language associated with the proposed Nordwestblock cultural area.
Speculated to be connected to Italic or Venetic, and to have certain phonological features in common with Lusitanian.
Cimmerian: possibly Iranic, Thracian, or Celtic.
Dacian: possibly very close to Thracian.
Elymian: poorly attested language spoken by the Elymians, one of the three indigenous (i.e. pre-Greek and pre-Punic) tribes of Sicily. Indo-European affiliation uncertain, but relationships to Italic or Anatolian have been proposed.
Illyrian: possibly related to Albanian, Messapian, or both.
Liburnian: evidence too scant to determine anything with certainty.
Ligurian: possibly close to or part of Celtic.
Lusitanian: possibly related to (or part of) Celtic, Ligurian, or Italic.
Ancient Macedonian: proposed relationship to Greek.
Messapian: not conclusively deciphered.
Paionian: extinct language once spoken north of Macedon.
Phrygian: language of the ancient Phrygians.
Sicel: an ancient language spoken by the Sicels (Greek Sikeloi, Latin Siculi), one of the three indigenous (i.e. pre-Greek and pre-Punic) tribes of Sicily. Proposed relationship to Latin or proto-Illyrian (Pre-Indo-European) at an earlier stage.
Sorothaptic: proposed pre-Celtic Iberian language.
Thracian: possibly including Dacian.
Venetic: shares several similarities with Latin and the Italic languages, but also has some affinities with other IE languages, especially Germanic and Celtic.
Membership of languages in the Indo-European language family is determined by genealogical relationships, meaning that all members are presumed descendants of a common ancestor, Proto-Indo-European. Membership in the various branches, groups and subgroups of Indo-European is also genealogical, but here the defining factors are shared innovations among various languages, suggesting a common ancestor that split off from other Indo-European groups.
For example, what makes the Germanic languages a branch of Indo-European is that much of their structure and phonology can be stated in rules that apply to all of them. Many of their common features are presumed innovations that took place in Proto-Germanic, the source of all the Germanic languages. In the 21st century, several attempts have been made to model the phylogeny of Indo-European languages using Bayesian methodologies similar to those applied to problems in biological phylogeny. Although there are differences in absolute timing between the various analyses, there is much commonality between them, including the result that the first known language groups to diverge were the Anatolian and Tocharian language families, in that order. Tree versus wave model The "tree model" is considered an appropriate representation of the genealogical history of a language family if communities do not remain in contact after their languages have started to diverge. In this case, subgroups defined by shared innovations form a nested pattern. The tree model is not appropriate in cases where languages remain in contact as they diversify; in such cases subgroups may overlap, and the "wave model" is a more accurate representation. Most approaches to Indo-European subgrouping to date have assumed that the tree model is by-and-large valid for Indo-European; however, there is also a long tradition of wave-model approaches. In addition to genealogical changes, many of the early changes in Indo-European languages can be attributed to language contact. It has been asserted, for example, that many of the more striking features shared by Italic languages (Latin, Oscan, Umbrian, etc.) might well be areal features. 
More certainly, very similar-looking alterations in the systems of long vowels in the West Germanic languages greatly postdate any possible notion of a proto-language innovation (and cannot readily be regarded as "areal", either, because English and continental West Germanic were not a linguistic area). In a similar vein, there are many similar innovations in Germanic and Balto-Slavic that are far more likely areal features than traceable to a common proto-language, such as the uniform development of a high vowel (*u in the case of Germanic, *i/u in the case of Baltic and Slavic) before the PIE syllabic resonants *ṛ, *ḷ, *ṃ, *ṇ, unique to these two groups among IE languages, which is in agreement with the wave model. The Balkan sprachbund even features areal convergence among members of very different branches. An extension to the Ringe-Warnow model of language evolution suggests that early IE featured limited contact between distinct lineages, with only the Germanic subfamily exhibiting a less treelike behaviour as it acquired some characteristics from neighbours early in its evolution. The internal diversification of especially West Germanic is cited to have been radically non-treelike. Proposed subgroupings Specialists have postulated the existence of higher-order subgroups such as Italo-Celtic, Graeco-Armenian, Graeco-Aryan or Graeco-Armeno-Aryan, and Balto-Slavo-Germanic. However, unlike the ten traditional branches, these are all controversial to a greater or lesser degree. The Italo-Celtic subgroup was at one point uncontroversial, considered by Antoine Meillet to be even better established than Balto-Slavic. The main lines of evidence included the genitive suffix -ī; the superlative suffix -m̥mo; the change of /p/ to /kʷ/ before another /kʷ/ in the same word (as in penkʷe > *kʷenkʷe > Latin quīnque, Old Irish cóic); and the subjunctive morpheme -ā-.
This evidence was prominently challenged by Calvert Watkins, while Michael Weiss has argued for the subgroup. Evidence for a relationship between Greek and Armenian includes the regular change of the second laryngeal to a at the beginnings of words, as well as terms for "woman" and "sheep". Greek and Indo-Iranian share innovations mainly in verbal morphology and patterns of nominal derivation. Relations have also been proposed between Phrygian and Greek, and between Thracian and Armenian. Some fundamental shared features, like the aorist (a verb form denoting action without reference to duration or completion) having the perfect active particle -s fixed to the stem, link this group closer to Anatolian languages and Tocharian. Shared features with Balto-Slavic languages, on the other hand (especially present and preterit formations), might be due to later contacts. The Indo-Hittite hypothesis proposes that the Indo-European language family consists of two main branches: one represented by the Anatolian languages and another branch encompassing all other Indo-European languages. Features that separate Anatolian from all other branches of Indo-European (such as the gender or the verb system) have been interpreted alternately as archaic debris or as innovations due to prolonged isolation. Points proffered in favour of the Indo-Hittite hypothesis are the (non-universal) Indo-European agricultural terminology in Anatolia and the preservation of laryngeals. However, in general this hypothesis is considered to attribute too much weight to the Anatolian evidence. According to another view, the Anatolian subgroup left the Indo-European parent language comparatively late, approximately at the same time as Indo-Iranian and later than the Greek or Armenian divisions. 
A third view, especially prevalent in the so-called French school of Indo-European studies, holds that extant similarities in non-satem languages in general—including Anatolian—might be due to their peripheral location in the Indo-European language-area and to early separation, rather than indicating a special ancestral relationship. Hans J. Holm, based on lexical calculations, arrives at a picture roughly replicating the general scholarly opinion and refuting the Indo-Hittite hypothesis. Satem and centum languages The division of the Indo-European languages into satem and centum groups was put forward by Peter von Bradke in 1890, although Karl Brugmann did propose a similar type of division in 1886. In the satem languages, which include the Balto-Slavic and Indo-Iranian branches, as well as (in most respects) Albanian and Armenian, the reconstructed Proto-Indo-European palatovelars remained distinct and were fricativized, while the labiovelars merged with the 'plain velars'. In the centum languages, the palatovelars merged with the plain velars, while the labiovelars remained distinct. The results of these alternative developments are exemplified by the words for "hundred" in Avestan (satem) and Latin (centum)—the initial palatovelar developed into a fricative in the former, but became an ordinary velar in the latter. Rather than being a genealogical separation, the centum–satem division is commonly seen as resulting from innovative changes that spread across PIE dialect-branches over a particular geographical area; the centum–satem isogloss intersects a number of other isoglosses that mark distinctions between features in the early IE branches. It may be that the centum branches in fact reflect the original state of affairs in PIE, and only the satem branches shared a set of innovations, which affected all but the peripheral areas of the PIE dialect continuum. 
Kortlandt proposes that the ancestors of Balts and Slavs took part in satemization before being drawn later into the western Indo-European sphere. Proposed external relations From the very beginning of Indo-European studies, there have been attempts to link the Indo-European languages genealogically to other languages and language families. However, these theories remain highly controversial, and most specialists in Indo-European linguistics are sceptical or agnostic about such proposals. Proposals linking the Indo-European languages with a single language family include: Indo-Uralic, joining Indo-European with Uralic Pontic, postulated by John Colarusso, which joins Indo-European with Northwest Caucasian Other proposed families include: Nostratic, comprising all or some of the Eurasiatic languages and the Kartvelian, Dravidian (or wider, Elamo-Dravidian) and Afroasiatic language families Eurasiatic, a theory championed by Joseph Greenberg, comprising the Uralic, Altaic and various 'Paleosiberian' families (Ainu, Yukaghir, Nivkh, Chukotko-Kamchatkan, Eskimo–Aleut) and possibly others Nostratic and Eurasiatic, in turn, have been included in even wider groupings, such as Borean, a language family separately proposed by Harold C. Fleming and Sergei Starostin that encompasses almost all of the world's natural languages with the exception of those native to sub-Saharan Africa, New Guinea, Australia, and the Andaman Islands. Objections to such groupings are not based on any theoretical claim about the likely historical existence or nonexistence of such macrofamilies; it is entirely reasonable to suppose that they might have existed. The serious difficulty lies in identifying the details of actual relationships between language families, because it is very hard to find concrete evidence that transcends chance resemblance or is not equally likely explained as being due to borrowing, including Wanderwörter, which can travel very long distances. 
Because the signal-to-noise ratio in historical linguistics declines over time, at great enough time-depths it becomes increasingly difficult to distinguish any genuine signal of common descent from the noise of chance resemblance and borrowing.
labiovelar kʷ gʷ gʷh. (The correctness of the terms palatal and plain velar is disputed; see Proto-Indo-European phonology.) All daughter languages have reduced the number of distinctions among these sounds, often in divergent ways. As an example, in English, one of the Germanic languages, the following are some of the major changes that happened: None of the daughter-language families (except possibly Anatolian, particularly Luvian) reflect the plain velar stops differently from the other two series, and there is even a certain amount of dispute whether this series existed at all in PIE. The major distinction between centum and satem languages corresponds to the outcome of the PIE plain velars: The "central" satem languages (Indo-Iranian, Balto-Slavic, Albanian, and Armenian) reflect both "plain velar" and labiovelar stops as plain velars, often with secondary palatalization before a front vowel (e i ē ī). The "palatal" stops are palatalized and often appear as sibilants (usually but not always distinct from the secondarily palatalized stops). The "peripheral" centum languages (Germanic, Italic, Celtic, Greek, Anatolian and Tocharian) reflect both "palatal" and "plain velar" stops as plain velars, while the labiovelars continue unchanged, often with later reduction into plain labial or velar consonants. The three-way PIE distinction between voiceless, voiced and voiced aspirated stops is considered extremely unusual from the perspective of linguistic typology—particularly in the existence of voiced aspirated stops without a corresponding series of voiceless aspirated stops. None of the various daughter-language families continue it unchanged, with numerous "solutions" to the apparently unstable PIE situation: The Indo-Aryan languages preserve the three series unchanged but have evolved a fourth series of voiceless aspirated consonants. The Iranian languages probably passed through the same stage, subsequently changing the aspirated stops into fricatives. 
Greek converted the voiced aspirates into voiceless aspirates. Italic probably passed through the same stage, but reflects the voiced aspirates as voiceless fricatives, especially f (or sometimes plain voiced stops in Latin). Celtic, Balto-Slavic, Anatolian, and Albanian merge the voiced aspirates into plain voiced stops. Germanic and Armenian change all three series in a chain shift (e.g. with bh b p becoming b p f), known as Grimm's law in Germanic. Among the other notable changes affecting consonants are:
The Ruki sound law (s becomes š after r, u, k, i) in the satem languages.
Loss of prevocalic p in Proto-Celtic.
Development of prevocalic s to h in Proto-Greek, with later loss of h between vowels.
Verner's law in Proto-Germanic.
Grassmann's law (dissimilation of aspirates) independently in Proto-Greek and Proto-Indo-Iranian.
The following table shows the basic outcomes of PIE consonants in some of the most important daughter languages for the purposes of reconstruction. For a fuller table, see Indo-European sound laws. Notes:
C- At the beginning of a word.
-C- Between vowels.
-C At the end of a word.
`-C- Following an unstressed vowel (Verner's law).
-C-(rl) Between vowels, or between a vowel and r or l (on either side).
CT Before a (PIE) stop.
CT− After a (PIE) obstruent.
C(T) Before or after an obstruent.
CH Before an original laryngeal.
CE Before a (PIE) front vowel.
CE' Before secondary (post-PIE) front vowels.
Ce Before e.
C(u) Before or after a (PIE) u (boukólos rule).
C(O) Before or after a (PIE) (boukólos rule).
Cn− After n.
CR Before a sonorant.
C(R) Before or after a sonorant.
C(r),l,u− Before or after r, l, or u.
Cruki− After r, u, k, or i (Ruki sound law).
C..Ch Before an aspirated consonant in the next syllable (Grassmann's law, also known as dissimilation of aspirates).
CE..Ch Before a (PIE) front vowel as well as before an aspirated consonant in the next syllable (Grassmann's law, also known as dissimilation of aspirates).
C(u)..Ch Before or after a (PIE) u as well as before an aspirated consonant in the next syllable (Grassmann's law, also known as dissimilation of aspirates).
Comparison of conjugations The following table presents a comparison of conjugations of the thematic present indicative of the verbal root *bʰer- of the English verb to bear and its reflexes in various early attested IE languages and their modern descendants or relatives, showing that all these languages had an inflectional verb system at an early stage.
{| class="wikitable" style="text-align: center;"
|-
! rowspan="2" | Major subgroup
! rowspan="2" | Hellenic
! colspan="2" | Indo-Iranian
! rowspan="2" | Italic
! rowspan="2" | Celtic
! rowspan="2" | Armenian
! rowspan="2" | Germanic
! colspan="2" | Balto-Slavic
! rowspan="2" | Albanian
|-
! Indo-Aryan
! Iranian
! Baltic
! Slavic
|-
! Ancient representative
! Ancient Greek
! Vedic Sanskrit
! Avestan
! Latin
! Old Irish
! Classical Armenian
! Gothic
! Old Prussian
! Old Church Sl.
! Old Albanian
|-
! I (1st sg.)
| phérō
| bʰárāmi
| barā
| ferō
| biru; berim
| berem
| baíra /bɛra/
| *bera
| berǫ
| *berja
|-
! You (2nd sg.)
| phéreis
| bʰárasi
| barahi
| fers
| biri; berir
| beres
| baíris
| *bera
| bereši
| *berje
|-
! He/She/It (3rd sg.)
| phérei
| bʰárati
| baraiti
| fert
| berid
| berē
| baíriþ
| *bera
| beretъ
| *berjet
|-
! We two (1st dual)
| —
| bʰárāvas
| barāvahi
| —
| —
| —
| baíros
| —
| berevě
| —
|-
! You two (2nd dual)
| phéreton
| bʰárathas
| —
| —
| —
| —
| baírats
| —
| bereta
| —
|-
! They two (3rd dual)
| phéreton
| bʰáratas
| baratō
| —
| —
| —
| —
| —
| berete
| —
|-
! We (1st pl.)
| phéromen
| bʰárāmas
| barāmahi
| ferimus
| bermai
| beremkʿ
| baíram
| *beramai
| beremъ
| *berjame
|-
! You (2nd pl.)
| phérete
| bʰáratha
| baraθa
| fertis
| beirthe
| berēkʿ
| baíriþ
| *beratei
| berete
| *berjeju
|-
! They (3rd pl.)
| phérousi
| bʰáranti
| barəṇti
| ferunt
| berait
| beren
| baírand
| *bera
| berǫtъ
| *berjanti
|-
! Modern representative
! Modern Greek
! Hindustani
! Persian
! Portuguese
! Irish
! Armenian (Eastern; Western)
! German
! Lithuanian
! Slovene
! Albanian
|-
! I (1st sg.)
| férno
| (ma͠i) bʰarūm̥
| (man) {mi}baram
| {con}firo
| beirim
| berum em; g'perem
| (ich) {ge}bäre
| beriu
| bérem
| (unë) bie
|-
! You (2nd sg.)
| férnis
| (tū) bʰarē
| (tu) {mi}bari
| {con}feres
| beirir
| berum es; g'peres
| (du) {ge}bierst
| beri
| béreš
| (ti) bie
|-
! He/She/It (3rd sg.)
| férni
| (ye/vo) bʰarē
| (ān) {mi}barad
| {con}fere
| beiridh
| berum ē; g'perē
| (er/sie/es) {ge}biert
| beria
| bére
| (ai/ajo) bie
|-
! We two (1st dual)
| —
| —
| —
| —
| —
| —
| —
| beriava
| béreva
| —
|-
! You two (2nd dual)
| —
| —
| —
| —
| —
| —
| —
| beriata
| béreta
| —
|-
! They two (3rd dual)
| —
| —
| —
| —
| —
| —
| —
| beria
| béreta
| —
|-
! We (1st pl.)
| férnume
| (ham) bʰarēm̥
| (mā) {mi}barim
| {con}ferimos
| beirimid; beiream
| berum enkʿ; g'perenkʿ
| (wir) {ge}bären
| beriame
| béremo
| (ne) biem
|-
! You (2nd pl.)
| férnete
| (tum) bʰaro
| (šomā) {mi}barid
| {con}feris
| beirthidh
| berum ekʿ; g'perekʿ
| (ihr) {ge}bärt
| beriate
| bérete
| (ju) bini
|-
! They (3rd pl.)
| férnun
| (ye/vo) bʰarēm̥
| (ānān) {mi}barand
| {con}ferem
| beirid
| berum en; g'peren
| (sie) {ge}bären
| beria
| bérejo; berọ́
| (ata/ato) bien
|}
While similarities are still visible between the modern descendants and relatives of these ancient languages, the differences have increased over time. Some IE languages have moved from synthetic verb systems to largely periphrastic systems. In addition, the pronouns of periphrastic forms are given in parentheses when they appear. Some of these verbs have undergone a change in meaning as well. In Modern Irish, beir usually only carries the meaning to bear in the sense of bearing a child; its common meanings are to catch, grab. Apart from the first person, the forms given in the table above are dialectal or obsolete.
The second and third person forms are typically instead conjugated periphrastically by adding a pronoun after the verb: beireann tú, beireann sé/sí, beireann sibh, beireann siad. The Hindustani (Hindi and Urdu) verb bʰarnā, the continuation of the Sanskrit verb, can have a variety of meanings, but the most common is "to fill". The forms given in the table, although etymologically derived from the present indicative, now have the meaning of future subjunctive. The loss of the present indicative in Hindustani is roughly compensated by the periphrastic habitual indicative construction, using the habitual participle (etymologically from the Sanskrit present participle bʰarant-) and an auxiliary: ma͠i bʰartā hū̃, tū bʰartā hai, vah bʰartā hai, ham bʰarte ha͠i, tum bʰarte ho, ve bʰarte ha͠i (masculine forms). German is not directly descended from Gothic, but the Gothic forms are a close approximation of what the early West Germanic forms of c. 400 AD would have looked like. The descendant of Proto-Germanic *beraną (English bear) survives in German only in the compound gebären, meaning "bear (a child)". The Latin verb ferre is irregular, and not a good representative of a normal thematic verb. In most Romance languages such as Portuguese, other verbs now mean "to carry" (e.g. Pt. portar < Lat. portare), and ferre was borrowed and nativized only in compounds such as "to suffer" (from Latin sub- and ferre) and "to confer" (from Latin con- and ferre). In Modern Greek, phero φέρω (modern transliteration fero) "to bear" is still used but only in specific contexts and is most common in such compounds as αναφέρω, διαφέρω, εισφέρω, εκφέρω, καταφέρω, προφέρω, προαναφέρω, προσφέρω etc. The form that is (very) common today is pherno φέρνω (modern transliteration ferno), meaning "to bring". Additionally, the perfective form of pherno (used for the subjunctive mood and also for the future tense) is also phero.
The dual forms are archaic in standard Lithuanian and are today used only in some dialects (e.g. Samogitian). Among modern Slavic languages, only Slovene continues to have a dual number in the standard variety. Comparison of cognates Present distribution Today, Indo-European languages are spoken by billions of native speakers across all inhabited continents, by far the largest number for any recognised language family. Of the 20 languages with the largest numbers of speakers according to Ethnologue, 10 are Indo-European: English, Hindustani, Spanish, Bengali, French, Russian, Portuguese, German, Persian and Punjabi, each with 100 million speakers or more. Additionally, hundreds of millions of persons worldwide study Indo-European languages as secondary or tertiary languages, including in cultures which have completely different language families and historical backgrounds—there are between 600 million and one billion L2 learners of English alone. The success of the language family, including the large number of speakers and the vast portions of the Earth that they inhabit, is due to several factors. The ancient Indo-European migrations and widespread dissemination of Indo-European culture throughout Eurasia, including that of the Proto-Indo-Europeans themselves and that of their daughter cultures, including the Indo-Aryans, Iranian peoples, Celts, Greeks, Romans, Germanic peoples, and Slavs, led to these peoples' branches of the language family already taking a dominant foothold in virtually all of Eurasia except for swathes of the Near East, North and East Asia, replacing many (but not all) of the previously spoken pre-Indo-European languages of this extensive area. However, Semitic languages remain dominant in much of the Middle East and North Africa, and Caucasian languages in much of the Caucasus region. Similarly, in Europe and the Urals, the Uralic languages (such as Hungarian, Finnish and Estonian) remain, as does Basque, a pre-Indo-European isolate.
Despite being unaware of their common linguistic origin, diverse groups of Indo-European speakers continued to culturally dominate and often replace the indigenous languages of the western two-thirds of Eurasia. By the beginning of the Common Era, Indo-European peoples controlled almost the entirety of this area: the Celts western and central Europe, the Romans southern Europe, the Germanic peoples northern Europe, the Slavs eastern Europe, the Iranian peoples most of western and central Asia and parts of eastern Europe, and the Indo-Aryan peoples in the Indian subcontinent, with the Tocharians inhabiting the Indo-European frontier in western China. By the medieval period, only the Semitic, Dravidian, Caucasian, and Uralic languages, and the language isolate Basque remained of the (relatively) indigenous languages of Europe and the western half of Asia. Despite medieval invasions by Eurasian nomads, a group to which the Proto-Indo-Europeans had once belonged, Indo-European expansion reached another peak in the early modern period with the dramatic increase in the population of the Indian subcontinent and European expansionism throughout the globe during the Age of Discovery, as well as the continued replacement and assimilation
community in Singapore was the Church of Saints Peter and Paul. Even so, the church was continuously packed on Sundays, feast days and other special occasions, and it was largely a Teochew-speaking congregation. In 1910, Bishop Emile Barillon wrote back to the Paris Foreign Missions Society (MEP), mentioning that the Church in Singapore "foresees that a third Chinese parish would become necessary for the Catholics originating from Fukien who have multiplied more and more." During that time, there were a few hundred Hokkien-speaking Christians. Conversions within this dialect group were few as they had no church of their own, and as such there was a need to build a church catering to the Hokkien community. In 1923, Father Emile Joseph Mariette, who was then the parish priest of the Church of Saints Peter and Paul, suggested to the MEP that this new church be built, and thus the search began for a suitable site. On 21 November 1925, the Church acquired 2.1 acres of land at Bukit Purmei, which at that time was a large tract of undeveloped and dismal marshland occupied by Malay squatters and Catholic families. This plot of land was near Outram General Hospital as well as Tanjong Pagar Railway Station, which was then still under construction. The cost of the land amounted to $26,000, and the building fund came to about a quarter of a million dollars, all of which was contributed by the MEP and prominent Chinese Catholics. The building was based on the sketches of Father Jean Marie Ouillon, and his design was heavily inspired by the Basilica of the Sacred Heart in Montmartre, which was also built on a hill. The foundation stone was laid on Easter Monday, 18 April 1927, by Pierre Louis Perrichon, Bishop of Corona and Coadjutor to the Bishop of Malacca. During the construction, Father Mariette was inspecting progress on site when a plank fell from the top of the steeple and hit him on the head.
He was rushed to Outram General Hospital, but died soon after. To this day, a marble plaque dedicated to him stands near where the accident took place. Father Stephen Lee was then tasked with supervising the rest of the building project, and subsequently became the parish priest in 1930. Early Church The church was completed and officially opened on 7 April 1929 to much fanfare, with a crowd of approximately 6,000 people. As church members believed that the land had been acquired through the intercession of St Teresa of the Child Jesus, it was decided that the church should be named after her, and that she be made its patron saint. However, soon after the opening, the church struggled to remain open due to its low attendance rates. Due
to the lack of developed infrastructure surrounding the Church, people found it inaccessible and hence continued to attend services at the more conveniently located Church of Saints Peter and Paul. Thus, the services at the Church of St Teresa were highly irregular, and at one point had only four services in a year. It ultimately failed to cater to the Hokkien Catholic community, but its parish population slowly grew due to the patronage of workers from the dockyards, as well as staff and patients from the Singapore General Hospital. Developments In June 1930, the government acquired a portion of land in front of the church to make way for the deviation of Kampong Bahru Road and the Federated Malay States Railway, resulting in the loss of more than 8,000 square feet of its frontage. Soon after, in 1934, the Catholic Church acquired the surrounding land and a Catholic settlement quickly developed around the Church, helping the parish population grow. In 1935, Father Lee founded St Teresa's Sino-English Primary School to provide education for the children living in the vicinity. The school was renamed St Teresa's Sino-English School when secondary students enrolled in 1965.
The school later became St Teresa's High School, which closed in 1998. It was the Chinese-medium counterpart of CHIJ Saint Theresa's Convent, which was run by the Holy Infant Jesus sisters and provided English and Tamil-medium education for girls. During the Japanese Occupation (1942–1945), Bukit Teresa became the British military’s anti-aircraft post. Due to the church’s proximity to Bukit Teresa and the port, it was attacked frequently by the Japanese. Both the church and the buildings in the Catholic settlement sustained heavy damage during the bombings. When the war ended, Father Lee oversaw the task of rebuilding the church from the ravages of war. During the post-war period, the Church was a place of refuge for many groups in society who had fallen through the cracks. The church opened its doors to women who sought protection from Japanese soldiers, Caucasians during the Maria Hertogh riots, and the homeless from the Bukit Ho Swee Fire. The Church of St Teresa was
Rosemont, Illinois in the Chicago area. Minor league sports Many minor league teams also call Illinois their home. They include:
The Bloomington Edge of the Indoor Football League
The Bloomington Flex of the Midwest Professional Basketball Association
The Chicago Wolves, an AHL team playing in the suburb of Rosemont
The Gateway Grizzlies of the Frontier League in Sauget, Illinois
The Kane County Cougars of the American Association
The Joliet Slammers of the Frontier League
The Peoria Chiefs of the High-A Central
The Peoria Rivermen, an SPHL team
The Rockford Aviators of the Frontier League
The Rockford IceHogs of the AHL
The Schaumburg Boomers of the Frontier League
The Southern Illinois Miners, based out of Marion, in the Frontier League
The Windy City Bulls, playing in the Chicago suburb of Hoffman Estates, of the NBA G League
The Windy City ThunderBolts of the Frontier League
College sports The state features 13 athletic programs that compete in NCAA Division I, the highest level of U.S. college sports. The two most prominent are the Illinois Fighting Illini and Northwestern Wildcats, both members of the Big Ten Conference and the only ones competing in one of the so-called "Power Five" conferences. The Fighting Illini football team has won five national championships and three Rose Bowl Games, whereas the men's basketball team has won 17 conference titles and played in five Final Fours. Meanwhile, the Wildcats have won eight football conference championships and one Rose Bowl Game. The Northern Illinois Huskies, from DeKalb, Illinois, compete in the Mid-American Conference; they have won four conference championships, earned a bid to the Orange Bowl, and produced Heisman candidate Jordan Lynch at quarterback. The Huskies are the state's only other team competing in the Football Bowl Subdivision, the top level of NCAA football.
Four schools have football programs that compete in the second level of Division I football, the Football Championship Subdivision (FCS). The Illinois State Redbirds (Normal, adjacent to Bloomington) and Southern Illinois Salukis (representing Southern Illinois University's main campus in Carbondale) are members of the Missouri Valley Conference (MVC) for non-football sports and the Missouri Valley Football Conference (MVFC). The Western Illinois Leathernecks (Macomb) are full members of the Summit League, which does not sponsor football, and also compete in the MVFC. The Eastern Illinois Panthers (Charleston) are members of the Ohio Valley Conference (OVC). The city of Chicago is home to four Division I programs that do not sponsor football. The DePaul Blue Demons, with main campuses in Lincoln Park and the Loop, are members of the Big East Conference. The Loyola Ramblers, with their main campus straddling the Edgewater and Rogers Park community areas on the city's far north side, compete in the MVC but will move to the Atlantic 10 Conference in July 2022. The UIC Flames, from the Near West Side next to the Loop, are in the Horizon League but will move to the MVC in July 2022. The Chicago State Cougars, from the city's south side, compete in the Western Athletic Conference through the 2021–22 school year, after which they will leave for an as-yet-unknown affiliation. Finally, two non-football Division I programs are located downstate. The Bradley Braves (Peoria) are MVC members, and the SIU Edwardsville Cougars (in the Metro East region across the Mississippi River from St. Louis) compete in the OVC. Former Chicago sports franchises Folded teams The city was formerly home to several other teams that either failed to survive or belonged to leagues that folded. 
The Chicago Blitz, United States Football League, 1983–1984
The Chicago Sting, North American Soccer League, 1975–1984, and Major Indoor Soccer League
The Chicago Cougars, World Hockey Association, 1972–1975
The Chicago Rockers, Continental Basketball Association
The Chicago Skyliners, American Basketball Association, 2000–01
The Chicago Bruisers, Arena Football League, 1987–1989
The Chicago Power, National Professional Soccer League, 1984–2001
The Chicago Blaze, National Women's Basketball League
The Chicago Machine, Major League Lacrosse
The Chicago Whales of the Federal League, a rival to Major League Baseball from 1914 to 1916
The Chicago American Giants of Negro league baseball, 1910–1952
The Chicago Bruins of the National Basketball League, 1939–1942
The Chicago Studebaker Flyers of the NBL, 1942–43
The Chicago American Gears of the NBL, 1944–1947
The Chicago Stags of the Basketball Association of America, 1946–1950
The Chicago Majors of the American Basketball League, 1961–1963
The Chicago Express of the ECHL
The Chicago Enforcers of the XFL pro football league
The Chicago Fire, World Football League, 1974
The Chicago Winds, World Football League, 1975
The Chicago Hustle, Women's Professional Basketball League, 1978–1981
The Chicago Mustangs, North American Soccer League, 1966–1967
The Chicago Rush, Arena Football League, 2001–2013
The Chicago Storm, American Professional Slo-Pitch League (APSPL), 1977–1978
The Chicago Nationwide Advertising, North American Softball League (NASL), 1980

Relocated teams

The NFL's Arizona Cardinals, who currently play in the Phoenix suburb of Glendale, Arizona, played in Chicago as the Chicago Cardinals until moving to St. Louis, Missouri, after the 1959 season. An NBA expansion team, known as the Chicago Packers in 1961–1962 and as the Chicago Zephyrs the following year, moved to Baltimore after the 1962–1963 season. The franchise is now known as the Washington Wizards.
Professional sports teams outside Chicago

The Peoria Chiefs are a High-A minor league baseball team affiliated with the St. Louis Cardinals. The Schaumburg Boomers, Southern Illinois Miners, Gateway Grizzlies, Joliet Slammers, and Windy City ThunderBolts all belong to the independent Frontier League. Additionally, the Kane County Cougars play in the American Association, and the Lake County Fielders were members of the former North American League. In addition to the Chicago Wolves, the AHL has the Rockford IceHogs, who serve as the AHL affiliate of the Chicago Blackhawks. The second incarnation of the Peoria Rivermen plays in the SPHL.

Motor racing

Oval tracks at the Chicagoland Speedway in Joliet, the Chicago Motor Speedway in Cicero, and the Gateway International Raceway in Madison, near St. Louis, have hosted NASCAR, CART, and IRL races, whereas the Sports Car Club of America, among other national and regional road racing clubs, has visited the Autobahn Country Club in Joliet, the Blackhawk Farms Raceway in South Beloit, and the former Meadowdale International Raceway in Carpentersville. Illinois also has several short tracks and dragstrips. The dragstrip at Gateway International Raceway and the Route 66 Raceway, which sits on the same property as the Chicagoland Speedway, both host NHRA drag races.

Golf

Illinois features several golf courses, such as Olympia Fields, Medinah, Midlothian, Cog Hill, and Conway Farms, which have often hosted the BMW Championship, Western Open, and Women's Western Open. The state has also hosted 13 editions of the U.S. Open (most recently at Olympia Fields in 2003), six editions of the PGA Championship (most recently at Medinah in 2006), three editions of the U.S. Women's Open (most recently at The Merit Club), the 2009 Solheim Cup (at Rich Harvest Farms), and the 2012 Ryder Cup (at Medinah).
The John Deere Classic is a regular PGA Tour event that has been played in the Quad Cities since 1971, and the Encompass Championship has been a Champions Tour event since 2013. Previously, the LPGA State Farm Classic was an LPGA Tour event from 1976 to 2011.

Parks and recreation

The Illinois state parks system began in 1908 with what is now Fort Massac State Park, the first park in a system that now encompasses more than 60 parks and about the same number of recreational and wildlife areas. Areas under the protection of the National Park Service include the Illinois and Michigan Canal National Heritage Corridor near Lockport, the Lewis and Clark National Historic Trail, the Lincoln Home National Historic Site in Springfield, the Mormon Pioneer National Historic Trail, the Trail of Tears National Historic Trail, the American Discovery Trail, and the Pullman National Monument. The federal government also manages the Shawnee National Forest and the Midewin National Tallgrass Prairie.

Law and politics

In a 2020 study, Illinois was ranked as the 4th easiest state for citizens to vote in.

State government

The government of Illinois, under the Constitution of Illinois, has three branches: executive, legislative, and judicial. The executive branch is split into several statewide elected offices, with the governor as chief executive. Legislative functions are granted to the Illinois General Assembly. The judiciary is composed of the Supreme Court and lower courts. The Illinois General Assembly is the state legislature, composed of the 118-member Illinois House of Representatives and the 59-member Illinois Senate. The members of the General Assembly are elected at the beginning of each even-numbered year. The Illinois Compiled Statutes (ILCS) are the codified statutes of a general and permanent nature. The executive branch is composed of six elected officers and their offices, as well as numerous other departments.
The six elected officers are: Governor, Lieutenant Governor, Attorney General, Secretary of State, Comptroller, and Treasurer. The government of Illinois has numerous departments, agencies, boards, and commissions, but the so-called code departments provide most of the state's services. The Judiciary of Illinois is the unified court system of the state. It consists of the Supreme Court, Appellate Court, and Circuit Courts. The Supreme Court oversees the administration of the court system. The administrative divisions of Illinois are counties, townships, precincts, cities, towns, villages, and special-purpose districts. The basic subdivisions of Illinois are the 102 counties. Eighty-five of the 102 counties are in turn divided into townships and precincts. Municipal governments are the cities, villages, and incorporated towns. Some localities possess home rule, which allows them to govern themselves to a certain extent.

Party balance

Illinois is a Democratic stronghold. Historically, Illinois was a political swing state, with near-parity existing between the Republican and Democratic parties. However, in recent elections, the Democratic Party has gained ground, and Illinois has come to be seen as a solidly "blue" state in presidential campaigns. Votes from Chicago and most of Cook County have long been strongly Democratic. However, the "collar counties" (the suburbs surrounding Cook County) can be seen as moderate voting districts. College towns like Carbondale, Champaign, and Normal also lean Democratic. Republicans continue to prevail in the rural areas of northern and central Illinois, as well as southern Illinois outside of East St. Louis. From 1920 until 1972, Illinois was carried by the victor of each of the 14 presidential elections in that span. The state was long seen as a national bellwether, supporting the winner in every 20th-century election except 1916 and 1976.
Since then, however, Illinois has trended toward the Democratic Party, voting for its presidential candidates in the last six elections; in 2000, George W. Bush became the first Republican to win the presidency without carrying either Illinois or Vermont, and Donald Trump remains the only other Republican to have done so. Local politician and Chicago resident Barack Obama easily won the state's 21 electoral votes in 2008, with 61.9% of the vote. In 2010, incumbent governor Pat Quinn was re-elected with 47% of the vote, while Republican Mark Kirk was elected to the Senate with 48% of the vote. In 2012, President Obama easily carried Illinois again, with 58% to Republican candidate Mitt Romney's 41%. In 2014, Republican Bruce Rauner defeated Governor Quinn 50% to 46% to become Illinois's first Republican governor in 12 years, taking office on January 12, 2015, while Democratic senator Dick Durbin was re-elected with 53% of the vote. In 2016, Hillary Clinton carried Illinois with 55% of the vote, and Tammy Duckworth defeated incumbent Mark Kirk 54% to 40%. In 2018, Democrat JB Pritzker defeated the incumbent Bruce Rauner for the governorship with 54% of the vote.

History of corruption

Politics in the state have been infamous for highly visible corruption cases, as well as for crusading reformers such as governors Adlai Stevenson and James R. Thompson. In 2006, former governor George Ryan was convicted of racketeering and bribery, leading to a six-and-a-half-year prison sentence. In 2008, then-Governor Rod Blagojevich was served with a criminal complaint on corruption charges, stemming from allegations that he conspired to sell the Senate seat vacated by President Barack Obama to the highest bidder. On December 7, 2011, Blagojevich was sentenced to 14 years in prison on those charges, as well as for perjury while testifying during the case, 18 convictions in total.
Blagojevich was impeached and convicted by the legislature, resulting in his removal from office. In the late 20th century, Congressman Dan Rostenkowski was imprisoned for mail fraud; former governor and federal judge Otto Kerner, Jr. was imprisoned for bribery; Secretary of State Paul Powell was investigated and found to have gained great wealth through bribes; and State Auditor of Public Accounts (Comptroller) Orville Hodge was imprisoned for embezzlement. In 1912, William Lorimer, the GOP boss of Chicago, was expelled from the U.S. Senate for bribery, and in 1921, Governor Len Small was found to have defrauded the state of a million dollars.

U.S. presidential elections

Illinois has shown a strong presence in presidential elections. Three presidents have claimed Illinois as their political base when running for president: Abraham Lincoln, Ulysses S. Grant, and most recently Barack Obama. Lincoln was born in Kentucky but moved to Illinois at age 21. He served in the General Assembly and represented the 7th congressional district in the U.S. House of Representatives before his election to the presidency in 1860. Ulysses S. Grant was born in Ohio and had a military career that precluded settling down, but on the eve of the Civil War, and approaching middle age, he moved to Illinois and made the state his home and political base when running for president. Barack Obama was born in Hawaii, made Illinois his home after graduating from law school, and later represented Illinois in the U.S. Senate. He became president in 2008, running as a candidate from his Illinois base. Ronald Reagan was born in Illinois, in the city of Tampico, raised in Dixon, and educated at Eureka College, outside Peoria. Reagan moved to California as a young adult, became an actor, and later served as Governor of California before being elected president.
Hillary Clinton was born and raised in the suburbs of Chicago and became the first woman to represent a major political party in the general election for the U.S. presidency, though she ran from a platform based in New York State.

African-American U.S. senators

Nine African-Americans have served as members of the United States Senate. Of these, three have represented Illinois, the most of any single state: Carol Moseley-Braun, Barack Obama, and Roland Burris, who was appointed to replace Obama after his election to the presidency. Moseley-Braun was the first African-American woman to become a U.S. Senator.

Political families

Three families from Illinois have played particularly prominent roles in the Democratic Party, gaining both statewide and national fame.

Stevenson

The Stevenson family, initially rooted in central Illinois and later based in the Chicago metropolitan area, has provided four generations of Illinois officeholders.

Adlai Stevenson I (1835–1914) was a Vice President of the United States, as well as a Congressman.
Lewis Stevenson (1868–1929), son of Adlai, served as Illinois Secretary of State.
Adlai Stevenson II (1900–1965), son of Lewis, served as Governor of Illinois and as the U.S. Ambassador to the United Nations; he was also the Democratic Party's presidential nominee in 1952 and 1956, losing both elections to Dwight Eisenhower.
Adlai Stevenson III (1930–2021), son of Adlai II, served ten years as a United States Senator.

Daley

The Daley family's power base was in Chicago.

Richard J. Daley (1902–1976) served as Mayor of Chicago from 1955 until his death.
Richard M. Daley (1942–), son of Richard J., was Chicago's longest-serving mayor, in office from 1989 to 2011.
William M. Daley (1948–), another son of Richard J., is a former White House Chief of Staff and has served in a variety of appointed positions.

Pritzker

The Pritzker family is based in Chicago and has played important roles in both the private and public sectors.
Jay Pritzker (1922–1999), co-founder of the Chicago-based Hyatt hotel chain.
Penny Pritzker (born 1959), 38th United States Secretary of Commerce under President Barack Obama.
J.B. Pritzker (born 1965), current and 43rd governor of Illinois and co-founder of the Pritzker Group.

Education

Illinois State Board of Education

The Illinois State Board of Education (ISBE) is independent of the governor and the state legislature, and administers public education in the state. Local municipalities and their respective school districts operate individual public schools, but the ISBE audits the performance of public schools with the Illinois School Report Card. The ISBE also makes recommendations to state leaders concerning education spending and policies.

Primary and secondary schools

Education is compulsory for ages 7–17 in Illinois. Schools are commonly, but not exclusively, divided into three tiers of primary and secondary education: elementary school, middle school or junior high school, and high school. District territories are often complex in structure. Many areas in the state are located in two school districts: one for high school, the other for elementary and middle schools. Such districts do not necessarily share boundaries: a given high school may have several elementary districts that feed into it, yet some of those feeder districts may themselves feed into multiple high school districts.

Colleges and universities

Using the criteria established by the Carnegie Foundation for the Advancement of Teaching, there are eleven "National Universities" in the state. The University of Chicago is consistently ranked among the world's top ten universities on various independent university rankings, and its Booth School of Business, along with Northwestern's Kellogg School of Management, consistently ranks within the top five graduate business schools in the country and the top ten globally.
The University of Illinois Urbana-Champaign is often ranked among the best engineering schools in the world and in the United States. Ten of these rank in the "first tier" among the top 500 National Universities in the nation, as determined by the U.S. News & World Report rankings: the University of Chicago, Northwestern University, the University of Illinois Urbana-Champaign, Loyola University Chicago, the Illinois Institute of Technology, DePaul University, the University of Illinois Chicago, Illinois State University, Southern Illinois University Carbondale, and Northern Illinois University. Illinois also has more than twenty additional accredited four-year universities, both public and private, and dozens of small liberal arts colleges across the state. Additionally, Illinois supports 49 public community colleges in the Illinois Community College System.

School financing

Schools in Illinois are funded primarily by property taxes, based on state assessment of property values, rather than by direct state contributions. Scholar Tracy Steffes has described Illinois public education as historically "inequitable," a system in which one of "the wealthiest of states" is "the stingiest in its support for education." There have been several attempts to reform school funding in Illinois. The most notable came in 1973 with the adoption of the Illinois Resource Equalizer Formula, a measure through which it was hoped funding could be collected and distributed to Illinois schools more equitably. However, opposition from affluent Illinois communities that objected to paying for less well-off school districts (many of them Black-majority communities produced by redlining, white flight, and other "soft" segregation methods) resulted in the formula's abolition in the late 1980s.

Infrastructure

Transportation

Because of its central location and its proximity to the Rust Belt and Grain Belt, Illinois is a national crossroads for air, auto, rail, and truck traffic.
Airports

From 1962 until 1998, Chicago's O'Hare International Airport (ORD) was the busiest airport in the world, measured both by total flights and by passengers. It was surpassed by Atlanta's Hartsfield in 1998 (Chicago splits its air traffic between O'Hare and Midway airports, while Atlanta uses only one airport), but with 59.3 million domestic passengers and 11.4 million international passengers in 2008, O'Hare consistently remains one of the two or three busiest airports globally, and in some years still ranks number one in total flights. It is a major hub for both United Airlines and American Airlines, and a major airport expansion project is currently underway. Midway Airport (MDW), which had been the busiest airport in the world until it was supplanted by O'Hare in 1962, is now the secondary airport in the Chicago metropolitan area and still ranks as one of the nation's busiest. Midway is a major hub for Southwest Airlines and serves many other carriers as well. Midway served 17.3 million domestic and international passengers in 2008.

Rail

Illinois has an extensive passenger and freight rail transportation network. Chicago is a national Amtrak hub, and in-state passengers are served by Amtrak's Illinois Service, featuring the Chicago to Carbondale Illini and Saluki, the Chicago to Quincy Carl Sandburg and Illinois Zephyr, and the Chicago to St. Louis Lincoln Service. Trackwork is currently underway on the Chicago–St. Louis line to raise the maximum speed, which would reduce the trip time by an hour and a half. Nearly every North American railway meets at
are heavily concentrated in and around Chicago, and account for nearly 30% of the state's population. However, taken together as a group, the various Protestant denominations comprise a greater percentage of the state's population than do Catholics. In 2010, Catholics in Illinois numbered 3,648,907. The largest Protestant denominations were the United Methodist Church, with 314,461 members, and the Southern Baptist Convention, with 283,519 members. Illinois has one of the largest concentrations of Missouri Synod Lutherans in the United States. Illinois played an important role in the early Latter Day Saint movement, with Nauvoo, Illinois, becoming a gathering place for Mormons in the early 1840s. Nauvoo was the location of the succession crisis, which led to the separation of the Mormon movement into several Latter Day Saint sects. The Church of Jesus Christ of Latter-day Saints, the largest of the sects to emerge from the Mormon schism, has more than 55,000 adherents in Illinois today.

Other Abrahamic religious communities

A significant number of adherents of other Abrahamic faiths can be found in Illinois. Largely concentrated in the Chicago metropolitan area, followers of the Muslim, Baháʼí, and Jewish religions all call the state home. Muslims constituted the largest non-Christian group, with 359,264 adherents. Illinois has the largest concentration of Muslims by state in the country, with 2,800 Muslims per 100,000 citizens. The largest and oldest surviving Baháʼí House of Worship in the world is located on the shores of Lake Michigan in Wilmette, Illinois, one of eight continental Baháʼí Houses of Worship. It serves as a space for people of all backgrounds and religions to gather, meditate, reflect, and pray, expressing the Baháʼí principle of the oneness of religions. The Chicago area has a very large Jewish community, particularly in Skokie, Buffalo Grove, Highland Park, and surrounding suburbs.
Former Chicago Mayor Rahm Emanuel was the Windy City's first Jewish mayor.

Other religions

Chicago is also home to a very large population of Hindus, Sikhs, Jains, and Buddhists.

Economy

The dollar gross state product for Illinois was estimated to be billion in 2019. The state's 2019 per capita gross state product was estimated to be around $72,000. As of February 2019, the unemployment rate in Illinois was 4.2%. Illinois's minimum wage will rise to $15 per hour by 2025, making it one of the highest in the nation.

Agriculture

Illinois's major agricultural outputs are corn, soybeans, hogs, cattle, dairy products, and wheat. In most years, Illinois ranks first or second among states in soybean production, with a harvest of 427.7 million bushels (11.64 million metric tons) in 2008, behind Iowa's production of 444.82 million bushels (12.11 million metric tons). Illinois ranks second in U.S. corn production, with more than 1.5 billion bushels produced annually. With a production capacity of 1.5 billion gallons per year, Illinois is a top producer of ethanol, ranking third in the United States in 2011. Illinois is a leader in food manufacturing and meat processing. Although Chicago may no longer be "Hog Butcher for the World", the Chicago area remains a global center for food manufacture and meat processing, with many plants, processing houses, and distribution facilities concentrated in the area of the former Union Stock Yards. Illinois also produces wine, and the state is home to two American viticultural areas. In the area of The Meeting of the Great Rivers Scenic Byway, peaches and apples are grown. The German immigrants from agricultural backgrounds who settled in Illinois in the mid- to late 19th century are in part responsible for the profusion of fruit orchards in that area. Illinois's universities are actively researching alternative crops.
Manufacturing

Illinois is one of the nation's manufacturing leaders, with annual manufacturing value added of over $107 billion in 2006. Illinois ranks as the 4th-most productive manufacturing state in the country, behind California, Texas, and Ohio. About three-quarters of the state's manufacturers are located in the Northeastern Opportunity Return Region, with 38 percent of Illinois's approximately 18,900 manufacturing plants located in Cook County. As of 2006, the leading manufacturing industries in Illinois, based upon value added, were chemical manufacturing ($18.3 billion), machinery manufacturing ($13.4 billion), food manufacturing ($12.9 billion), fabricated metal products ($11.5 billion), transportation equipment ($7.4 billion), plastics and rubber products ($7.0 billion), and computer and electronic products ($6.1 billion).

Services

By the early 2000s, Illinois's economy had moved toward a dependence on high-value-added services, such as financial trading, higher education, law, logistics, and medicine. In some cases, these services clustered around institutions that hearkened back to Illinois's earlier economies. For example, the Chicago Mercantile Exchange, a trading exchange for global derivatives, began its life as an agricultural futures market. Other important non-manufacturing industries include publishing, tourism, and energy production and distribution.

Investments

Venture capitalists funded a total of approximately $62 billion in the U.S. economy in 2016, of which Illinois-based companies received approximately $1.1 billion. Similarly, in FY 2016, the federal government spent $461 billion on contracts in the U.S., of which Illinois-based companies received approximately $8.7 billion.

Energy

Illinois is a net importer of fuels for energy, despite large coal resources and some minor oil production.
Illinois exports electricity, ranking fifth among states in electricity production and seventh in electricity consumption.

Coal

The coal industry of Illinois has its origins in the mid-19th century, when entrepreneurs such as Jacob Loose discovered coal in locations such as Sangamon County. Jacob Bunn contributed to the development of the Illinois coal industry and was a founder and owner of the Western Coal & Mining Company of Illinois. About 68% of Illinois has coal-bearing strata of the Pennsylvanian geologic period. According to the Illinois State Geological Survey, 211 billion tons of bituminous coal are estimated to lie under the surface, with a total heating value greater than that of the estimated oil deposits in the Arabian Peninsula. However, this coal has a high sulfur content, which causes acid rain unless special equipment is used to reduce sulfur dioxide emissions. Many Illinois power plants are not equipped to burn high-sulfur coal. In 1999, Illinois produced 40.4 million tons of coal, but only 17 million tons (42%) of it was consumed in Illinois. Most of the coal produced in Illinois is exported to other states and countries, while much of the coal burned for power in Illinois (21 million tons in 1998) is mined in the Powder River Basin of Wyoming. In 2008, Illinois exported three million tons of coal and was projected to export nine million in 2011, as demand for energy grows in places such as China, India, and elsewhere in Asia and Europe. Illinois ranks third in the nation in recoverable coal reserves at producing mines. Mattoon was chosen as the site for the Department of Energy's FutureGen project, a 275-megawatt experimental zero-emission coal-burning power plant, which received a second round of DOE funding. In 2010, after a number of setbacks, the city of Mattoon backed out of the project.
Petroleum

Illinois is a leading refiner of petroleum in the American Midwest, with a combined crude oil distillation capacity of nearly . However, Illinois has very limited crude oil proved reserves, accounting for less than 1% of the U.S. total. Residential heating is 81% natural gas, compared to less than 1% heating oil. Illinois is ranked 14th in oil production among states, with a daily output of approximately in 2005.

Nuclear power

Nuclear power arguably began in Illinois with Chicago Pile-1, the world's first artificial self-sustaining nuclear chain reaction, achieved in the world's first nuclear reactor, built on the University of Chicago campus. There are six operating nuclear power plants in Illinois: Braidwood, Byron, Clinton, Dresden, LaSalle, and Quad Cities. With the exception of the single-unit Clinton plant, each of these facilities has two reactors. Three reactors have been permanently shut down and are in various stages of decommissioning: Dresden-1, Zion-1, and Zion-2. Illinois ranked first in the nation in 2010 in both nuclear capacity and nuclear generation, and generation from its nuclear power plants accounted for 12 percent of the nation's total. In 2007, 48% of Illinois's electricity was generated using nuclear power. The Morris Operation is the only de facto high-level radioactive waste storage site in the United States.

Wind power

Illinois has seen growing interest in the use of wind power for electrical generation. Most of Illinois was rated in 2009 as "marginal or fair" for wind energy production by the U.S. Department of Energy, with some western sections rated "good" and parts of the south rated "poor". These ratings were based on wind turbines with hub heights; newer wind turbines are taller, enabling them to reach stronger winds farther from the ground. As a result, more areas of Illinois have become prospective wind farm sites.
As of September 2009, Illinois had 1,116.06 MW of installed wind power nameplate capacity, with another 741.9 MW under construction. Illinois ranked ninth among U.S. states in installed wind power capacity and sixteenth by potential capacity. Large wind farms in Illinois include Twin Groves, Rail Splitter, EcoGrove, and Mendota Hills. As of 2007, wind energy represented only 1.7% of Illinois's energy production, and it was estimated that wind power could provide 5–10% of the state's energy needs. The Illinois General Assembly also mandated in 2007 that by 2025, 25% of all electricity generated in Illinois is to come from renewable resources.

Biofuels

Illinois is ranked second in corn production among U.S. states, and Illinois corn is used to produce 40% of the ethanol consumed in the United States. The Archer Daniels Midland corporation in Decatur, Illinois, is the world's leading producer of ethanol from corn. The National Corn-to-Ethanol Research Center (NCERC), the world's only facility dedicated to researching the ways and means of converting corn (maize) to ethanol, is located on the campus of Southern Illinois University Edwardsville. The University of Illinois Urbana-Champaign is one of the partners in the Energy Biosciences Institute (EBI), a $500 million biofuels research project funded by petroleum giant BP.

Taxes

Tax is collected by the Illinois Department of Revenue. State income tax is calculated by multiplying net income by a flat rate. In 1990, that rate was set at 3%, but in 2010, the General Assembly voted for a temporary increase in the rate to 5%; the new rate went into effect on January 1, 2011. The personal income rate partially sunset on January 1, 2015, to 3.75%, while the corporate income tax fell to 5.25%.
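The flat-rate calculation described above amounts to a single multiplication, regardless of income level; a minimal sketch follows, using the personal rates cited in the text (the dictionary and function names are illustrative, not part of any official system).

```python
# Illinois personal income tax has been a flat rate: every filer's net
# income is multiplied by the same single rate, unlike a graduated tax.
# Rates below are the ones cited in the text; names are illustrative.

RATE_HISTORY = {
    1990: 0.03,    # flat rate set at 3%
    2011: 0.05,    # temporary increase effective January 1, 2011
    2015: 0.0375,  # partial sunset on January 1, 2015
}

def flat_tax(net_income: float, rate: float) -> float:
    """Tax owed under a flat rate: one multiplication, rounded to cents."""
    return round(net_income * rate, 2)

# The same rate applies at every income level:
assert flat_tax(50_000, RATE_HISTORY[2015]) == 1_875.00
assert flat_tax(500_000, RATE_HISTORY[2015]) == 18_750.00
```

Under a graduated scheme, by contrast, the rate applied would itself depend on the income bracket, which is what the 2020 constitutional amendment discussed below would have permitted.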
Illinois failed to pass a budget from 2015 to 2017. After the 736-day budget impasse, a budget was passed when lawmakers overrode Governor Bruce Rauner's veto; this budget raised the personal income rate to 4.95% and the corporate rate to 7%. There are two rates for state sales tax: 6.25% for general merchandise and 1% for qualifying food, drugs, and medical appliances. The property tax is a major source of tax revenue for local government taxing districts. The property tax is a local, not state, tax, imposed by local government taxing districts, which include counties, townships, municipalities, school districts, and special taxation districts. The property tax in Illinois is imposed only on real property. On May 1, 2019, the Illinois Senate voted to approve a constitutional amendment that would have stricken language from the Illinois Constitution requiring a flat state income tax, in a 73–44 vote. If approved, the amendment would have allowed the state legislature to impose a graduated income tax based on annual income. The governor, J.B. Pritzker, approved the bill on May 27, 2019. It was scheduled for a vote on the 2020 general election ballot and required 60 percent voter approval to amend the state constitution. The amendment was not approved, with 55.1% of voters voting "No" and 44.9% voting "Yes". As of 2017, Chicago had the highest state and local sales tax rate of any U.S. city with a population above 200,000, at 10.250%. Illinois has the second-highest real estate tax rate in the nation, at 2.31%, behind only New Jersey at 2.44%. Toll roads are a de facto user tax on the citizens of and visitors to the state of Illinois. Illinois ranks seventh out of the 11 states with the most miles of toll roads, at 282.1 miles. Chicago ranks fourth in most expensive toll roads in America by the mile, with the Chicago Skyway charging 51.2 cents per mile.
Illinois also has the 11th-highest gasoline tax by state, at 37.5 cents per gallon. Culture Museums Illinois has numerous museums; the greatest concentration of these is in Chicago. Several museums in Chicago are ranked among the best in the world. These include the John G. Shedd Aquarium, the Field Museum of Natural History, the Art Institute of Chicago, the Adler Planetarium, and the Museum of Science and Industry. The modern Abraham Lincoln Presidential Library and Museum in Springfield is the largest and most attended presidential library in the country. The Illinois State Museum boasts a collection of 13.5 million objects that tell the story of Illinois life, land, people, and art. The ISM is among only 5% of the nation's museums that are accredited by the American Alliance of Museums. Other historical museums in the state include the Polish Museum of America in Chicago; Magnolia Manor in Cairo; the Easley Pioneer Museum in Ipava; the Elihu Benjamin Washburne and Ulysses S. Grant homes, both in Galena; and the Chanute Air Museum, located on the former Chanute Air Force Base in Rantoul. The Chicago metropolitan area also hosts two zoos: The Brookfield Zoo, located about ten miles west of the city center in suburban Brookfield, contains more than 2,300 animals and covers . The Lincoln Park Zoo is located in Lincoln Park on Chicago's North Side, approximately north of the Loop. The zoo accounts for more than of the park. Music Illinois is a leader in music education, having hosted the Midwest Clinic International Band and Orchestra Conference since 1946, as well as being home to the Illinois Music Educators Association (ILMEA, formerly IMEA), one of the largest professional music educators' organizations in the country. Each summer since 2004, Southern Illinois University Carbondale has hosted the Southern Illinois Music Festival, which presents dozens of performances throughout the region.
Past featured artists include the Eroica Trio and violinist David Kim. Chicago, in the northeast corner of the state, is a major center for music in the midwestern United States, where distinctive forms of blues (greatly responsible for the later development of rock and roll) and house music, a genre of electronic dance music, were developed. The Great Migration of poor black workers from the South into the industrial cities brought traditional jazz and blues music to the city, resulting in Chicago blues and "Chicago-style" Dixieland jazz. Notable blues artists included Muddy Waters, Junior Wells, Howlin' Wolf and both Sonny Boy Williamsons; jazz greats included Nat King Cole, Gene Ammons, Benny Goodman, and Bud Freeman. Chicago is also well known for its soul music. In the early 1930s, gospel music began to gain popularity in Chicago due to Thomas A. Dorsey's contributions at Pilgrim Baptist Church. In the 1980s and 1990s, heavy rock, punk, and hip hop also became popular in Chicago. Orchestras and ensembles in Chicago include the Chicago Symphony Orchestra, the Lyric Opera of Chicago, and the Chicago Sinfonietta. Movies John Hughes, who moved from Grosse Pointe to Northbrook, based many of his films in Chicago and its suburbs. Ferris Bueller's Day Off, Home Alone, and The Breakfast Club, among others, take place in the fictional Shermer, Illinois (the original name of Northbrook was Shermerville, and Hughes's high school, Glenbrook North High School, is on Shermer Road). Locations in his films include Glenbrook North, the former Maine North High School, the Ben Rose House in Highland Park, and the famous Home Alone house in Winnetka, Illinois. Sports Major league sports As one of the United States' major metropolises, Chicago is home to teams from all major sports leagues. Two Major League Baseball teams are located in the state.
The Chicago Cubs of the National League play in the second-oldest major league stadium (Wrigley Field) and were long known for having the longest championship drought in major American sports, not having won the World Series since 1908. That drought finally came to an end in 2016, when the Cubs beat the Cleveland Indians in seven games to win the World Series. The Chicago White Sox of the American League won the World Series in 2005, their first since 1917. They play on the city's south side at Guaranteed Rate Field. The Chicago Bears football team has won nine total NFL Championships, the last occurring in Super Bowl XX on January 26, 1986. The Chicago Bulls of the NBA are one of the most recognized basketball teams in the world, largely as a result of the efforts of Michael Jordan, who led the team to six NBA championships in eight seasons in the 1990s. The Chicago Blackhawks of the NHL began playing in 1926 and became a member of the Original Six once the NHL dropped to that number of teams during World War II. The Blackhawks have won six Stanley Cups, most recently in 2015. Chicago Fire F.C. is a member of MLS and has been one of the league's most successful and best-supported clubs since its founding in 1997, winning one league title and four Lamar Hunt U.S. Open Cups in that timespan. The team played in Bridgeview, adjacent to Chicago, from 2006 to 2019, and now plays at Soldier Field in Chicago. The Chicago Red Stars have played at the top level of U.S. women's soccer since their formation in 2009, except in the 2011 season. The team currently plays in the National Women's Soccer League, sharing a stadium with the Fire. The Chicago Sky have played in the Women's National Basketball Association (WNBA) since 2006. The Sky won their first WNBA Championship in 2021. They play at Wintrust Arena in Chicago.
The Chicago Bandits of the NPF, a women's softball league, have won four league titles, most recently in 2016. They play at Parkway Bank Sports Complex in Rosemont, Illinois, in the Chicago area. Minor league sports Many minor league teams also call Illinois their home. They include: The Bloomington Edge of the Indoor Football League The Bloomington Flex of the Midwest Professional Basketball Association The Chicago Wolves are an AHL team playing in the suburb of Rosemont The Gateway Grizzlies of the Frontier League in Sauget, Illinois The Kane County Cougars of the American Association The Joliet Slammers of the Frontier League The Peoria Chiefs of the High-A Central The Peoria Rivermen are an SPHL team The Rockford Aviators of the Frontier League The Rockford IceHogs of the AHL The Schaumburg Boomers of the Frontier League The Southern Illinois Miners based out of Marion in the Frontier League The Windy City Bulls, playing in the Chicago suburb of Hoffman Estates, of the NBA G League The Windy City ThunderBolts of the Frontier League College sports The state features 13 athletic programs that compete in NCAA Division I, the highest level of U.S. college sports. The two most prominent are the Illinois Fighting Illini and Northwestern Wildcats, both members of the Big Ten Conference and the only ones competing in one of the so-called "Power Five conferences". The Fighting Illini football team has won five national championships and three Rose Bowl Games, whereas the men's basketball team has won 17 conference titles and played in five Final Fours. Meanwhile, the Wildcats have won eight football conference championships and one Rose Bowl Game. The Northern Illinois Huskies, from DeKalb, Illinois, compete in the Mid-American Conference, having won four conference championships and earned a bid to the Orange Bowl, as well as producing Heisman candidate Jordan Lynch at quarterback.
The Huskies are the state's only other team competing in the Football Bowl Subdivision, the top level of NCAA football. Four schools have football programs that compete in the second level of Division I football, the Football Championship Subdivision (FCS). The Illinois State Redbirds (Normal, adjacent to Bloomington) and Southern Illinois Salukis (representing Southern Illinois University's main campus in Carbondale) are members of the Missouri Valley Conference (MVC) for non-football sports and the Missouri Valley Football Conference (MVFC). The Western Illinois Leathernecks (Macomb) are full members of the Summit League, which does not sponsor football, and also compete in the MVFC. The Eastern Illinois Panthers (Charleston) are members of the Ohio Valley Conference (OVC). The city of Chicago is home to four Division I programs that do not sponsor football. The DePaul Blue Demons, with main campuses in Lincoln Park and the Loop, are members of the Big East Conference. The Loyola Ramblers, with their main campus straddling the Edgewater and Rogers Park community areas on the city's far north side, compete in the MVC but will move to the Atlantic 10 Conference in July 2022. The UIC Flames, from the Near West Side next to the Loop, are in the Horizon League but will move to the MVC in July 2022. The Chicago State Cougars, from the city's south side, compete in the Western Athletic Conference through the 2021–22 school year, after which they will leave for an as-yet-unknown affiliation. Finally, two non-football Division I programs are located downstate. The Bradley Braves (Peoria) are MVC members, and the SIU Edwardsville Cougars (in the Metro East region across the Mississippi River from St. Louis) compete in the OVC. Former Chicago sports franchises Folded teams The city was formerly home to several other teams that either failed to survive or belonged to leagues that folded. 
The Chicago Blitz, United States Football League 1983–1984 The Chicago Sting, North American Soccer League 1975–1984 and Major Indoor Soccer League The Chicago Cougars, World Hockey Association 1972–1975 The Chicago Rockers, Continental Basketball Association The Chicago Skyliners, American Basketball Association 2000–01 The Chicago Bruisers, Arena Football League 1987–1989 The Chicago Power, National Professional Soccer League 1984–2001 The Chicago Blaze, National Women's Basketball League The Chicago Machine, Major League Lacrosse The Chicago Whales of the Federal Baseball League, a rival league to Major League Baseball from 1914 to 1916 The Chicago American Giants of the Negro baseball league, 1910–1952 The Chicago Bruins of the National Basketball League, 1939–1942 The Chicago Studebaker Flyers of the NBL, 1942–43 The Chicago American Gears of the NBL, 1944–1947 The Chicago Stags of the Basketball Association of America, 1946–1950 The Chicago Majors of the American Basketball League, 1961–1963 The Chicago Express of the ECHL The Chicago Enforcers of the XFL pro football league The Chicago Fire, World Football League 1974 The Chicago Winds, World Football League 1975 The Chicago Hustle, Women's Professional Basketball League 1978–1981 The Chicago Mustangs, North American Soccer League 1966–1967 The Chicago Rush, Arena Football League 2001–2013 The Chicago Storm, American Professional Slo-Pitch League (APSPL), 1977–1978 The Chicago Nationwide Advertising, North American Softball League (NASL), 1980 Relocated teams The NFL's Arizona Cardinals, who currently play in the Phoenix suburb of Glendale, Arizona, played in Chicago as the Chicago Cardinals, until moving to St. Louis, Missouri after the 1959 season. An NBA expansion team known as the Chicago Packers in 1961–1962, and as the Chicago Zephyrs the following year, moved to Baltimore after the 1962–1963 season. The franchise is now known as the Washington Wizards.
Professional sports teams outside Chicago The Peoria Chiefs are a High-A minor league baseball team affiliated with the St. Louis Cardinals. The Schaumburg Boomers, Southern Illinois Miners, Gateway Grizzlies, Joliet Slammers and Windy City ThunderBolts all belong to the independent Frontier League. Additionally, the Kane County Cougars play in the American Association and the Lake County Fielders were members of the former North American League. In addition to the Chicago Wolves, the AHL also has the Rockford IceHogs serving as the AHL affiliate of the Chicago Blackhawks. The second incarnation of the Peoria Rivermen plays in the SPHL. Motor racing Motor racing oval tracks at the Chicagoland Speedway in Joliet, the Chicago Motor Speedway in Cicero and the Gateway International Raceway in Madison, near St. Louis, have hosted NASCAR, CART, and IRL races, whereas the Sports Car Club of America, among other national and regional road racing clubs, have visited the Autobahn Country Club in Joliet, the Blackhawk Farms Raceway in South Beloit and the former Meadowdale International Raceway in Carpentersville. Illinois also has several short tracks and dragstrips. The dragstrip at Gateway International Raceway and the Route 66 Raceway, which sits on the same property as the Chicagoland Speedway, both host NHRA drag races. Golf Illinois features several golf courses, such as Olympia Fields, Medinah, Midlothian, Cog Hill, and Conway Farms, which have often hosted the BMW Championship, Western Open, and Women's Western Open. Also, the state has hosted 13 editions of the U.S. Open (latest at Olympia Fields in 2003), six editions of the PGA Championship (latest at Medinah in 2006), three editions of the U.S. Women's Open (latest at The Merit Club), the 2009 Solheim Cup (at Rich Harvest Farms), and the 2012 Ryder Cup (at Medinah). 
The John Deere Classic is a regular PGA Tour event that has been played in the Quad Cities since 1971, whereas the Encompass Championship has been a Champions Tour event since 2013. Previously, the LPGA State Farm Classic was an LPGA Tour event from 1976 to 2011. Parks and recreation The Illinois state parks system began in 1908 with what is now Fort Massac State Park, the first park in a system that has grown to encompass more than 60 parks and about the same number of recreational and wildlife areas. Areas under the protection of the National Park Service include: the Illinois and Michigan Canal National Heritage Corridor near Lockport, the Lewis and Clark National Historic Trail, the Lincoln Home National Historic Site in Springfield, the Mormon Pioneer National Historic Trail, the Trail of Tears National Historic Trail, the American Discovery Trail, and the Pullman National Monument. The federal government also manages the Shawnee National Forest and the Midewin National Tallgrass Prairie. Law and politics In a 2020 study, Illinois was ranked as the 4th-easiest state for citizens to vote in. State government The government of Illinois, under the Constitution of Illinois, has three branches: executive, legislative, and judicial. The executive branch is split into several statewide elected offices, with the governor as chief executive. Legislative functions are granted to the Illinois General Assembly. The judiciary is composed of the Supreme Court and lower courts. The Illinois General Assembly is the state legislature, composed of the 118-member Illinois House of Representatives and the 59-member Illinois Senate. The members of the General Assembly are elected at the beginning of each even-numbered year. The Illinois Compiled Statutes (ILCS) are the codified statutes of a general and permanent nature. The executive branch is composed of six elected officers and their offices as well as numerous other departments.
The six elected officers are: Governor, Lieutenant Governor, Attorney General, Secretary of State, Comptroller, and Treasurer. The government of Illinois has numerous departments, agencies, boards and commissions, but the so-called code departments provide most of the state's services. The Judiciary of Illinois is the unified court system of Illinois. It consists of the Supreme Court, Appellate Court, and Circuit Courts. The Supreme Court oversees the administration of the court system. The administrative divisions of Illinois are counties, townships, precincts, cities, towns, villages, and special-purpose districts. The basic subdivisions of Illinois are the 102 counties. Eighty-five of the 102 counties are in turn divided into townships and precincts. Municipal governments are the cities, villages, and incorporated towns. Some localities possess home rule, which allows them to govern themselves to a certain extent. Party balance Illinois is a Democratic stronghold. Historically, Illinois was a political swing state, with near-parity existing between the Republican and Democratic parties. However, in recent elections, the Democratic Party has gained ground, and Illinois has come to be seen as a solid "blue" state in presidential campaigns. Votes from Chicago and most of Cook County have long been strongly Democratic. However, the "collar counties" (the suburbs surrounding Cook County) can be seen as moderate voting districts. College towns like Carbondale, Champaign, and Normal also lean Democratic. Republicans continue to prevail in the rural areas of northern and central Illinois, as well as southern Illinois outside of East St. Louis. From 1920 until 1972, Illinois was carried by the victor of each of the 14 presidential elections. Indeed, the state was long seen as a national bellwether, supporting the winner in every election in the 20th century except 1916 and 1976.
By contrast, Illinois has since trended toward the Democratic Party, voting for its presidential candidates in the last six elections; in 2000, George W. Bush became the first Republican to win the presidency without carrying either Illinois or Vermont. Local politician
the United States in 1975, and Murdock grew up in Lafayette, Indiana, beginning in 1977 when his father became a professor of entomology at Purdue University. Murdock graduated from Harrison High School in 1991, and then earned his bachelor's degree in computer science from Purdue in 1996. While a college student, Murdock founded the Debian project in August 1993, and wrote the Debian Manifesto in January 1994. Murdock conceived Debian as a Linux distribution that embraced open design, contributions, and support from the free software community. He named Debian after his then-girlfriend (later wife) Debra Lynn, and himself. They later married, had three children, and divorced in January 2008. In January 2006, Murdock was appointed Chief Technology Officer of the Free Standards Group and elected chair of the Linux Standard Base workgroup. He continued as CTO of the Linux Foundation when the group was formed from the merger of the Free Standards Group and Open Source Development Labs. Murdock left the Linux Foundation to join Sun Microsystems in March 2007 to lead Project Indiana, which he
a full OpenSolaris distribution with GNOME and userland tools from GNU plus a network-based package management system. From March 2007 to February 2010, he was Vice President of Emerging Platforms at Sun, until the company merged with Oracle and he resigned his position with the company. From 2011 until 2015, Murdock was Vice President of Platform and Developer Community at Salesforce Marketing Cloud, based in Indianapolis. From November 2015 until his death, Murdock was working for Docker, Inc. Death Murdock died on December 28, 2015, in San Francisco. Though initially no cause of death was released, in July 2016 it was announced his death had been ruled a suicide. The police confirmed that the cause of death was asphyxiation, caused by hanging himself with a vacuum cleaner electrical cord. The last tweets from Murdock's Twitter account first announced that he would commit suicide, then said he would not. He reported having been accused of assault on a police officer after having been himself assaulted and sexually humiliated by the police, then declared an intent to devote his life to opposing police abuse. His Twitter account was taken down shortly afterwards. The San Francisco police confirmed he was detained, saying he matched the description in a reported attempted break-in and that he appeared to be drunk. The police stated that he became violent and was ultimately taken to jail on suspicion of four misdemeanor counts. They added that he did not appear to be suicidal and was medically examined prior to release.
norm. The next examples show that although real and complex inner products have many properties and results in common, they are not entirely interchangeable. For instance, if then but the next example shows that the converse is in general not true. Given any the vector (which is the vector rotated by 90°) belongs to and so also belongs to (although scalar multiplication of by is not defined in the vector in denoted by is nevertheless still also an element of ). For the complex inner product, whereas for the real inner product the value is always If is a complex inner product and is a continuous linear operator that satisfies for all then This statement is no longer true if is instead a real inner product, as this next example shows. Suppose that has the inner product mentioned above. Then the map defined by is a linear map (linear for both and ) that denotes rotation by in the plane. Because and perpendicular vectors and is just the dot product, for all vectors nevertheless, this rotation map is certainly not identically zero. In contrast, using the complex inner product gives which (as expected) is not identically zero. Orthonormal sequences Let be a finite dimensional inner product space of dimension Recall that every basis of consists of exactly linearly independent vectors. Using the Gram–Schmidt process we may start with an arbitrary basis and transform it into an orthonormal basis. That is, into a basis in which all the elements are orthogonal and have unit norm. In symbols, a basis is orthonormal if for every and for each index This definition of orthonormal basis generalizes to the case of infinite-dimensional inner product spaces in the following way. Let be any inner product space. Then a collection is a for if the subspace of generated by finite linear combinations of elements of is dense in (in the norm induced by the inner product).
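The Gram–Schmidt process described above can be sketched numerically; this is a minimal illustration assuming the standard dot product on R³ (the function name and the example vectors are invented for this sketch):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors w.r.t. the dot product."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in basis:
            w = w - np.dot(u, w) * u       # subtract the component along u
        basis.append(w / np.linalg.norm(w))  # normalize to unit length
    return basis

# Turn an arbitrary basis of R^3 into an orthonormal one.
basis = gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
gram = np.array([[np.dot(u, v) for v in basis] for u in basis])
```

The Gram matrix of the result is the identity: the output vectors are pairwise orthogonal and each has unit norm.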
Say that is an for if it is a basis and if and for all Using an infinite-dimensional analog of the Gram–Schmidt process one may show: Theorem. Any separable inner product space has an orthonormal basis. Using the Hausdorff maximal principle and the fact that in a complete inner product space orthogonal projection onto linear subspaces is well-defined, one may also show: Theorem. Any complete inner product space has an orthonormal basis. The two previous theorems raise the question of whether all inner product spaces have an orthonormal basis. The answer, it turns out, is negative. This is a non-trivial result, and is proved below. The following proof is taken from Halmos's A Hilbert Space Problem Book (see the references). {| class="toccolours collapsible collapsed" width="90%" style="text-align:left" !Proof |- | Recall that the dimension of an inner product space is the cardinality of a maximal orthonormal system that it contains (by Zorn's lemma it contains at least one, and any two have the same cardinality). An orthonormal basis is certainly a maximal orthonormal system but the converse need not hold in general. If is a dense subspace of an inner product space then any orthonormal basis for is automatically an orthonormal basis for Thus, it suffices to construct an inner product space with a dense subspace whose dimension is strictly smaller than that of Let be a Hilbert space of dimension (for instance, ). Let be an orthonormal basis of so Extend to a Hamel basis for where Since it is known that the Hamel dimension of is the cardinality of the continuum, it must be that Let be a Hilbert space of dimension (for instance, ). Let be an orthonormal basis for and let be a bijection.
Then there is a linear transformation such that for and for Let and let be the graph of Let be the closure of in ; we will show Since for any we have it follows that Next, if then for some so ; since as well, we also have It follows that so and is dense in Finally, is a maximal orthonormal set in ; if for all then so is the zero vector in Hence the dimension of is whereas it is clear that the dimension of is This completes the proof. |} Parseval's identity leads immediately to the following theorem: Theorem. Let be a separable inner product space and an orthonormal basis of Then the map is an isometric linear map with a dense image. This theorem can be regarded as an abstract form of Fourier series, in which an arbitrary orthonormal basis plays the role of the sequence of trigonometric polynomials. Note that the underlying index set can be taken to be any countable set (and in fact any set whatsoever, provided is defined appropriately, as is explained in the article Hilbert space). In particular, we obtain the following result in the theory of Fourier series: Theorem. Let be the inner product space Then the sequence (indexed on set of all integers) of continuous functions is an orthonormal basis of the space with the inner product. The mapping is an isometric linear map with dense image. Orthogonality of the sequence follows immediately from the fact that if then Normality of the sequence is by design, that is, the coefficients are so chosen so that the norm comes out to 1. Finally the fact that the sequence has a dense algebraic span, in the , follows from the fact that the sequence has a dense algebraic span, this time in the space of continuous periodic functions on with the uniform norm. This is the content of the Weierstrass theorem on the uniform density of trigonometric polynomials. 
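The orthonormality of the trigonometric system stated above can be checked numerically; the following sketch approximates the L² inner product on [0, 2π) with a Riemann sum over a uniform grid (the grid size and index range are arbitrary choices for this illustration):

```python
import numpy as np

N = 512                                    # grid points (arbitrary)
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dt = 2.0 * np.pi / N

def e(k):
    """k-th normalized exponential, e_k(t) = exp(i*k*t) / sqrt(2*pi)."""
    return np.exp(1j * k * t) / np.sqrt(2.0 * np.pi)

def inner(f, g):
    """Riemann-sum approximation of the L^2 inner product <f, g>."""
    return np.sum(f * np.conj(g)) * dt

# Gram matrix of e_{-2}, ..., e_2: numerically the 5x5 identity,
# confirming pairwise orthogonality and unit norm.
gram = np.array([[inner(e(j), e(k)) for k in range(-2, 3)]
                 for j in range(-2, 3)])
```

For equispaced samples this sum is in fact exact (up to floating-point error) whenever the index difference is smaller than the grid size, by the discrete orthogonality of complex exponentials.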
Operators on inner product spaces Several types of linear maps between inner product spaces and are of relevance: : is linear and continuous with respect to the metric defined above, or equivalently, is linear and the set of non-negative reals where ranges over the closed unit ball of is bounded. : is linear and for all : satisfies for all A (resp. an ) is an isometry that is also a linear map (resp. an antilinear map). For inner product spaces, the polarization identity can be used to show that is an isometry if and only if for all All isometries are injective. The Mazur–Ulam theorem establishes that every surjective isometry between two normed spaces is an affine transformation. Consequently, an isometry between real inner product spaces is a linear map if and only if Isometries are morphisms between inner product spaces, and morphisms of real inner product spaces are orthogonal transformations (compare with orthogonal matrix). : is an isometry which is surjective (and hence bijective). Isometrical isomorphisms are also known as unitary operators (compare with unitary matrix). From the point of view of inner product space theory, there is no need to distinguish between two spaces which are isometrically isomorphic. The spectral theorem provides a canonical form for symmetric, unitary and more generally normal operators on finite dimensional inner product spaces. A generalization of the spectral theorem holds for continuous normal operators in Hilbert spaces. Generalizations Any of the axioms of an inner product may be weakened, yielding generalized notions. The generalizations that are closest to inner products
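The polarization identity invoked above (for showing that norm-preserving linear maps preserve inner products) can be illustrated numerically in the real case; this sketch assumes the standard dot product on R⁵ and random test vectors, neither of which comes from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
y = rng.standard_normal(5)

def norm_sq(v):
    """Squared norm induced by the dot product."""
    return float(np.dot(v, v))

# Real polarization identity: <x, y> = (||x + y||^2 - ||x - y||^2) / 4.
# It recovers the inner product from the norm alone, which is why a
# linear map preserving norms must also preserve inner products.
lhs = float(np.dot(x, y))
rhs = (norm_sq(x + y) - norm_sq(x - y)) / 4.0
```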
occur where bilinearity and conjugate symmetry are retained, but positive-definiteness is weakened.
Degenerate inner products If is a vector space and a semi-definite sesquilinear form, then the function: makes sense and satisfies all the properties of a norm except that does not imply (such a functional is then called a semi-norm). We can produce an inner product space by considering the quotient The sesquilinear form factors through This construction is used in numerous contexts. The Gelfand–Naimark–Segal construction is a particularly important example of the use of this technique. Another example is the representation of semi-definite kernels on arbitrary sets. Nondegenerate conjugate symmetric forms Alternatively, one may require that the pairing be a nondegenerate form, meaning that for all non-zero there exists some such that though need not equal ; in other words, the induced map to the dual space is injective. This generalization is important in differential geometry: a manifold whose tangent spaces have an inner product is a Riemannian manifold, while if this is weakened to a nondegenerate conjugate symmetric form, the manifold is a pseudo-Riemannian manifold. By Sylvester's law of inertia, just as every inner product is similar to the dot product with positive weights on a set of vectors, every nondegenerate conjugate symmetric form is similar to the dot product with weights on a set of vectors, and the number of positive and negative weights are called respectively the positive index and negative index. The product of vectors in Minkowski space is an example of an indefinite inner product, although, technically speaking, it is not an inner product according to the standard definition above. Minkowski space has four dimensions and indices 3 and 1 (assignment of "+" and "−" to them differs depending on conventions). Purely algebraic statements (ones that do not use positivity) usually only rely on the nondegeneracy (the injective homomorphism ) and thus hold more generally.
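The Minkowski product mentioned above can be written down concretely; this sketch uses the (+, −, −, −) sign convention, one of the two conventions the text alludes to, and invented example vectors:

```python
import numpy as np

# Metric of signature (+, -, -, -): symmetric and nondegenerate,
# but not positive-definite, so not an inner product in the strict sense.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def minkowski(x, y):
    """Indefinite Minkowski product <x, y> = x^T eta y."""
    return float(x @ eta @ y)

timelike = np.array([2.0, 1.0, 0.0, 0.0])
lightlike = np.array([1.0, 1.0, 0.0, 0.0])

p_timelike = minkowski(timelike, timelike)     # 3.0 (positive)
p_lightlike = minkowski(lightlike, lightlike)  # 0.0 for a nonzero vector
```

A nonzero vector with zero "norm squared" shows directly how positive-definiteness fails even though the form remains nondegenerate.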
Related products The term "inner product" is opposed to outer product, which is a slightly more general opposite. Simply, in coordinates, the inner product is the product of a 1 × n covector with an n × 1 vector, yielding a 1 × 1 matrix (a scalar), while the outer product is the product of an m × 1 vector with a 1 × n covector, yielding an m × n matrix. The outer product is defined for different dimensions, while the inner product requires the same dimension.
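The dimension bookkeeping can be seen directly in coordinates (my own NumPy sketch):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # an n-vector (n = 3)
y = np.array([4.0, 5.0])        # an m-vector (m = 2)

# Inner product: covector times vector -> a 1x1 result, i.e. a scalar.
# Both arguments must have the same dimension.
assert np.inner(x, x) == 14.0   # 1 + 4 + 9

# Outer product: vector times covector -> an m x n matrix,
# defined even when the dimensions differ.
assert np.outer(y, x).shape == (2, 3)
```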
travelled through Europe and North America. During this period he worked as an IBM 'Expediter Analyser' (a kind of procurement clerk), a testing technician for the British Steel Corporation and a costing clerk for a law firm in London's Chancery Lane. Career Writing career Banks took up writing at the age of 11. He completed a first novel, The Hungarian Lift-Jet, at 16 and a second, TTR (also entitled The Tashkent Rambler), in his first year at Stirling University in 1972. Though he saw himself mainly as a science fiction author, his publishing problems led him to pursue mainstream fiction. His first published novel The Wasp Factory appeared in 1984, when he was thirty. After the success of The Wasp Factory, Banks began to write full time. His editor at Macmillan, James Hale, advised him to write a book a year, which he agreed to do. His second novel Walking on Glass followed in 1985, then The Bridge in 1986, and in 1987 Espedair Street, which was later broadcast as a series on BBC Radio 4. His first published science fiction book, Consider Phlebas, emerged in 1987 as the first of several in the acclaimed Culture series. Banks cited Robert A. Heinlein, Isaac Asimov, Arthur C. Clarke, Brian Aldiss, M. John Harrison and Dan Simmons as influences. The Crow Road, published in 1992, was adapted as a BBC television series. Banks continued to write both science fiction and mainstream fiction. His final novel The Quarry appeared in June 2013, the month of his death. Banks published work under two names. His parents had meant to name him "Iain Menzies Banks", but his father mistakenly registered him as "Iain Banks". Banks still used the middle name and submitted The Wasp Factory for publication as "Iain M. Banks". Banks's editor inquired about the possibility of omitting the 'M' as it appeared "too fussy" and the potential existed for confusion with Rosie M. Banks, a romantic novelist in the Jeeves novels by P. G. Wodehouse; Banks agreed to the omission.
After three mainstream novels, Banks's publishers agreed to publish his first science fiction (SF) novel Consider Phlebas. To create a distinction between the mainstream and the SF, Banks suggested returning the 'M' to his name, which was then used in all of his science fiction works. By his death in June 2013, Banks had published 26 novels. A 27th novel The Quarry was published posthumously. His final work, a poetry collection, appeared in February 2015. In an interview in January 2013, he also mentioned he had the plot idea for another novel in the Culture series, which would most likely have been his next book and was planned for publication in 2014. Banks wrote in various categories, but enjoyed science fiction most. In September 2012 Banks became a Guest of Honour at the 2014 World Science Fiction Convention, Loncon 3. Radio and television Banks was the subject of The Strange Worlds of Iain Banks South Bank Show (1997), a TV documentary that examined his mainstream writing, and was an in-studio guest for the final episode of Marc Riley's Rocket Science radio show, broadcast on BBC Radio 6 Music. An audio version of The Business, set to contemporary music, arranged by Paul Oakenfold, was broadcast in October 1999 on Galaxy Fm as the tenth Urban Soundtracks. Banks's The State of the Art, adapted for radio by Paul Cornell, was broadcast on BBC Radio 4 in 2009 with Nadia Molinari producing and directing. In 1998 Espedair Street was dramatised as a serial for Radio 4, presented by Paul Gambaccini in the style of a Radio 1 documentary. In 2011 Banks featured on the BBC Radio 4 programme Saturday Live. Banks reaffirmed his atheism in this appearance, explaining death as an important "part of the totality of life" that should be treated realistically instead of feared. Banks appeared on the BBC television programme Question Time, a show that features political discussion. 
In 2006 he captained a team of writers to victory in a special series of BBC Two's University Challenge. Banks also won a 2006 edition of BBC One's Celebrity Mastermind; the author selected "Malt whisky and the distilleries of Scotland" as his specialist subject. His final interview was with Kirsty Wark, broadcast on BBC2 Scotland as Iain Banks: Raw Spirit on 12 June 2013. BBC One Scotland and BBC2 broadcast an adaptation of his novel Stonemouth in June 2015. Theatre Banks was involved in the stage production The Curse of Iain Banks, written by Maxton Walker and performed at the Edinburgh Fringe festival in 1999. Banks collaborated frequently with its soundtrack composer Gary Lloyd, for instance on a song collection they co-composed as a tribute to the fictional band Frozen Gold from Banks's novel Espedair Street. Lloyd also scored a spoken word and music production of Banks's novel The Bridge, which Banks himself voiced and which featured a cast of 40 musicians, released on CD by Codex Records in 1996. Lloyd recorded Banks for inclusion in the play as a disembodied voice of himself in one of the cast member's dreams. Lloyd explained his collaboration with Banks on their first versions of Espedair Street (later versions being dated between 2005 and 2013) in a Guardian article prior to the opening of The Curse of Iain Banks: When he [Banks] first played them to me, I think he was worried that they might not be up to scratch (some of them dated back to 1973 and had never been heard). He needn't have worried. They're fantastic. We're slaving away to get the songs to the stage where we can go into the studio and make a demo. Iain bashes out melodies on his state-of-the-art Apple Mac in Edinburgh and sends them down to me in Chester where I put them onto my Atari. Politics Banks's political stance was termed "left of centre", and in 2002 he endorsed the Scottish Socialist Party.
He was an Honorary Associate of the National Secular Society and a Distinguished Supporter of the Humanist Society Scotland. As a signatory to the Declaration of Calton Hill, he supported Scottish independence. In November 2012, Banks backed the campaign group emerging from the Radical Independence Conference held in that month. He opined that the independence movement was marked by cooperation: "Scots just seem to be more communitarian than the consensus expressed by the UK population as a whole." In late 2004, Banks joined a group of UK politicians and media figures campaigning to have Prime Minister Tony Blair impeached after the 2003 invasion of Iraq. In protest, he cut up his passport and posted it to 10 Downing Street.
In a Socialist Review interview, Banks explained that his passport protest occurred after he had "abandoned the idea of crashing my Land Rover through the gates of Fife dockyard, after spotting the guys armed with machine guns." Banks relayed his concerns about the Iraq invasion in his book Raw Spirit and through the protagonist Alban McGill in the novel The Steep Approach to Garbadale, who confronts another character with arguments of a similar kind. In 2010, Banks called for a cultural and educational boycott of Israel after the Gaza flotilla raid incident. In a letter to The Guardian newspaper, Banks said he had instructed his agent to turn down any further book translation deals with Israeli publishers: Appeals to reason, international law, U. N. resolutions and simple human decency mean – it is now obvious – nothing to Israel... I would urge all writers, artists and others in the creative arts, as well as those academics engaging in joint educational projects with Israeli institutions, to consider doing everything they can to convince Israel of its moral degradation and ethical isolation, preferably by simply having nothing more to do with this outlaw state. An extract from Banks's contribution to the written collection Generation Palestine: Voices from the Boycott, Divestment and Sanctions Movement, entitled "Our People", appeared in The Guardian in the wake of the author's cancer revelation. The extract conveys the author's support for the Boycott, Divestment and Sanctions (BDS) campaign, launched by Palestinian civil society in 2005, which presses Israel until the country complies with what the campaign holds to be international law and Palestinian rights; it applies lessons from the boycott of South Africa's apartheid era.
The continuation of Banks's boycott of Israeli publishers for the sale of rights to his novels was confirmed in the extract and Banks further explained, "I don't buy Israeli-sourced products or food, and my partner and I try to support Palestinian-sourced products wherever possible." Personal life Banks met his first wife Annie in London before the 1984 release of his first book. They lived in Faversham in the south of England, then split up in 1988. Banks returned to Edinburgh and dated another woman for two years until she left him. Iain and Annie were reconciled a year later and they moved to Fife. They were married in Hawaii in 1992, but in 2007, after 15 years of marriage, they announced their separation. In 1998 Banks was in a near-fatal accident when his car rolled off the road. In February 2007,
types), and, particularly in Italy, types modelled on handwritten scripts and calligraphy employed by humanists. Printers congregated in urban centres where there were scholars, ecclesiastics, lawyers, and nobles and professionals who formed their major customer base. Standard works in Latin inherited from the medieval tradition formed the bulk of the earliest printed works, but as books became cheaper, vernacular works (or translations into vernaculars of standard works) began to appear. Famous examples Famous incunabula include two from Mainz, the Gutenberg Bible of 1455 and the Peregrinatio in terram sanctam of 1486, printed and illustrated by Erhard Reuwich; the Nuremberg Chronicle written by Hartmann Schedel and printed by Anton Koberger in 1493; and the Hypnerotomachia Poliphili printed by Aldus Manutius with important illustrations by an unknown artist. Other printers of incunabula were Günther Zainer of Augsburg, Johannes Mentelin and Heinrich Eggestein of Strasbourg, Heinrich Gran of Haguenau, William Caxton of Bruges and London, and Nicolas Jenson of Venice. The first incunable to have woodcut illustrations was Ulrich Boner's Der Edelstein, printed by Albrecht Pfister in Bamberg in 1461. Post-incunable Many incunabula are undated, needing complex bibliographical analysis to place them correctly. The post-incunabula period marks a time of development during which the printed book evolved fully as a mature artefact with a standard format. After about 1540 books tended to conform to a template that included the author, title-page, date, seller, and place of printing. This makes it much easier to identify any particular edition. As noted above, the end date for identifying a printed book as an incunable is convenient but was chosen arbitrarily; it does not reflect any notable developments in the printing process around the year 1500. 
Books printed for a number of years after 1500 continued to look much like incunables, with the notable exception of the small format books printed in italic type introduced by Aldus Manutius in 1501. The term post-incunable is sometimes used to refer to books printed "after 1500—how long after, the experts have not yet agreed." For books printed in the UK, the term generally covers 1501–1520, and for books printed in mainland Europe, 1501–1540. Statistical data The data in this section were derived from the Incunabula Short-Title Catalogue (ISTC). The number of printing towns and cities stands at 282. These are situated in some 18 countries in terms of present-day boundaries. In descending order of the number of editions printed in each, these are: Italy, Germany, France, Netherlands, Switzerland, Spain, Belgium, England, Austria, the Czech Republic, Portugal, Poland, Sweden, Denmark, Turkey, Croatia, Serbia, Montenegro, and Hungary (see diagram). The following table shows the 20 main 15th century printing locations; as with all data in this section, exact figures are given, but should be treated as close estimates (the total editions recorded in
ISTC at May 2013 is 28,395): The 18 languages that incunabula are printed in, in descending order, are: Latin, German, Italian, French, Dutch, Spanish, English, Hebrew, Catalan, Czech, Greek, Church Slavonic, Portuguese, Swedish, Breton, Danish, Frisian and Sardinian (see diagram). Only about one edition in ten (i.e. just over 3,000) has any illustrations, woodcuts or metalcuts. The "commonest" incunable is Schedel's Nuremberg Chronicle ("Liber Chronicarum") of 1493, with about 1,250 surviving copies (which is also the most heavily illustrated). Many incunabula are unique, but on average about 18 copies survive of each. This makes the Gutenberg Bible, at 48 or 49 known copies, a relatively common (though extremely valuable) edition.
Counting extant incunabula is complicated by the fact that most libraries consider a single volume of a multi-volume work as a separate item, as well as fragments or copies lacking more than half the total leaves. A complete incunable may consist of a slip, or up to ten volumes. In terms of format, the 29,000-odd editions comprise: 2,000 broadsides, 9,000 folios, 15,000 quartos, 3,000 octavos, 18 12mos, 230 16mos, 20 32mos, and 3 64mos. ISTC at present cites 528 extant copies of books printed by Caxton, which together with 128 fragments makes 656 in total, though many are broadsides or very imperfect (incomplete). Apart from migration to mainly North American and Japanese universities, there has been little movement of incunabula in the last five centuries. None were printed in the Southern Hemisphere, which appears to hold fewer than 2,000 copies; about 97.75% remain north of the equator. However, many incunabula are sold at auction or through the rare book trade every year. Major collections The British Library's Incunabula Short
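As a quick arithmetic check of mine on the figures quoted above (all numbers are the section's own estimates, not new data):

```python
# Editions by format, as estimated from the ISTC data quoted above.
editions_by_format = {
    "broadside": 2_000, "folio": 9_000, "quarto": 15_000, "octavo": 3_000,
    "12mo": 18, "16mo": 230, "32mo": 20, "64mo": 3,
}

total = sum(editions_by_format.values())
assert total == 29_271   # the "29,000-odd editions" mentioned in the text

# With roughly 18 surviving copies per edition on average, the number of
# extant incunabula copies is on the order of half a million.
print(f"{total} editions, roughly {total * 18:,} surviving copies")
```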
In complex geometry, a line through the origin in the direction of an isotropic vector is an isotropic line. Isotropic coordinates Isotropic coordinates are coordinates on an isotropic chart for Lorentzian manifolds. Isotropy group An isotropy group is the group of isomorphisms from any object to itself in a groupoid. An isotropy representation is a representation of an isotropy group. Isotropic position A probability distribution over a vector space is in isotropic position if its covariance matrix is the identity. Isotropic vector field The vector field generated by a point source is said to be isotropic if, for any spherical neighborhood centered at the point source, the magnitude of the vector determined by any point on the sphere is invariant under a change in direction. For example, starlight appears to be isotropic. Physics Quantum mechanics or particle physics When a spinless particle (or even an unpolarized particle with spin) decays, the resulting decay distribution must be isotropic in the rest frame of the decaying particle regardless of the detailed physics of the decay. This follows from rotational invariance of the Hamiltonian, which in turn is guaranteed for a spherically symmetric potential. The kinetic theory of gases is another example of isotropy. It is assumed that the molecules move in random directions and as a consequence, there is an equal probability of a molecule moving in any direction. Thus when there are many molecules in the gas, with high probability there will be very similar numbers moving in one direction as in any other, demonstrating approximate isotropy. Fluid dynamics Fluid flow is isotropic if there is no directional preference (e.g. in fully developed 3D turbulence). An example of anisotropy is in flows with a background density, as gravity works in only one direction. The apparent surface separating two differing isotropic fluids would be referred to as an isotrope.
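As a sketch of my own (not from the source), a sample cloud can be put into approximately isotropic position by whitening it with the inverse symmetric square root of its empirical covariance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw anisotropic 2-D samples: the covariance is far from the identity.
cov = np.array([[4.0, 1.5],
                [1.5, 1.0]])
samples = rng.multivariate_normal(mean=[0, 0], cov=cov, size=5_000)

# Whiten: multiply by the inverse symmetric square root of the empirical
# covariance, which sends the covariance to (approximately) the identity.
emp_cov = np.cov(samples.T)
vals, vecs = np.linalg.eigh(emp_cov)
whitener = vecs @ np.diag(vals ** -0.5) @ vecs.T
isotropic = samples @ whitener

# The whitened cloud is in (approximately) isotropic position.
assert np.allclose(np.cov(isotropic.T), np.eye(2), atol=1e-6)
```

Because the whitener is built from the empirical covariance itself, the whitened covariance equals the identity up to numerical round-off; with the true covariance instead, it would hold only approximately for finite samples.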
Thermal expansion A solid is said to be isotropic if the expansion of the solid is equal in all directions when thermal energy is provided to it. Electromagnetics An isotropic medium is one in which the permittivity, ε, and permeability, μ, are uniform in all directions, the simplest instance being free space. Optics Optical isotropy means having the same optical properties in all directions. The individual reflectance or transmittance of the domains is averaged for micro-heterogeneous samples if the macroscopic reflectance or transmittance is to be calculated. This can be verified simply by investigating, e.g., a polycrystalline material under a polarizing microscope with the polarizers crossed: if the crystallites are larger than the resolution limit, they will be visible. Cosmology The Big Bang theory of the
evolution of the observable universe assumes that space is isotropic. It also assumes that space is homogeneous. These two assumptions together are known as the cosmological principle. As of 2006, the observations suggest that, on distance scales much larger than galaxies, galaxy clusters are "Great" features, but small compared to so-called multiverse scenarios. Here homogeneous means that the universe is the same everywhere (no preferred location) and isotropic implies that there is no preferred direction. Materials science In the study of mechanical properties of materials, "isotropic" means having identical values of a property in all directions. This definition is also used in geology and mineralogy. Glass and metals are examples of isotropic materials. Common anisotropic materials include wood, because its material properties are different parallel and perpendicular to the grain, and layered rocks such as slate. Isotropic materials are useful since they are easier to shape, and their behavior is easier to predict. Anisotropic materials can be tailored to the forces an object is expected to experience.
For example, the fibers in carbon fiber materials and rebars in reinforced concrete are oriented to withstand tension. Microfabrication In industrial processes, such as etching steps, isotropic means that the process proceeds at the same rate, regardless of direction. Simple chemical reaction and removal of a substrate by an acid, a solvent or a reactive gas
for Developing Countries supports research travel of mathematicians based in developing countries as well as mathematics research conferences in the developing world through its Grants Program, which is open to mathematicians throughout the developing world, including countries that are not (yet) members of the IMU. African Mathematics Millennium Science Initiative (AMMSI) is a network of mathematics centers in sub-Saharan Africa that organizes conferences and workshops, visiting lectureships and an extensive scholarship program for mathematics graduate students doing PhD work on the African continent. Mentoring African Research in Mathematics (MARM): IMU supported the London Mathematical Society (LMS) in founding the MARM programme, which supports mathematics and its teaching in the countries of sub-Saharan Africa via a mentoring partnership between mathematicians in the United Kingdom and African colleagues, together with their students. It focuses on cultivating long-term mentoring relations between individual mathematicians and students. Volunteer Lecturer Program (VLP) of IMU identifies mathematicians interested in contributing to the formation of young mathematicians in the developing world. The Volunteer Lecturer Program maintains a database of mathematics volunteers willing to offer month-long intensive courses at the advanced undergraduate or graduate level in degree programmes at universities in the developing world. IMU also seeks applications from universities and mathematics degree programmes in the developing world that are in need of volunteer lecturers, and that can provide the necessary conditions for productive collaboration in the teaching of advanced mathematics. IMU also supports the International Commission on Mathematical Instruction (ICMI) with its programmes, exhibits and workshops in emerging countries, especially in Asia and Africa.
IMU released a report in 2008, Mathematics in Africa: Challenges and Opportunities, on the current state of mathematics in Africa and on opportunities for new initiatives to support mathematical development. In 2014, the IMU's Commission for Developing Countries (CDC) released an update of the report. Reports about mathematics in Latin America and the Caribbean and in South East Asia were also published. In July 2014 IMU released the report The International Mathematical Union in the Developing World: Past, Present and Future. MENAO Symposium at the ICM In 2014, the IMU held a day-long symposium prior to the opening of the International Congress of Mathematicians (ICM), entitled Mathematics in Emerging Nations: Achievements and Opportunities (MENAO). Approximately 260 participants from around the world, including representatives of embassies, scientific institutions, private business and foundations, attended this session. Attendees heard inspiring stories of individual mathematicians and specific developing nations. Members Member Countries: Associate Members: Sociedad Ecuatoriana de Matemática - SEdeM, Mathematical Society of Kyrgyzstan, Mathematics Association of Kenya (MAK), Mathematical Association of Thailand, The Center for Promotion of Mathematical Research of Thailand (CEPMART), Committee for Mathematics of Cambodia, Mathematical Society of the Republic of Moldova, Committee for Mathematics of Nepal, Committee for Mathematics of Oman. Affiliate Members: African Mathematical Union (AMU), European Mathematical Society (EMS), South East Asian Mathematical Society (SEAMS), Unión Matemática de América Latina y el Caribe (UMALCA). Candidacies for Membership: currently there are no candidacies for membership. Presidents List of presidents of the International Mathematical Union from 1952 to the present:
1952–1954: Marshall Harvey Stone (vice: Émile Borel, Erich Kamke)
1955–1958: Heinz Hopf (vice: Arnaud Denjoy, W. V. D. Hodge)
1959–1962: Rolf Nevanlinna (vice: Pavel Alexandrov, Marston Morse)
1963–1966: Georges de Rham (vice: Henri Cartan, Kazimierz Kuratowski)
1967–1970: Henri Cartan (vice: Mikhail Lavrentyev, Deane Montgomery)
1971–1974: K. S. Chandrasekharan (vice: Abraham Adrian Albert, Lev Pontryagin)
1975–1978: Deane Montgomery (vice: J. W. S. Cassels, Miron Nicolescu, Gheorghe Vrânceanu)
1979–1982: Lennart Carleson (vice: Masayoshi Nagata, Yuri Vasilyevich Prokhorov)
1983–1986: Jürgen Moser (vice: Ludvig Faddeev, Jean-Pierre Serre)
1987–1990: Ludvig Faddeev (vice: Walter Feit, Lars Hörmander)
1991–1994: Jacques-Louis Lions (vice: John H. Coates, David Mumford)
1995–1998: David Mumford (vice: Vladimir Arnold, Albrecht Dold)
1999–2002: Jacob Palis (vice: Simon Donaldson, Shigefumi Mori)
2003–2006: John M. Ball (vice: Jean-Michel Bismut, Masaki Kashiwara)
2007–2010: László Lovász (vice: Zhi-Ming Ma, Claudio Procesi)
2011–2014: Ingrid Daubechies (vice: Christiane Rousseau, Marcelo Viana)
2015–2018: Shigefumi Mori (vice: Alicia Dickenstein, Vaughan Jones)
2019–2022: Carlos Kenig (vice: Nalini Joshi, Loyiso Nongxa)
|
The members of the IMU are national mathematics organizations from more than 80 countries. The objectives of the International Mathematical Union (IMU) are: promoting international cooperation in mathematics, supporting and assisting the International Congress of Mathematicians (ICM) and other international scientific meetings/conferences, acknowledging outstanding research contributions to mathematics through the awarding of scientific prizes, and encouraging and supporting other international mathematical activities considered likely to contribute to the development of mathematical science in any of its aspects, whether pure, applied, or educational. The IMU was established in 1920, but was dissolved in September 1932 and then re-established in 1950, de facto at the Constitutive Convention in New York and de jure on September 10, 1951, when ten countries had become members. The last milestone was the General Assembly in March 1952, in Rome, Italy, where the activities of the new IMU were inaugurated and the first Executive Committee, President, and various commissions were elected. In 1952 the IMU was also readmitted to the ICSU. The past president of the Union is Shigefumi Mori (2015–2018). The current president is Carlos Kenig. At the 16th meeting of the IMU General Assembly in Bangalore, India, in August 2010, Berlin was chosen as the location of the permanent office of the IMU, which was opened on January 1, 2011, and is hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS), an institute of the Gottfried Wilhelm Leibniz Scientific Community, with about 120 scientists engaging in mathematical research applied to complex problems in industry and commerce. Commissions and committees IMU has a close relationship to mathematics education through its International Commission on Mathematical Instruction (ICMI). This commission is organized similarly to IMU, with its own Executive Committee and General Assembly.
Developing countries are a high priority for the IMU, and a significant percentage of its budget, including grants received from individuals, mathematical societies, foundations, and funding agencies, is spent on activities for developing countries. Since 2011 this has been coordinated by the Commission for Developing Countries (CDC). The Committee for Women in Mathematics (CWM) is concerned with issues related to women in mathematics worldwide. It organizes the World Meeting for Women in Mathematics as a satellite event of the ICM. The International Commission on the History of Mathematics (ICHM) is operated jointly by the IMU and the Division of the History of Science (DHS) of the International Union of History and Philosophy of Science (IUHPS). The Committee on Electronic Information and Communication (CEIC) advises IMU on matters concerning mathematical information, communication, and publishing. Prizes The scientific prizes awarded by the IMU are deemed to be the highest distinctions in the mathematical world. The awards are presented at the opening ceremony of the International Congress of Mathematicians (ICM): the Fields Medals (two to four medals, awarded since 1936), the Rolf Nevanlinna Prize (since 1986), the Carl Friedrich Gauss Prize (since 2006), and the Chern Medal Award (since 2010). Membership and General Assembly The IMU's members are Member Countries, and each Member Country is represented through an Adhering Organization, which may be its principal academy, a mathematical society, its research council or some other institution or association of institutions, or an appropriate agency of its government. A country starting to develop its mathematical culture and interested in building links to mathematicians all over the world is invited to join IMU as an Associate Member.
For the purpose of facilitating jointly sponsored activities and jointly pursuing the objectives of the IMU, multinational mathematical societies and professional societies can join IMU as an Affiliate Member. Every four years the IMU membership gathers in a General Assembly (GA), which consists of delegates appointed by the Adhering Organizations, together with the members of the Executive Committee. All important decisions are made at the GA, including the election of officers, the establishment of commissions, the approval of the budget, and any changes to the statutes and by-laws. Organization and Executive Committee The International Mathematical Union is administered by an Executive Committee (EC), which conducts the business of the Union. The EC consists of
|
Council (IRC; 1919-1931). In 1998, Members agreed that the Council’s current composition and activities would be better reflected by modifying the name from the International Council of Scientific Unions to the International Council for Science, while its rich history and strong identity would be well served by retaining the existing acronym, ICSU. Universality of science The Principle of Freedom and Responsibility in Science: the free and responsible practice of science is fundamental to scientific advancement and human and environmental well-being. Such practice, in all its aspects, requires freedom of movement, association, expression and communication for scientists, as well as equitable access to data, information, and other resources for research. It requires responsibility at all levels to carry out and communicate scientific work with integrity, respect, fairness, trustworthiness, and transparency, recognizing its benefits and possible harms. In advocating the free and responsible practice of science, the council promotes equitable opportunities for access to science and its benefits, and opposes discrimination based on such factors as ethnic origin, religion, citizenship, language, political or other opinion, sex, gender identity, sexual orientation, disability, or age. The International Science Council's Committee on Freedom and Responsibility in Science (CFRS) "oversees this commitment
|
science for the benefit of society. To do this, the ICSU mobilized the knowledge and resources of the international scientific community to: Identify and address major issues of importance to science and society. Facilitate interaction amongst scientists across all disciplines and from all countries. Promote the participation of all scientists – regardless of race, citizenship, language, political stance, or gender – in the international scientific endeavour. Provide independent, authoritative advice to stimulate constructive dialogue between the scientific community and governments, civil society, and the private sector." Activities focused on three areas: International Research Collaboration, Science for Policy, and Universality of Science. History In July 2018, the ICSU became the International Science Council (ISC). The ICSU itself was one of the oldest non-governmental organizations in the world, representing the evolution and expansion of two earlier bodies known as the International Association of Academies (IAA; 1899-1914) and the International Research Council (IRC; 1919-1931).
|
of this meeting, making it one of the most important historical international collaborations of chemistry societies. Since this time, IUPAC has been the official organization charged with updating and maintaining official organic nomenclature. IUPAC as such was established in 1919. One notable country excluded from this early IUPAC was Germany. Germany's exclusion was a result of prejudice towards Germans by the Allied powers after World War I. Germany was finally admitted into IUPAC in 1929. However, Nazi Germany was removed from IUPAC during World War II. During World War II, IUPAC was affiliated with the Allied powers, but had little involvement in the war effort itself. After the war, East and West Germany were readmitted to IUPAC in 1973. Since World War II, IUPAC has focused on standardizing nomenclature and methods in science without interruption. In 2016, IUPAC denounced the use of chlorine as a chemical weapon. The organization raised its concerns in a letter to Ahmet Üzümcü, the director of the Organisation for the Prohibition of Chemical Weapons (OPCW), regarding the practice of using chlorine as a weapon in Syria, among other locations. The letter stated, "Our organizations deplore the use of chlorine in this manner. The indiscriminate attacks, possibly carried out by a member state of the Chemical Weapons Convention (CWC), is of concern to chemical scientists and engineers around the globe and we stand ready to support your mission of implementing the CWC." According to the CWC, "the use, stockpiling, distribution, development or storage of any chemical weapons is forbidden by any of the 192 state party signatories." Committees and governance IUPAC is governed by several committees that all have different responsibilities.
The committees are as follows: Bureau, CHEMRAWN (Chem Research Applied to World Needs) Committee, Committee on Chemistry Education, Committee on Chemistry and Industry, Committee on Printed and Electronic Publications, Evaluation Committee, Executive Committee, Finance Committee, Interdivisional Committee on Terminology, Nomenclature and Symbols, Project Committee, and Pure and Applied Chemistry Editorial Advisory Board. Each committee is made up of members of different National Adhering Organizations from different countries. The steering committee hierarchy for IUPAC is as follows: All committees have an allotted budget to which they must adhere. Any committee may start a project. If a project's spending becomes too much for a committee to continue funding, it must take the issue to the Project Committee. The Project Committee either increases the budget or decides on an external funding plan. The Bureau and Executive Committee oversee the operations of the other committees. Nomenclature The IUPAC committee has a long history of officially naming organic and inorganic compounds. IUPAC nomenclature is developed so that any compound can be named under one set of standardized rules to avoid duplicate names. The first publication on IUPAC nomenclature of organic compounds was A Guide to IUPAC Nomenclature of Organic Compounds in 1900, which contained information from the International Congress of Applied Chemistry. Basic spellings IUPAC establishes rules for harmonized spelling of some chemicals to reduce variation among different local English-language
|
Organizations and three Associate National Adhering Organizations. IUPAC's Inter-divisional Committee on Nomenclature and Symbols (IUPAC nomenclature) is the recognized world authority in developing standards for the naming of the chemical elements and compounds. Since its creation, IUPAC has been run by many different committees with different responsibilities. These committees run different projects which include standardizing nomenclature, finding ways to bring chemistry to the world, and publishing works. IUPAC is best known for its works standardizing nomenclature in chemistry, but IUPAC has publications in many science fields including chemistry, biology and physics. Some important work IUPAC has done in these fields includes standardizing nucleotide base sequence code names; publishing books for environmental scientists, chemists, and physicists; and improving education in science. IUPAC is also known for standardizing the atomic weights of the elements through one of its oldest standing committees, the Commission on Isotopic Abundances and Atomic Weights (CIAAW). Creation and history The need for an international standard for chemistry was first addressed in 1860 by a committee headed by German scientist Friedrich August Kekulé von Stradonitz. This committee was the first international conference to create an international naming system for organic compounds. The ideas that were formulated in that conference evolved into the official IUPAC nomenclature of organic chemistry. IUPAC stands as a legacy of this meeting.
|
in Monaco. During the 19th century, many maritime nations established hydrographic offices to provide means for improving the navigation of naval and merchant vessels by providing nautical publications, nautical charts, and other navigational services. There were substantial differences in hydrographic procedures, charts, and publications. In 1889, an International Maritime Conference was held at Washington, D.C., and it was proposed to establish a "permanent international commission." Similar proposals were made at the sessions of the International Congress of Navigation held at Saint Petersburg in 1908 and the International Maritime Conference held at Saint Petersburg in 1912. In 1919, the national Hydrographers of Great Britain and France cooperated in taking the necessary steps to convene an international conference of Hydrographers. London was selected as the most suitable place for this conference, and on 24 July 1919, the First International Conference opened, attended by the Hydrographers of 24 nations. The object of the conference was "To consider the advisability of all maritime nations adopting similar methods in preparation, construction, and production of their charts and all hydrographic publications; of rendering the results in the most convenient form to enable them to be readily used; of instituting a prompt system of mutual exchange of hydrographic information between all countries; and of providing an opportunity to consultations and discussions to be carried out on hydrographic subjects generally by the hydrographic experts of the world." This is still the major purpose of the IHO. As a result of the 1919 Conference, a permanent organization was formed and statutes for its operations were prepared. The IHB, now the IHO, began its activities in 1921 with 18 nations as members.
The Principality of Monaco was selected as the seat of the Organization as a result of the offer of Albert I of Monaco to provide suitable accommodation for the Bureau in the Principality. Functions The IHO develops hydrographic and nautical charting standards. These standards are subsequently adopted and used by its member countries and others in their surveys, nautical charts, and
|
The International Hydrographic Organization (IHO) is an intergovernmental organisation representing hydrography. As of January 2022 the IHO comprised 97 Member States. A principal aim of the IHO is to ensure that the world's seas, oceans and navigable waters are properly surveyed and charted. It does this through the setting of international standards, the co-ordination of the endeavours of the world's national hydrographic offices, and through its capacity building program. The IHO enjoys observer status at the United Nations, where it is the recognised competent authority on hydrographic surveying and nautical charting. When referring to hydrography and nautical charting in Conventions and similar Instruments, it is the IHO standards and specifications that are normally used. History The IHO was established in 1921 as the International Hydrographic Bureau (IHB). The present name was adopted in 1970, as part of a new international Convention on the IHO adopted by the then member nations. The former name International Hydrographic Bureau was retained to describe the IHO secretariat until 8 November 2016, when a significant revision to the Convention on the IHO entered into force. Thereafter the IHB became known as the "IHO Secretariat", comprising an elected Secretary-General and two supporting Directors, together with a small permanent staff (18 in 2020), at the Organization's headquarters in Monaco.
|
have their very expensive machines ($2M USD in the mid-1950s) sitting idle while operators set up jobs manually. These first operating systems were essentially scheduled work queues. It is generally thought that the first operating system used for real work was GM-NAA I/O, produced by General Motors' Research division in 1956. IBM enhanced one of GM-NAA I/O's successors, the SHARE Operating System, and provided it to customers under the name IBSYS. As software became more complex and important, the cost of supporting it on so many different designs became burdensome, and this was one of the factors which led IBM to develop System/360 and its operating systems. The second generation (transistor-based) products were a mainstay of IBM's business and IBM continued to make them for several years after the introduction of the System/360. (Some IBM 7094s remained in service into the 1980s.) Smaller machines Prior to System/360, IBM also sold computers smaller in scale that were not considered mainframes, though they were still bulky and expensive by modern standards. These included:
IBM 650 (vacuum tube logic, decimal architecture, drum memory, business and scientific)
IBM 305 RAMAC (vacuum tube logic, first computer with disk storage; see: Early IBM disk storage)
IBM 1400 series (business data processing; very successful and many 1400 peripherals were used with the 360s)
IBM 1620 (decimal architecture, engineering, scientific, and education)
IBM had difficulty getting customers to upgrade from the smaller machines to the mainframes because so much software had to be rewritten. The 7010 was introduced in 1962 as a mainframe-sized 1410. The later Systems 360 and 370 could emulate the 1400 machines. A desk-size machine with a different instruction set, the IBM 1130, was released concurrently with the System/360 to address the niche occupied by the 1620.
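The "scheduled work queue" model of those first operating systems can be sketched in a few lines of Python. This is a purely illustrative toy, not GM-NAA I/O's actual design; the job names and workloads are invented:

```python
from collections import deque

# Toy sketch of a first-generation batch monitor: jobs run strictly in
# submission order, one at a time, with no interactivity in between.
job_queue = deque()

def submit(name, work):
    """Queue a job, exactly as an operator would stack card decks."""
    job_queue.append((name, work))

def run_batch():
    """Drain the queue: each job runs to completion before the next starts."""
    results = []
    while job_queue:
        name, work = job_queue.popleft()
        results.append((name, work()))
    return results

submit("payroll", lambda: 2 + 2)
submit("inventory", lambda: 3 * 7)
print(run_batch())  # [('payroll', 4), ('inventory', 21)]
```

The point of the sketch is the scheduling discipline, not the jobs themselves: the monitor's only service is sequencing, which is why such systems cut the idle time between manually set-up jobs.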
It used the same EBCDIC character encoding as the 360 and was mostly programmed in Fortran, which was relatively easy to adapt to larger machines when necessary. IBM also introduced smaller machines after S/360. These included:
IBM System/7 (semiconductor memory, process control, incompatible replacement for IBM 1800)
IBM Series/1
IBM 3790
IBM 8100
IBM System/3 (introduced the 96-column card)
Midrange computer is a designation used by IBM for a class of computer systems which fall in between mainframes and microcomputers. IBM System/360 All that changed with the announcement of the System/360 (S/360) in April 1964. The System/360 was a single series of compatible models for both commercial and scientific use. The number "360" suggested a "360 degree," or "all-around" computer system. System/360 incorporated features which had previously been present only on either the commercial line (such as decimal arithmetic and byte addressing) or the engineering and scientific line (such as floating-point arithmetic). Some of the arithmetic units and addressing features were optional on some models of the System/360. However, models were upward compatible and most were also downward compatible. The System/360 was also the first computer in wide use to include dedicated hardware provisions for the use of operating systems. Among these were supervisor and application mode programs and instructions, as well as built-in memory protection facilities. Hardware memory protection was provided to protect the operating system from the user programs (tasks) and user tasks from each other. The new machine also had a larger address space than the older mainframes, 24 bits addressing 8-bit bytes vs. a typical 18 bits addressing 36-bit words. The smaller models in the System/360 line (e.g. the 360/30) were intended to replace the 1400 series while providing an easier upgrade path to the larger 360s.
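The address-space jump quoted above, and the EBCDIC encoding the 360 introduced, can both be checked with a few lines of Python. The 18-bit/36-bit figures are the "typical" older-machine values given in the text, and cp037 is Python's codec for US EBCDIC:

```python
# System/360 address space: 24-bit addresses, each naming an 8-bit byte.
s360_bytes = 2 ** 24                   # 16,777,216 bytes (16 MiB)

# Typical older scientific machine: 18-bit addresses naming 36-bit words.
old_words = 2 ** 18                    # 262,144 words
old_bytes_equiv = old_words * 36 // 8  # same capacity counted in 8-bit bytes

print(s360_bytes, old_bytes_equiv)     # 16777216 1179648

# EBCDIC, the 360's character set, survives as Python's cp037 codec;
# note that letters and digits do not match their ASCII byte values.
print("IBM 360".encode("cp037"))       # b'\xc9\xc2\xd4@\xf3\xf6\xf0'
```

So the 360's flat byte-addressed space was roughly fourteen times the raw capacity of an 18-bit word-addressed machine, which is the "larger address space" the text refers to.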
To smooth the transition from the second generation to the new line, IBM used the 360's microprogramming capability to emulate the more popular older models. Thus 360/30s with this added cost feature could run 1401 programs and the larger 360/65s could run 7094 programs. To run old programs, the 360 had to be halted and restarted in emulation mode. Many customers kept using their old software and one of the features of the later System/370 was the ability to
|
switch to emulation mode and back under operating system control. Operating systems for the System/360 family included OS/360 (with PCP, MFT, and MVT), BOS/360, TOS/360, and DOS/360. The System/360 later evolved into the System/370, the System/390, and the 64-bit zSeries, System z, and zEnterprise machines.
System/370 introduced virtual memory capabilities in all but the very first models. The OS/VS1 variant of OS/360 MFT, the OS/VS2 (SVS) variant of OS/360 MVT, and the DOS/VS variant of DOS/360 were introduced to use the virtual memory capabilities. They were followed by MVS, which, unlike the earlier virtual-memory operating systems, ran separate programs in separate address spaces rather than running all programs in a single virtual address space. The virtual memory capabilities also allowed the system to support virtual machines; the VM/370 hypervisor would run one or more virtual machines running either standard System/360 or System/370 operating systems or the single-user Conversational Monitor System (CMS). A time-sharing VM system could run multiple virtual machines, one per user, with each virtual machine running an instance of CMS. Today's systems The zSeries family, introduced in 2000 with the z900, included IBM's newly designed 64-bit z/Architecture. Processor units The different processors on current IBM mainframes are:
CP, Central Processor: general-purpose processor
IFL, Integrated Facility for Linux: dedicated to Linux OSes (optionally under z/VM)
ICF, Integrated Coupling Facility: designed to support Parallel Sysplex operations
SAP,
|
complex contains one of the world's only six-sided immersive virtual reality labs (C6), as well as the 240-seat 3D-capable Alliant Energy Lee Liu Auditorium, the Multimodal Experience Testbed and Laboratory (METaL), and the User Experience Lab (UX Lab). All of this supports the research of more than 50 faculty and 200 graduate, undergraduate, and postdoctoral students. The Plant Sciences Institute was founded in 1999. PSI's research focus is to understand the effects of genotype (genetic makeup) and environment on phenotypes (traits) sufficiently well that it will be able to predict the phenotype of a given genotype in a given environment. The institute is housed in the Roy J. Carver Co-Laboratory and is home to the Plant Sciences Institute Faculty Scholars program. There is also the Iowa State University Northeast Research Farm in Nashua. Campus Recognition Iowa State's campus contains over 160 buildings. Several buildings, as well as the Marston Water Tower, are listed on the National Register of Historic Places. The central campus includes trees, plants, and classically designed buildings. The landscape's most dominant feature is the central lawn, which was listed as a "medallion site" by the American Society of Landscape Architects in 1999, one of only three central campuses designated as such. The other two were Harvard University and the University of Virginia. Thomas Gaines, in The Campus As a Work of Art, proclaimed the Iowa State campus to be one of the twenty-five most beautiful campuses in the country. Gaines noted Iowa State's park-like expanse of central campus, and the use of trees and shrubbery to draw together ISU's varied building architecture. Over decades, campus buildings, including the Campanile, Beardshear Hall, and Curtiss Hall, circled and preserved the central lawn, creating a space where students study, relax, and socialize.
Campanile The campanile was constructed during 1897-1898 as a memorial to Margaret MacDonald Stanton, Iowa State's first dean of women, who died on July 25, 1895. The tower is located on ISU's central campus, just north of the Memorial Union. The site was selected by Margaret's husband, Edgar W. Stanton, with the help of then-university president William M. Beardshear. The campanile stands on a 16 by 16 foot (5 by 5 m) base, and cost $6,510.20 to construct. The campanile is widely seen as one of the major symbols of Iowa State University. It is featured prominently on the university's official ring and the university's mace, and is also the subject of the university's alma mater, The Bells of Iowa State. Lake LaVerne Named for Dr. LaVerne W. Noyes, an 1872 alumnus who also donated the funds to complete Alumni Hall after it sat unfinished and unused from 1905 to 1907. Lake LaVerne is located west of the Memorial Union and south of Alumni Hall, Carver Hall, and Music Hall. The lake was a gift from Dr. Noyes in 1916. Lake LaVerne is the home of two mute swans named Sir Lancelot and Elaine, donated to Iowa State by VEISHEA in 1935. In 1944, 1970, and 1971 cygnets (baby swans) made their home on Lake LaVerne. Sir Lancelot and Elaine were previously trumpeter swans, but these proved too aggressive and were replaced in 1999 with two mute swans. In early spring 2003, Lake LaVerne welcomed its current mute swan duo. In support of Iowa Department of Natural Resources efforts to re-establish trumpeter swans in Iowa, university officials avoided bringing a breeding pair of male and female mute swans to Iowa State, which means the current Sir Lancelot and Elaine are both female. Reiman Gardens Iowa State has maintained a horticulture garden since 1914. Reiman Gardens is the third location for these gardens. Today's gardens began in 1993 with a gift from Bobbi and Roy Reiman.
Construction began in 1994, and the Gardens were officially dedicated on September 16, 1995. Reiman Gardens has since grown into a site consisting of a dozen distinct garden areas, an indoor conservatory and an indoor butterfly "wing", butterfly emergence cases, a gift shop, and several supporting greenhouses. Located immediately south of Jack Trice Stadium on the ISU campus, Reiman Gardens is a year-round facility that has become one of the most visited attractions in central Iowa. The Gardens has received a number of national, state, and local awards since its opening, and its rose gardens are particularly noteworthy. In 2000 it was honored with the President's Award by All American Rose Selections, Inc., which is presented to one public garden in the United States each year for superior rose maintenance and display: “For contributing to the public interest in rose growing through its efforts in maintaining an outstanding public rose garden.” University museums The university museums consist of the Brunnier Art Museum, Farm House Museum, the Art on Campus Program, the Christian Petersen Art Museum, and the Elizabeth and Byron Anderson Sculpture Garden. The museums offer a multitude of unique exhibits, each promoting understanding and enjoyment of the visual arts while fostering interaction among the arts, sciences, and technology. Brunnier Art Museum The Brunnier Art Museum, Iowa's only accredited museum emphasizing a decorative arts collection, is one of the nation's few museums located within a performing arts and conference complex, the Iowa State Center. Founded in 1975, the museum is named after its benefactors, Iowa State alumnus Henry J. Brunnier and his wife Ann. The decorative arts collection they donated, called the Brunnier Collection, is extensive, consisting of ceramics, glass, dolls, ivory, jade, and enameled metals. 
Other fine and decorative art objects from the University Art Collection include prints, paintings, sculptures, textiles, carpets, wood objects, lacquered pieces, silver, and furniture. About eight to 12 changing exhibitions and permanent collection exhibitions each year provide educational opportunities for all ages, from learning the history of a quilt hand-stitched over 100 years ago to discovering how scientists analyze the physical properties of artists' materials, such as glass or stone. Lectures, receptions, conferences, university classes, panel discussions, gallery walks, and gallery talks are presented to assist with further interpretation of objects. Farm House Museum Located near the center of the Iowa State campus, the Farm House Museum stands as a monument to early Iowa State history and culture as well as a National Historic Landmark. As the first building on campus, the Farm House was built in 1860, before the campus was occupied by students or even classrooms. The college's first farm tenants primed the land for agricultural experimentation. This early practice led to Iowa State Agricultural College and Model Farm opening its doors to Iowa students for free in 1869 under the Morrill Act (or Land-grant Act) of 1862. Many prominent figures have made the Farm House their home throughout its 150 years of use. The first president of the college, Adonijah Welch, briefly stayed at the Farm House and even wrote his inaugural speech in a bedroom on the second floor. James “Tama Jim” Wilson resided at the Farm House with his family for much of the 1890s, until he joined President William McKinley's cabinet as U.S. Secretary of Agriculture. Agriculture Dean Charles Curtiss and his young family replaced Wilson and became the longest-term residents of the Farm House. In 1976, over 110 years after its initial construction, the Farm House became a museum after much time and effort was put into restoring the early beauty of the modest farm home. 
Today, faculty, students, and community members can enjoy the museum while honoring its significance in shaping a nationally recognized land-grant university. The museum holds a large collection of 19th- and early 20th-century decorative arts, furnishings, and material culture reflecting Iowa State and Iowa heritage. Objects include furnishings from Carrie Chapman Catt and Charles Curtiss, a wide variety of quilts, a modest collection of textiles and apparel, and various china and glassware items. As with many sites on the Iowa State University campus, the Farm House Museum has a few old myths and legends associated with it. There are rumors of a ghost rearranging silverware and dinnerware, unexplained rattling furniture, and curtains that have seemingly opened by themselves. The Farm House Museum is a unique on-campus educational resource, presenting changing exhibitions among the historical objects of its permanent collection. A walk through the Farm House Museum immerses visitors in the Victorian era (1860–1910) and also exhibits colorful Iowa and local Ames history. Art on Campus Collection Iowa State is home to one of the largest campus public art programs in the United States. Over 2,000 works of public art, including 600 by significant national and international artists, are located across campus in buildings, courtyards, open spaces, and offices. The traditional public art program began during the Depression in the 1930s, when Iowa State College president Raymond Hughes envisioned that "the arts would enrich and provide substantial intellectual exploration into our college curricula." Hughes invited Grant Wood to create the Library's agricultural murals, which speak to the founding of Iowa and of Iowa State College and Model Farm. He also offered Christian Petersen a one-semester sculptor residency to design and build the fountain and bas relief at the Dairy Industry Building. 
In 1955, 21 years later, Petersen retired, having created 12 major sculptures for the campus and hundreds of small studio sculptures. The Art on Campus Collection is a campus-wide resource of over 2,000 public works of art. Programs, receptions, dedications, university classes, Wednesday Walks, and educational tours are presented on a regular basis to enhance visual literacy and aesthetic appreciation of this diverse collection. Christian Petersen Art Museum The Christian Petersen Art Museum in Morrill Hall is named for the nation's first permanent campus artist-in-residence, Christian Petersen, who sculpted and taught at Iowa State from 1934 through 1955 and is considered the founding artist of the Art on Campus Collection. Named for Justin Smith Morrill, who created the Morrill Land-Grant Colleges Act, Morrill Hall was completed in 1891. Originally constructed to serve as a library, museum, and chapel, its original uses are engraved in the exterior stonework on the east side. The building was vacated in 1996 when it was determined unsafe, and it was listed in the National Register of Historic Places the same year. In 2005, $9 million was raised to renovate the building and convert it into a museum. Completed and reopened in March 2007, Morrill Hall is home to the Christian Petersen Art Museum. As part of University Museums, the Christian Petersen Art Museum at Morrill Hall is the home of the Christian Petersen Art Collection, the Art on Campus Program, the University Museums' Visual Literacy and Learning Program, and the Contemporary Changing Art Exhibitions Program. Located within the Christian Petersen Art Museum are the Lyle and Nancy Campbell Art Gallery, the Roy and Bobbi Reiman Public Art Studio Gallery, the Margaret Davidson Center for the Study of the Art on Campus Collection, the Edith D. and Torsten E. Lagerstrom Loaned Collections Center, and the Neva M. Petersen Visual Learning Gallery. University Museums shares the James R. and Barbara R. 
Palmer Small Objects Classroom in Morrill Hall. Anderson Sculpture Garden The Elizabeth and Byron Anderson Sculpture Garden is located by the Christian Petersen Art Museum at historic Morrill Hall. The sculpture garden design incorporates sculptures, a gathering arena, and sidewalks and pathways. Planted with perennials, ground cover, shrubs, and flowering trees, the landscape design provides a distinctive setting for important works of 20th and 21st century sculpture, primarily American. Ranging from forty-four inches to nearly nine feet high and from bronze to other metals, these works of art represent the richly diverse character of modern and contemporary sculpture. The sculpture garden is adjacent to Iowa State's central campus. Adonijah Welch, ISU's first president, envisioned a picturesque campus with a winding road encircling the college's majestic buildings, vast lawns of green grass, many varieties of trees sprinkled throughout to provide shade, and shrubbery and flowers for fragrance. Today, the central lawn continues to be an iconic place for all Iowa Staters, and enjoys national acclaim as one of the most beautiful campuses in the country. The new Elizabeth and Byron Anderson Sculpture Garden further enhances the beauty of Iowa State. Sustainability Iowa State's composting facility is capable of processing over 10,000 tons of organic waste every year. The school's $3 million revolving loan fund loans money for energy efficiency and conservation projects on campus. In the 2011 College Sustainability Report Card issued by the Sustainable Endowments Institute, the university received a B grade. Student life Residence halls Iowa State operates 20 on-campus residence halls. The residence halls are divided into geographical areas. The Union Drive Association (UDA) consists of four residence halls located on the west side of campus, including Friley Hall, which has been declared one of the largest residence halls in the country. 
The Richardson Court Association (RCA) consists of 12 residence halls on the east side of campus. The Towers Residence Association (TRA) halls are located south of the main campus. Two of the four towers, Knapp and Storms Halls, were imploded in 2005; however, Wallace and Wilson Halls still stand. Buchanan Hall and Geoffroy Hall are nominally considered part of the RCA, despite their distance from the other buildings. ISU operates two apartment complexes for upperclassmen, Frederiksen Court and SUV Apartments. Student government The governing body for ISU students is ISU Student Government. The ISU Student Government is composed of a president, vice president, finance director, a cabinet appointed by the president, a clerk appointed by the vice president, senators representing each college and residence area at the university, a nine-member judicial branch, and an election commission. Student organizations ISU has over 900 student organizations on campus that represent a variety of interests. Organizations are supported by Iowa State's Student Activities Center. Many student organization offices are housed in the Memorial Union. The Memorial Union at Iowa State University opened in September 1928 and is currently home to a number of university departments and student organizations, a bowling alley, the University Book Store, and the Hotel Memorial Union. The original building was designed by architect William T. Proudfoot. The building employs a classical style of architecture reflecting Greek and Roman influences. The building's design specifically complements the designs of the major buildings surrounding the university's central campus area: Beardshear Hall to the west, Curtiss Hall to the east, and MacKay Hall to the north. The style utilizes columns with Corinthian capitals, Palladian windows, triangular pediments, and formally balanced facades. 
Designed to be a living memorial for ISU students lost in World War I, the building includes a solemn memorial hall, named the Gold Star Room, which honors with names engraved in marble the ISU dead of World War I, World War II, Korea, Vietnam, and the War on Terrorism. Symbolically, the hall was built directly over a library (the Browsing Library) and a small chapel, the symbolism being that no country should ever send its young men to die in a war for a noble cause without a solid foundation in both education (the library) and religion (the chapel). Renovations and additions over the years have added elevators, bowling lanes, a parking ramp, a book store, a food court, and additional wings. Music The Choral Division of the Department of Music and Theater at Iowa State University consists of over 400 choristers in four main ensembles – the Iowa State Singers, Cantamus, the Iowa Statesmen, and Lyrica – and multiple small ensembles, including three a cappella groups: Count Me In (female), Shy of a Dozen (male), and "Hymn and Her" (co-ed). Greek community ISU is home to an active Greek community. There are 50 chapters that involve 14.6 percent of undergraduate students. Collectively, fraternity and sorority members have raised over $82,000 for philanthropies and committed 31,416 hours to community service. In 2006, the ISU Greek community was named the best large Greek community in the Midwest. The ISU Greek community has received multiple Jellison and Sutherland Awards from the Association for Fraternal Leadership and Values, formerly the Mid-American Greek Council Association. These awards recognize the top Greek communities in the Midwest. The first fraternity, Delta Tau Delta, was established at Iowa State in 1875, six years after the first graduating class entered Iowa State. The first sorority, I.C. Sorocis, was established only two years later, in 1877. I.C. Sorocis later became a chapter of the first national sorority at Iowa State, Pi Beta Phi. 
Anti-Greek rioting occurred in 1888. As reported in The Des Moines Register, "The anti-secret society men of the college met in a mob last night about 11 o'clock in front of the society rooms in chemical and physical hall, determined to break up a joint meeting of three secret societies." In 1891, President William Beardshear banned students from joining secret college fraternities, resulting in the eventual closing of all previously established fraternities. President Storms lifted the ban in 1904. Following the lifting of the fraternity ban, the first thirteen national fraternities (IFC) installed on the Iowa State campus between 1904 and 1913 were, in order, Sigma Nu, Sigma Alpha Epsilon, Beta Theta Pi, Phi Gamma Delta, Alpha Tau Omega, Kappa Sigma, Theta Xi, Acacia, Phi Sigma Kappa, Delta Tau Delta, Pi Kappa Alpha, and Phi Delta Theta. Though some have suspended their chapters at various times, eleven of the original thirteen fraternities were active in 2008. Many of these chapters had existed on campus as local fraternities before being reorganized as national fraternities prior to 1904. In the spring of 2014, it was announced that Alpha Phi sorority would be coming to Iowa State in the fall of 2014, with Delta Gamma sorority following in the near future. School newspaper The Iowa State Daily is the university's student newspaper. The Daily has its roots in a news sheet titled the Clipper, which was started in the spring of 1890 by a group of students at Iowa Agricultural College led by F.E. Davidson. The Clipper soon led to the creation of the Iowa Agricultural College Student, and the beginnings of what would one day become the Iowa State Daily. It was awarded the 2016 Best All-Around Daily Student Newspaper award by the Society of Professional Journalists. Campus radio 88.5 KURE is the university's student-run radio station. Programming for KURE includes ISU sports coverage, talk shows, the annual quiz contest Kaleidoquiz, and various music genres. 
Student television ISUtv is the university's student-run television station. It is housed in the former WOI-TV station, which was established in 1950. The student organization of ISUtv has many programs, including Newswatch, a twice-weekly news spot; Cyclone InCyders, the campus sports show; Fortnightly News, a satirical/comedy program; and Cy's Eyes on the Skies, a twice-weekly weather show. Athletics The "Cyclones" name dates back to 1895. That year, Iowa suffered an unusually high number of devastating cyclones (as tornadoes were called at the time). In September, Iowa Agricultural College's football team traveled to Northwestern University and defeated that team by a score of 36–0. The next day, the Chicago Tribune's headline read "Struck by a Cyclone: It Comes from Iowa and Devastates Evanston Town." The article began, "Northwestern might as well have tried to play football with an Iowa cyclone as with the Iowa team it met yesterday." The nickname stuck. The school colors are cardinal and gold. The mascot is Cy the Cardinal, introduced in 1954. Since a cyclone was determined to be difficult to depict in costume, the cardinal was chosen in reference to the school colors. A contest was held to select a name for the mascot, with the name Cy being chosen as the winner. The Iowa State Cyclones are a member of the Big 12 Conference and compete in NCAA Division I Football Bowl Subdivision (FBS), fielding 16 varsity teams in 12 sports. The Cyclones also compete in, and are a founding member of, the Central States Collegiate Hockey League of the American Collegiate Hockey Association. Iowa State's intrastate archrival is the University of Iowa, with whom it competes annually for the Iowa Corn Cy-Hawk Series trophy. Sponsored by the Iowa Corn Growers Association, the series includes all head-to-head regular season competitions between the two rival universities in all sports. 
Football Football first made its way onto the Iowa State campus in 1878 as a recreational sport, but it was not until 1892 that Iowa State organized its first team to represent the school in football. In 1894, college president William M. Beardshear spearheaded the foundation of an athletic association to officially sanction Iowa State football teams. The 1894 team finished with a 6–1 mark. The Cyclones compete each year for traveling trophies. Since 1977, Iowa State and Iowa have competed annually for the Cy-Hawk Trophy. Iowa State competes in an annual rivalry game against Kansas State known as Farmageddon and against former conference foe Missouri for the Telephone Trophy. The Cyclones play their home games at Jack Trice Stadium, named after Jack Trice, ISU's first African-American athlete and also the first and only Iowa State athlete to die from injuries sustained during athletic competition. Trice died three days after his first game.
The Atanasoff–Berry Computer, or the ABC, developed at Iowa State by John Vincent Atanasoff and Clifford Berry, is regarded as the first electronic digital computer. An ABC team consisting of Ames Laboratory and Iowa State engineers, technicians, researchers, and students unveiled a working replica of the Atanasoff–Berry Computer in 1997, which can be seen on display on campus in the Durham Computation Center. Birth of cooperative extension The Extension Service traces its roots to farmers' institutes developed at Iowa State in the late 19th century. Committed to community, Iowa State pioneered the outreach mission of being a land-grant college through creation of the first Extension Service in 1902. In 1906, the Iowa Legislature enacted the Agricultural Extension Act, making funds available for demonstration projects. It is believed this was the first specific legislation establishing state extension work, for which Iowa State assumed responsibility. The national extension program was created in 1914, based heavily on the Iowa State model. VEISHEA celebration Iowa State is widely known for VEISHEA, an annual education and entertainment festival that was held on campus each spring. The name VEISHEA was derived from the initials of ISU's five original colleges as the university existed when the festival was founded in 1922: Veterinary Medicine Engineering Industrial Science Home Economics Agriculture VEISHEA was the largest student-run festival in the nation, bringing tens of thousands of visitors to the campus each year. The celebration featured an annual parade and many open-house demonstrations of the university facilities and departments. Campus organizations exhibited products and technologies and held fundraisers for various charity groups. 
In addition, VEISHEA brought speakers, lecturers, and entertainers to Iowa State; throughout its more than eight-decade history, it hosted such distinguished guests as Bob Hope, John Wayne, Presidents Harry Truman, Ronald Reagan, and Lyndon Johnson, and performers Diana Ross, Billy Joel, Sonny and Cher, The Who, The Goo Goo Dolls, Bobby V, and The Black Eyed Peas. The 2007 VEISHEA festivities marked the start of Iowa State's year-long sesquicentennial celebration. On August 8, 2014, President Steven Leath announced that VEISHEA would no longer be an annual event at Iowa State and that the name VEISHEA would be retired. Manhattan Project Iowa State played a role in the development of the atomic bomb during World War II as part of the Manhattan Project, a research and development program begun in 1942 under the Army Corps of Engineers. The process to produce large quantities of high-purity uranium metal became known as the Ames process. One-third of the uranium metal used in the world's first controlled nuclear chain reaction was produced at Iowa State under the direction of Frank Spedding and Harley Wilhelm. The Ames Project received the Army/Navy E Award for Excellence in Production on October 12, 1945, for its work with metallic uranium as a vital war material. Today, ISU is the only university in the United States that has a U.S. Department of Energy research laboratory physically located on its campus. Research Ames Laboratory Operated by Iowa State, the Ames Laboratory is one of ten national DOE Office of Science research laboratories. ISU research for the government provided Ames Laboratory its start in the 1940s with the development of a highly efficient process for producing high-purity uranium for atomic energy. 
Today, Ames Laboratory remains a leader in materials research, focusing its fundamental and applied research strengths on issues of national concern, cultivating research talent, and developing and transferring technologies to improve industrial competitiveness and enhance U.S. economic security. Ames Laboratory employs more than 430 full- and part-time employees, including more than 250 scientists and engineers. Students make up more than 20 percent of the paid workforce. The Ames Laboratory is the U.S. home of 2011 Nobel Prize in Chemistry winner Dan Shechtman and is intensely engaged with the international scientific community, including hosting a large number of international visitors each year. ISU Research Park The ISU Research Park is a 230-acre development with over 270,000 square feet of building space located just south of the Iowa State campus in Ames. Though closely connected with the university, the research park operates independently to help tenants reach their proprietary goals, linking technology creation, business formation, and development assistance with established technology firms and the marketplace. The ISU Research Park Corporation was established in 1987 as an independent, not-for-profit corporation operating under a board of directors appointed by Iowa State University and the ISU Foundation. The corporation manages both the Research Park and incubator programs. Other research institutes Iowa State is involved in a number of other significant research and creative endeavors, multidisciplinary collaboration, technology transfer, and strategies addressing real-world problems. In 2010, the Biorenewables Research Laboratory opened in a LEED-Gold certified building that complements and helps replace labs and offices across Iowa State and promotes interdisciplinary, systems-level research and collaboration. 
The Lab houses the Bioeconomy Institute, the Biobased Industry Center, and the National Science Foundation Engineering Research Center for Biorenewable Chemicals, a partnership of six universities as well as the Max Planck Society in Germany and the Technical University of Denmark. The Engineering Teaching and Research Complex was built in 1999 and is home to Stanley and Helen Howe Hall and Gary and Donna Hoover Hall. The complex is occupied by the Virtual Reality Applications Center (VRAC), Center for Industrial Research and Service (CIRAS), Department of Aerospace Engineering and Engineering Mechanics, Department of Materials Science and Engineering, Engineering Computer Support Services, Engineering Distance Education, and Iowa Space Grant Consortium. And the complex contains one of the world's only six-sided immersive virtual reality labs (C6), as well as the 240 seat 3D-capable Alliant Energy Lee Liu Auditorium, the Multimodal Experience Testbed and Laboratory (METaL), and the User Experience Lab (UX Lab). All of which supports the research of more than 50 faculty and 200 graduate, undergraduate, and postdoctoral students. The Plant Sciences Institute was founded in 1999. PSI's research focus is to understand the effects of genotype (genetic makeup) and environment on phenotypes (traits) sufficiently well that it will be able to predict the phenotype of a given genotype in a given environment. The institute is housed in the Roy J. Carver Co-Laboratory and is home to the Plant Sciences Institute Faculty Scholars program. There is also the Iowa State University Northeast Research Farm in Nashua. Campus Recognition Iowa State's campus contains over 160 buildings. Several buildings, as well as the Marston Water Tower, are listed on the National Register of Historic Places. The central campus includes of trees, plants, and classically designed buildings. 
The landscape's most dominant feature is the central lawn, which was listed as a "medallion site" by the American Society of Landscape Architects in 1999, one of only three central campuses designated as such. The other two were Harvard University and the University of Virginia. Thomas Gaines, in The Campus As a Work of Art, proclaimed the Iowa State campus to be one of the twenty-five most beautiful campuses in the country. Gaines noted Iowa State's park-like expanse of central campus, and the use of trees and shrubbery to draw together ISU's varied building architecture. Over decades, campus buildings, including the Campanile, Beardshear Hall, and Curtiss Hall, circled and preserved the central lawn, creating a space where students study, relax, and socialize. Campanile The campanile was constructed during 1897-1898 as a memorial to Margaret MacDonald Stanton, Iowa State's first dean of women, who died on July 25, 1895. The tower is located on ISU's central campus, just north of the Memorial Union. The site was selected by Margaret's husband, Edgar W. Stanton, with the help of then-university president William M. Beardshear. The campanile stands tall on a 16 by 16 foot (5 by 5 m) base, and cost $6,510.20 to construct. The campanile is widely seen as one of the major symbols of Iowa State University. It is featured prominently on the university's official ring and the university's mace, and is also the subject of the university's alma mater, The Bells of Iowa State. Lake LaVerne Named for Dr. LaVerne W. Noyes, who also donated the funds to see that Alumni Hall could be completed after sitting unfinished and unused from 1905 to 1907. Dr. Noyes is an 1872 alumnus. Lake LaVerne is located west of the Memorial Union and south of Alumni Hall, Carver Hall, and Music Hall. The lake was a gift from Dr. Noyes in 1916. Lake LaVerne is the home of two mute swans named Sir Lancelot and Elaine, donated to Iowa State by VEISHEA 1935. 
In 1944, 1970, and 1971 cygnets (baby swans) made their home on Lake LaVerne. Previously Sir Lancelot and Elaine were trumpeter swans but were too aggressive and in 1999 were replaced with two mute swans. In early spring 2003, Lake LaVerne welcomed its newest and most current mute swan duo. In support of Iowa Department of Natural Resources efforts to re-establish the trumpeter swans in Iowa, university officials avoided bringing breeding pairs of male and female mute swans to Iowa State which means the current Sir Lancelot and Elaine are both female. Reiman Gardens Iowa State has maintained a horticulture garden since 1914. Reiman Gardens is the third location for these gardens. Today's gardens began in 1993 with a gift from Bobbi and Roy Reiman. Construction began in 1994 and the Gardens' initial were officially dedicated on September 16, 1995. Reiman Gardens has since grown to become a site consisting of a dozen distinct garden areas, an indoor conservatory and an indoor butterfly "wing", butterfly emergence cases, a gift shop, and several supporting greenhouses. Located immediately south of Jack Trice Stadium on the ISU campus, Reiman Gardens is a year-round facility that has become one of the most visited attractions in central Iowa. The Gardens has received a number of national, state, and local awards since its opening, and its rose gardens are particularly noteworthy. It was honored with the President's Award in 2000 by All American Rose Selections, Inc., which is presented to one public garden in the United States each year for superior rose maintenance and display: “For contributing to the public interest in rose growing through its efforts in maintaining an outstanding public rose garden.” University museums The university museums consist of the Brunnier Art Museum, Farm House Museum, the Art on Campus Program, the Christian Petersen Art Museum, and the Elizabeth and Byron Anderson Sculpture Garden. 
The Museums include a multitude of unique exhibits, each promoting the understanding and delight of the visual arts as well as attempt to incorporate a vast interaction between the arts, sciences, and technology. Brunnier Art Museum The Brunnier Art Museum, Iowa's only accredited museum emphasizing a decorative arts collection, is one of the nation's few museums located within a performing arts and conference complex, the Iowa State Center. Founded in 1975, the museum is named after its benefactors, Iowa State alumnus Henry J. Brunnier and his wife Ann. The decorative arts collection they donated, called the Brunnier Collection, is extensive, consisting of ceramics, glass, dolls, ivory, jade, and enameled metals. Other fine and decorative art objects from the University Art Collection include prints, paintings, sculptures, textiles, carpets, wood objects, lacquered pieces, silver, and furniture. About eight to 12 annual changing exhibitions and permanent collection exhibitions provide educational opportunities for all ages, from learning the history of a quilt hand-stitched over 100 years ago to discovering how scientists analyze the physical properties of artists' materials, such as glass or stone. Lectures, receptions, conferences, university classes, panel discussions, gallery walks, and gallery talks are presented to assist with further interpretation of objects. Farm House Museum Located near the center of the Iowa State campus, the Farm House Museum sits as a monument to early Iowa State history and culture as well as a National Historic Landmark. As the first building on campus, the Farm House was built in 1860 before campus was occupied by students or even classrooms. The college's first farm tenants primed the land for agricultural experimentation. This early practice lead to Iowa State Agricultural College and Model Farm opening its doors to Iowa students for free in 1869 under the Morrill Act (or Land-grant Act) of 1862. 
Many prominent figures have made the Farm House their home throughout its 150 years of use. The first president of the college, Adonijah Welch, briefly stayed at the Farm House and even wrote his inaugural speech in a bedroom on the second floor. James “Tama Jim” Wilson resided for much of the 1890s with his family at the Farm House until he joined President William McKinley's cabinet as U.S. Secretary of Agriculture. Agriculture Dean Charles Curtiss and his young family replaced Wilson and became the longest resident of Farm House. In 1976, over 110 years after the initial construction, the Farm House became a museum after much time and effort was put into restoring the early beauty of the modest farm home. Today, faculty, students, and community members can enjoy the museum while honoring its significance in shaping a nationally recognized land-grant university. Its collection boasts a large collection of 19th and early 20th century decorative arts, furnishings and material culture reflecting Iowa State and Iowa heritage. Objects include furnishings from Carrie Chapman Catt and Charles Curtiss, a wide variety of quilts, a modest collection of textiles and apparel, and various china and glassware items. As with many sites on the Iowa State University Campus, The Farm House Museum has a few old myths and legends associated with it. There are rumors of a ghost changing silverware and dinnerware, unexplained rattling furniture, and curtains that have opened seemingly by themselves. The Farm House Museum is a unique on-campus educational resource providing a changing environment of exhibitions among the historical permanent collection objects that are on display. A walk through the Farm House Museum immerses visitors in the Victorian era (1860–1910) as well as exhibits colorful Iowa and local Ames history. Art on Campus Collection Iowa State is home to one of the largest campus public art programs in the United States. 
Over 2,000 works of public art, including 600 by significant national and international artists, are located across campus in buildings, courtyards, open spaces and offices. The traditional public art program began during the Depression in the 1930s when Iowa State College's President Raymond Hughes envisioned that "the arts would enrich and provide substantial intellectual exploration into our college curricula." Hughes invited Grant Wood to create the Library's agricultural murals that speak to the founding of Iowa and Iowa State College and Model Farm. He also offered Christian Petersen a one-semester sculptor residency to design and build the fountain and bas relief at the Dairy Industry Building. In 1955, 21 years later, Petersen retired having created 12 major sculptures for the campus and hundreds of small studio sculptures. The Art on Campus Collection is a campus-wide resource of over 2,000 public works of art. Programs, receptions, dedications, university classes, Wednesday Walks, and educational tours are presented on a regular basis to enhance visual literacy and aesthetic appreciation of this diverse collection. Christian Petersen Art Museum The Christian Petersen Art Museum in Morrill Hall is named for the nation's first permanent campus artist-in-residence, Christian Petersen, who sculpted and taught at Iowa State from 1934 through 1955, and is considered the founding artist of the Art on Campus Collection. Named for Justin Smith Morrill, who created the Morrill Land-Grant Colleges Act, Morrill Hall was completed in 1891. Originally constructed to serve as a library, museum, and chapel, the building has its original uses engraved in the exterior stonework on the east side. The building was vacated in 1996 after it was deemed unsafe, and it was listed in the National Register of Historic Places the same year. In 2005, $9 million was raised to renovate the building and convert it into a museum. 
Completed and reopened in March 2007, Morrill Hall is home to the Christian Petersen Art Museum. As part of University Museums, the Christian Petersen Art Museum at Morrill Hall is the home of the Christian Petersen Art Collection, the Art on Campus Program, the University Museums' Visual Literacy and Learning Program, and the Contemporary Changing Art Exhibitions Program. Located within the Christian Petersen Art Museum are the Lyle and Nancy Campbell Art Gallery, the Roy and Bobbi Reiman Public Art Studio Gallery, the Margaret Davidson Center for the Study of the Art on Campus Collection, the Edith D. and Torsten E. Lagerstrom Loaned Collections Center, and the Neva M. Petersen Visual Learning Gallery. University Museums shares the James R. and Barbara R. Palmer Small Objects Classroom in Morrill Hall. Anderson Sculpture Garden The Elizabeth and Byron Anderson Sculpture Garden is located by the Christian Petersen Art Museum at historic Morrill Hall. The sculpture garden design incorporates sculptures, a gathering arena, and sidewalks and pathways. Planted with perennials, ground cover, shrubs, and flowering trees, the landscape design provides a distinctive setting for important works of 20th and 21st century sculpture, primarily American. Ranging in height from forty-four inches to nearly nine feet and rendered in bronze and other metals, these works of art represent the richly diverse character of modern and contemporary sculpture. The sculpture garden is adjacent to Iowa State's central campus. Adonijah Welch, ISU's first president, envisioned a picturesque campus with a winding road encircling the college's majestic buildings, vast lawns of green grass, many varieties of trees sprinkled throughout to provide shade, and shrubbery and flowers for fragrance. Today, the central lawn continues to be an iconic place for all Iowa Staters, and enjoys national acclaim as one of the most beautiful campuses in the country. 
The new Elizabeth and Byron Anderson Sculpture Garden further enhances the beauty of Iowa State. Sustainability Iowa State's composting facility is capable of processing over 10,000 tons of organic waste every year. The school's $3 million revolving loan fund provides loans for energy efficiency and conservation projects on campus. In the 2011 College Sustainability Report Card issued by the Sustainable Endowments Institute, the university received a B grade. Student life Residence halls Iowa State operates 20 on-campus residence halls. The residence halls are divided into geographical areas. The Union Drive Association (UDA) consists of four residence halls located on the west side of campus, including Friley Hall, which has been declared one of the largest residence halls in the country. The Richardson Court Association (RCA) consists of 12 residence halls on the east side of campus. The Towers Residence Association (TRA) is located south of the main campus. Two of the four towers, Knapp and Storms Halls, were imploded in 2005; however, Wallace and Wilson Halls still stand. Buchanan Hall and Geoffroy Hall are nominally considered part of the RCA, despite their distance from the other buildings. ISU operates two apartment complexes for upperclassmen, Frederiksen Court and SUV Apartments. Student government The governing body for ISU students is ISU Student Government. The ISU Student Government is composed of a president, vice president, finance director, a cabinet appointed by the president, a clerk appointed by the vice president, senators representing each college and residence area at the university, a nine-member judicial branch, and an election commission. Student organizations ISU has over 900 student organizations on campus that represent a variety of interests. Organizations are supported by
|
by which a gene product is either induced or inhibited

Chemistry
Induction period, the time interval between cause and measurable effect
Inductive cleavage, in organic chemistry
Inductive effect, the redistribution of electron density through molecular sigma bonds
Asymmetric induction, the formation of one specific stereoisomer in the presence of a nearby chiral center

Computing
Grammar induction, in computing
Inductive bias, in computing
Inductive probability, in computing
Inductive programming, in computing
Rule induction, in computing
Word-sense induction, in computing

Mathematics
Backward induction in game theory and economics
Induced representation, in representation theory
Mathematical induction, a method of proof in the field of mathematics
Parabolic induction, a method of constructing group representations
Statistical induction, also known as statistical inference
Strong induction, or complete induction, a variant of mathematical induction
Structural induction, a generalization of mathematical induction
Transfinite induction, a kind
|
induction and inhibition, a process in which a molecule induces the expression of an enzyme
Morphogenesis, the biological process that causes an organism to develop its shape
Regulation of gene expression, the means by which a gene product is either induced or inhibited
|
(Australia, Brazil, Czecho-Slovakia, Denmark, the Netherlands, Norway, Poland, Romania, South Africa, and Spain) had joined the Union, bringing the total membership to 19 countries. Although the Union was officially formed eight months after the end of World War I, international collaboration in astronomy had been strong in the pre-war era (e.g., the Astronomische Gesellschaft Katalog projects since 1868, the Astrographic Catalogue since 1887, and the International Union for Solar research since 1904). The first 50 years of the Union's history are well documented. Subsequent history is recorded in the form of reminiscences of past IAU Presidents and General Secretaries. Twelve of the fourteen past General Secretaries in the period 1964-2006 contributed their recollections of the Union's history in IAU Information Bulletin No. 100. Six past IAU Presidents in the period 1976–2003 also contributed their recollections in IAU Information Bulletin No. 104. Composition As of 1 August 2019, the IAU has a total of 13,701 individual members, who are professional astronomers from 102 countries worldwide; 81.7% of individual members are male, while 18.3% are female. Membership also includes 82 national members, professional astronomical communities representing their country's affiliation with the IAU. National members include the Australian Academy of Science, the Chinese Astronomical Society, the French Academy of Sciences, the Indian National Science Academy, the National Academies (United States), the National Research Foundation of South Africa, the National Scientific and Technical Research Council (Argentina), KACST (Saudi Arabia), the Council of German Observatories, the Royal Astronomical Society (United Kingdom), the Royal Astronomical Society of New Zealand, the Royal Swedish Academy of Sciences, the Russian Academy of Sciences, and the Science Council of Japan, among many others. The sovereign body of the IAU is its General Assembly, which comprises all members. 
The Assembly determines IAU policy, approves the Statutes and By-Laws of the Union (and amendments proposed thereto) and elects various committees. The right to vote on matters brought before the Assembly varies according to the type of business under discussion. The Statutes consider such business to be divided into two categories: issues of a "primarily scientific nature" (as determined by the Executive Committee), upon which voting is restricted to individual members, and all other matters (such as Statute revision and procedural questions), upon which voting is restricted to the representatives of national members. On budget matters (which fall into the second category), votes are weighted according to the relative subscription levels of the national members. A second category vote requires a turnout of at least two-thirds of national members to be valid. An absolute majority is sufficient for approval in any vote, except for Statute revision which requires a two-thirds majority. An equality of votes is resolved by the vote of the President of the Union. 
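The second-category voting rules described above (a two-thirds turnout requirement, an absolute majority for ordinary business, a two-thirds majority for Statute revision) can be modeled as simple arithmetic checks. The sketch below is a hypothetical illustration, not an official IAU tool: the function name and inputs are assumptions, the President's tie-breaking casting vote and the subscription-based weighting of budget votes are deliberately omitted.

```python
# Illustrative model of the IAU "second category" vote rules: turnout of at
# least two-thirds of national members is required for validity; an absolute
# majority of votes cast carries ordinary business; Statute revision needs a
# two-thirds majority. Exact rational arithmetic avoids float rounding.
from fractions import Fraction

def second_category_vote(votes_for, votes_against, total_national_members,
                         statute_revision=False):
    """Return True if the motion passes, False if it fails.

    Raises ValueError when turnout falls below two-thirds of national
    members, in which case the vote is not valid at all. Ties (handled in
    practice by the President's casting vote) are not modeled here.
    """
    turnout = votes_for + votes_against
    if Fraction(turnout, total_national_members) < Fraction(2, 3):
        raise ValueError("vote invalid: turnout below two-thirds of members")
    share = Fraction(votes_for, turnout)
    if statute_revision:
        return share >= Fraction(2, 3)   # two-thirds majority required
    return share > Fraction(1, 2)        # absolute majority of votes cast
```

For example, with 82 national members, 60 votes cast clears the two-thirds turnout threshold (about 54.7), so a 31–29 split carries ordinary business, while a Statute revision would need at least 40 of those 60 votes.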
List of national members

Africa: Algeria; Egypt; Ethiopia; Ghana; Madagascar (Observer); Morocco (Observer); Mozambique (Observer); Nigeria; South Africa

Asia: Armenia; People's Republic of China; Republic of China; Cyprus; Georgia (Suspended); India; Indonesia; Iran (Suspended); Israel; Japan; Jordan; Kazakhstan; Korea, Democratic People's Republic of (Interim); Korea, Republic of; Lebanon (Interim); Malaysia; Mongolia (Interim); Philippines; Palestine; Saudi Arabia (Suspended); Syria (Observer); Tajikistan; Thailand; Turkey; United Arab Emirates; Vietnam (Interim)

Europe: Austria; Belgium; Bulgaria; Denmark; Croatia; Czech Republic; Estonia; Finland; France; Germany; Greece; Hungary; Iceland; Ireland; Italy; Latvia; Lithuania; Netherlands; Norway; Poland; Portugal; Romania; Russian Federation; Serbia; Slovakia; Slovenia; Spain; Sweden; Switzerland; Ukraine; United Kingdom; Vatican City State

North America: Canada; Costa Rica (Interim); Honduras (Interim); Mexico; Panama (Interim); United States

Oceania: Australia; New Zealand

South America: Argentina; Bolivia (Suspended); Brazil; Chile; Colombia; Peru (Suspended); Uruguay (Observer); Venezuela

Terminated national members: Azerbaijan; Cuba; North Macedonia; Uzbekistan

General Assemblies Since 1922, the IAU General Assembly has met every three years, except for the period between 1938 and 1948, due to World War II. After a Polish request in 1967, and by a controversial decision of the then President of the IAU, an Extraordinary IAU General Assembly was held in September 1973 in Warsaw, Poland, to commemorate the 500th anniversary of the birth of Nicolaus Copernicus, soon after the regular 1973 GA had been held in Sydney, Australia. List of the presidents of the IAU Sources. Commission 46: Education in astronomy Commission
|
nongovernmental organisation with the objective of advancing astronomy in all aspects, including promoting astronomical research, outreach, education, and development through global cooperation. It was founded in 1919 and is based in Paris, France. The IAU is composed of individual members, who include both professional astronomers and junior scientists, and national members, such as professional associations, national societies, or academic institutions. Individual members are organised into divisions, committees, and working groups centered on particular subdisciplines, subjects, or initiatives. As of 2018, the Union had over 13,700 individual members, spanning 90 countries, and 82 national members. Among the key activities of the IAU is serving as a forum for scientific conferences. It sponsors nine annual symposia and holds a triennial General Assembly that sets policy and includes various scientific meetings. The Union is best known for being the leading authority in assigning official names and designations to astronomical objects, and for setting uniform definitions for astronomical principles. It also coordinates with national and international partners, such as UNESCO, to fulfill its mission. The IAU is a member of the International Science Council (ISC), which is composed of international scholarly and scientific institutions and national academies of sciences. Function The International Astronomical Union is an international association of professional astronomers, at the PhD level and beyond, active in professional research and education in astronomy. Among other activities, it acts as the recognized authority for assigning designations and names to celestial bodies (stars, planets, asteroids, etc.) and any surface features on them. The IAU is a member of the International Science Council (ISC). Its main objective is to promote and safeguard the science of astronomy in all its aspects through international cooperation. 
The IAU maintains friendly relations with organizations that include amateur astronomers in their membership. The IAU has its head office on the second floor of the Institut d'Astrophysique de Paris in the 14th arrondissement of Paris. The organisation has many working groups; examples include the Working Group for Planetary System Nomenclature (WGPSN), which maintains the astronomical naming conventions and planetary nomenclature for planetary bodies, and the Working Group on Star Names (WGSN), which catalogues and standardizes proper names for stars. The IAU is also responsible for the system of astronomical telegrams, which are produced and distributed on its behalf by the Central Bureau for Astronomical Telegrams. The Minor Planet Center also operates under the IAU, and is a "clearinghouse" for all non-planetary or non-moon bodies in the Solar System. History The IAU was founded on 28 July 1919, at the Constitutive Assembly of the International Research Council (now the International Science Council) held in Brussels, Belgium. Two subsidiaries of the IAU were also created at this assembly: the International Time Commission seated at the International Time Bureau in Paris, France, and the International Central Bureau of Astronomical Telegrams initially seated in Copenhagen, Denmark. The seven initial member states were Belgium, Canada, France, Great Britain, Greece, Japan, and the United States, soon to be followed by Italy and Mexico. The first executive committee consisted of Benjamin Baillaud (President, France), Alfred Fowler (General Secretary, UK), and four vice presidents: William Campbell (USA), Frank Dyson (UK), Georges Lecointe (Belgium), and Annibale Riccò (Italy). Thirty-two Commissions (referred to initially as Standing Committees) were appointed at the Brussels meeting and focused on topics ranging from relativity to minor planets. 
The reports of these 32 Commissions formed the main substance of the first General Assembly, which took place in Rome, Italy, 2–10 May 1922. By the end of the first General Assembly, ten additional nations (Australia, Brazil, Czecho-Slovakia, Denmark, the Netherlands, Norway, Poland, Romania, South Africa, and Spain) had joined the Union, bringing the total membership to 19 countries.
|
its generalization from numbers to arbitrary partially ordered sets
A statistical level of measurement
Interval estimate
Interval (graph theory)
Space-time interval, the distance between two points in 4-space

Arts and entertainment

Dramatic arts
Intermission (British English: interval), a break in a theatrical performance
Entr'acte, a French term for the same, but used in English often to mean a musical performance played during the break
Interval (play), a 1939 play by Sumner Locke Elliott
Interval (film), a
|
Interval (film), a 1973 film starring Merle Oberon

Music
Interval (music), the relationship in pitch between two notes
Intervals (band), a Canadian progressive metal band
Intervals (See You Next Tuesday album), 2008
Intervals (Ahmad Jamal album), 1980

Sport
Playing time (cricket)#Intervals, the breaks between play in cricket
Interval training, a
|
Libyan head of state Muammar Gaddafi, President Laurent Gbagbo of Ivory Coast and former Vice President Jean-Pierre Bemba of the Democratic Republic of the Congo. The ICC has faced a number of criticisms from states and society, including objections about its jurisdiction, accusations of bias, questioning of the fairness of its case-selection and trial procedures, as well as doubts about its effectiveness. History The establishment of an international tribunal to judge political leaders accused of international crimes was first proposed by the Commission of Responsibilities during the Paris Peace Conference in 1919, following the First World War. The issue was addressed again at a conference held in Geneva under the auspices of the League of Nations in 1937, which resulted in the conclusion of the first convention stipulating the establishment of a permanent international court to try acts of international terrorism. The convention was signed by 13 states, but none ratified it and the convention never entered into force. Following the Second World War, the Allied powers established two ad hoc tribunals to prosecute Axis leaders accused of war crimes. The International Military Tribunal, which sat in Nuremberg, prosecuted German leaders while the International Military Tribunal for the Far East in Tokyo prosecuted Japanese leaders. In 1948 the United Nations General Assembly first recognised the need for a permanent international court to deal with atrocities of the kind prosecuted after the Second World War. At the request of the General Assembly, the International Law Commission (ILC) drafted two statutes by the early 1950s, but these were shelved during the Cold War, which made the establishment of an international criminal court politically unrealistic. Benjamin B. 
Ferencz, an investigator of Nazi war crimes after the Second World War, and the Chief Prosecutor for the United States Army at the Einsatzgruppen Trial, became a vocal advocate of the establishment of international rule of law and of an international criminal court. In his first book published in 1975, entitled Defining International Aggression: The Search for World Peace, he advocated for the establishment of such a court. A second major advocate was Robert Kurt Woetzel, who co-edited Toward a Feasible International Criminal Court in 1970 and created the Foundation for the Establishment of an International Criminal Court in 1971. Towards a permanent international criminal court In June 1989, the Prime Minister of Trinidad and Tobago, A. N. R. Robinson, revived the idea of a permanent international criminal court by proposing the creation of such a court to deal with the illegal drug trade. Following Trinidad and Tobago's proposal, the General Assembly tasked the ILC with once again drafting a statute for a permanent court. While work began on the draft, the United Nations Security Council established two ad hoc tribunals in the early 1990s: The International Criminal Tribunal for the former Yugoslavia, created in 1993 in response to large-scale atrocities committed by armed forces during Yugoslav Wars, and the International Criminal Tribunal for Rwanda, created in 1994 following the Rwandan genocide. The creation of these tribunals further highlighted to many the need for a permanent international criminal court. In 1994, the ILC presented its final draft statute for the International Criminal Court to the General Assembly and recommended that a conference be convened to negotiate a treaty that would serve as the Court's statute. To consider major substantive issues in the draft statute, the General Assembly established the Ad Hoc Committee on the Establishment of an International Criminal Court, which met twice in 1995. 
After considering the Committee's report, the General Assembly created the Preparatory Committee on the Establishment of the ICC to prepare a consolidated draft text. From 1996 to 1998, six sessions of the Preparatory Committee were held at the United Nations headquarters in New York City, during which NGOs provided input and attended meetings under the umbrella organisation of the Coalition for the International Criminal Court (CICC). In January 1998, the Bureau and coordinators of the Preparatory Committee convened for an Inter-Sessional meeting in Zutphen in the Netherlands to technically consolidate and restructure the draft articles into a draft. Finally, the General Assembly convened a conference in Rome in June 1998, with the aim of finalizing the treaty to serve as the Court's statute. On 17 July 1998, the Rome Statute of the International Criminal Court was adopted by a vote of 120 to seven, with 21 countries abstaining. The seven countries that voted against the treaty were China, Iraq, Israel, Libya, Qatar, the United States, and Yemen. Israel's opposition to the treaty stemmed from the inclusion in the list of war crimes "the action of transferring population into occupied territory". Following 60 ratifications, the Rome Statute entered into force on 1 July 2002 and the International Criminal Court was formally established. The first bench of 18 judges was elected by the Assembly of States Parties in February 2003. They were sworn in at the inaugural session of the Court on 11 March 2003. The Court issued its first arrest warrants on 8 July 2005, and the first pre-trial hearings were held in 2006. The Court issued its first judgment in 2012 when it found Congolese rebel leader Thomas Lubanga Dyilo guilty of war crimes related to using child soldiers. In 2010, the states parties of the Rome Statute held the first Review Conference of the Rome Statute of the International Criminal Court in Kampala, Uganda. 
The Review Conference led to the adoption of two resolutions that amended the crimes under the jurisdiction of the Court. Resolution 5 amended Article 8 on war crimes, criminalizing the use of certain kinds of weapons in non-international conflicts whose use was already forbidden in international conflicts. Resolution 6, pursuant to Article 5(2) of the Statute, provided the definition and a procedure for jurisdiction over the crime of aggression. Opposition to the Court During the administration of Barack Obama, US opposition to the ICC evolved to "positive engagement", although no effort was made to ratify the Rome Statute. The administration of Donald Trump was considerably more hostile to the Court, threatening prosecutions and financial sanctions on ICC judges and staff in US courts as well as imposing visa bans in response to any investigation against American nationals in connection with alleged crimes and atrocities perpetrated by the US in Afghanistan. The threat included sanctions against any of the more than 120 countries that have ratified the Rome Statute if they cooperated in the process. Following the imposition of sanctions on 11 June 2020 by the Trump administration, the court branded the sanctions an "attack against the interests of victims of atrocity crimes" and an "unacceptable attempt to interfere with the rule of law". The UN also regretted the effect the sanctions may have on ongoing trials and investigations, saying the court's independence must be protected. In October 2016, after repeated claims that the court was biased against African states, Burundi, South Africa and the Gambia announced their withdrawals from the Rome Statute. However, following Gambia's presidential election later that year, which ended the long rule of Yahya Jammeh, Gambia rescinded its withdrawal notification. 
In early 2017, the High Court of South Africa ruled that the attempted withdrawal was unconstitutional, as it had not been agreed by Parliament, prompting the South African government to inform the UN that it was revoking its decision to withdraw. In November 2017, Fatou Bensouda advised the court to consider seeking charges for human rights abuses committed during the War in Afghanistan, such as alleged rapes and tortures by the United States Armed Forces and the Central Intelligence Agency, crimes against humanity committed by the Taliban, and war crimes committed by the Afghan National Security Forces. John Bolton, National Security Advisor of the United States, stated that the ICC had no jurisdiction over the US, which did not ratify the Rome Statute. In 2020, overturning the previous decision not to proceed, senior judges at the ICC authorized an investigation into the alleged war crimes in Afghanistan. However, in June 2020, the decision to proceed led the Trump administration to mount an economic and legal attack on the court. "The US government has reason to doubt the honesty of the ICC. The Department of Justice has received substantial credible information that raises serious concerns about a long history of financial corruption and malfeasance at the highest levels of the office of the prosecutor," Attorney General William Barr said. The ICC responded with a statement expressing "profound regret at the announcement of further threats and coercive actions." "These attacks constitute an escalation and an unacceptable attempt to interfere with the rule of law and the Court's judicial proceedings", the statement said. "They are announced with the declared aim of influencing the actions of ICC officials in the context of the court's independent and objective investigations and impartial judicial proceedings." 
Following the announcement that the ICC would open a preliminary investigation on the Philippines in connection with its escalating drug war, President Rodrigo Duterte announced on 14 March 2018 that the Philippines would begin the process of withdrawing from the Rome Statute, a process completed on 17 March 2019. The ICC pointed out that it retained jurisdiction over the Philippines during the period when it was a state party to the Rome Statute, from November 2011 to March 2019. On 30 September 2020, prominent United States human rights lawyers announced that they would sue Trump and his administration, including Secretary of State Mike Pompeo, Treasury Secretary Steven Mnuchin, Attorney General William Barr, and OFAC director Andrea Gacki, and the departments they head, on the grounds that Trump's executive order had gagged them, violating their right to free speech, and impeded their work in trying to obtain justice on behalf of victims of war crimes. One of the plaintiffs, Diane Marie Amann, stated that, as a result of sanctions against the chief prosecutor at the ICC, she herself risked having her family assets seized if she continued to work for children who are bought and sold by traffickers, killed, tortured, sexually abused and forced to become child soldiers. However, on 4 January 2021, U.S. District Judge Katherine Polk Failla in Manhattan issued a preliminary injunction barring the White House from enforcing Executive Order 13928 (issued by President Donald Trump in June 2020) to impose criminal or civil penalties against four law professors. Structure The ICC is governed by the Assembly of States Parties, which is made up of the states that are party to the Rome Statute. The Assembly elects officials of the Court, approves its budget, and adopts amendments to the Rome Statute. The Court itself, however, is composed of four organs: the Presidency, the Judicial Divisions, the Office of the Prosecutor, and the Registry. 
State parties Assembly of States Parties The Court's management oversight and legislative body, the Assembly of States Parties, consists of one representative from each state party. Each state party has one vote and "every effort" has to be made to reach decisions by consensus. If consensus cannot be reached, decisions are made by vote. The Assembly is presided over by a president and two vice-presidents, who are elected by the members to three-year terms. The Assembly meets in full session once a year, alternating between New York and The Hague, and may also hold special sessions where circumstances require. Sessions are open to observer states and non-governmental organizations. The Assembly elects the judges and prosecutors, decides the Court's budget, adopts important texts (such as the Rules of Procedure and Evidence), and provides management oversight to the other organs of the Court. Article 46 of the Rome Statute allows the Assembly to remove from office a judge or prosecutor who "is found to have committed serious misconduct or a serious breach of his or her duties" or "is unable to exercise the functions required by this Statute". The states parties cannot interfere with the judicial functions of the Court. Disputes concerning individual cases are settled by the Judicial Divisions. In 2010, Kampala, Uganda hosted the Assembly's Rome Statute Review Conference. Organs of the Court The Court has four organs: the Presidency, the Judicial Division, the Office of the Prosecutor, and the Registry. Presidency The Presidency is responsible for the proper administration of the Court (apart from the Office of the Prosecutor). It comprises the President and the First and Second Vice-Presidents—three judges of the Court who are elected to the Presidency by their fellow judges for a maximum of two three-year terms. As of March 2021, the President is Piotr Hofmański from Poland, who took office on 11 March 2021, succeeding Chile Eboe-Osuji. 
His first term will expire in 2024. Judicial Divisions The Judicial Divisions consist of the 18 judges of the Court, organized into three chambers—the Pre-Trial Chamber, Trial Chamber and Appeals Chamber—which carry out the judicial functions of the Court. Judges are elected to the Court by the Assembly of States Parties. They serve nine-year terms and are not generally eligible for re-election. All judges must be nationals of states parties to the Rome Statute, and no two judges may be nationals of the same state. They must be "persons of high moral character, impartiality and integrity who possess the qualifications required in their respective States for appointment to the highest judicial offices". The Prosecutor or any person being investigated or prosecuted may request the disqualification of a judge from "any case in which his or her impartiality might reasonably be doubted on any ground". Any request for the disqualification of a judge from a particular case is decided by an absolute majority of the other judges. A judge may be removed from office if he or she "is found to have committed serious misconduct or a serious breach of his or her duties" or is unable to exercise his or her functions. The removal of a judge requires both a two-thirds majority of the other judges and a two-thirds majority of the states parties. Office of the Prosecutor The Office of the Prosecutor (OTP) is responsible for conducting investigations and prosecutions. It is headed by the Chief Prosecutor, who is assisted by one or more Deputy Prosecutors. The Rome Statute provides that the Office of the Prosecutor shall act independently; as such, no member of the Office may seek or act on instructions from any external source, such as states, international organisations, non-governmental organisations or individuals. 
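The judicial disqualification and removal thresholds described above (an absolute majority of the other judges for case disqualification; a two-thirds majority of both the other judges and the states parties for removal from office) reduce to simple arithmetic. The sketch below is an illustration only, not anything the Court uses; the function names are invented here, and "two-thirds majority" is read as at least two-thirds of those eligible to vote.

```python
# Illustrative checks for the ICC judicial-vote thresholds described above.

def disqualification_passes(votes_in_favor: int, other_judges: int) -> bool:
    """Case disqualification: an absolute majority (strictly more than
    half) of the other judges must vote in favor."""
    return 2 * votes_in_favor > other_judges

def removal_passes(judge_votes: int, other_judges: int,
                   state_votes: int, states_parties: int) -> bool:
    """Removal from office: requires a two-thirds majority of the other
    judges AND a two-thirds majority of the states parties."""
    return (3 * judge_votes >= 2 * other_judges
            and 3 * state_votes >= 2 * states_parties)
```

With the Court's 18 judges, disqualifying a colleague from a case needs at least 9 of the other 17 judges under this reading, and removal needs at least 12 of them, together with two-thirds of the states parties.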
The Prosecutor may open an investigation under three circumstances: when a situation is referred to him or her by a state party; when a situation is referred to him or her by the United Nations Security Council, acting to address a threat to international peace and security; or when the Pre-Trial Chamber authorises him or her to open an investigation on the basis of information received from other sources, such as individuals or non-governmental organisations. Any person being investigated or prosecuted may request the disqualification of a prosecutor from any case "in which their impartiality might reasonably be doubted on any ground". Requests for the disqualification of prosecutors are decided by the Appeals Chamber. A prosecutor may be removed from office by an absolute majority of the states parties if he or she "is found to have committed serious misconduct or a serious breach of his or her duties" or is unable to exercise his or her functions. However, critics of the Court argue that there are "insufficient checks and balances on the authority of the ICC prosecutor and judges" and "insufficient protection against politicized prosecutions or other abuses". Luis Moreno-Ocampo, chief ICC prosecutor, stressed in 2011 the importance of politics in prosecutions: "You cannot say al-Bashir is in London, arrest him. You need a political agreement." Henry Kissinger says the checks and balances are so weak that the prosecutor "has virtually unlimited discretion in practice". Since 16 June 2012, the Prosecutor has been Fatou Bensouda of Gambia, who was elected on 12 December 2011 to a nine-year term. Her predecessor, Luis Moreno Ocampo of Argentina, had been in office from 2003 to 2012. On 12 February 2021, British barrister Karim Khan was selected in a secret ballot against three other candidates to succeed Bensouda as Prosecutor. His nine-year term will start on 16 June 2021. 
Before his election, Khan headed the United Nations’ special investigative team when it looked into Islamic State crimes in Iraq. At the ICC, he has been a lead defense counsel on cases from Kenya, Sudan and Libya. Policy Paper A policy paper is a document occasionally published by the Office of the Prosecutor, setting out the particular considerations given to the topics in the Office's focus and, often, the criteria for case selection. While a policy paper does not give the Court jurisdiction over a new category of crimes, it indicates what the Office of the Prosecutor will consider when selecting cases in the upcoming term of service. OTP's policy papers are subject to revision. The following policy papers have been published since the start of the ICC:
1 September 2007: Policy Paper on the Interest of Justice
12 April 2010: Policy Paper on Victims' Participation
1 November 2013: Policy Paper on Preliminary Examinations
20 June 2014: Policy Paper on Sexual and Gender-Based Crimes
15 September 2016: Policy paper on case selection and prioritisation
15 November 2016: Policy on Children
Environmental crimes In the policy paper published in September 2016, it was announced that the International Criminal Court will focus on environmental crimes when selecting cases. According to this document, the Office will give particular consideration to prosecuting Rome Statute crimes that are committed by means of, or that result in, "inter alia, the destruction of the environment, the illegal exploitation of natural resources or the illegal dispossession of land". This has been interpreted as a major shift towards environmental crimes and a move with significant effects. Registry The Registry is responsible for the non-judicial aspects of the administration and servicing of the Court. 
This includes, among other things, "the administration of legal aid matters, court management, victims and witnesses matters, defence counsel, detention unit, and the traditional services provided by administrations in international organisations, such as finance, translation, building management, procurement and personnel". The Registry is headed by the Registrar, who is elected by the judges to a five-year term. The previous Registrar was Herman von Hebel, who was elected on 8 March 2013. The current Registrar is Peter Lewis, who was elected on 28 March 2018. Jurisdiction and admissibility The Rome Statute requires that several criteria exist in a particular case before an individual can be prosecuted by the Court. The Statute contains three jurisdictional requirements and three admissibility requirements. All criteria must be met for a case to proceed. The three jurisdictional requirements are (1) subject-matter jurisdiction (what acts constitute crimes), (2) territorial or personal jurisdiction (where the crimes were committed or who committed them), and (3) temporal jurisdiction (when the crimes were committed). Process The process to establish the Court's jurisdiction may be "triggered" by any one of three possible sources: (1) a State party, (2) the Security Council or (3) the Prosecutor. It is then up to the Prosecutor, acting ex proprio motu ("of his or her own motion"), to initiate an investigation under the requirements of Article 15 of the Rome Statute. The procedure is slightly different when referred by a State Party or the Security Council, in which cases the Prosecutor does not need the authorization of the Pre-Trial Chamber to initiate the investigation. Where there is a reasonable basis to proceed, it is mandatory for the Prosecutor to initiate an investigation. 
The factors listed in Article 53 considered for reasonable basis include whether the case would be admissible, and whether there are substantial reasons to believe that an investigation would not serve the interests of justice (the latter stipulates balancing against the gravity of the crime and the interests of the victims). Subject-matter jurisdiction requirements The Court's subject-matter jurisdiction means the crimes for which individuals can be prosecuted. Individuals can only be prosecuted for crimes that are listed in the Statute. The primary crimes are listed in article 5 of the Statute and defined in later articles: genocide (defined in article 6), crimes against humanity (defined in article 7), war crimes (defined in article 8), and crimes of aggression (defined in article 8 bis; see below). In addition, article 70 defines offences against the administration of justice, which is a fifth category of crime for which individuals can be prosecuted. Genocide Article 6 defines the crime of genocide as "acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group". There are five such acts which constitute crimes of genocide under article 6:
Killing members of a group
Causing serious bodily or mental harm to members of the group
Deliberately inflicting on the group conditions of life calculated to bring about its physical destruction
Imposing measures intended to prevent births within the group
Forcibly transferring children of the group to another group
The definition of these crimes is identical to those contained within the Convention on the Prevention and Punishment of the Crime of Genocide of 1948. In the Akayesu case, the International Criminal Tribunal for Rwanda concluded that directly and publicly inciting others to commit genocide is in itself constitutive of a crime. 
Crimes against humanity Article 7 defines crimes against humanity as acts "committed as part of a widespread or systematic attack directed against any civilian population, with knowledge of the attack". The article lists 16 individual crimes:
Murder
Extermination
Enslavement
Deportation or forcible transfer of population
Imprisonment or other severe deprivation of physical liberty
Torture
Rape
Sexual slavery
Enforced prostitution
Forced pregnancy
Enforced sterilization
Sexual violence
Persecution
Enforced disappearance of persons
Apartheid
Other inhumane acts
War crimes Article 8 defines war crimes depending on whether an armed conflict is either international (which generally means it is fought between states) or non-international (which generally means that it is fought between non-state actors, such as rebel groups, or between a state and such non-state actors). In total there are 74 war crimes listed in article 8. The most serious crimes, however, are those that constitute either grave breaches of the Geneva Conventions of 1949, which only apply to international conflicts, or serious violations of article 3 common to the Geneva Conventions of 1949, which apply to non-international conflicts. 
There are 11 crimes which constitute grave breaches of the Geneva Conventions and which are applicable only to international armed conflicts:
Willful killing
Torture
Inhumane treatment
Biological experiments
Willfully causing great suffering
Destruction and appropriation of property
Compelling service in hostile forces
Denying a fair trial
Unlawful deportation and transfer
Unlawful confinement
Taking hostages
There are seven crimes which constitute serious violations of article 3 common to the Geneva Conventions and which are applicable only to non-international armed conflicts:
Murder
Mutilation
Cruel treatment
Torture
Outrages upon personal dignity
Taking hostages
Sentencing or execution without due process
Additionally, there are 56 other crimes defined by article 8: 35 that apply to international armed conflicts and 21 that apply to non-international armed conflicts. Such crimes include attacking civilians or civilian objects, attacking peacekeepers, causing excessive incidental death or damage, transferring populations into occupied territories, treacherously killing or wounding, denying quarter, pillaging, employing poison, using expanding bullets, rape and other forms of sexual violence, and conscripting or using child soldiers. Crimes of aggression Article 8 bis defines crimes of aggression. The Statute originally provided that the Court could not exercise its jurisdiction over the crime of aggression until such time as the states parties agreed on a definition of the crime and set out the conditions under which it could be prosecuted. Such an amendment was adopted at the first review conference of the ICC in Kampala, Uganda, in June 2010. 
However, this amendment specified that the ICC would not be allowed to exercise jurisdiction over the crime of aggression until two further conditions had been satisfied: (1) the amendment has entered into force for 30 states parties and (2) on or after 1 January 2017, the Assembly of States Parties has voted in favor of allowing the Court to exercise jurisdiction. On 26 June 2016 the first condition was satisfied and the states parties voted in favor of allowing the Court to exercise jurisdiction on 14 December 2017. The Court's jurisdiction to prosecute crimes of aggression was accordingly activated on 17 July 2018. The Statute, as amended, defines the crime of aggression as "the planning, preparation, initiation or execution, by a person in a position effectively to exercise control over or to direct the political or military action of a State, of an act of aggression which, by its character, gravity and scale, constitutes a manifest violation of the Charter of the United Nations." The Statute defines an "act of aggression" as "the use of
and the age or infirmity of the alleged perpetrator, and his or her role in the alleged crime". Individual criminal responsibility The Court has jurisdiction over natural persons. A person who commits a crime within the jurisdiction of the Court is individually responsible and liable for punishment in accordance with the Rome Statute. In accordance with the Rome Statute, a person shall be criminally responsible and liable for punishment for a crime within the jurisdiction of the Court if that person:
Commits such a crime, whether as an individual, jointly with another or through another person, regardless of whether that other person is criminally responsible;
Orders, solicits or induces the commission of such a crime which in fact occurs or is attempted;
For the purpose of facilitating the commission of such a crime, aids, abets or otherwise assists in its commission or its attempted commission, including providing the means for its commission;
In any other way contributes to the commission or attempted commission of such a crime by a group of persons acting with a common purpose;
In respect of the crime of genocide, directly and publicly incites others to commit genocide;
Attempts to commit such a crime by taking action that commences its execution by means of a substantial step, but the crime does not occur because of circumstances independent of the person's intentions.
Procedure Trial Trials are conducted under a hybrid common law and civil law judicial system, but it has been argued the procedural orientation and character of the court is still evolving. A majority of the three judges present, as triers of fact in a bench trial, may reach a decision, which must include a full and reasoned statement. Trials are supposed to be public, but proceedings are often closed, and such exceptions to a public trial have not been enumerated in detail. 
In camera proceedings are allowed for protection of witnesses or defendants as well as for confidential or sensitive evidence. Hearsay and other indirect evidence is not generally prohibited, but it has been argued the court is guided by hearsay exceptions which are prominent in common law systems. There is no subpoena or other means to compel witnesses to come before the court, although the court has some power, such as the imposition of fines, to compel the testimony of those who choose to come before it. Rights of the accused The Rome Statute provides that all persons are presumed innocent until proven guilty beyond reasonable doubt, and establishes certain rights of the accused and persons during investigations. These include the right to be fully informed of the charges against him or her; the right to have a lawyer appointed, free of charge; the right to a speedy trial; and the right to examine the witnesses against him or her. To ensure "equality of arms" between defence and prosecution teams, the ICC has established an independent Office of Public Counsel for the Defence (OPCD) to provide logistical support, advice and information to defendants and their counsel. The OPCD also helps to safeguard the rights of the accused during the initial stages of an investigation. However, Thomas Lubanga's defence team say they were given a smaller budget than the Prosecutor and that evidence and witness statements were slow to arrive. Victim participation One of the great innovations of the Statute of the International Criminal Court and its Rules of Procedure and Evidence is the series of rights granted to victims. For the first time in the history of international criminal justice, victims have the possibility under the Statute to present their views and observations before the Court. Participation before the Court may occur at various stages of proceedings and may take different forms, although it will be up to the judges to give directions as to the timing and manner of participation. 
Participation in the Court's proceedings will in most cases take place through a legal representative and will be conducted "in a manner which is not prejudicial or inconsistent with the rights of the accused and a fair and impartial trial". The victim-based provisions within the Rome Statute provide victims with the opportunity to have their voices heard and to obtain, where appropriate, some form of reparation for their suffering. It is hoped that this attempted balance between retributive and restorative justice will enable the ICC not only to bring criminals to justice but also to help the victims themselves obtain some form of justice. Justice for victims before the ICC comprises both procedural and substantive justice, by allowing them to participate and present their views and interests, so that they can help to shape truth, justice and reparations outcomes of the Court. Article 43(6) establishes a Victims and Witnesses Unit to provide "protective measures and security arrangements, counseling and other appropriate assistance for witnesses, victims who appear before the Court, and others who are at risk on account of testimony given by such witnesses." Article 68 sets out procedures for the "Protection of the victims and witnesses and their participation in the proceedings." The Court has also established an Office of Public Counsel for Victims, to provide support and assistance to victims and their legal representatives. The ICC does not have its own witness protection program, but rather must rely on national programs to keep witnesses safe. Reparations Victims before the International Criminal Court can also claim reparations under Article 75 of the Rome Statute. Reparations can only be claimed when a defendant is convicted and at the discretion of the Court's judges. So far the Court has ordered reparations against Thomas Lubanga. 
Reparations can include compensation, restitution and rehabilitation, but other forms of reparations may be appropriate for individual, collective or community victims. Article 79 of the Rome Statute establishes a Trust Fund to provide assistance before a reparation order to victims in a situation or to support reparations to victims and their families if the convicted person has no money. Co-operation by states not party to Rome Statute One of the principles of international law is that a treaty does not create either obligations or rights for third states without their consent, and this is also enshrined in the 1969 Vienna Convention on the Law of Treaties. The co-operation of the non-party states with the ICC is envisioned by the Rome Statute of the International Criminal Court to be of a voluntary nature. However, even states that have not acceded to the Rome Statute might still be subject to an obligation to co-operate with the ICC in certain cases. When a case is referred to the ICC by the UN Security Council all UN member states are obliged to co-operate, since its decisions are binding on all of them. Also, there is an obligation to respect and ensure respect for international humanitarian law, which stems from the Geneva Conventions and Additional Protocol I, which reflects the absolute nature of international humanitarian law. In relation to co-operation in investigation and evidence gathering, it is implied from the Rome Statute that the consent of a non-party state is a prerequisite for the ICC Prosecutor to conduct an investigation within its territory, and it seems even more necessary for the Prosecutor to observe any reasonable conditions raised by that state, since such restrictions exist for states party to the Statute. 
Taking into account the experience of the International Criminal Tribunal for the former Yugoslavia (which worked on the principle of primacy, instead of complementarity) in relation to co-operation, some scholars have expressed pessimism about the ICC's ability to obtain the co-operation of non-party states. As for the actions that the ICC can take towards non-party states that do not co-operate, the Rome Statute stipulates that the Court may inform the Assembly of States Parties or, where the matter was referred by it, the Security Council, when a non-party state refuses to co-operate after having entered into an ad hoc arrangement or an agreement with the Court. Amnesties and national reconciliation processes It is unclear to what extent the ICC is compatible with reconciliation processes that grant amnesty to human rights abusers as part of agreements to end conflict. Article 16 of the Rome Statute allows the Security Council to prevent the Court from investigating or prosecuting a case, and Article 53 allows the Prosecutor the discretion not to initiate an investigation if he or she believes that "an investigation would not serve the interests of justice". Former ICC president Philippe Kirsch has said that "some limited amnesties may be compatible" with a country's obligations genuinely to investigate or prosecute under the Statute. It is sometimes argued that amnesties are necessary to allow the peaceful transfer of power from abusive regimes. By denying states the right to offer amnesty to human rights abusers, the International Criminal Court may make it more difficult to negotiate an end to conflict and a transition to democracy. For example, the outstanding arrest warrants for four leaders of the Lord's Resistance Army are regarded by some as an obstacle to ending the insurgency in Uganda. Czech politician Marek Benda argues that "the ICC as a deterrent will in our view only mean the worst dictators will try to retain power at all costs". 
However, the United Nations and the International Committee of the Red Cross maintain that granting amnesty to those accused of war crimes and other serious crimes is a violation of international law. Facilities Headquarters The official seat of the Court is in The Hague, Netherlands, but its proceedings may take place anywhere. The Court moved into its first permanent premises in The Hague, located at Oude Waalsdorperweg 10, on 14 December 2015. Part of The Hague's International Zone, which also contains the Peace Palace, Europol, Eurojust, ICTY, OPCW and The Hague World Forum, the court facilities are situated on the site of the Alexanderkazerne, a former military barracks, adjacent to the dune landscape on the northern edge of the city. The ICC's detention centre is a short distance away. Development The land and financing for the new construction were provided by the Netherlands. In addition, the host state organised and financed the architectural design competition which started at the end of 2008. Three architects were chosen by an international jury from a total of 171 applicants to enter into further negotiations. The Danish firm schmidt hammer lassen was ultimately selected to design the new premises since its design met all the ICC criteria, such as design quality, sustainability, functionality and costs. Demolition of the barracks started in November 2011 and was completed in August 2012. In October 2012 the tendering procedure for the General Contractor was completed and the combination Visser & Smit Bouw and Boele & van Eesteren ("Courtys") was selected. Architecture The building has a compact footprint and consists of six connected building volumes with a garden motif. The tallest volume with a green facade, placed in the middle of the design, is the Court Tower that accommodates three courtrooms. The rest of the building's volumes accommodate the offices of the different organs of the ICC. 
Provisional headquarters, 2002–2015 Until late 2015, the ICC was housed in interim premises in The Hague provided by the Netherlands. Formerly belonging to KPN, the provisional headquarters were located at Maanweg 174 in the east-central portion of the city. Detention centre The ICC's detention centre accommodates both those convicted by the court and serving sentences as well as those suspects detained pending the outcome of their trial. It comprises twelve cells on the premises of the Scheveningen branch of the Haaglanden Penal Institution, The Hague, close to the ICC's headquarters in the Alexanderkazerne. Suspects held by the former International Criminal Tribunal for the former Yugoslavia were held in the same prison and shared some facilities, like the fitness room, but had no contact with suspects held by the ICC. Other offices The ICC maintains a liaison office in New York and field offices in places where it conducts its activities. As of 18 October 2007, the Court had field offices in Kampala, Kinshasa, Bunia, Abéché and Bangui. Finance The ICC is financed by contributions from the states parties. The amount payable by each state party is determined using the same method as the United Nations: each state's contribution is based on the country's capacity to pay, which reflects factors such as national income and population. The maximum amount a single country can pay in any year is limited to 22% of the Court's budget; Japan paid this amount in 2008. The Court spent €80.5 million in 2007. The Assembly of States Parties approved a budget of €90.4 million for 2008, €101.2 million for 2009, and €141.6 million for 2017. The ICC's staff consisted of 800 persons from approximately 100 states. 
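The capped, capacity-based apportionment described above can be sketched in a few lines of code. This is only an illustrative model of a proportional split with a 22% ceiling and redistribution of the excess; the function name, the iterative redistribution rule, and the sample shares are assumptions for demonstration, not the official UN or ICC assessment methodology.

```python
def assessed_contributions(budget, shares, cap=0.22):
    """Split `budget` among states in proportion to their assessment
    `shares`, capping any single state at `cap` of the budget and
    redistributing the excess among the uncapped states.

    Illustrative sketch only, not the official ICC/UN formula.
    """
    # Start with a plain proportional split.
    total = sum(shares.values())
    alloc = {s: budget * v / total for s, v in shares.items()}
    capped = set()
    while True:
        # Find states whose allocation exceeds the ceiling.
        over = {s for s, a in alloc.items() if a > budget * cap + 1e-9}
        if not over:
            return alloc
        capped |= over
        # Capped states pay exactly the ceiling; the remainder is
        # split proportionally among the uncapped states.
        fixed = budget * cap * len(capped)
        rest = {s: v for s, v in shares.items() if s not in capped}
        rest_total = sum(rest.values())
        alloc = {s: budget * cap for s in capped}
        alloc.update({s: (budget - fixed) * v / rest_total
                      for s, v in rest.items()})


# Hypothetical shares: one large contributor and six equal small ones.
result = assessed_contributions(
    100.0, {'A': 40, 'B': 10, 'C': 10, 'D': 10,
            'E': 10, 'F': 10, 'G': 10})
# 'A' is capped at 22% of the budget; the rest share the remainder.
```

Note that the redistribution must be iterated, since raising the smaller contributions can push another state over the ceiling (and the cap only works at all when the number of states times the cap covers the whole budget).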
Trial history to date To date, the Prosecutor has opened investigations in fourteen situations: Afghanistan; Burundi; two in the Central African Republic; Côte d'Ivoire; Darfur, Sudan; the Democratic Republic of the Congo; Georgia; Kenya; Libya; Mali; Uganda; Bangladesh/Myanmar, Palestine and Venezuela. Additionally, the Office of the Prosecutor is conducting preliminary examinations in six situations: Colombia; Guinea; Nigeria; the Philippines; Ukraine and Bolivia. Thomas Lubanga, Germain Katanga and Mathieu Ngudjolo Chui were tried by the ICC. Lubanga and Katanga were convicted and sentenced to 14 and 12 years imprisonment, respectively, whereas Chui was acquitted. The judgment of Jean-Pierre Bemba was rendered in March 2016. Bemba was convicted on two counts of crimes against humanity and three counts of war crimes. This marked the first time the ICC convicted someone of sexual violence, as rape was included among his convictions. Bemba's convictions were overturned by the Court's Appeal Chamber in June 2018. Trials in the Ntaganda case (DR Congo), the Bemba et al. case and the Laurent Gbagbo-Blé Goudé trial in the Côte d'Ivoire situation are ongoing. The Banda trial in the situation of Darfur, Sudan, was scheduled to begin in 2014 but the start date was vacated. Charges against Ugandan Dominic Ongwen and Malian Ahmed al-Faqi have been confirmed; as of March 2020 both were awaiting their trials. On 6 July 2020, two Uyghur activist groups filed a complaint with the ICC calling for it to investigate PRC officials for crimes against Uyghurs including allegations of genocide. The ICC does not have the necessary jurisdiction for this in the absence of a referral from the UN Security Council, as the PRC is not a party to the Rome Statute. 
Relationships United Nations Unlike the International Court of Justice, the ICC is legally independent from the United Nations. However, the Rome Statute grants certain powers to the United Nations Security Council, which limits its functional independence. Article 13 allows the Security Council to refer to the Court situations that would not otherwise fall under the Court's jurisdiction (as it did in relation to the situations in Darfur and Libya, which the Court could not otherwise have prosecuted as neither Sudan nor Libya are state parties). Article 16 allows the Security Council to require the Court to defer from investigating a case for a period of 12 months. Such a deferral may be renewed indefinitely by the Security Council. This sort of arrangement gives the ICC some of the advantages inhering in the organs of the United Nations such as using the enforcement powers of the Security Council, but it also creates a risk of being tainted with the political controversies of the Security Council. The Court cooperates with the UN in many different areas, including the exchange of information and logistical support. The Court reports to the UN each year on its activities, and some meetings of the Assembly of States Parties are held at UN facilities. The relationship between the Court and the UN is governed by a "Relationship Agreement between the International Criminal Court and the United Nations". Nongovernmental organizations During the 1970s and 1980s, international human rights and humanitarian Nongovernmental Organizations (or NGOs) began to proliferate at exponential rates. Concurrently, the quest to find a way to punish international crimes shifted from being the exclusive responsibility of legal experts to being shared with international human rights activism. 
NGOs helped birth the ICC through advocacy and by championing the prosecution of perpetrators of crimes against humanity. NGOs closely monitor the organization's declarations and actions, ensuring that the work that is being executed on behalf of the ICC is fulfilling its objectives and responsibilities to civil society. According to Benjamin Schiff, "From the Statute Conference onward, the relationship between the ICC and the NGOs has probably been closer, more consistent, and more vital to the Court than have analogous relations between NGOs and any other international organization." There are a number of NGOs working on a variety of issues related to the ICC. The NGO Coalition for the International Criminal Court has served as a sort of umbrella for NGOs to coordinate with each other on similar objectives related to the ICC. The CICC has 2,500 member organizations in 150 different countries. The original steering committee included representatives from the World Federalist Movement, the International Commission of Jurists, Amnesty International, the Lawyers Committee for Human Rights, Human Rights Watch, Parliamentarians for Global Action, and No Peace Without Justice. Today, many of the NGOs with which the ICC cooperates are members of the CICC. These organizations come from a range of backgrounds, ranging from major international NGOs such as Human Rights Watch and Amnesty International, to smaller, more local organizations focused on peace and justice missions. Many work closely with states, such as the International Criminal Law Network, founded and predominantly funded by the Hague municipality and the Dutch Ministries of Defense and Foreign Affairs. The CICC also counts among its members organizations that are themselves federations, such as the International Federation of Human Rights Leagues (FIDH). 
CICC members subscribe to three principles, which permit them to work under the umbrella of the CICC so long as their objectives match them:
Promoting worldwide ratification and implementation of the Rome Statute of the ICC
Maintaining the integrity of the Rome Statute of the ICC, and
Ensuring the ICC will be as fair, effective and independent as possible
The NGOs that work under the CICC do not normally pursue agendas exclusive to the work of the Court; rather, they may work for broader causes, such as general human rights issues, victims' rights, gender rights, rule of law, conflict mediation, and peace. The CICC coordinates their efforts to improve the efficiency of NGOs' contributions to the Court and to pool their influence on major common issues. From the ICC side, it has been useful to have the CICC channel NGO contacts with the Court so that its officials do not have to interact individually with thousands of separate organizations. NGOs have been crucial to the evolution of the ICC, as they assisted in the creation of the normative climate that urged states to seriously consider the Court's formation. Their legal experts helped shape the Statute, while their lobbying efforts built support for it. They advocate Statute ratification globally and work at expert and political levels within member states for passage of necessary domestic legislation. NGOs are greatly represented at meetings for the Assembly of States Parties, and they use the ASP meetings to press for decisions promoting their priorities. Many of these NGOs have reasonable access to important officials at the ICC because of their involvement during the Statute process. They are engaged in monitoring, commenting upon, and assisting in the ICC's activities. The ICC often depends on NGOs to interact with local populations. 
The Registry Public Information Office personnel and Victims Participation and Reparations Section officials hold seminars for local leaders, professionals and the media to spread the word about the Court. These are the kinds of events that are often hosted or organized by local NGOs. Because there can be challenges with determining which of these NGOs are legitimate, CICC regional representatives often have the ability to help screen and identify trustworthy organizations. However, NGOs are also "sources of criticism, exhortation and pressure upon" the ICC. The ICC heavily depends on NGOs for its operations. Although NGOs and states cannot directly impact the judicial nucleus of the organization, they can impart information on crimes, can help locate victims and witnesses, and can promote and organize victim participation. NGOs outwardly comment on the Court's operations, "push for expansion of its activities especially in the new justice areas of outreach in conflict areas, in victims' participation and reparations, and in upholding due-process standards and defense 'equality of arms' and so implicitly set an agenda for the future evolution of the ICC." The relatively uninterrupted progression of NGO involvement with the ICC may mean that NGOs have become repositories of more institutional historical knowledge about the ICC than its national representatives, and have greater expertise than some of the organization's employees themselves. While NGOs look to mold the ICC to satisfy the interests and priorities that they have worked for since the early 1990s, they unavoidably press against the limits imposed upon the ICC by the states that are members of the organization. NGOs can pursue their own mandates, irrespective of whether they are compatible with those of other NGOs, while the ICC must respond to the complexities of its own mandate as well as those of the states and NGOs. 
Another issue has been that NGOs possess "exaggerated senses of their ownership over the organization and, having been vital to and successful in promoting the Court, were not managing to redefine their roles to permit the Court its necessary independence." Additionally, because there is such a gap between the large human rights organizations and the smaller peace-oriented organizations, it is difficult for ICC officials to manage and satisfy all of the NGOs. "ICC officials recognize that the NGOs pursue their own agendas, and that they will seek to pressure the ICC in the direction of their own priorities rather than necessarily understanding or being fully sympathetic to the myriad constraints and pressures under which the Court operates." Both the ICC and the NGO community avoid criticizing each other publicly or vehemently, although NGOs have released advisory and cautionary messages regarding the ICC. They avoid taking stances that could potentially give the Court's adversaries, particularly the US, more motive to berate the organization.

Criticisms

African accusations of Western imperialism

The ICC has been accused of bias and of being a tool of Western imperialism, punishing only leaders from small, weak states while ignoring crimes committed by richer and more powerful states. This sentiment has been expressed particularly by African leaders due to an alleged disproportionate focus of the Court on Africa, while it claims to have a global mandate; until January 2016, all nine situations which the ICC had been investigating were in African countries. The prosecution of Kenyan Deputy President William Ruto and President Uhuru Kenyatta (both charged before coming into office) led to the Kenyan parliament passing a motion calling for Kenya's withdrawal from the ICC, and the country called on the other 33 African states party to the ICC to withdraw their support, an issue which was discussed at a special African Union (AU) summit in October 2013.
Though the ICC has denied the charge of disproportionately targeting African leaders, and claims to stand up for victims wherever they may be, Kenya was not alone in criticising the ICC. Sudanese President Omar al-Bashir visited Kenya, South Africa, China, Nigeria, Saudi Arabia, the United Arab Emirates, Egypt, Ethiopia, Qatar and several other countries despite an outstanding ICC warrant for his arrest, but was not arrested; he said that the charges against him were "exaggerated" and that the ICC was part of a "Western plot" against him. Ivory Coast's government opted not to transfer former first lady Simone Gbagbo to the court but to instead try her at home. Rwanda's ambassador to the African Union, Joseph Nsengimana, argued that "It is not only the case of Kenya. We have seen international justice become more and more a political matter." Ugandan President Yoweri Museveni accused the ICC of "mishandling complex African issues". Ethiopian Prime Minister Hailemariam Desalegn, at the time AU chairman, told the UN General Assembly at the
MMORPG Anarchy Online
International Cricket Captain (series), a video game series about cricket management
Internet Chess Club, a website for playing chess
Icecrown Citadel, in the MMORPG World of Warcraft: Wrath of the Lich King

Judicial courts
Illinois Commerce Commission, a quasi-judicial tribunal which regulates public utility services in the U.S. state of Illinois
International Criminal Court, an intergovernmental organization and international tribunal headquartered in The Hague, the Netherlands

Organizations

Government
Interstate Commerce Commission, a now-defunct U.S. government regulatory body
International Control Commission, which oversaw the 1954 Geneva Accords ending the First Indochina War
International Computing Centre, based in Geneva, Switzerland, established by the UN in 1971
International Computation Centre, in Rome, Italy, created by UNESCO in 1951, now the Intergovernmental Bureau for Informatics
International Certificate of Competence for Operators of Pleasure Craft, a European boating license
Isthmian Canal Commission, a body set up to administer the Panama Canal Zone

Politics
International Communist Current, a communist organization
Inuit Circumpolar Council, a non-governmental organization representing several peoples in the far north
International Coordinating Committee of National Human Rights Institutions
Israel on Campus Coalition

Religion
International Christian Concern, a human rights organization
International Christian Church, a group of Stone-Campbell Restoration churches led by Kip McKean and split off from the ICOC
International Churches of Christ, a group of Stone-Campbell Restoration Movement Christian churches
International Critical Commentary, an academic-level biblical commentary series
Irish Council of Churches, an ecumenical Christian body

Sports
Illinois College Conference, a defunct American collegiate athletic conference
Indiana Collegiate Conference, a defunct American collegiate athletic conference
International Champions Cup, a friendly association football tournament of mostly European clubs
International Cricket Council, the governing body of cricket
International Co-ordination Committee of World Sports Organizations for the Disabled, 1982–1989 predecessor of the International Paralympic Committee

Business
Industrial Credit Company (later Industrial Credit Corporation), purchased by the Halifax
Information Control Company, an information technology consulting firm headquartered in Columbus, Ohio
Innovative Communications Corporation, a telecommunications company in the United States Virgin Islands
International Chamber of Commerce, supporting global trade and globalisation
International Code Council, US-based building codes organisation
International Controls Corporation, an American holding company founded by Robert Vesco
International Culinary Center, a cooking school with several locations

Other organizations
Imperial Camel Corps, a historic British Commonwealth military unit
Incarnation Children's Center, a New York orphanage, specializing in care of children with HIV/AIDS
Indian Cinematograph Committee, an Indian government committee overseeing censorship and cinema
Inter-Cooperative Council at the University of Michigan, a student housing cooperative in Ann Arbor, Michigan
International Association for Cereal Science and Technology, formerly International Association for Cereal Chemistry
International Camp on Communication and Computers, European
a film in Esperanto starring William Shatner
Incubus (2006 film), a horror film starring Tara Reid
The Incubus (1982 film), a horror film starring John Cassavetes
François Sagat's Incubus, a 2011/2012 gay pornographic film, and directorial debut for François Sagat

Music
Incubus (band), an American alternative rock band from California
Opprobrium (band), American death metal band from Louisiana originally known as Incubus
"Incubus", a song by British neo-progressive rock band Marillion from 1984's Fugazi (album)

Other
The Incubus, a nickname given to radio
years of war with the Celts and Iberians. The result was the creation of the province of Hispania. It was divided into Hispania Ulterior and Hispania Citerior during the late Roman Republic, and during the Roman Empire, it was divided into Hispania Tarraconensis in the northeast, Hispania Baetica in the south and Lusitania in the southwest. Hispania supplied the Roman Empire with silver, food, olive oil, wine, and metal. The emperors Trajan, Hadrian, Marcus Aurelius, and Theodosius I, the philosopher Seneca the Younger, and the poets Martial and Lucan came from families living on the Iberian Peninsula. During their 600-year occupation of the Iberian Peninsula, the Romans introduced the Latin language, which influenced many of the languages spoken today in the Iberian Peninsula.

Pre-modern Iberia

In the early fifth century, Germanic peoples occupied the peninsula, namely the Suebi, the Vandals (Silingi and Hasdingi) and their allies, the Alans. Only the kingdom of the Suebi (Quadi and Marcomanni) would endure after the arrival of another wave of Germanic invaders, the Visigoths, who occupied all of the Iberian Peninsula and expelled or partially integrated the Vandals and the Alans. The Visigoths eventually occupied the Suebi kingdom and its capital city, Bracara (modern day Braga), in 584–585. They would also occupy the Byzantine Empire's province of Spania (552–624) in the south of the peninsula and the Balearic Islands. In 711, a Muslim army conquered the Visigothic Kingdom in Hispania. Under Tariq ibn Ziyad, the Islamic army landed at Gibraltar and, in an eight-year campaign, occupied all except the northern kingdoms of the Iberian Peninsula in the Umayyad conquest of Hispania. Al-Andalus (tr. al-ʾAndalūs, possibly "Land of the Vandals") is the Arabic name given to Muslim Iberia. The Muslim conquerors were Arabs and Berbers; following the conquest, conversion and Arabization of the Hispano-Roman population took place (muwalladum or Muladí).
After a long process, spurred on in the 9th and 10th centuries, the majority of the population in Al-Andalus eventually converted to Islam. The Muslims were referred to by the generic name Moors. The Muslim population was divided by ethnicity (Arabs, Berbers, Muladí), and the supremacy of Arabs over the rest of the groups was a recurrent cause of strife, rivalry and hatred, particularly between Arabs and Berbers. Arab elites could be further divided into the Yemenites (first wave) and the Syrians (second wave). Christians and Jews were allowed to live as part of a stratified society under the dhimmah system, although Jews became very important in certain fields. Some Christians migrated to the northern Christian kingdoms, while those who stayed in Al-Andalus progressively Arabised and became known as musta'arab (Mozarabs). The slave population comprised the Ṣaqāliba (literally meaning "Slavs", although they were slaves of generic European origin) as well as Sudanese slaves. The Umayyad rulers faced a major Berber Revolt in the early 740s; the uprising originally broke out in North Africa (Tangier) and later spread across the peninsula. Following the Abbasid takeover from the Umayyads and the shift of the economic centre of the Islamic Caliphate from Damascus to Baghdad, the western province of al-Andalus was marginalised and ultimately became politically autonomous as an independent emirate in 756, ruled by one of the last surviving Umayyad royals, Abd al-Rahman I. Al-Andalus became a centre of culture and learning, especially during the Caliphate of Córdoba. The Caliphate reached the height of its power under the rule of Abd-ar-Rahman III and his successor al-Hakam II, becoming then, in the view of Jaime Vicens Vives, "the most powerful state in Europe". Abd-ar-Rahman III also managed to expand the clout of Al-Andalus across the Strait of Gibraltar, waging war, as did his successor, against the Fatimid Empire.
Between the 8th and 12th centuries, Al-Andalus enjoyed a notable urban vitality, both in terms of the growth of preexisting cities and the founding of new ones: Córdoba reached a population of 100,000 by the 10th century, Toledo 30,000 by the 11th century and Seville 80,000 by the 12th century. During the Middle Ages, the north of the peninsula housed many small Christian polities, including the Kingdom of Castile, the Kingdom of Aragon, the Kingdom of Navarre, the Kingdom of León and the Kingdom of Portugal, as well as a number of counties that spawned from the Carolingian Marca Hispanica. Christian and Muslim polities fought and allied among themselves in variable alliances. The Christian kingdoms progressively expanded south, taking over Muslim territory in what is historiographically known as the "Reconquista" (though this concept has been noted as a product of the claim to a pre-existing Spanish Catholic nation, and it would not necessarily convey adequately "the complexity of centuries of warring and other more peaceable interactions between Muslim and Christian kingdoms in medieval Iberia between 711 and 1492"). The Caliphate of Córdoba was subsumed in a period of upheaval and civil war (the Fitna of al-Andalus) and collapsed in the early 11th century, spawning a series of ephemeral statelets, the taifas.
Until the mid 11th century, most of the territorial expansion southwards of the Kingdom of Asturias/León was carried out through a policy of agricultural colonization rather than through military operations; then, profiting from the feebleness of the taifa principalities, Ferdinand I of León seized Lamego and Viseu (1057–1058) and Coimbra (1064) from the Taifa of Badajoz (at times at war with the Taifa of Seville). In the same year that Coimbra was conquered, in the northeastern part of the Iberian Peninsula, the Kingdom of Aragon took Barbastro from the Hudid Taifa of Lérida as part of an international expedition sanctioned by Pope Alexander II. Most critically, Alfonso VI of León-Castile conquered Toledo and its wider taifa in 1085, in what was seen as a critical event at the time, an advance that also entailed a huge territorial expansion, from the Sistema Central to La Mancha. In 1086, following the siege of Zaragoza by Alfonso VI of León-Castile, the Almoravids, religious zealots originally from the deserts of the Maghreb, landed in the Iberian Peninsula and, having inflicted a serious defeat on Alfonso VI at the battle of Zalaca, began to seize control of the remaining taifas. The Almoravids in the Iberian Peninsula progressively relaxed the strict observance of their faith and treated both Jews and Mozarabs harshly, facing uprisings across the peninsula, initially in the western part. The Almohads, another North African Muslim sect of Masmuda Berber origin who had previously undermined Almoravid rule south of the Strait of Gibraltar, first entered the peninsula in 1146. Somewhat contrary to the trend in other parts of the Latin West since the 10th century, the period comprising the 11th and 13th centuries was not one of weakening monarchical power in the Christian kingdoms.
The relatively novel concept of "frontier" (Sp: frontera), already reported in Aragon by the second half of the 11th century, became widespread in the Christian Iberian kingdoms by the beginning of the 13th century, in relation to the more or less conflictual border with Muslim lands. By the beginning of the 13th century, a power reorientation took place in the Iberian Peninsula (parallel to the Christian expansion in southern Iberia and the increasing commercial impetus of Christian powers across the Mediterranean), and, to a large extent, Iberian trade reoriented towards the north, away from the Muslim world. During the Middle Ages, the monarchs of Castile and León, from Alfonso V and Alfonso VI (crowned Hispaniae Imperator) to Alfonso X and Alfonso XI, tended to embrace an imperial ideal based on a dual Christian and Jewish ideology. Merchants from Genoa and Pisa were conducting an intense trading activity in Catalonia already by the 12th century, and later in Portugal. From the 13th century, the Crown of Aragon expanded overseas; led by Catalans, it attained an overseas empire in the Western Mediterranean, with a presence in Mediterranean islands such as the Balearics, Sicily and Sardinia, and even conquering Naples in the mid-15th century. Genoese merchants invested heavily in the Iberian commercial enterprise, with Lisbon becoming, according to Virgínia Rau, the "great centre of Genoese trade" in the early 14th century. The Portuguese would later detach their trade to some extent from Genoese influence. The Nasrid Kingdom of Granada, neighbouring the Strait of Gibraltar and founded upon a vassalage relationship with the Crown of Castile, also insinuated itself into the European mercantile network, with its ports fostering intense trading relations with the Genoese as well, but also with the Catalans, and to a lesser extent, with the Venetians, the Florentines, and the Portuguese.
Between 1275 and 1340, Granada became involved in the "crisis of the Strait", and was caught in a complex geopolitical struggle ("a kaleidoscope of alliances") with multiple powers vying for dominance of the Western Mediterranean, complicated by the unstable relations of Muslim Granada with the Marinid Sultanate. The conflict reached a climax in the 1340 Battle of Río Salado, when, this time in alliance with Granada, the Marinid Sultan (and Caliph pretender) Abu al-Hasan Ali ibn Othman made the last Marinid attempt to set up a power base in the Iberian Peninsula. The lasting consequences of the resounding Muslim defeat by an alliance of Castile and Portugal, with naval support from Aragon and Genoa, ensured Christian supremacy over the Iberian Peninsula and the preeminence of Christian fleets in the Western Mediterranean. The 1348–1350 bubonic plague devastated large parts of the Iberian Peninsula, bringing economic activity to a sudden halt. Many settlements in northern Castile and Catalonia were abandoned. An additional consequence of the plague in the Iberian realms was the onset of hostility and outright violence towards religious minorities, particularly the Jews. The 14th century was a period of great upheaval in the Iberian realms. After the death of Peter the Cruel of Castile (reigned 1350–69), the House of Trastámara succeeded to the throne in the person of Peter's half brother, Henry II (reigned 1369–79). In the kingdom of Aragón, following the deaths without heirs of John I (reigned 1387–96) and Martin I (reigned 1396–1410), a prince of the House of Trastámara, Ferdinand I (reigned 1412–16), succeeded to the Aragonese throne. The Hundred Years' War also spilled over into the Iberian Peninsula, with Castile particularly taking a role in the conflict by providing key naval support to France that helped lead to that nation's eventual victory.
After the accession of Henry III to the throne of Castile, the populace, exasperated by the preponderance of Jewish influence, perpetrated a massacre of Jews at Toledo. In 1391, mobs went from town to town throughout Castile and Aragon, killing an estimated 50,000 Jews, or even as many as 100,000, according to Jane Gerber. Women and children were sold as slaves to Muslims, and many synagogues were converted into churches. According to Hasdai Crescas, about 70 Jewish communities were destroyed. During the 15th century, Portugal, which had ended its southwards territorial expansion across the Iberian Peninsula in 1249 with the conquest of the Algarve, initiated an overseas expansion in parallel to the rise of the House of Aviz, conquering Ceuta (1415), arriving at Porto Santo (1418), Madeira and the Azores, and establishing additional outposts along the North African Atlantic coast. In addition, already in the Early Modern Period, between the completion of the Granada War in 1492 and the death of Ferdinand of Aragon in 1516, the Hispanic Monarchy would make strides in imperial expansion along the Mediterranean coast of the Maghreb. During the Late Middle Ages, the Jews acquired considerable power and influence in Castile and Aragon. Throughout the late Middle Ages, the Crown of Aragon took part in the Mediterranean slave trade, with Barcelona (already in the 14th century), Valencia (particularly in the 15th century) and, to a lesser extent, Palma de Mallorca (since the 13th century) becoming dynamic centres in this regard, involving chiefly eastern and Muslim peoples. Castile engaged later in this economic activity, rather by adhering to the incipient Atlantic slave trade involving sub-Saharan people driven by Portugal (Lisbon being the largest slave centre in Western Europe) since the mid 15th century, with Seville becoming another key hub for the slave trade. Following the advance in the conquest of the Nasrid kingdom
of Granada, the seizure of Málaga entailed the addition of another notable slave centre for the Crown of Castile. By the end of the 15th century (1490) the Iberian kingdoms (including here the Balearic Islands) had an estimated population of 6.525 million (Crown of Castile, 4.3 million; Portugal, 1.0 million; Principality of Catalonia, 0.3 million; Kingdom of Valencia, 0.255 million; Kingdom of Granada, 0.25 million; Kingdom of Aragon, 0.25 million; Kingdom of Navarre, 0.12 million; and the Kingdom of Mallorca, 0.05 million). For three decades in the 15th century, the Hermandad de las Marismas, the trading association formed by the ports of Castile along the Cantabrian coast, resembling in some ways the Hanseatic League, fought against the latter, an ally of England and a rival of Castile in political and economic terms. Castile sought to claim the Gulf of Biscay as its own. In 1419, the powerful Castilian navy thoroughly defeated a Hanseatic fleet in La Rochelle. In the late 15th century, the imperial ambition of the Iberian powers was pushed to new heights by the Catholic Monarchs in Castile and Aragon, and by Manuel I in Portugal. The last Muslim stronghold, Granada, was conquered by a combined Castilian and Aragonese force in 1492.
As many as 100,000 Moors died or were enslaved in the military campaign, while 200,000 fled to North Africa. Muslims and Jews throughout the period were variously tolerated or shown intolerance in different Christian kingdoms. After the fall of Granada, all Muslims and Jews were ordered to convert to Christianity or face expulsion; as many as 200,000 Jews were expelled from Spain. Historian Henry Kamen estimates that some 25,000 Jews died en route from Spain. The Jews were also expelled from Sicily and Sardinia, which were under Aragonese rule, and an estimated 37,000 to 100,000 Jews left. In 1497, King Manuel I of Portugal forced all Jews in his kingdom to convert or leave. That same year he expelled all Muslims that were not slaves, and in 1502 the Catholic Monarchs followed suit, imposing the choice of conversion to Christianity or exile and loss of property. Many Jews and Muslims fled to North Africa and the Ottoman Empire, while others publicly converted to Christianity and became known respectively as Marranos and Moriscos (after the old term Moors). However, many of these continued to practice their religion in secret. The Moriscos revolted several times and were ultimately forcibly expelled from Spain in the early 17th century. From 1609 to 1614, over 300,000 Moriscos were sent on ships to North Africa and other locations; of this figure, around 50,000 died resisting the expulsion, and 60,000 died on the journey. The change of relative supremacy from Portugal to the Hispanic Monarchy in the late 15th century has been described as one of the few cases of avoidance of the Thucydides Trap.

Modern Iberia

Challenging the conventions about the advent of modernity, Immanuel Wallerstein pushed back the origins of capitalist modernity to the Iberian expansion of the 15th century. During the 16th century Spain created a vast empire in the Americas, with a state monopoly in Seville becoming the centre of the ensuing transatlantic trade, based on bullion.
Iberian imperialism, starting with the Portuguese establishment of routes to Asia and the later transatlantic trade with the New World by Spaniards and Portuguese (along with the Dutch, English and French), precipitated the economic decline of the Italian Peninsula. The 16th century was one of population growth with increased pressure over resources; in the case of the Iberian Peninsula, a part of the population moved to the Americas, while Jews and Moriscos were banished, relocating to other places in the Mediterranean Basin. Most of the Moriscos remained in Spain after the Morisco revolt in Las Alpujarras during the mid-16th century, but roughly 300,000 of them were expelled from the country in 1609–1614, and emigrated en masse to North Africa. In 1580, after the political crisis that followed the 1578 death of King Sebastian, Portugal became a dynastic composite entity of the Habsburg Monarchy; thus, the whole peninsula was united politically during the period known as the Iberian Union (1580–1640). During the reign of Philip II of Spain (I of Portugal), the Councils of Portugal, Italy, Flanders and Burgundy were added to the group of counselling institutions of the Hispanic Monarchy, to which the Councils of Castile, Aragon, Indies, Chamber of Castile, Inquisition, Orders, and Crusade already belonged, defining the organization of the Royal court that underpinned the system of councils through which the empire operated. During the Iberian Union, the "first great wave" of the transatlantic slave trade happened, according to Enriqueta Vila Villar, as new markets opened because of the unification, giving thrust to the slave trade. By 1600, the percentage of urban population for Spain was roughly 11.4%, while for Portugal the urban population was estimated at 14.1%, both above the 7.6% European average of the time (edged only by the Low Countries and the Italian Peninsula). Some striking differences appeared among the different Iberian realms.
Castile, extending across some 60% of the territory of the peninsula and holding 80% of its population, was a rather urbanised country, yet with a widespread distribution of cities. Meanwhile, the urban population in the Crown of Aragon was highly concentrated in a handful of cities: Zaragoza (Kingdom of Aragon), Barcelona (Principality of Catalonia), and, to a lesser extent in the Kingdom of Valencia, in Valencia, Alicante and Orihuela. The case of Portugal presented a hypertrophied capital, Lisbon (which greatly increased its population during the 16th century, from 56,000 to 60,000 inhabitants by 1527, to roughly 120,000 by the third quarter of the century), with its demographic dynamism stimulated by the Asian trade, followed at a great distance by Porto and Évora (both roughly accounting for 12,500 inhabitants). Throughout most of the 16th century, both Lisbon and Seville were among Western Europe's largest and most dynamic cities. The 17th century has been largely considered a very negative period for the Iberian economies, seen as a time of recession, crisis or even decline, with urban dynamism chiefly moving to Northern Europe. A dismantling of the inner city network in the Castilian plateau took place during this period (with a parallel accumulation of economic activity in the capital, Madrid), with only New Castile resisting recession in the interior. Regarding the Atlantic façade of Castile, aside from the severing of trade with Northern Europe, inter-regional trade with other regions in the Iberian Peninsula also suffered to some extent. In Aragon, suffering from problems similar to Castile's, the expulsion of the Moriscos in 1609 in the Kingdom of Valencia aggravated the recession. Silk turned from a domestic industry into a raw commodity to be exported. However, the crisis was uneven (affecting the centre of the peninsula for longer), as both Portugal and the Mediterranean coastline recovered in the later part of the century, fuelling sustained growth.
The aftermath of the intermittent 1640–1668 Portuguese Restoration War brought the House of Braganza as the new ruling dynasty in the Portuguese territories across the world (bar Ceuta), putting an end to the Iberian Union. Despite both Portugal and Spain starting their path towards modernization with the liberal revolutions of the first half of the 19th century, this process was, concerning structural changes in the geographical distribution of the population, relatively tame compared to what took place after World War II in the Iberian Peninsula, when strong urban development ran in parallel to substantial rural flight.

Geography and geology

The Iberian Peninsula is the westernmost of the three major southern European peninsulas: the Iberian, Italian, and Balkan. It is bordered on the southeast and east by the Mediterranean Sea, and on the north, west, and southwest by the Atlantic Ocean. The Pyrenees mountains are situated along the northeast edge of the peninsula, where it adjoins the rest of Europe. Its southern tip, located in Tarifa, is the southernmost point of the European continent and is very close to the northwest coast of Africa, separated from it by the Strait of Gibraltar and the Mediterranean Sea. The Iberian Peninsula encompasses 583,254 km2 and has very contrasting and uneven relief. The mountain ranges of the Iberian Peninsula are mainly distributed from west to east, and in some cases reach altitudes of approximately 3,000 m above mean sea level, resulting in the region having the second highest mean altitude (637 m) in Western Europe.
The Iberian Peninsula extends from the southernmost extremity at Punta de Tarifa to the northernmost extremity at Punta de Estaca de Bares over a distance between lines of latitude of about based on a degree length of per degree, and from the westernmost extremity at Cabo da Roca to the easternmost extremity at Cap de Creus over a distance between lines of longitude at 40° N latitude of about based on an estimated degree length of about for that latitude. The irregular, roughly octagonal shape of the peninsula contained within this spherical quadrangle was compared to an ox-hide by the geographer Strabo. About three quarters of that rough octagon is the Meseta Central, a vast plateau ranging from 610 to 760 m in altitude. It is located approximately in the centre, staggered slightly to the east and tilted slightly toward the west (the conventional centre of the Iberian Peninsula has long been considered Getafe just south of Madrid). It is ringed by mountains and contains the sources of most of the rivers, which find their way through gaps in the mountain barriers on all sides. Coastline The coastline of the Iberian Peninsula is , on the Mediterranean side and on the Atlantic side. The coast has been inundated over time, with sea levels having risen from a minimum of lower than today at the Last Glacial Maximum (LGM) to its current level at 4,000 years BP. The coastal shelf created by sedimentation during that time remains below the surface; however, it was never very extensive on the Atlantic side, as the continental shelf drops rather steeply into the depths. An estimated length of Atlantic shelf is only wide. At the isobath, on the edge, the shelf drops off to . The submarine topography of the coastal waters of the Iberian Peninsula has been studied extensively in the process of drilling for oil. Ultimately, the shelf drops into the Bay of Biscay on the north (an abyss), the Iberian abyssal plain at on the west, and Tagus abyssal plain to the south. 
In the north, between the continental shelf and the abyss, is an extension called the Galicia Bank, a plateau that also contains the Porto, Vigo, and Vasco da Gama seamounts, which form the Galicia interior basin. The southern border of these features is marked by Nazaré Canyon, which splits the continental shelf and leads directly into the abyss. Rivers The major rivers flow through the wide valleys between the mountain systems. These are the Ebro, Douro, Tagus, Guadiana and Guadalquivir. All rivers in the Iberian Peninsula are subject to seasonal variations in flow. The Tagus is the longest river on the peninsula and, like the Douro, flows westwards with its lower course in Portugal. The Guadiana river bends southwards and forms the border between Spain and Portugal in the last stretch of its course. Mountains The terrain of the Iberian Peninsula is largely mountainous. The major mountain systems are: The Pyrenees and their foothills, the Pre-Pyrenees, crossing the isthmus of the peninsula so completely as to allow no passage except by mountain road, trail, coastal road or tunnel. Aneto in the Maladeta massif, at 3,404 m, is the highest point The Cantabrian Mountains along the northern coast with the massive Picos de Europa. Torre de Cerredo, at 2,648 m, is the highest point The Galicia/Trás-os-Montes Massif in the Northwest is made up of very old heavily eroded rocks. Pena Trevinca, at 2,127 m, is the highest point The Sistema Ibérico, a complex system at the heart of the peninsula, in its central/eastern region. It contains a great number of ranges and divides the watershed of the Tagus, Douro and Ebro rivers. Moncayo, at 2,313 m, is the highest point The Sistema Central, dividing the Iberian Plateau into a northern and a southern half and stretching into Portugal (where the highest point of Continental Portugal (1,993 m) is located in the Serra da Estrela). 
Pico Almanzor in the Sierra de Gredos is the highest point, at 2,592 m The Montes de Toledo, which also stretches into Portugal from the La Mancha natural region at the eastern end. Its highest point, at 1,603 m, is La Villuerca in the Sierra de Villuercas, Extremadura The Sierra Morena, which divides the watershed of the Guadiana and Guadalquivir rivers. At 1,332 m, Bañuela is the highest point The Baetic System, which stretches between Cádiz and Gibraltar and northeast towards Alicante Province. It is divided into three subsystems: Prebaetic System, which begins west of the Sierra Sur de Jaén, reaching the Mediterranean Sea shores in Alicante Province. La Sagra is the highest point at 2,382 m. Subbaetic System, which is in a central position within the Baetic Systems, stretching from Cape Trafalgar in Cádiz Province across Andalusia to the Region of Murcia. The highest point, at , is Peña de la Cruz in Sierra Arana. Penibaetic System, located in the far southeastern area stretching between Gibraltar across the Mediterranean coastal Andalusian provinces. It includes the highest point in the peninsula, the 3,478 m high Mulhacén in the Sierra Nevada. Geology The Iberian Peninsula contains rocks of every geological period from the Ediacaran to the Recent, and almost every kind of rock is represented. World-class mineral deposits can also be found there. The core of the Iberian Peninsula consists of a Hercynian cratonic block known as the Iberian Massif. On the northeast, this is bounded by the Pyrenean fold belt, and on the southeast it is bounded by the Baetic System. These two fold chains are part of the Alpine belt. To the west, the peninsula is delimited by the continental boundary formed by the magma-poor opening of the Atlantic Ocean. The Hercynian Foldbelt is mostly buried by Mesozoic and Tertiary cover rocks to the east, but nevertheless outcrops through the Sistema Ibérico and the Catalan Mediterranean System.
The Iberian Peninsula features one of the largest lithium deposit belts in Europe (an otherwise relatively scarce resource in the continent), scattered along the Iberian Massif's and . Also in the Iberian Massif, and similarly to other Hercynian blocks in Europe, the peninsula hosts some uranium deposits, largely located in the Central Iberian Zone unit. The Iberian Pyrite Belt, located in the SW quadrant of the Peninsula, ranks among the most important volcanogenic massive sulphide districts on Earth, and it has been exploited
footing. History The theorem was first proved by Bernard Bolzano in 1817. Bolzano used the following formulation of the theorem: Let f, φ be continuous functions on the interval between α and β such that f(α) < φ(α) and f(β) > φ(β). Then there is an x between α and β such that f(x) = φ(x). The equivalence between this formulation and the modern one can be shown by setting φ to the appropriate constant function. Augustin-Louis Cauchy provided the modern formulation and a proof in 1821. Both were inspired by the goal of formalizing the analysis of functions and the work of Joseph-Louis Lagrange. The idea that continuous functions possess the intermediate value property has an earlier origin. Simon Stevin proved the intermediate value theorem for polynomials (using a cubic as an example) by providing an algorithm for constructing the decimal expansion of the solution. The algorithm iteratively subdivides the interval into 10 parts, producing an additional decimal digit at each step of the iteration. Before the formal definition of continuity was given, the intermediate value property was given as part of the definition of a continuous function. Proponents include Louis Arbogast, who assumed the functions to have no jumps, satisfy the intermediate value property and have increments whose sizes corresponded to the sizes of the increments of the variable. Earlier authors held the result to be intuitively obvious and requiring no proof. The insight of Bolzano and Cauchy was to define a general notion of continuity (in terms of infinitesimals in Cauchy's case and using real inequalities in Bolzano's case), and to provide a proof based on such definitions. Generalizations The intermediate value theorem is closely linked to the topological notion of connectedness and follows from the basic properties of connected sets in metric spaces and connected subsets of R in particular: If X and Y are metric spaces, f : X → Y is a continuous map, and E ⊂ X is a connected subset, then f(E) is connected.
(*) A subset E ⊂ R is connected if and only if it satisfies the following property: x, y ∈ E, x < r < y ⟹ r ∈ E. (**) In fact, connectedness is a topological property and (*) generalizes to topological spaces: If X and Y are topological spaces, f : X → Y is a continuous map, and X is a connected space, then f(X) is connected. The preservation of connectedness under continuous maps can be thought of as a generalization of the intermediate value theorem, a property of real valued functions of a real variable, to continuous functions in general spaces. Recall the first version of the intermediate value theorem, stated previously: The intermediate value theorem is an immediate consequence of these two properties of connectedness: The intermediate value theorem generalizes in a natural way: Suppose that X is a connected topological space and (Y, <) is a totally ordered set equipped with the order topology, and let f : X → Y be a continuous map. If a and b are two points in X and u is a point in Y lying between f(a) and f(b) with respect to <, then there exists c in X such that f(c) = u. The original theorem is recovered by noting that R is connected and that its natural topology is the order topology. The Brouwer fixed-point theorem is a related theorem that, in one dimension, gives a special case of the intermediate value theorem. Converse is false A Darboux function is a real-valued function f that has the "intermediate value property," i.e., that satisfies the conclusion of the intermediate value theorem: for any two values a and b in the domain of f, and any y between f(a) and f(b), there is some c between a and b with f(c) = y. The intermediate value theorem says that every continuous function is a Darboux function. However, not every Darboux function is continuous; i.e., the converse of the intermediate value theorem is false. As an example,
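A classical function with the Darboux property that nevertheless fails to be continuous is f(x) = sin(1/x) for x ≠ 0, with f(0) = 0: it is discontinuous at 0 yet attains every value in [−1, 1] on any interval around 0. This is offered as one standard illustration (the helper names below are mine, chosen for the sketch), checked numerically in Python:

```python
import math

# Classic Darboux-but-discontinuous function:
# f(x) = sin(1/x) for x != 0, and f(0) = 0.
def f(x):
    return math.sin(1.0 / x) if x != 0 else 0.0

# For any target y in [-1, 1], f attains y at points arbitrarily close to 0:
# sin(1/x) = y whenever 1/x = asin(y) + 2*pi*k, i.e. x = 1/(asin(y) + 2*pi*k).
def point_where_f_equals(y, k):
    return 1.0 / (math.asin(y) + 2.0 * math.pi * k)

x = point_where_f_equals(0.5, 1000)  # a solution of f(x) = 0.5 very near 0
```

Because such points exist in every neighbourhood of 0, f satisfies the intermediate value property on any interval containing 0, yet f has no limit at 0, so it is not continuous there.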
some point within the interval. This has two important corollaries: If a continuous function has values of opposite sign inside an interval, then it has a root in that interval (Bolzano's theorem). The image of a continuous function over an interval is itself an interval. Motivation This captures an intuitive property of continuous functions over the real numbers: given f continuous on [1, 2] with the known values f(1) = 3 and f(2) = 5, then the graph of y = f(x) must pass through the horizontal line y = 4 while x moves from 1 to 2. It represents the idea that the graph of a continuous function on a closed interval can be drawn without lifting a pencil from the paper. Theorem The intermediate value theorem states the following: Consider an interval I = [a, b] of real numbers and a continuous function f : I → R. Then Version I. if u is a number between f(a) and f(b), that is, min(f(a), f(b)) < u < max(f(a), f(b)), then there is a c ∈ (a, b) such that f(c) = u. Version II. the image set f(I) is also an interval, and it contains [min(f(a), f(b)), max(f(a), f(b))], Remark: Version II states that the set of function values has no gap. For any two function values c, d ∈ f(I) with c < d, even if they are outside the interval between f(a) and f(b), all points in the interval [c, d] are also function values, [c, d] ⊆ f(I). A subset of the real numbers with no internal gap is an interval. Version I is naturally contained in Version II. Relation to completeness The theorem depends on, and is equivalent to, the completeness of the real numbers. The intermediate value theorem does not apply to the rational numbers Q because gaps exist between rational numbers; irrational numbers fill those gaps. For example, the function f(x) = x² for x ∈ Q satisfies f(0) = 0 and f(2) = 4. However, there is no rational number x such that f(x) = 2, because √2 is an irrational number. Proof The theorem may be proven as a consequence of the completeness property of the real numbers as follows: We shall prove the first case, f(a) < u < f(b). The second case is similar. Let S be the set of all x ∈ [a, b] such that f(x) < u. Then S is non-empty since a is an element of S. Since S is non-empty and bounded above by b, by completeness, the supremum c = sup S exists.
That is, c is the smallest number that is greater than or equal to every member of S. We claim that f(c) = u. Fix some ε > 0. Since f is continuous, there
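The completeness-based argument has a constructive cousin in the bisection method (in the spirit of Stevin's base-10 subdivision described earlier, but halving instead of taking tenths): repeatedly shrinking the interval while keeping the endpoint values on opposite sides of u converges to a point where f takes the value u. A minimal Python sketch (the function and names are illustrative, not from the text):

```python
# Bisection: a constructive illustration of the intermediate value theorem
# for a continuous f on [a, b] with f(a) < u < f(b). Illustrative sketch only.
def ivt_bisect(f, a, b, u, tol=1e-12):
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) < u:
            lo = mid  # the crossing lies in [mid, hi]
        else:
            hi = mid  # the crossing lies in [lo, mid]
    return (lo + hi) / 2.0

# For instance, f(x) = x * x takes the value 2 somewhere on [0, 2]:
# at the irrational point sqrt(2), which is absent from the rationals.
c = ivt_bisect(lambda x: x * x, 0.0, 2.0, 2.0)
```

Each iteration halves the interval, so about 40 steps suffice for the tolerance above; continuity guarantees that the limit point actually satisfies f(c) = u.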
had launched numerous air raids against Iraqi air bases, destroying 47 jets (including Iraq's brand new Mirage F-1 fighter jets from France); this gave the Iranians air superiority over the battlefield while allowing them to monitor Iraqi troop movements. On 29 April, Iran launched the offensive. 70,000 Revolutionary Guard and Basij members struck on several axes—Bostan, Susangerd, the west bank of the Karun River, and Ahvaz. The Basij launched human wave attacks, which were followed up by the regular army and Revolutionary Guard support along with tanks and helicopters. Under heavy Iranian pressure, the Iraqi forces retreated. By 12 May, Iran had driven out all Iraqi forces from the Susangerd area. The Iranians captured several thousand Iraqi troops and a large number of tanks. Nevertheless, the Iranians took many losses as well, especially among the Basij. The Iraqis retreated to the Karun River, with only Khorramshahr and a few outlying areas remaining in their possession. Saddam ordered 70,000 troops to be placed around the city of Khorramshahr. The Iraqis created a hastily constructed defence line around the city and outlying areas. To discourage airborne commando landings, the Iraqis also placed metal spikes and destroyed cars in areas likely to be used as troop landing zones. Saddam Hussein even visited Khorramshahr in a dramatic gesture, swearing that the city would never be relinquished. However, Khorramshahr's only resupply point was across the Shatt al-Arab, and the Iranian air force began bombing the supply bridges to the city, while their artillery zeroed in on the besieged garrison. Liberation of Khorramshahr (Second Battle of Khorramshahr) In the early morning hours of 23 May 1982, the Iranians began the drive towards Khorramshahr across the Karun River. This part of Operation Beit ol-Moqaddas was spearheaded by the 77th Khorasan division with tanks along with the Revolutionary Guard and Basij.
The Iranians hit the Iraqis with destructive air strikes and massive artillery barrages, crossed the Karun River, captured bridgeheads, and launched human wave attacks towards the city. Saddam's defensive barricade collapsed; in less than 48 hours of fighting, the city fell and 19,000 Iraqis surrendered to the Iranians. A total of 10,000 Iraqis were killed or wounded in Khorramshahr, while the Iranians suffered 30,000 casualties. During the whole of Operation Beit ol-Moqaddas, 33,000 Iraqi soldiers were captured by the Iranians. State of Iraqi armed forces The fighting had battered the Iraqi military: its strength fell from 210,000 to 150,000 troops; over 20,000 Iraqi soldiers were killed and over 30,000 captured; two out of four active armoured divisions and at least three mechanised divisions fell to less than a brigade's strength; and the Iranians had captured over 450 tanks and armoured personnel carriers. The Iraqi Air Force was also left in poor shape: after losing up to 55 aircraft since early December 1981, they had only 100 intact fighter-bombers and interceptors. A defector who flew his MiG-21 to Syria in June 1982 revealed that the Iraqi Air Force had only three squadrons of fighter-bombers capable of mounting operations into Iran. The Iraqi Army Air Corps was in slightly better shape, and could still operate more than 70 helicopters. Despite this, the Iraqis still held 3,000 tanks, while Iran held 1,000. At this point, Saddam believed that his army was too demoralised and damaged to hold onto Khuzestan and major swathes of Iranian territory, and withdrew his remaining forces, redeploying them in defence along the border. However, his troops continued to occupy some key border areas of Iran, including the disputed territories that prompted his invasion, notably the Shatt al-Arab waterway.
In response to their failures against the Iranians in Khorramshahr, Saddam ordered the executions of Generals Juwad Shitnah and Salah al-Qadhi and Colonels Masa and al-Jalil. At least a dozen other high-ranking officers were also executed during this time. This became an increasingly common punishment for those who failed him in battle. International response in 1982 In April 1982, the rival Ba'athist regime in Syria, one of the few nations that supported Iran, closed the Kirkuk–Baniyas pipeline that had allowed Iraqi oil to reach tankers on the Mediterranean, reducing the Iraqi budget by $5 billion per month. Journalist Patrick Brogan wrote, "It appeared for a while that Iraq would be strangled economically before it was defeated militarily." Syria's closure of the Kirkuk–Baniyas pipeline left Iraq with the pipeline to Turkey as the only means of exporting oil, along with transporting oil by tanker truck to the port of Aqaba in Jordan. However, the Turkish pipeline had a capacity of only , which was insufficient to pay for the war. However, Saudi Arabia, Kuwait, and the other Gulf states saved Iraq from bankruptcy by providing it with an average of $60 billion in subsidies per year. Though Iraq had previously been hostile towards other Gulf states, "the threat of Persian fundamentalism was far more feared." They were especially inclined to fear Iranian victory after Ayatollah Khomeini declared monarchies to be illegitimate and an un-Islamic form of government. Khomeini's statement was widely received as a call to overthrow the Gulf monarchies. Journalists John Bulloch and Harvey Morris wrote: The virulent Iranian campaign, which at its peak seemed to be making the overthrow of the Saudi regime a war aim on a par with the defeat of Iraq, did have an effect on the Kingdom [of Saudi Arabia], but not the one the Iranians wanted: instead of becoming more conciliatory, the Saudis became tougher, more self-confident, and less prone to seek compromise. 
Saudi Arabia was said to provide Iraq with $1 billion per month starting in mid-1982. Iraq began receiving support from the United States and west European countries as well. Saddam was given diplomatic, monetary, and military support by the United States, including massive loans, political influence, and intelligence on Iranian deployments gathered by American spy satellites. The Iraqis relied heavily on American satellite footage and radar planes to detect Iranian troop movements, which enabled Iraq to move troops to the site before the battle. With Iranian success on the battlefield, the United States increased its support of the Iraqi government, supplying intelligence, economic aid, and dual-use equipment and vehicles, as well as normalizing its intergovernmental relations (which had been broken during the 1967 Six-Day War). President Ronald Reagan decided that the United States "could not afford to allow Iraq to lose the war to Iran", and that the United States "would do whatever was necessary to prevent Iraq from losing". In March 1982, Reagan signed National Security Study Memorandum (NSSM) 4-82—seeking "a review of U.S. policy toward the Middle East"—and in June Reagan signed a National Security Decision Directive (NSDD) co-written by NSC official Howard Teicher, which determined: "The United States could not afford to allow Iraq to lose the war to Iran." In 1982, Reagan removed Iraq from the list of countries "supporting terrorism" and sold weapons such as howitzers to Iraq via Jordan. France sold Iraq millions of dollars worth of weapons, including Gazelle helicopters, Mirage F-1 fighters, and Exocet missiles. Both the United States and West Germany sold Iraq dual-use pesticides and poisons that would be used to create chemical weapons and other weapons, such as Roland missiles. At the same time, the Soviet Union, angered with Iran for purging and destroying the communist Tudeh Party, sent large shipments of weapons to Iraq.
The Iraqi Air Force was replenished with Soviet, Chinese, and French fighter jets and attack/transport helicopters. Iraq also replenished their stocks of small arms and anti-tank weapons such as AK-47s and rocket-propelled grenades from its supporters. The depleted tank forces were replenished with more Soviet and Chinese tanks, and the Iraqis were reinvigorated in the face of the coming Iranian onslaught. Iran was portrayed as the aggressor, and would be seen as such until the 1990–1991 Persian Gulf War, when Iraq would be condemned. Iran did not have the money to purchase arms to the same extent as Iraq did. They counted on China, North Korea, Libya, Syria, and Japan for supplying anything from weapons and munitions to logistical and engineering equipment. Ceasefire proposal On 20 June 1982, Saddam announced that he wanted to sue for peace and proposed an immediate ceasefire and withdrawal from Iranian territory within two weeks. Khomeini responded by saying the war would not end until a new government was installed in Iraq and reparations paid. He proclaimed that Iran would invade Iraq and would not stop until the Ba'ath regime was replaced by an Islamic republic. Iran supported a government in exile for Iraq, the Supreme Council of the Islamic Revolution in Iraq, led by exiled Iraqi cleric Mohammad Baqer al-Hakim, which was dedicated to overthrowing the Ba'ath party. They recruited POWs, dissidents, exiles, and Shias to join the Badr Brigade, the military wing of the organisation. The decision to invade Iraq was taken after much debate within the Iranian government. One faction, comprising Prime Minister Mir-Hossein Mousavi, Foreign Minister Ali Akbar Velayati, President Ali Khamenei, Army Chief of Staff General Ali Sayad Shirazi as well as Major General Qasem-Ali Zahirnejad, wanted to accept the ceasefire, as most of Iranian soil had been recaptured. 
In particular, Generals Shirazi and Zahirnejad were both opposed to the invasion of Iraq on logistical grounds, and stated they would consider resigning if "unqualified people continued to meddle with the conduct of the war". Of the opposing view was a hardline faction led by the clerics on the Supreme Defence Council, whose leader was the politically powerful speaker of the Majlis, Akbar Hashemi Rafsanjani. Iran also hoped that their attacks would ignite a revolt against Saddam's rule by the Shia and Kurdish population of Iraq, possibly resulting in his downfall. They were successful in doing so with the Kurdish population, but not the Shia. Iran had captured large quantities of Iraqi equipment (enough to create several tank battalions; Iran once again had 1,000 tanks) and also managed to clandestinely procure spare parts as well. At a cabinet meeting in Baghdad, Minister of Health Riyadh Ibrahim Hussein suggested that Saddam could step down temporarily as a way of easing Iran towards a ceasefire, and then afterwards would come back to power. Saddam, annoyed, asked if anyone else in the Cabinet agreed with the Health Minister's idea. When no one raised their hand in support, he escorted Riyadh Hussein to the next room, closed the door, and shot him with his pistol. Saddam returned to the room and continued with his meeting. Iran invades Iraq Iraqi tactics against Iranian invasion For the most part, Iraq remained on the defensive for the next five years, unable and unwilling to launch any major offensives, while Iran launched more than 70 offensives. Iraq's strategy changed from holding territory in Iran to denying Iran any major gains in Iraq (as well as holding onto disputed territories along the border). Saddam commenced a policy of total war, gearing most of his country towards defending against Iran. By 1988, Iraq was spending 40–75% of its GDP on military equipment.
Saddam had also more than doubled the size of the Iraqi army, from 200,000 soldiers (12 divisions and three independent brigades) to 500,000 (23 divisions and nine brigades). Iraq also began launching air raids against Iranian border cities, greatly increasing the practice by 1984. By the end of 1982, Iraq had been resupplied with new Soviet and Chinese materiel, and the ground war entered a new phase. Iraq used newly acquired T-55, T-62 and T-72 tanks (as well as Chinese copies), BM-21 truck-mounted rocket launchers, and Mi-24 helicopter gunships to prepare a Soviet-type three-line defence, replete with obstacles such as barbed wire, minefields, fortified positions and bunkers. The Combat Engineer Corps built bridges across water obstacles, laid minefields, erected earthen revetments, dug trenches, built machine-gun nests, and prepared new defence lines and fortifications. Iraq began to focus on using defence in depth to defeat the Iranians. Iraq created multiple static defence lines to bleed the Iranians through sheer size. When faced with a large Iranian attack, in which human waves would overrun Iraq's forward entrenched infantry defences, the Iraqis would often retreat, but their static defences would bleed the Iranians and channel them into certain directions, drawing them into traps or pockets. Iraqi air and artillery attacks would then pin the Iranians down, while tanks and mechanised infantry attacks using mobile warfare would push them back. Sometimes, the Iraqis would launch "probing attacks" into the Iranian lines to provoke them into launching their attacks sooner. While Iranian human wave attacks were successful against the dug-in Iraqi forces in Khuzestan, they had trouble breaking through Iraq's defence in depth lines. Iraq had a logistical advantage in their defence: the front was located near the main Iraqi bases and arms depots, allowing their army to be efficiently supplied.
By contrast, the front in Iran was a considerable distance away from the main Iranian bases and arms depots, and as such, Iranian troops and supplies had to travel through mountain ranges before arriving at the front. In addition, Iran's military power was weakened once again by large purges in 1982, resulting from another supposed coup attempt. Operation Ramadan (First Battle of Basra) The Iranian generals wanted to launch an all-out attack on Baghdad and seize it before their weapon shortages worsened further. However, that was rejected as unfeasible, and the decision was made to capture one area of Iraq after the other in the hopes that a series of blows delivered foremost by the Revolutionary Guards Corps would force a political solution to the war (including Iraq withdrawing completely from the disputed territories along the border). The Iranians planned their attack in southern Iraq, near Basra. Called Operation Ramadan, it involved over 180,000 troops from both sides, and was one of the largest land battles since World War II. Iranian strategy dictated that they launch their primary attack on the weakest point of the Iraqi lines; however, the Iraqis were informed of Iran's battle plans and moved all of their forces to the area the Iranians planned to attack. The Iraqis were equipped with tear gas to use against the enemy, which would be the first major use of chemical warfare during the conflict, throwing an entire attacking division into chaos. Over 100,000 Revolutionary Guards and Basij volunteer forces charged towards the Iraqi lines. The Iraqi troops had entrenched themselves in formidable defences, and had set up a network of bunkers and artillery positions. The Basij used human waves, and were even used to bodily clear the Iraqi minefields and allow the Revolutionary Guards to advance. Combatants came so close to one another that Iranians were able to board Iraqi tanks and throw grenades inside the hulls.
By the eighth day, the Iranians had gained inside Iraq and had taken several causeways. Iran's Revolutionary Guards also used the T-55 tanks they had captured in earlier battles. However, the attacks came to a halt and the Iranians turned to defensive measures. Seeing this, Iraq used their Mi-25 helicopters, along with Gazelle helicopters armed with Euromissile HOT, against columns of Iranian mechanised infantry and tanks. These "hunter-killer" teams of helicopters, which had been formed with the help of East German advisors, proved to be very costly for the Iranians. Aerial dogfights occurred between Iraqi MiGs and Iranian F-4 Phantoms. On 16 July, Iran tried again further north and managed to push the Iraqis back. However, only from Basra, the poorly equipped Iranian forces were surrounded on three sides by Iraqis with heavy weaponry. Some were captured, while many were killed. Only a last-minute attack by Iranian AH-1 Cobra helicopters stopped the Iraqis from routing the Iranians. Three more similar attacks occurred around the Khorramshahr-Baghdad road area towards the end of the month, but none were significantly successful. Iraq had concentrated three armoured divisions, the 3rd, 9th, and 10th, as a counter-attack force to attack any penetrations. They were successful in defeating the Iranian breakthroughs, but suffered heavy losses. The 9th Armoured Division in particular had to be disbanded, and was never reformed. The total casualty toll had grown to include 80,000 soldiers and civilians. 400 Iranian tanks and armored vehicles were destroyed or abandoned, while Iraq lost no fewer than 370 tanks. Fighting during the rest of 1982 After Iran's failure in Operation Ramadan, they carried out only a few smaller attacks. Iran launched two limited offensives aimed at reclaiming the Sumar Hills and isolating the Iraqi pocket at Naft shahr at the international border, both of which were part of the disputed territories still under Iraqi occupation. 
They then aimed to capture the Iraqi border town of Mandali. They planned to take the Iraqis by surprise using Basij militiamen, army helicopters, and some armoured forces, then stretch their defences and possibly break through them to open a road to Baghdad for future exploitation. During Operation Muslim ibn Aqil (1–7 October), Iran recovered of disputed territory straddling the international border and reached the outskirts of Mandali before being stopped by Iraqi helicopter and armoured attacks. During Operation Muharram (1–21 November), the Iranians captured part of the Bayat oilfield with the help of their fighter jets and helicopters, destroying 105 Iraqi tanks, 70 APCs, and 7 planes with few losses. They nearly breached the Iraqi lines but failed to capture Mandali after the Iraqis sent reinforcements, including brand new T-72 tanks, which possessed armour that could not be pierced from the front by Iranian TOW missiles. The Iranian advance was also impeded by heavy rains. 3,500 Iraqis and an unknown number of Iranians died, with only minor gains for Iran. 1983–84: Strategic stalemate and war of attrition After the failure of the 1982 summer offensives, Iran believed that a major effort along the entire breadth of the front would yield victory. During the course of 1983, the Iranians launched five major assaults along the front, though none achieved substantial success, as the Iranians staged more massive "human wave" attacks. By this time, it was estimated that no more than 70 Iranian fighter aircraft were still operational at any given time; Iran had its own helicopter repair facilities, left over from before the revolution, and thus often used helicopters for close air support. Iranian fighter pilots had superior training compared to their Iraqi counterparts (as most had received training from US officers before the 1979 revolution) and would continue to dominate in combat. 
However, aircraft shortages, the size of defended territory/airspace, and American intelligence supplied to Iraq allowed the Iraqis to exploit gaps in Iranian airspace. Iraqi air campaigns met little opposition, striking over half of Iran, as the Iraqis were able to gain air superiority towards the end of the war. Operation Before the Dawn In Operation Before the Dawn, launched 6 February 1983, the Iranians shifted focus from the southern to the central and northern sectors. Employing 200,000 "last reserve" Revolutionary Guard troops, Iran attacked along a stretch near al-Amarah, Iraq, about southeast of Baghdad, in an attempt to reach the highways connecting northern and southern Iraq. The attack was stalled by of hilly escarpments, forests, and river torrents blanketing the way to al-Amarah, but the Iraqis could not force the Iranians back. Iran directed artillery on Basra, Al Amarah, and Mandali. The Iranians suffered a large number of casualties clearing minefields and breaching Iraqi anti-tank mines, which Iraqi engineers were unable to replace. After this battle, Iran reduced its use of human wave attacks, though they still remained a key tactic as the war went on. Further Iranian attacks were mounted in the Mandali–Baghdad north-central sector in April 1983, but were repelled by Iraqi mechanised and infantry divisions. Casualties were high, and by the end of 1983, an estimated 120,000 Iranians and 60,000 Iraqis had been killed. Iran, however, held the advantage in the war of attrition; in 1983, Iran had an estimated population of 43.6 million to Iraq's 14.8 million, and the discrepancy continued to grow throughout the war. Dawn Operations From early 1983 to 1984, Iran launched a series of four Valfajr (Dawn) Operations (which eventually numbered 10). During Operation Dawn-1, in early February 1983, 50,000 Iranian forces attacked westward from Dezful and were confronted by 55,000 Iraqi forces.
The Iranian objective was to cut off the road from Basra to Baghdad in the central sector. The Iraqis carried out 150 air sorties against the Iranians, and even bombed Dezful, Ahvaz, and Khorramshahr in retribution. The Iraqi counterattack was broken up by Iran's 92nd Armoured Division. During Operation Dawn-2, the Iranians directed insurgency operations by proxy in April 1983 by supporting the Kurds in the north. With Kurdish support, the Iranians attacked on 23 July 1983, capturing the Iraqi town of Haj Omran and holding it against an Iraqi poison gas counteroffensive. This operation incited Iraq to later conduct indiscriminate chemical attacks against the Kurds. The Iranians attempted to further exploit activities in the north on 30 July 1983, during Operation Dawn-3. Iran saw an opportunity to sweep away the Iraqi forces controlling the roads between the Iranian mountain border towns of Mehran, Dehloran, and Elam. Iraq launched airstrikes and equipped attack helicopters with chemical warheads; while ineffective, this demonstrated both the Iraqi general staff's and Saddam's increasing interest in using chemical weapons. In the end, 17,000 had been killed on both sides, with no gain for either country. The focus of Operation Dawn-4 in September 1983 was the northern sector in Iranian Kurdistan. Three Iranian regular divisions, the Revolutionary Guard, and Kurdistan Democratic Party (KDP) elements amassed in Marivan and Sardasht in a move to threaten the major Iraqi city of Suleimaniyah. Iran's strategy was to press Kurdish tribes to occupy the Banjuin Valley, which lay close to Suleimaniyah and the oilfields of Kirkuk. To stem the tide, Iraq deployed Mi-8 attack helicopters equipped with chemical weapons and executed 120 sorties against the Iranian force, halting its advance inside Iraqi territory. 5,000 Iranians and 2,500 Iraqis died.
Iran regained territory in the north, captured Iraqi land, and took 1,800 Iraqi prisoners, while Iraq abandoned large quantities of valuable weapons and war materiel in the field. Iraq responded to these losses by firing a series of SCUD-B missiles into the cities of Dezful, Masjid Soleiman, and Behbehan. Iran's use of artillery against Basra while the battles in the north raged created multiple fronts, which effectively confused and wore down Iraq.

Iran's change in tactics

Previously, the Iranians had outnumbered the Iraqis on the battlefield, but Iraq expanded their military draft (pursuing a policy of total war), and by 1984, the armies were equal in size. By 1986, Iraq had twice as many soldiers as Iran. By 1988, Iraq would have 1 million soldiers, giving it the fourth largest army in the world. Some of their equipment, such as tanks, outnumbered the Iranians' by at least five to one. Iranian commanders, however, remained more tactically skilled. After the Dawn operations, Iran attempted to change tactics. In the face of increasing Iraqi defense in depth, as well as increased armaments and manpower, Iran could no longer rely on simple human wave attacks. Iranian offensives became more complex and involved extensive maneuver warfare using primarily light infantry. Iran launched frequent, smaller offensives to slowly gain ground and deplete the Iraqis through attrition. They wanted to drive Iraq into economic failure by wasting money on weapons and war mobilization, and to deplete Iraq's smaller population by bleeding it dry, in addition to creating an anti-government insurgency (they were successful in Kurdistan, but not southern Iraq). Iran also supported their attacks with heavy weaponry when possible and with better planning (although the brunt of the battles still fell to the infantry). The Army and Revolutionary Guards worked together better as their tactics improved.
Human wave attacks became less frequent (although still used). To negate the Iraqi advantage of defense in depth, static positions, and heavy firepower, Iran began to focus on fighting in areas where the Iraqis could not use their heavy weaponry, such as marshes, valleys, and mountains, and frequently used infiltration tactics. Iran began training troops in infiltration, patrolling, night-fighting, marsh warfare, and mountain warfare. They also began training thousands of Revolutionary Guard commandos in amphibious warfare, as southern Iraq is marshy and filled with wetlands. Iran used speedboats to cross the marshes and rivers in southern Iraq and landed troops on the opposing banks, where they would dig in and set up pontoon bridges across the rivers and wetlands to allow heavy troops and supplies to cross. Iran also learned to integrate foreign guerrilla units as part of their military operations. On the northern front, Iran began working heavily with the Peshmerga, Kurdish guerrillas. Iranian military advisors organised the Kurds into raiding parties of 12 guerrillas, which would attack Iraqi command posts, troop formations, infrastructure (including roads and supply lines), and government buildings. The oil refineries of Kirkuk became a favourite target, and were often hit by homemade Peshmerga rockets.

Battle of the Marshes

By 1984, the Iranian ground forces were reorganised well enough for the Revolutionary Guard to start Operation Kheibar, which lasted from 24 February to 19 March. On 15 February 1984, the Iranians began launching attacks against the central section of the front, where the Second Iraqi Army Corps was deployed: 250,000 Iraqis faced 250,000 Iranians. The goal of this new major offensive was the capture of the Basra–Baghdad highway, cutting off Basra from Baghdad and setting the stage for an eventual attack upon the city. The Iraqi high command had assumed that the marshlands above Basra were natural barriers to attack, and had not reinforced them.
The marshes negated the Iraqi advantage in armor, and absorbed artillery rounds and bombs. Prior to the attack, Iranian commandos on helicopters had landed behind Iraqi lines and destroyed Iraqi artillery. Iran launched two preliminary attacks prior to the main offensive, Operations Dawn 5 and Dawn 6. These saw the Iranians attempting to capture Kut al-Imara, Iraq, and sever the highway connecting Baghdad to Basra, which would impede Iraqi coordination of supplies and defences. Iranian troops crossed the river on motorboats in a surprise attack, though they fell short of the highway. Operation Kheibar began on 24 February with Iranian infantrymen crossing the Hawizeh Marshes using motorboats and transport helicopters in an amphibious assault. The Iranians attacked the vital oil-producing Majnoon Island by landing troops via helicopters and severing the communication lines between Amareh and Basra. They then continued the attack towards Qurna. By 27 February, they had captured the island, but suffered catastrophic helicopter losses to the IrAF. On that day, a massive array of Iranian helicopters transporting Pasdaran troops was intercepted by Iraqi combat aircraft (MiGs, Mirages and Sukhois). In what was essentially an aerial slaughter, Iraqi jets shot down 49 of the 50 Iranian helicopters. At times, fighting took place in deep water. Iraq ran live electrical cables through the water, electrocuting numerous Iranian troops and then displaying their corpses on state television. By 29 February, the Iranians had reached the outskirts of Qurna and were closing in on the Baghdad–Basra highway. They had broken out of the marshes and returned to open terrain, where they were confronted by conventional Iraqi weapons, including artillery, tanks, air power, and mustard gas. 1,200 Iranian soldiers were killed in the counter-attack. The Iranians retreated back to the marshes, though they still held onto them along with Majnoon Island.
The Battle of the Marshes saw an Iraqi defence that had been under continuous strain since 15 February; the defenders were relieved by their use of chemical weapons and defence in depth, in which they layered defensive lines: even if the Iranians broke through the first line, they were usually unable to break through the second due to exhaustion and heavy losses. The Iraqis also relied heavily on Mi-24 Hind helicopters to "hunt" the Iranian troops in the marshes, and at least 20,000 Iranians were killed in the marsh battles. Iran used the marshes as a springboard for future attacks and infiltrations. Four years into the war, the human cost to Iran had been 170,000 combat fatalities and 340,000 wounded. Iraqi combat fatalities were estimated at 80,000 with 150,000 wounded.

"Tanker War" and the "War of the Cities"

Unable to launch successful ground attacks against Iran, Iraq used its now expanded air force to carry out strategic bombing against Iranian shipping, economic targets, and cities in order to damage Iran's economy and morale. Iraq also wanted to provoke Iran into doing something that would cause the superpowers to be directly involved in the conflict on the Iraqi side.

Attacks on shipping

The so-called "Tanker War" started when Iraq attacked the oil terminal and oil tankers at Kharg Island in early 1984. Iraq's aim in attacking Iranian shipping was to provoke the Iranians to retaliate with extreme measures, such as closing the Strait of Hormuz to all maritime traffic, thereby bringing American intervention; the United States had threatened several times to intervene if the Strait of Hormuz were closed. As a result, the Iranians limited their retaliatory attacks to Iraqi shipping, leaving the strait open to general passage. Iraq declared that all ships going to or from Iranian ports in the northern zone of the Persian Gulf were subject to attack.
They used F-1 Mirage, Super Etendard, Mig-23, Su-20/22, and Super Frelon helicopters armed with Exocet anti-ship missiles as well as Soviet-made air-to-surface missiles to enforce their threats. Iraq repeatedly bombed Iran's main oil export facility on Kharg Island, causing increasingly heavy damage. As a first response to these attacks, Iran attacked a Kuwaiti tanker carrying Iraqi oil near Bahrain on 13 May 1984, as well as a Saudi tanker in Saudi waters on 16 May. Because Iraq had become landlocked during the course of the war, they had to rely on their Arab allies, primarily Kuwait, to transport their oil. Iran attacked tankers carrying Iraqi oil from Kuwait, later attacking tankers from any Persian Gulf state supporting Iraq. Attacks on ships of noncombatant nations in the Persian Gulf sharply increased thereafter, with both nations attacking oil tankers and merchant ships of neutral nations in an effort to deprive their opponent of trade. The Iranian attacks against Saudi shipping led to Saudi F-15s shooting down a pair of F-4 Phantom II fighters on 5 June 1984. The air and small-boat attacks, however, did little damage to Persian Gulf state economies, and Iran moved its shipping port to Larak Island in the Strait of Hormuz. The Iranian Navy imposed a naval blockade of Iraq, using its British-built frigates to stop and inspect any ships thought to be trading with Iraq. They operated with virtual impunity, as Iraqi pilots had little training in hitting naval targets. Some Iranian warships attacked tankers with ship-to-ship missiles, while others used their radars to guide land-based anti-ship missiles to their targets. Iran began to rely on its new Revolutionary Guard's navy, which used Boghammar speedboats fitted with rocket launchers and heavy machine guns. These speedboats would launch surprise attacks against tankers and cause substantial damage. 
Iran also used F-4 Phantom II fighters and helicopters to launch Maverick missiles and unguided rockets at tankers. The U.S. Navy frigate USS Stark was struck on 17 May 1987 by two Exocet anti-ship missiles fired from an Iraqi F-1 Mirage plane. The missiles had been fired at about the time the plane was given a routine radio warning by Stark. The frigate did not detect the missiles with radar, and warning was given by the lookout only moments before they struck. Both missiles hit the ship, and one exploded in crew quarters, killing 37 sailors and wounding 21. Lloyd's of London, a British insurance market, estimated that the Tanker War damaged 546 commercial vessels and killed about 430 civilian sailors. The largest portion of the attacks was directed by Iraq against vessels in Iranian waters, with the Iraqis launching three times as many attacks as the Iranians. But Iranian speedboat attacks on Kuwaiti shipping led Kuwait to formally petition foreign powers on 1 November 1986 to protect its shipping. The Soviet Union agreed to charter tankers starting in 1987, and the United States Navy offered to provide protection for foreign tankers reflagged and flying the U.S. flag starting 7 March 1987 in Operation Earnest Will. Neutral tankers shipping to Iran were not protected by Earnest Will, resulting in reduced foreign tanker traffic to Iran, since those ships risked Iraqi air attack. Iran accused the United States of helping Iraq. During the course of the war, Iran attacked two Soviet merchant ships. Seawise Giant, the largest ship ever built, was struck by Iraqi Exocet missiles as it was carrying Iranian crude oil out of the Persian Gulf.

Attacks on cities

Meanwhile, Iraq's air force also began carrying out strategic bombing raids against Iranian cities.
While Iraq had launched numerous attacks with aircraft and missiles against border cities from the beginning of the war, along with sporadic raids on Iran's main cities, this was the first systematic strategic bombing that Iraq carried out during the war. This would become known as the "War of the Cities". With the help of the USSR and the West, Iraq's air force had been rebuilt and expanded. Meanwhile, Iran, due to sanctions and a lack of spare parts, had heavily curtailed its air force operations. Iraq used Tu-22 Blinder and Tu-16 Badger strategic bombers to carry out long-range high-speed raids on Iranian cities, including Tehran. Fighter-bombers such as the MiG-25 Foxbat and Su-22 Fitter were used against smaller or shorter-range targets, as well as to escort the strategic bombers. Civilian and industrial targets were hit, and each successful raid inflicted economic damage. In response, the Iranians deployed their F-4 Phantoms to combat the Iraqis, and eventually deployed F-14s as well. By 1986, Iran had also heavily expanded its air defense network to relieve the pressure on the air force. Later in the war, Iraqi raids consisted primarily of indiscriminate missile attacks, while air attacks were reserved for fewer, more important targets. Starting in 1987, Saddam also ordered several chemical attacks on civilian targets in Iran, such as the town of Sardasht. Iran also launched several retaliatory air raids on Iraq, while primarily shelling border cities such as Basra. Iran also bought some Scud missiles from Libya, and launched them against Baghdad. These too inflicted damage upon Iraq. On 7 February 1984, during the first war of the cities, Saddam ordered his air force to attack eleven Iranian cities; bombardments ceased on 22 February 1984. Though Saddam intended the attacks to demoralise Iran and force it to negotiate, they had little effect, and Iran quickly repaired the damage.
Moreover, Iraq's air force took heavy losses, and Iran struck back, hitting Baghdad and other Iraqi cities. The attacks resulted in tens of thousands of civilian casualties on both sides, and became known as the first "war of the cities". It was estimated that 1,200 Iranian civilians were killed during the raids in February alone. There would be five such major exchanges throughout the course of the war, and multiple minor ones. While interior cities such as Tehran, Tabriz, Qom, Isfahan and Shiraz received numerous raids, the cities of western Iran suffered the most.

Strategic situation in 1984

By 1984, Iran's losses were estimated to be 300,000 soldiers, while Iraq's losses were estimated to be 150,000. Foreign analysts agreed that both Iran and Iraq failed to use their modern equipment properly, and both sides failed to carry out the kind of modern military assaults that could win the war. Both sides also abandoned equipment on the battlefield because their technicians were unable to carry out repairs. Iran and Iraq showed little internal coordination on the battlefield, and in many cases units were left to fight on their own. As a result, by the end of 1984, the war was a stalemate. One limited offensive Iran launched, Dawn 7, took place from 18 to 25 October 1984, when Iran recaptured the Iranian city of Mehran, which had been occupied by the Iraqis since the beginning of the war.

1985–86: Offensives and retreats

By 1985, Iraqi armed forces were receiving financial support from Saudi Arabia, Kuwait, and other Persian Gulf states, and were making substantial arms purchases from the Soviet Union, China, and France. For the first time since early 1980, Saddam launched new offensives. On 6 January 1986, the Iraqis launched an offensive attempting to retake Majnoon Island. However, they quickly became bogged down in a stalemate against 200,000 Iranian infantrymen, reinforced by amphibious divisions. They nevertheless managed to gain a foothold in the southern part of the island.
Iraq also carried out another "war of the cities" between 12 and 14 March, hitting up to 158 targets in over 30 towns and cities, including Tehran. Iran responded by launching 14 Scud missiles for the first time, purchased from Libya. More Iraqi air attacks were carried out in August, resulting in hundreds of additional civilian casualties. Iraqi attacks against both Iranian and neutral oil tankers in Iranian waters continued, with Iraq carrying out 150 airstrikes using French-bought Super Etendard and Mirage F-1 jets, as well as Super Frelon helicopters armed with Exocet missiles.

Operation Badr

The Iraqis attacked again on 28 January 1985; they were defeated, and the Iranians retaliated on 11 March 1985 with a major offensive directed against the Baghdad–Basra highway (one of the few major offensives conducted in 1985), codenamed Operation Badr (after the Battle of Badr, Muhammad's first military victory over the Meccans). Ayatollah Khomeini urged Iranians on, declaring:

It is our belief that Saddam wishes to return Islam to blasphemy and polytheism...if America becomes victorious...and grants victory to Saddam, Islam will receive such a blow that it will not be able to raise its head for a long time...The issue is one of Islam versus blasphemy, and not of Iran versus Iraq.

This operation was similar to Operation Kheibar, though it involved more planning. Iran used 100,000 troops, with 60,000 more in reserve. They assessed the marshy terrain, plotted points where they could land tanks, and constructed pontoon bridges across the marshes. The Basij forces were also equipped with anti-tank weapons. The ferocity of the Iranian offensive broke through the Iraqi lines. The Revolutionary Guard, with the support of tanks and artillery, broke through north of Qurna on 14 March. That same night, 3,000 Iranian troops reached and crossed the Tigris River using pontoon bridges and captured part of the Baghdad–Basra Highway 6, which they had failed to achieve in Operations Dawn 5 and 6.
Saddam responded by launching chemical attacks against the Iranian positions along the highway and by initiating the aforementioned second "war of the cities", with an air and missile campaign against twenty to thirty Iranian population centres, including Tehran. Under General Sultan Hashim Ahmad al-Tai and General Jamal Zanoun (both considered to be among Iraq's most skilled commanders), the Iraqis launched air attacks against the Iranian positions and pinned them down. They then launched a pincer attack using mechanized infantry and heavy artillery. Chemical weapons were used, and the Iraqis also flooded Iranian trenches with specially constructed pipes delivering water from the Tigris River. The Iranians retreated back to the Hoveyzeh marshes while being attacked by helicopters, and the highway was recaptured by the Iraqis. Operation Badr resulted in 10,000–12,000 Iraqi casualties and 15,000 Iranian ones.

Strategic situation at the beginning of 1986

The failure of the human wave attacks in earlier years had prompted Iran to develop a better working relationship between the Army and the Revolutionary Guard and to mould the Revolutionary Guard units into a more conventional fighting force. To combat Iraq's use of chemical weapons, Iran began producing an antidote. Iran also created and fielded its own homemade drones, the Mohajer-1s, fitted with six RPG-7s to launch attacks. They were used primarily for observation, flying up to 700 sorties. For the rest of 1986, and until the spring of 1988, the Iranian Air Force's efficiency in air defence increased, with weapons being repaired or replaced and new tactical methods being used. For example, the Iranians would loosely integrate their SAM sites and interceptors to create "killing fields" in which dozens of Iraqi planes were lost (which was reported in the West as the Iranian Air Force using F-14s as "mini-AWACS").
The Iraqi Air Force reacted by increasing the sophistication of its equipment, incorporating modern electronic countermeasure pods, decoys such as chaff and flares, and anti-radiation missiles. Due to its heavy losses in the last war of the cities, Iraq reduced its use of aerial attacks on Iranian cities. Instead, it launched Scud missiles, which the Iranians could not stop. Since the range of the Scud missile was too short to reach Tehran, the Iraqis, with the help of East German engineers, converted their Scuds into al-Hussein missiles, cutting them into sections and rejoining them to extend the range. Iran responded to these attacks with its own Scud missiles. In contrast to the extensive foreign help reaching Iraq, Iranian attacks were severely hampered by shortages of weaponry, particularly heavy weapons, as large amounts had been lost during the war. Iran still managed to maintain 1,000 tanks (often by capturing Iraqi ones) and additional artillery, but many needed repairs to be operational. However, by this time Iran managed to procure spare parts from various sources, helping to restore some weapons. It secretly imported some weapons, such as RBS-70 anti-aircraft MANPADS. In an exception to the United States' support for Iraq, in exchange for Iran using its influence to help free Western hostages in Lebanon, the United States secretly sold Iran some limited supplies (in Ayatollah Rafsanjani's postwar interview, he stated that during the period when Iran was succeeding, the United States briefly supported Iran, then shortly afterwards began helping Iraq again). Iran managed to obtain some advanced weapons, such as anti-tank TOW missiles, which worked better than rocket-propelled grenades. Iran later reverse-engineered and produced those weapons itself. All of this almost certainly increased the effectiveness of Iranian forces, although it did not reduce the human cost of their attacks.
First Battle of al-Faw

On the night of 10–11 February 1986, the Iranians launched Operation Dawn 8, in which 30,000 troops comprising five Army divisions and men from the Revolutionary Guard and Basij advanced in a two-pronged offensive to capture the al-Faw peninsula in southern Iraq, the only area touching the Persian Gulf. The capture of al-Faw and Umm Qasr was a major goal for Iran. Iran began with a feint attack against Basra, which was stopped by the Iraqis; meanwhile, an amphibious strike force landed at the foot of the peninsula. The resistance, consisting of several thousand poorly trained soldiers of the Iraqi Popular Army, fled or was defeated, and the Iranian forces set up pontoon bridges crossing the Shatt al-Arab, allowing 30,000 soldiers to cross in a short period of time. They drove north along the peninsula almost unopposed, capturing it after only 24 hours of fighting. Afterwards they dug in and set up defences. The sudden capture of al-Faw shocked the Iraqis, since they had thought it impossible for the Iranians to cross the Shatt al-Arab. On 12 February 1986, the Iraqis began a counter-offensive to retake al-Faw, which failed after a week of heavy fighting. On 24 February 1986, Saddam sent one of his best commanders, General Maher Abd al-Rashid, and the Republican Guard to begin a new offensive to recapture al-Faw. A new round of heavy fighting took place. However, their attempts again ended in failure, costing them many tanks and aircraft: their 15th mechanised division was almost completely wiped out. The capture of al-Faw and the failure of the Iraqi counter-offensives were blows to the Ba'ath regime's prestige, and led the Gulf countries to fear that Iran might win the war. Kuwait in particular felt menaced, with Iranian troops only a short distance away, and increased its support of Iraq accordingly.
In March 1986, the Iranians tried to follow up their success by attempting to take Umm Qasr, which would have completely severed Iraq from the Gulf and placed Iranian troops on the border with Kuwait. However, the offensive failed due to Iranian shortages of armor. By this time, 17,000 Iraqis and 30,000 Iranians had become casualties. The First Battle of al-Faw ended in March, but heavy combat operations lasted on the peninsula into 1988, with neither side being able to displace the other. The battle bogged down into a World War I-style stalemate in the marshes of the peninsula.

Battle of Mehran

Immediately after the Iranian capture of al-Faw, Saddam declared a new offensive against Iran, designed to drive deep into Iranian territory. The Iranian border city of Mehran, at the foot of the Zagros Mountains, was selected as the first target. On 15–19 May, the Iraqi Army's Second Corps, supported by helicopter gunships, attacked and captured the city. Saddam then offered to exchange Mehran for al-Faw. The Iranians rejected the offer. Iraq then continued the attack, attempting to push deeper into Iran. However, Iraq's attack was quickly warded off by Iranian AH-1 Cobra helicopters with TOW missiles, which destroyed numerous Iraqi tanks and vehicles. The Iranians built up their forces on the heights surrounding Mehran. On 30 June, using mountain warfare tactics, they launched their attack, recapturing the city by 3 July. Saddam ordered the Republican Guard to retake the city on 4 July, but their attack was ineffective. Iraqi losses were heavy enough to allow the Iranians to also capture territory inside Iraq, and depleted the Iraqi military enough to prevent a major offensive for the next two years. Iraq's defeats at al-Faw and at Mehran were severe blows to the prestige of the Iraqi regime, and Western powers, including the US, became more determined to prevent an Iraqi loss.
Strategic situation at the end of 1986

Through the eyes of international observers, Iran was prevailing in the war by the end of 1986. On the northern front, the Iranians began launching attacks toward the city of Suleimaniya with the help of Kurdish fighters, taking the Iraqis by surprise. They approached the city before being stopped by chemical and army attacks. Iran's army had also reached the Meimak Hills, not far from Baghdad. Iraq managed to contain Iran's offensives in the south, but was under serious pressure, as the Iranians were slowly overwhelming them. Iraq responded by launching another "war of the cities". In one attack, Tehran's main oil refinery was hit, and in another instance, Iraq damaged Iran's Assadabad satellite dish, disrupting Iranian overseas telephone and telex service for almost two weeks. Civilian areas were also hit, resulting in many casualties. Iraq continued to attack oil tankers from the air. Iran responded by launching Scud missiles and air attacks at Iraqi targets. Iraq continued to attack Kharg Island and the oil tankers and facilities as well. Iran created a tanker shuttle service of 20 tankers to move oil from Kharg to Larak Island, escorted by Iranian fighter jets. Once moved to Larak, the oil would be transferred to oceangoing tankers (usually neutral). Iran also rebuilt the oil terminals damaged by Iraqi air raids and moved shipping to Larak Island, while attacking foreign tankers that carried Iraqi oil (as Iran had blocked Iraq's access to the open sea with the capture of al-Faw). By now Iran almost always used the armed speedboats of the IRGC navy, and attacked many tankers. The tanker war escalated drastically, with attacks nearly doubling in 1986 (the majority carried out by Iraq). Iraq got permission from the Saudi government to use its airspace to attack Larak Island, although due to the distance, attacks there were less frequent.
The escalating tanker war in the Gulf became an ever-increasing concern to foreign powers, especially the United States. In April 1986, Ayatollah Khomeini issued a fatwa declaring that the war must be won by March 1987. The Iranians increased recruitment efforts, obtaining 650,000 volunteers. The animosity between the Army and the Revolutionary Guard arose again, with the Army wanting to use more refined, limited military attacks while the Revolutionary Guard wanted to carry out major offensives. Iran, confident in its successes, began planning its largest offensives of the war, which it called its "final offensives".

Iraq's dynamic defense strategy

Faced with its recent defeats at al-Faw and Mehran, Iraq appeared to be losing the war. Iraq's generals, angered by Saddam's interference, threatened a full-scale mutiny against the Ba'ath Party unless they were allowed to conduct operations freely. In one of the few times during his career, Saddam gave in to the demands of his generals. Up to this point, Iraqi strategy had been to ride out Iranian attacks. However, the defeat at al-Faw led Saddam to declare the war to be Al-Defa al-Mutaharakha (The Dynamic Defense) and to announce that all civilians had to take part in the war effort. The universities were closed and all of the male students were drafted into the military. Civilians were instructed to clear marshlands to prevent Iranian amphibious infiltrations and to help build fixed defences. The government tried to integrate the Shias into the war effort by recruiting many as part of the Ba'ath Party. In an attempt to counterbalance the religious fervour of the Iranians and gain support from the devout masses, the regime also began to promote religion and, on the surface, Islamization, despite the fact that Iraq was run by a secular regime. Scenes of Saddam praying and making pilgrimages to shrines became common on state-run television.
While Iraqi morale had been low throughout the war, the attack on al-Faw raised patriotic fervour, as the Iraqis feared invasion. Saddam also recruited volunteers from other Arab countries into the Republican Guard, and received much technical support from foreign nations as well. While Iraqi military power had been depleted in recent battles, through heavy foreign purchases and support, Iraq was able to expand its military to much larger proportions by 1988. At the same time, Saddam ordered the genocidal al-Anfal Campaign in an attempt to crush the Kurdish resistance, who were now allied with Iran. The result was the deaths of several hundred thousand Iraqi Kurds, and the destruction of villages, towns, and cities.

Iraq began to try to perfect its maneuver tactics and to prioritize the professionalization of its military. Prior to 1986, the conscription-based Iraqi regular army and the volunteer-based Iraqi Popular Army had conducted the bulk of the operations in the war, to little effect. The Republican Guard, formerly an elite praetorian guard, was expanded as a volunteer army and filled with Iraq's best generals. Loyalty to the state was no longer a primary requisite for joining. (After the war, due to Saddam's paranoia, the former duties of the Republican Guard were transferred to a new unit, the Special Republican Guard.) Full-scale war games against hypothetical Iranian positions were carried out in the western Iraqi desert against mock targets, and were repeated over the course of a full year until the forces involved had fully memorized their attacks. Iraq built up its military massively, eventually possessing the fourth largest army in the world, in order to overwhelm the Iranians through sheer size.

1987–88: Towards a ceasefire

Meanwhile, Iran continued to attack as the Iraqis were planning their strike. In 1987, the Iranians renewed a series of major human wave offensives in both northern and southern Iraq.
The Iraqis had elaborately fortified Basra with five defensive rings, exploiting natural waterways such as the Shatt al-Arab and artificial ones, such as Fish Lake and the Jasim River, along with earth barriers. Fish Lake was a massive lake filled with mines, underwater barbed wire, electrodes, and sensors. Behind each waterway and defensive line was radar-guided artillery, ground attack aircraft, and helicopters, all capable of firing poison gas or conventional munitions. The Iranian strategy was to penetrate the Iraqi defences and encircle Basra, cutting off the city as well as the al-Faw peninsula from the rest of Iraq. Iran's plan was for three assaults: a diversionary attack near Basra, the main offensive, and another diversionary attack using Iranian tanks in the north to divert Iraqi heavy armor from Basra. For these battles, Iran had re-expanded its military by recruiting many new Basij and Pasdaran volunteers. Iran brought 150,000–200,000 total troops into the battles. Karbala operations Operation Karbala-4 On 25 December 1986, Iran launched Operation Karbala-4 (Karbala referring to Hussein ibn Ali's Battle of Karbala). According to Iraqi General Ra'ad al-Hamdani, this was a diversionary attack. The Iranians launched an amphibious assault against the Iraqi island of Umm al-Rassas in the Shatt al-Arab river, parallel to Khorramshahr. They then set up a pontoon bridge and continued the attack, eventually capturing the island in a costly success but failing to advance further; the Iranians suffered 60,000 casualties, while the Iraqis suffered 9,500. The Iraqi commanders exaggerated Iranian losses to Saddam, and it was assumed that the main Iranian attack on Basra had been fully defeated and that it would take the Iranians six months to recover. When the main Iranian attack, Operation Karbala-5, began, many Iraqi troops were on leave.
Operation Karbala-5 (Sixth Battle of Basra) The Siege of Basra, code-named Operation Karbala-5, was an offensive operation carried out by Iran in an effort to capture the Iraqi port city of Basra in early 1987. This battle, known for its extensive casualties and ferocious conditions, was the biggest battle of the war and proved to be the beginning of the end of the Iran–Iraq War. While Iranian forces crossed the border and captured the eastern section of Basra Governorate, the operation ended in a stalemate. Operation Karbala-6 At the same time as Operation Karbala-5, Iran also launched Operation Karbala-6 against the Iraqis at Qasr-e Shirin on the central front to prevent the Iraqis from rapidly transferring units down to defend against the Karbala-5 attack. The attack was carried out by Basij infantry and the Revolutionary Guard's 31st Ashura and the Army's 77th Khorasan armored divisions. The Basij attacked the Iraqi lines, forcing the Iraqi infantry to retreat. An Iraqi armored counter-attack surrounded the Basij in a pincer movement, but the Iranian tank divisions attacked, breaking the encirclement. The Iranian attack was finally stopped by mass Iraqi chemical weapons attacks. Iranian war-weariness Operation Karbala-5 was a severe blow to Iran's military and morale. To foreign observers, however, it appeared that Iran was continuing to strengthen. By 1988, Iran had become self-sufficient in many areas, such as anti-tank TOW missiles, Scud ballistic missiles (Shahab-1), Silkworm anti-ship missiles, Oghab tactical rockets, and the production of spare parts for its weaponry. Iran had also improved its air defenses with smuggled surface-to-air missiles. Iran was even producing UAVs and the Pilatus PC-7 propeller aircraft for observation. Iran also doubled its stocks of artillery, and was self-sufficient in the manufacture of ammunition and small arms.
While it was not obvious to foreign observers, the Iranian public had become increasingly war-weary and disillusioned with the fighting, and relatively few volunteers joined the fight in 1987–88. Because the Iranian war effort relied on popular mobilization, its military strength actually declined, and Iran was unable to launch any major offensives after Karbala-5. As a result, for the first time since 1982, the momentum of the fighting shifted towards the regular army. Since the regular army was conscription-based, this made the war even less popular. Many Iranians began to try to escape the conflict. As early as May 1985, anti-war demonstrations took place in 74 cities throughout Iran; these were crushed by the regime, with some protesters shot and killed. By 1987, draft-dodging had become a serious problem, and the Revolutionary Guards and police set up roadblocks throughout cities to capture those who tried to evade conscription. Others, particularly the more nationalistic and religious, the clergy, and the Revolutionary Guards, wished to continue the war. The leadership acknowledged that the war was a stalemate, and began to plan accordingly. No more "final offensives" were planned. The head of the Supreme Defense Council, Hashemi Rafsanjani, announced during a news conference the end of human wave attacks. Mohsen Rezaee, head of the IRGC, announced that Iran would focus exclusively on limited attacks and infiltrations, while arming and supporting opposition groups inside Iraq. On the Iranian home front, sanctions, declining oil prices, and Iraqi attacks on Iranian oil facilities and shipping took a heavy toll on the economy. While the attacks themselves were not as destructive as some analysts believed, the U.S.-led Operation Earnest Will (which protected Iraqi and allied oil tankers, but not Iranian ones) led many neutral countries to stop trading with Iran because of rising insurance costs and fear of air attack.
Iranian oil and non-oil exports fell by 55%, inflation reached 50% by 1987, and unemployment skyrocketed. At the same time, Iraq was experiencing crushing debt and shortages of workers, encouraging its leadership to try to end the war quickly. Strategic situation in late 1987 By the end of 1987, Iraq possessed 5,550 tanks (outnumbering the Iranians six to one) and 900 fighter aircraft (outnumbering the Iranians ten to one). After Operation Karbala-5, Iraq had only 100 qualified fighter pilots remaining; therefore, Iraq began to invest in recruiting foreign pilots from countries such as Belgium, South Africa, Pakistan, East Germany, and the Soviet Union. They replenished their manpower by integrating volunteers from other Arab countries into their army. Iraq also became self-sufficient in chemical weapons and some conventional ones, and received much equipment from abroad. Foreign support helped Iraq bypass its economic troubles and massive debt to continue the war and increase the size of its military. While the southern and central fronts were at a stalemate, Iran began to focus on carrying out offensives in northern Iraq with the help of the Peshmerga (Kurdish insurgents). The Iranians used a combination of semi-guerrilla and infiltration tactics in the Kurdish mountains with the Peshmerga. During Operation Karbala-9 in early April, Iran captured territory near Suleimaniya, provoking a severe poison gas counter-attack. During Operation Karbala-10, Iran attacked near the same area, capturing more territory. During Operation Nasr-4, the Iranians surrounded the city of Suleimaniya and, with the help of the Peshmerga, infiltrated over 140 km into Iraq, raiding and threatening to capture the oil-rich city of Kirkuk and other northern oilfields.
Nasr-4 was considered to be Iran's most successful individual operation of the war, but Iranian forces were unable to consolidate their gains and continue their advance; while these offensives, coupled with the Kurdish uprising, sapped Iraqi strength, losses in the north would not mean a catastrophic failure for Iraq. On 20 July, the UN Security Council passed the U.S.-sponsored Resolution 598, which called for an end to the fighting and a return to pre-war boundaries. Iran noted that this was the first resolution to call for a return to the pre-war borders and to set up a commission to determine the aggressor and compensation. Air and tanker war in 1987 With the stalemate on land, the air/tanker war began to play an increasingly major role in the conflict. The Iranian air force had become very small, with only 20 F-4 Phantoms, 20 F-5 Tigers, and 15 F-14 Tomcats in operation, although Iran managed to restore some damaged planes to service. The Iranian Air Force, despite its once sophisticated equipment, lacked enough equipment and personnel to sustain the war of attrition that had developed, and was unable to mount an outright onslaught against Iraq. The Iraqi Air Force, however, had originally lacked modern equipment and experienced pilots, but after pleas from Iraqi military leaders, Saddam decreased political influence on everyday operations and left the fighting to his combatants. The Soviets began delivering more advanced aircraft and weapons to Iraq, while the French improved training for flight crews and technical personnel and continually introduced new methods for countering Iranian weapons and tactics. Iranian ground-based air defenses still shot down many Iraqi aircraft. The main Iraqi air effort had shifted to the destruction of Iranian war-fighting capability (primarily Persian Gulf oil fields, tankers, and Kharg Island), and starting in late 1986, the Iraqi Air Force began a comprehensive campaign against the Iranian economic infrastructure.
By late 1987, the Iraqi Air Force could count on direct American support for conducting long-range operations against Iranian infrastructural targets and oil installations deep in the Persian Gulf. U.S. Navy ships tracked and reported movements of Iranian shipping and defences. In the massive Iraqi air strike against Kharg Island, flown on 18 March 1988, the Iraqis destroyed two supertankers but lost five aircraft to Iranian F-14 Tomcats, including two Tupolev Tu-22Bs and one Mikoyan MiG-25RB. The U.S. Navy was now becoming more involved in the fight in the Persian Gulf, launching Operations Earnest Will and Prime Chance against the Iranians. The attacks on oil tankers continued, with both Iran and Iraq carrying out frequent attacks during the first four months of the year. Iran was effectively waging a naval guerrilla war with its IRGC navy speedboats, while Iraq attacked with its aircraft. In 1987, Kuwait asked to reflag its tankers under the U.S. flag; this was done in March, and the U.S. Navy began Operation Earnest Will to escort the tankers. The result of Earnest Will was that, while oil tankers shipping Iraqi and Kuwaiti oil were protected, Iranian tankers and neutral tankers shipping to Iran were unprotected, resulting in both losses for Iran and the undermining of its trade with foreign countries, damaging Iran's economy further. Iran deployed Silkworm missiles to attack ships, but only a few were actually fired. Both the United States and Iran jockeyed for influence in the Gulf. To discourage the United States from escorting tankers, Iran secretly mined some areas. The United States began to escort the reflagged tankers, but one was damaged by a mine while under escort. While this was a public-relations victory for Iran, the United States increased its reflagging efforts. While Iran mined the Persian Gulf, its speedboat attacks were reduced, primarily targeting unflagged tankers shipping in the area.
On 24 September, U.S. Navy SEALs captured the Iranian mine-laying ship Iran Ajr, a diplomatic disaster for the already isolated Iranians. Iran had previously sought to maintain at least a pretense of plausible deniability regarding its use of mines, but the SEALs captured and photographed extensive evidence of Iran Ajr's mine-laying activities. On 8 October, the U.S. Navy destroyed four Iranian speedboats and, in response to Iranian Silkworm missile attacks on Kuwaiti oil tankers, launched Operation Nimble Archer, destroying two Iranian oil rigs in the Persian Gulf. During November and December, the Iraqi air force launched a bid to destroy all Iranian airbases in Khuzestan and the remaining Iranian air force. Iran managed to shoot down 30 Iraqi fighters with fighter jets, anti-aircraft guns, and missiles, allowing the Iranian air force to survive to the end of the war. On 28 June, Iraqi fighter-bombers attacked the Iranian town of Sardasht near the border, using chemical mustard gas bombs. While many towns and cities had been bombed before, and troops attacked with gas, this was the first time that the Iraqis had attacked a civilian area with poison gas. A quarter of the town's then population of 20,000 was burned and stricken, and 113 people were killed immediately, with many more dying of or suffering health effects over the following decades. Saddam ordered the attack in order to test the effects of the newly developed "dusty mustard" gas, which was designed to be even more crippling than traditional mustard gas. While little known outside of Iran (unlike the later Halabja chemical attack), the Sardasht bombing (and future similar attacks) had a tremendous effect on the Iranian people's psyche. 1988: Iraqi offensives and UN ceasefire By 1988, with massive equipment imports and reduced Iranian volunteers, Iraq was ready to launch major offensives against Iran. In February 1988, Saddam began the fifth and most deadly "war of the cities".
Over the next two months, Iraq launched over 200 al-Hussein missiles at 37 Iranian cities. Saddam also threatened to use chemical weapons in his missiles, which caused 30% of Tehran's population to leave the city. Iran retaliated, launching at least 104 missiles against Iraq in 1988 and shelling Basra. This event was nicknamed the "Scud Duel" in the foreign media. In all, Iraq launched 520 Scuds and al-Husseins against Iran, and Iran fired 177 in return. The Iranian attacks were too few in number to deter Iraq from launching its own. Iraq also increased its airstrikes against Kharg Island and Iranian oil tankers. With its own and allied tankers protected by U.S. warships, Iraq could operate with virtual impunity. In addition, the West supplied Iraq's air force with laser-guided smart bombs, allowing it to attack economic targets while evading anti-aircraft defenses. These attacks began to take a major toll on the Iranian economy and morale, and caused many casualties. Iran's Kurdistan Operations In March 1988, the Iranians carried out Operation Dawn 10, Operation Beit ol-Moqaddas 2, and Operation Zafar 7 in Iraqi Kurdistan with the aim of capturing the Darbandikhan Dam and the power plant at Lake Dukan, which supplied Iraq with much of its electricity and water, as well as the city of Suleimaniya. Iran hoped that the capture of these areas would bring more favourable terms to the ceasefire agreement. This infiltration offensive was carried out in conjunction with the Peshmerga. Iranian airborne commandos landed behind the Iraqi lines and Iranian helicopters hit Iraqi tanks with TOW missiles. The Iraqis were taken by surprise, and Iranian F-5E Tiger fighter jets even damaged the Kirkuk oil refinery. In March–April 1988, Iraq executed multiple officers for these failures, including Colonel Jafar Sadeq. The Iranians used infiltration tactics in the Kurdish mountains, captured the town of Halabja, and began to fan out across the province.
Though the Iranians advanced to within sight of Dukan, capturing territory and some 4,000 Iraqi troops, the offensive failed due to the Iraqi use of chemical warfare. The Iraqis launched the deadliest chemical weapons attacks of the war. The Republican Guard launched 700 chemical shells, while the other artillery divisions launched 200–300 chemical shells each, unleashing a chemical cloud over the Iranians and killing or wounding 60% of them; the blow fell particularly on the Iranian 84th infantry division and 55th paratrooper division. The Iraqi special forces then stopped the remnants of the Iranian force. In retaliation for Kurdish collaboration with the Iranians, Iraq launched a massive poison gas attack against Kurdish civilians in Halabja, recently taken by the Iranians, killing thousands of civilians. Iran airlifted foreign journalists to the ruined city, and the images of the dead were shown throughout the world, but Western mistrust of Iran and collaboration with Iraq led many to blame Iran for the attack as well. Second Battle of al-Faw On 17 April 1988, Iraq launched Operation Ramadan Mubarak (Blessed Ramadan), a surprise attack against the 15,000 Basij troops on the al-Faw peninsula. The attack was preceded by Iraqi diversionary attacks in northern Iraq and a massive artillery and air barrage of Iranian front lines. Key areas, such as supply lines, command posts, and ammunition depots, were hit by a storm of mustard gas and nerve gas, as well as by conventional explosives. Helicopters landed Iraqi commandos behind Iranian lines on al-Faw while the main Iraqi force made a frontal assault. Within 48 hours, all of the Iranian forces had been killed or cleared from the al-Faw Peninsula. The day was celebrated in Iraq as Faw Liberation Day throughout Saddam's rule. The Iraqis had planned the offensive well. Prior to the attack, Iraqi soldiers had taken poison gas antidotes to shield themselves from the effects of the saturation of gas.
The heavy and well-executed use of chemical weapons was the decisive factor in the victory. Iraqi losses were relatively light, especially compared to Iran's casualties. Ra'ad al-Hamdani later recounted that the recapture of al-Faw marked "the highest point of experience and expertise that the Iraqi Army reached." The Iranians eventually managed to halt the Iraqi drive as it pushed towards Khuzestan. To the shock of the Iranians, rather than breaking off the offensive, the Iraqis kept up their drive, and a new force attacked the Iranian positions around Basra. Following this, the Iraqis launched a sustained drive to clear the Iranians out of all of southern Iraq. One of the most successful Iraqi tactics was the "one-two punch" attack using chemical weapons. Using artillery, they would saturate the Iranian front line with rapidly
ran live electrical cables through the water, electrocuting numerous Iranian troops and then displaying their corpses on state television. By 29 February, the Iranians had reached the outskirts of Qurna and were closing in on the Baghdad–Basra highway. They had broken out of the marshes and returned to open terrain, where they were confronted by conventional Iraqi weapons, including artillery, tanks, air power, and mustard gas. Some 1,200 Iranian soldiers were killed in the counter-attack. The Iranians retreated back to the marshes, though they still held onto them along with Majnoon Island. The Battle of the Marshes saw an Iraqi defence that had been under continuous strain since 15 February; the defenders were relieved by their use of chemical weapons and defence in depth, in which they layered defensive lines: even if the Iranians broke through the first line, they were usually unable to break through the second due to exhaustion and heavy losses. The Iraqis also relied largely on Mi-24 Hind helicopters to "hunt" the Iranian troops in the marshes, and at least 20,000 Iranians were killed in the marsh battles. Iran used the marshes as a springboard for future attacks and infiltrations. Four years into the war, the human cost to Iran had been 170,000 combat fatalities and 340,000 wounded. Iraqi combat fatalities were estimated at 80,000, with 150,000 wounded. "Tanker War" and the "War of the Cities" Unable to launch successful ground attacks against Iran, Iraq used its now expanded air force to carry out strategic bombing against Iranian shipping, economic targets, and cities in order to damage Iran's economy and morale. Iraq also wanted to provoke Iran into doing something that would cause the superpowers to be directly involved in the conflict on the Iraqi side. Attacks on shipping The so-called "Tanker War" started when Iraq attacked the oil terminal and oil tankers at Kharg Island in early 1984.
Iraq's aim in attacking Iranian shipping was to provoke the Iranians to retaliate with extreme measures, such as closing the Strait of Hormuz to all maritime traffic, thereby bringing American intervention; the United States had threatened several times to intervene if the Strait of Hormuz were closed. As a result, the Iranians limited their retaliatory attacks to Iraqi shipping, leaving the strait open to general passage. Iraq declared that all ships going to or from Iranian ports in the northern zone of the Persian Gulf were subject to attack. It used Mirage F-1, Super Etendard, MiG-23, and Su-20/22 aircraft and Super Frelon helicopters armed with Exocet anti-ship missiles, as well as Soviet-made air-to-surface missiles, to enforce its threats. Iraq repeatedly bombed Iran's main oil export facility on Kharg Island, causing increasingly heavy damage. As a first response to these attacks, Iran attacked a Kuwaiti tanker carrying Iraqi oil near Bahrain on 13 May 1984, as well as a Saudi tanker in Saudi waters on 16 May. Because Iraq had become virtually landlocked during the course of the war, it had to rely on its Arab allies, primarily Kuwait, to transport its oil. Iran attacked tankers carrying Iraqi oil from Kuwait, later attacking tankers from any Persian Gulf state supporting Iraq. Attacks on ships of noncombatant nations in the Persian Gulf sharply increased thereafter, with both nations attacking oil tankers and merchant ships of neutral nations in an effort to deprive their opponent of trade. The Iranian attacks against Saudi shipping led to Saudi F-15s shooting down a pair of Iranian F-4 Phantom II fighters on 5 June 1984. The air and small-boat attacks, however, did little damage to Persian Gulf state economies, and Iran moved its shipping port to Larak Island in the Strait of Hormuz. The Iranian Navy imposed a naval blockade of Iraq, using its British-built frigates to stop and inspect any ships thought to be trading with Iraq.
They operated with virtual impunity, as Iraqi pilots had little training in hitting naval targets. Some Iranian warships attacked tankers with ship-to-ship missiles, while others used their radars to guide land-based anti-ship missiles to their targets. Iran began to rely on its new Revolutionary Guard navy, which used Boghammar speedboats fitted with rocket launchers and heavy machine guns. These speedboats would launch surprise attacks against tankers and cause substantial damage. Iran also used F-4 Phantom II fighters and helicopters to launch Maverick missiles and unguided rockets at tankers. On 17 May 1987, the U.S. Navy frigate USS Stark was struck by two Exocet anti-ship missiles fired from an Iraqi Mirage F-1 plane. The missiles had been fired at about the time the plane was given a routine radio warning by Stark. The frigate did not detect the missiles with radar, and warning was given by the lookout only moments before they struck. Both missiles hit the ship, and one exploded in crew quarters, killing 37 sailors and wounding 21. Lloyd's of London, a British insurance market, estimated that the Tanker War damaged 546 commercial vessels and killed about 430 civilian sailors. The largest portion of the attacks was directed by Iraq against vessels in Iranian waters, with the Iraqis launching three times as many attacks as the Iranians. But Iranian speedboat attacks on Kuwaiti shipping led Kuwait to formally petition foreign powers on 1 November 1986 to protect its shipping. The Soviet Union agreed to charter tankers starting in 1987, and the United States Navy offered to provide protection for foreign tankers reflagged and flying the U.S. flag starting 7 March 1987 in Operation Earnest Will. Neutral tankers shipping to Iran were not protected by Earnest Will, resulting in reduced foreign tanker traffic to Iran, since they risked Iraqi air attack. Iran accused the United States of helping Iraq.
During the course of the war, Iran attacked two Soviet merchant ships. Seawise Giant, the largest ship ever built, was struck by Iraqi Exocet missiles as it was carrying Iranian crude oil out of the Persian Gulf. Attacks on cities Meanwhile, Iraq's air force also began carrying out strategic bombing raids against Iranian cities. While Iraq had launched numerous attacks with aircraft and missiles against border cities from the beginning of the war, along with sporadic raids on Iran's main cities, this was the first systematic strategic bombing that Iraq carried out during the war. It would become known as the "War of the Cities". With the help of the USSR and the West, Iraq's air force had been rebuilt and expanded. Meanwhile, Iran, due to sanctions and a lack of spare parts, had heavily curtailed its air force operations. Iraq used Tu-22 Blinder and Tu-16 Badger strategic bombers to carry out long-range high-speed raids on Iranian cities, including Tehran. Fighter-bombers such as the MiG-25 Foxbat and Su-22 Fitter were used against smaller or shorter-range targets, as well as to escort the strategic bombers. Civilian and industrial targets were hit, and each successful raid inflicted economic damage. In response, the Iranians deployed their F-4 Phantoms to combat the Iraqis, and eventually deployed F-14s as well. By 1986, Iran had also heavily expanded its air defense network to relieve the pressure on the air force. Later in the war, Iraqi raids consisted primarily of indiscriminate missile attacks, while air attacks were reserved for fewer, more important targets. Starting in 1987, Saddam also ordered several chemical attacks on civilian targets in Iran, such as the town of Sardasht. Iran also launched several retaliatory air raids on Iraq, while primarily shelling border cities such as Basra. Iran also bought some Scud missiles from Libya and launched them against Baghdad. These too inflicted damage upon Iraq.
On 7 February 1984, during the first war of the cities, Saddam ordered his air force to attack eleven Iranian cities; bombardments ceased on 22 February 1984. Though Saddam intended the attacks to demoralise Iran and force it to negotiate, they had little effect, and Iran quickly repaired the damage. Moreover, Iraq's air force took heavy losses, and Iran struck back, hitting Baghdad and other Iraqi cities. The attacks resulted in tens of thousands of civilian casualties on both sides; an estimated 1,200 Iranian civilians were killed during the raids in February alone. There would be five such major exchanges throughout the course of the war, and multiple minor ones. While interior cities such as Tehran, Tabriz, Qom, Isfahan, and Shiraz received numerous raids, the cities of western Iran suffered the most. Strategic situation in 1984 By 1984, Iran's losses were estimated to be 300,000 soldiers, while Iraq's losses were estimated to be 150,000. Foreign analysts agreed that both Iran and Iraq had failed to use their modern equipment properly, and both sides had failed to carry out the kind of modern military assaults that could win the war. Both sides also abandoned equipment on the battlefield because their technicians were unable to carry out repairs. Iran and Iraq showed little internal coordination on the battlefield, and in many cases units were left to fight on their own. As a result, by the end of 1984, the war was a stalemate. One limited offensive Iran launched (Dawn 7) took place from 18 to 25 October 1984, when they recaptured the Iranian city of Mehran, which had been occupied by the Iraqis from the beginning of the war. 1985–86: Offensives and retreats By 1985, Iraqi armed forces were receiving financial support from Saudi Arabia, Kuwait, and other Persian Gulf states, and were making substantial arms purchases from the Soviet Union, China, and France.
For the first time since early 1980, Saddam launched new offensives. On 6 January 1986, the Iraqis launched an offensive attempting to retake Majnoon Island. However, they quickly became bogged down in a stalemate against 200,000 Iranian infantrymen, reinforced by amphibious divisions, though they managed to gain a foothold in the southern part of the island. Iraq also carried out another "war of the cities" between 12 and 14 March, hitting up to 158 targets in over 30 towns and cities, including Tehran. Iran responded by launching 14 Scud missiles for the first time, purchased from Libya. More Iraqi air attacks were carried out in August, resulting in hundreds of additional civilian casualties. Iraqi attacks against both Iranian and neutral oil tankers in Iranian waters continued, with Iraq carrying out 150 airstrikes using French-bought Super Etendard and Mirage F-1 jets as well as Super Frelon helicopters armed with Exocet missiles. Operation Badr The Iraqis attacked again on 28 January 1985; they were defeated, and the Iranians retaliated on 11 March 1985 with a major offensive directed against the Baghdad–Basra highway (one of the few major offensives conducted in 1985), codenamed Operation Badr (after the Battle of Badr, Muhammad's first military victory, won against the Meccans). Ayatollah Khomeini urged Iranians on, declaring: It is our belief that Saddam wishes to return Islam to blasphemy and polytheism...if America becomes victorious...and grants victory to Saddam, Islam will receive such a blow that it will not be able to raise its head for a long time...The issue is one of Islam versus blasphemy, and not of Iran versus Iraq. This operation was similar to Operation Kheibar, though it involved more planning. Iran used 100,000 troops, with 60,000 more in reserve. They assessed the marshy terrain, plotted points where they could land tanks, and constructed pontoon bridges across the marshes. The Basij forces were also equipped with anti-tank weapons.
The ferocity of the Iranian offensive broke through the Iraqi lines. The Revolutionary Guard, with the support of tanks and artillery, broke through north of Qurna on 14 March. That same night, 3,000 Iranian troops reached and crossed the Tigris River using pontoon bridges and captured part of the Baghdad–Basra Highway 6, which they had failed to achieve in Operations Dawn 5 and 6. Saddam responded by launching chemical attacks against the Iranian positions along the highway and by initiating the aforementioned second "war of the cities", with an air and missile campaign against twenty to thirty Iranian population centres, including Tehran. Under General Sultan Hashim Ahmad al-Tai and General Jamal Zanoun (both considered to be among Iraq's most skilled commanders), the Iraqis launched air attacks against the Iranian positions and pinned them down. They then launched a pincer attack using mechanized infantry and heavy artillery. Chemical weapons were used, and the Iraqis also flooded Iranian trenches with specially constructed pipes delivering water from the Tigris River. The Iranians retreated back to the Hoveyzeh marshes while being attacked by helicopters, and the highway was recaptured by the Iraqis. Operation Badr resulted in 10,000–12,000 Iraqi casualties and 15,000 Iranian ones. Strategic situation at the beginning of 1986 The failure of the human wave attacks in earlier years had prompted Iran to develop a better working relationship between the Army and the Revolutionary Guard and to mould the Revolutionary Guard units into a more conventional fighting force. To combat Iraq's use of chemical weapons, Iran began producing an antidote. Iran also created and fielded its own homemade drones, the Mohajer-1s, fitted with six RPG-7s to launch attacks. They were used primarily for observation, flying up to 700 sorties.
For the rest of 1986, and until the spring of 1988, the Iranian Air Force's efficiency in air defence increased, with weapons being repaired or replaced and new tactical methods being used. For example, the Iranians would loosely integrate their SAM sites and interceptors to create "killing fields" in which dozens of Iraqi planes were lost (which was reported in the West as the Iranian Air Force using F-14s as "mini-AWACS"). The Iraqi Air Force reacted by increasing the sophistication of its equipment, incorporating modern electronic countermeasure pods, decoys such as chaff and flares, and anti-radiation missiles. Due to the heavy losses in the last war of the cities, Iraq reduced its use of aerial attacks on Iranian cities. Instead, it would launch Scud missiles, which the Iranians could not stop. Since the range of the Scud missile was too short to reach Tehran, the Iraqis converted them into al-Hussein missiles with the help of East German engineers, cutting up their Scuds into three chunks and attaching them together. Iran responded to these attacks by using its own Scud missiles. Compounding Iraq's advantage from extensive foreign help, Iranian attacks were severely hampered by shortages of weaponry, particularly heavy weapons, as large amounts had been lost during the war. Iran still managed to maintain 1,000 tanks (often by capturing Iraqi ones) and additional artillery, but many needed repairs to be operational. However, by this time Iran managed to procure spare parts from various sources, helping it to restore some weapons. It secretly imported some weapons, such as RBS-70 anti-aircraft MANPADS.
In an exception to the United States' support for Iraq, in exchange for Iran using its influence to help free western hostages in Lebanon, the United States secretly sold Iran some limited supplies (in Ayatollah Rafsanjani's postwar interview, he stated that during the period when Iran was succeeding, the United States supported Iran for a short time, then shortly afterwards began helping Iraq again). Iran managed to get some advanced weapons, such as anti-tank TOW missiles, which worked better than rocket-propelled grenades. Iran later reverse-engineered and produced those weapons itself. All of this almost certainly increased Iran's effectiveness, although it did not reduce the human cost of its attacks. First Battle of al-Faw On the night of 10–11 February 1986, the Iranians launched Operation Dawn 8, in which 30,000 troops comprising five Army divisions and men from the Revolutionary Guard and Basij advanced in a two-pronged offensive to capture the al-Faw peninsula in southern Iraq, the only Iraqi territory touching the Persian Gulf. The capture of al-Faw and Umm Qasr was a major goal for Iran. Iran began with a feint attack against Basra, which was stopped by the Iraqis; meanwhile, an amphibious strike force landed at the foot of the peninsula. The resistance, consisting of several thousand poorly trained soldiers of the Iraqi Popular Army, fled or was defeated, and the Iranian forces set up pontoon bridges crossing the Shatt al-Arab, allowing 30,000 soldiers to cross in a short period of time. They drove north along the peninsula almost unopposed, capturing it after only 24 hours of fighting. Afterwards they dug in and set up defenses. The sudden capture of al-Faw shocked the Iraqis, since they had thought it impossible for the Iranians to cross the Shatt al-Arab. On 12 February 1986, the Iraqis began a counter-offensive to retake al-Faw, which failed after a week of heavy fighting.
On 24 February 1986, Saddam sent one of his best commanders, General Maher Abd al-Rashid, and the Republican Guard to begin a new offensive to recapture al-Faw. A new round of heavy fighting took place. However, their attempts again ended in failure, costing them many tanks and aircraft: their 15th mechanized division was almost completely wiped out. The capture of al-Faw and the failure of the Iraqi counter-offensives were blows to the Ba'ath regime's prestige, and led the Gulf countries to fear that Iran might win the war. Kuwait in particular felt menaced by the proximity of Iranian troops, and increased its support of Iraq accordingly. In March 1986, the Iranians tried to follow up their success by attempting to take Umm Qasr, which would have completely severed Iraq from the Gulf and placed Iranian troops on the border with Kuwait. However, the offensive failed due to Iranian shortages of armor. By this time, 17,000 Iraqis and 30,000 Iranians had become casualties. The First Battle of al-Faw ended in March, but heavy combat operations lasted on the peninsula into 1988, with neither side being able to displace the other. The battle bogged down into a World War I-style stalemate in the marshes of the peninsula. Battle of Mehran Immediately after the Iranian capture of al-Faw, Saddam declared a new offensive against Iran, designed to drive deep into the state. The Iranian border city of Mehran, at the foot of the Zagros Mountains, was selected as the first target. On 15–19 May, the Iraqi Army's Second Corps, supported by helicopter gunships, attacked and captured the city. Saddam then offered to exchange Mehran for al-Faw. The Iranians rejected the offer. Iraq then continued the attack, attempting to push deeper into Iran. However, Iraq's attack was quickly warded off by Iranian AH-1 Cobra helicopters with TOW missiles, which destroyed numerous Iraqi tanks and vehicles. The Iranians built up their forces on the heights surrounding Mehran.
On 30 June, using mountain warfare tactics, they launched their attack, recapturing the city by 3 July. Saddam ordered the Republican Guard to retake the city on 4 July, but their attack was ineffective. Iraqi losses were heavy enough to allow the Iranians to also capture territory inside Iraq, and depleted the Iraqi military enough to prevent it from launching a major offensive for the next two years. Iraq's defeats at al-Faw and at Mehran were severe blows to the prestige of the Iraqi regime, and western powers, including the US, became more determined to prevent an Iraqi loss. Strategic situation at the end of 1986 In the eyes of international observers, Iran was prevailing in the war by the end of 1986. On the northern front, the Iranians began launching attacks toward the city of Suleimaniya with the help of Kurdish fighters, taking the Iraqis by surprise. They came close to the city before being stopped by chemical and army attacks. Iran's army had also reached the Meimak Hills, not far from Baghdad. Iraq managed to contain Iran's offensives in the south, but was under serious pressure, as the Iranians were slowly overwhelming them. Iraq responded by launching another "war of the cities". In one attack, Tehran's main oil refinery was hit, and in another instance, Iraq damaged Iran's Assadabad satellite dish, disrupting Iranian overseas telephone and telex service for almost two weeks. Civilian areas were also hit, resulting in many casualties. Iraq continued to attack oil tankers by air. Iran responded by launching Scud missiles and air attacks at Iraqi targets. Iraq continued to attack Kharg Island and the oil tankers and facilities as well. Iran created a tanker shuttle service of 20 tankers to move oil from Kharg to Larak Island, escorted by Iranian fighter jets. Once moved to Larak, the oil would be transferred to oceangoing tankers (usually neutral).
They also rebuilt the oil terminals damaged by Iraqi air raids and moved shipping to Larak Island, while attacking foreign tankers that carried Iraqi oil (as Iran had blocked Iraq's access to the open sea with the capture of al-Faw). By now they almost always used the armed speedboats of the IRGC navy, and attacked many tankers. The tanker war escalated drastically, with attacks nearly doubling in 1986 (the majority carried out by Iraq). Iraq got permission from the Saudi government to use its airspace to attack Larak Island, although the distance made attacks there less frequent. The escalating tanker war in the Gulf became an ever-increasing concern to foreign powers, especially the United States. In April 1986, Ayatollah Khomeini issued a fatwa declaring that the war must be won by March 1987. The Iranians increased recruitment efforts, obtaining 650,000 volunteers. The animosity between the Army and the Revolutionary Guard arose again, with the Army wanting to use more refined, limited military attacks while the Revolutionary Guard wanted to carry out major offensives. Iran, confident in its successes, began planning its largest offensives of the war, which it called its "final offensives". Iraq's dynamic defense strategy After its recent defeats at al-Faw and Mehran, Iraq appeared to be losing the war. Iraq's generals, angered by Saddam's interference, threatened a full-scale mutiny against the Ba'ath Party unless they were allowed to conduct operations freely. In one of the few times during his career, Saddam gave in to the demands of his generals. Up to this point, Iraqi strategy had been to ride out Iranian attacks. However, the defeat at al-Faw led Saddam to declare the war to be Al-Defa al-Mutaharakha (The Dynamic Defense) and to announce that all civilians had to take part in the war effort. The universities were closed and all of the male students were drafted into the military.
Civilians were instructed to clear marshlands to prevent Iranian amphibious infiltrations and to help build fixed defenses. The government tried to integrate the Shias into the war effort by recruiting many as part of the Ba'ath Party. In an attempt to counterbalance the religious fervor of the Iranians and gain support from the devout masses, the regime also began to promote religion and, on the surface, Islamization, despite the fact that Iraq was run by a secular regime. Scenes of Saddam praying and making pilgrimages to shrines became common on state-run television. While Iraqi morale had been low throughout the war, the attack on al-Faw raised patriotic fervor, as the Iraqis feared invasion. Saddam also recruited volunteers from other Arab countries into the Republican Guard, and received much technical support from foreign nations as well. While Iraqi military power had been depleted in recent battles, through heavy foreign purchases and support, Iraq was able to expand its military to much larger proportions by 1988. At the same time, Saddam ordered the genocidal al-Anfal Campaign in an attempt to crush the Kurdish resistance, which was now allied with Iran. The result was the deaths of several hundred thousand Iraqi Kurds and the destruction of villages, towns, and cities. Iraq began trying to perfect its maneuver tactics and to prioritize the professionalization of its military. Prior to 1986, the conscription-based Iraqi regular army and the volunteer-based Iraqi Popular Army had conducted the bulk of the operations in the war, to little effect. The Republican Guard, formerly an elite praetorian guard, was expanded as a volunteer army and filled with Iraq's best generals. Loyalty to the state was no longer a primary requirement for joining. After the war, due to Saddam's paranoia, the former duties of the Republican Guard were transferred to a new unit, the Special Republican Guard.
Full-scale war games against hypothetical Iranian positions were carried out in the western Iraqi desert against mock targets, and they were repeated over the course of a full year until the forces involved had fully memorized their attacks. Iraq built up its military massively, eventually possessing the fourth-largest in the world, in order to overwhelm the Iranians through sheer size. 1987–88: Towards a ceasefire Meanwhile, Iran continued to attack as the Iraqis were planning their strike. In 1987 the Iranians renewed a series of major human wave offensives in both northern and southern Iraq. The Iraqis had elaborately fortified Basra with five defensive rings, exploiting natural waterways such as the Shatt al-Arab and artificial ones, such as Fish Lake and the Jasim River, along with earth barriers. Fish Lake was a massive lake filled with mines, underwater barbed wire, electrodes, and sensors. Behind each waterway and defensive line was radar-guided artillery, ground attack aircraft, and helicopters, all capable of firing poison gas or conventional munitions. The Iranian strategy was to penetrate the Iraqi defences and encircle Basra, cutting off the city as well as the al-Faw peninsula from the rest of Iraq. Iran's plan was for three assaults: a diversionary attack near Basra, the main offensive, and another diversionary attack using Iranian tanks in the north to divert Iraqi heavy armor from Basra. For these battles, Iran had re-expanded its military by recruiting many new Basij and Pasdaran volunteers. Iran brought 150,000–200,000 total troops into the battles. Karbala operations Operation Karbala-4 On 25 December 1986, Iran launched Operation Karbala-4 (Karbala referring to Hussein ibn Ali's Battle of Karbala). According to Iraqi General Ra'ad al-Hamdani, this was a diversionary attack. The Iranians launched an amphibious assault against the Iraqi island of Umm al-Rassas in the Shatt al-Arab river, parallel to Khorramshahr.
They then set up a pontoon bridge and continued the attack, eventually capturing the island in a costly success but failing to advance further; the Iranians suffered 60,000 casualties, while the Iraqis suffered 9,500. The Iraqi commanders exaggerated Iranian losses to Saddam, and it was assumed that the main Iranian attack on Basra had been fully defeated and that it would take the Iranians six months to recover. When the main Iranian attack, Operation Karbala-5, began, many Iraqi troops were on leave. Operation Karbala-5 (Sixth Battle of Basra) The Siege of Basra, code-named Operation Karbala-5, was an offensive operation carried out by Iran in an effort to capture the Iraqi port city of Basra in early 1987. This battle, known for its extensive casualties and ferocious conditions, was the biggest battle of the war and proved to be the beginning of the end of the Iran–Iraq War. While Iranian forces crossed the border and captured the eastern section of Basra Governorate, the operation ended in a stalemate. Operation Karbala-6 At the same time as Operation Karbala-5, Iran also launched Operation Karbala-6 against the Iraqis in Qasr-e Shirin in central Iran to prevent the Iraqis from rapidly transferring units down to defend against the Karbala-5 attack. The attack was carried out by Basij infantry and the Revolutionary Guard's 31st Ashura and the Army's 77th Khorasan armored divisions. The Basij attacked the Iraqi lines, forcing the Iraqi infantry to retreat. An Iraqi armored counter-attack surrounded the Basij in a pincer movement, but the Iranian tank divisions attacked, breaking the encirclement. The Iranian attack was finally stopped by mass Iraqi chemical weapons attacks. Iranian war-weariness Operation Karbala-5 was a severe blow to Iran's military and morale. To foreign observers, however, it appeared that Iran was continuing to strengthen.
By 1988, Iran had become self-sufficient in many areas, such as anti-tank TOW missiles, Scud ballistic missiles (Shahab-1), Silkworm anti-ship missiles, Oghab tactical rockets, and the production of spare parts for its weaponry. Iran had also improved its air defenses with smuggled surface-to-air missiles. Iran was even producing UAVs and the Pilatus PC-7 propeller aircraft for observation. Iran also doubled its stocks of artillery, and was self-sufficient in the manufacture of ammunition and small arms. While it was not obvious to foreign observers, the Iranian public had become increasingly war-weary and disillusioned with the fighting, and relatively few volunteers joined the fight in 1987–88. Because the Iranian war effort relied on popular mobilization, its military strength actually declined, and Iran was unable to launch any major offensives after Karbala-5. As a result, for the first time since 1982, the momentum of the fighting shifted towards the regular army. Since the regular army was conscription-based, this made the war even less popular. Many Iranians began to try to escape the conflict. As early as May 1985, anti-war demonstrations took place in 74 cities throughout Iran; they were crushed by the regime, and some protesters were shot and killed. By 1987, draft-dodging had become a serious problem, and the Revolutionary Guards and police set up roadblocks throughout cities to capture those who tried to evade conscription. Others, particularly the more nationalistic and religious, the clergy, and the Revolutionary Guards, wished to continue the war. The leadership acknowledged that the war was a stalemate, and began to plan accordingly. No more "final offensives" were planned. The head of the Supreme Defense Council, Hashemi Rafsanjani, announced the end of human wave attacks during a news conference.
Mohsen Rezaee, head of the IRGC, announced that Iran would focus exclusively on limited attacks and infiltrations, while arming and supporting opposition groups inside Iraq. On the Iranian home front, sanctions, declining oil prices, and Iraqi attacks on Iranian oil facilities and shipping took a heavy toll on the economy. While the attacks themselves were not as destructive as some analysts believed, the U.S.-led Operation Earnest Will (which protected Iraqi and allied oil tankers, but not Iranian ones) led many neutral countries to stop trading with Iran because of rising insurance costs and fear of air attack. Iranian oil and non-oil exports fell by 55%, inflation reached 50% by 1987, and unemployment skyrocketed. At the same time, Iraq was experiencing crushing debt and shortages of workers, encouraging its leadership to try to end the war quickly. Strategic situation in late 1987 By the end of 1987, Iraq possessed 5,550 tanks (outnumbering the Iranians six to one) and 900 fighter aircraft (outnumbering the Iranians ten to one). After Operation Karbala-5, Iraq had only 100 qualified fighter pilots remaining; it therefore began to recruit foreign pilots from countries such as Belgium, South Africa, Pakistan, East Germany, and the Soviet Union. Iraq replenished its manpower by integrating volunteers from other Arab countries into its army. Iraq also became self-sufficient in chemical weapons and some conventional ones, and received much equipment from abroad. Foreign support helped Iraq bypass its economic troubles and massive debt to continue the war and increase the size of its military. While the southern and central fronts were at a stalemate, Iran began to focus on carrying out offensives in northern Iraq with the help of the Peshmerga (Kurdish insurgents). The Iranians used a combination of semi-guerrilla and infiltration tactics in the Kurdish mountains with the Peshmerga.
During Operation Karbala-9 in early April, Iran captured territory near Suleimaniya, provoking a severe poison gas counter-attack. During Operation Karbala-10, Iran attacked near the same area, capturing more territory. During Operation Nasr-4, the Iranians surrounded the city of Suleimaniya and, with the help of the Peshmerga, infiltrated over 140 km into Iraq, raiding and threatening to capture the oil-rich city of Kirkuk and other northern oilfields. Nasr-4 was considered Iran's most successful individual operation of the war, but Iranian forces were unable to consolidate their gains and continue their advance; while these offensives, coupled with the Kurdish uprising, sapped Iraqi strength, losses in the north would not mean a catastrophic failure for Iraq. On 20 July, the UN Security Council passed the U.S.-sponsored Resolution 598, which called for an end to the fighting and a return to pre-war boundaries. Iran noted that it was the first resolution to call for a return to the pre-war borders and to set up a commission to determine the aggressor and compensation. Air and tanker war in 1987 With the stalemate on land, the air and tanker war began to play an increasingly major role in the conflict. The Iranian air force had become very small, with only 20 F-4 Phantoms, 20 F-5 Tigers, and 15 F-14 Tomcats in operation, although Iran managed to restore some damaged planes to service. The Iranian Air Force, despite its once sophisticated equipment, lacked enough equipment and personnel to sustain the war of attrition that had developed, and was unable to mount an outright onslaught against Iraq. The Iraqi Air Force had originally lacked modern equipment and experienced pilots, but after pleas from Iraqi military leaders, Saddam decreased political influence on everyday operations and left the fighting to his combatants.
The Soviets began delivering more advanced aircraft and weapons to Iraq, while the French improved training for flight crews and technical personnel and continually introduced new methods for countering Iranian weapons and tactics. Iranian ground air defense still shot down many Iraqi aircraft. The main Iraqi air effort had shifted to the destruction of Iranian war-fighting capability (primarily Persian Gulf oil fields, tankers, and Kharg Island), and starting in late 1986, the Iraqi Air Force began a comprehensive campaign against the Iranian economic infrastructure. By late 1987, the Iraqi Air Force could count on direct American support for conducting long-range operations against Iranian infrastructural targets and oil installations deep in the Persian Gulf, with U.S. Navy ships tracking and reporting movements of Iranian shipping and defences. In the massive Iraqi air strike against Kharg Island, flown on 18 March 1988, the Iraqis destroyed two supertankers but lost five aircraft to Iranian F-14 Tomcats, including two Tupolev Tu-22Bs and one Mikoyan MiG-25RB. The U.S. Navy was now becoming more involved in the fight in the Persian Gulf, launching Operations Earnest Will and Prime Chance against the Iranians. The attacks on oil tankers continued, with both Iran and Iraq carrying out frequent attacks during the first four months of the year. Iran was effectively waging a naval guerrilla war with its IRGC navy speedboats, while Iraq attacked with its aircraft. In 1987, Kuwait asked the United States to reflag its tankers; this was done in March, and the U.S. Navy began Operation Earnest Will to escort them. The result of Earnest Will was that, while oil tankers shipping Iraqi and Kuwaiti oil were protected, Iranian tankers and neutral tankers shipping to Iran were not, causing losses for Iran, undermining its trade with foreign countries, and damaging its economy further.
Iran deployed Silkworm missiles to attack ships, but only a few were actually fired. Both the United States and Iran jockeyed for influence in the Gulf. To discourage the United States from escorting tankers, Iran secretly mined some areas. The United States began to escort the reflagged tankers, but one was damaged by a mine while under escort. While this was a public-relations victory for Iran, the United States increased its reflagging efforts. While Iran mined the Persian Gulf, its speedboat attacks were reduced, primarily targeting unflagged tankers shipping in the area. On 24 September, U.S. Navy SEALs captured the Iranian mine-laying ship Iran Ajr, a diplomatic disaster for the already isolated Iranians. Iran had previously sought to maintain at least a pretense of plausible deniability regarding its use of mines, but the SEALs captured and photographed extensive evidence of Iran Ajr's mine-laying activities. On 8 October, the U.S. Navy destroyed four Iranian speedboats and, in response to Iranian Silkworm missile attacks on Kuwaiti oil tankers, launched Operation Nimble Archer, destroying two Iranian oil rigs in the Persian Gulf. During November and December, the Iraqi air force launched a bid to destroy all Iranian airbases in Khuzestan and the remaining Iranian air force. Iran managed to shoot down 30 Iraqi fighters with fighter jets, anti-aircraft guns, and missiles, allowing the Iranian air force to survive to the end of the war. On 28 June, Iraqi fighter-bombers attacked the Iranian town of Sardasht near the border, using chemical mustard gas bombs. While many towns and cities had been bombed before, and troops attacked with gas, this was the first time that the Iraqis had attacked a civilian area with poison gas. One quarter of the town's then-population of 20,000 was burned and stricken, and 113 people were killed immediately, with many more dying of or suffering health effects over the following decades.
Saddam ordered the attack in order to test the effects of the newly developed "dusty mustard" gas, which was designed to be even more crippling than traditional mustard gas. While little known outside of Iran (unlike the later Halabja chemical attack), the Sardasht bombing (and future similar attacks) had a tremendous effect on the Iranian people's psyche. 1988: Iraqi offensives and UN ceasefire By 1988, with massive equipment imports and reduced Iranian volunteers, Iraq was ready to launch major offensives against Iran. In February 1988, Saddam began the fifth and most deadly "war of the cities". Over the next two months, Iraq launched over 200 al-Hussein missiles at 37 Iranian cities. Saddam also threatened to use chemical weapons in his missiles, which caused 30% of Tehran's population to leave the city. Iran retaliated, launching at least 104 missiles against Iraq in 1988 and shelling Basra. This event was nicknamed the "Scud Duel" in the foreign media. In all, Iraq launched 520 Scuds and al-Husseins against Iran, and Iran fired 177 in return. The Iranian attacks were too few in number to deter Iraq from launching its own. Iraq also increased its airstrikes against Kharg Island and Iranian oil tankers. With its own tanker traffic protected by U.S. warships, Iraq could operate with virtual impunity. In addition, the West supplied Iraq's air force with laser-guided smart bombs, allowing it to attack economic targets while evading anti-aircraft defenses. These attacks began to take a major toll on the Iranian economy and morale, and caused many casualties. Iran's Kurdistan operations In March 1988, the Iranians carried out Operation Dawn 10, Operation Beit ol-Moqaddas 2, and Operation Zafar 7 in Iraqi Kurdistan with the aim of capturing the Darbandikhan Dam and the power plant at Lake Dukan, which supplied Iraq with much of its electricity and water, as well as the city of Suleimaniya.
Iran hoped that the capture of these areas would bring more favourable terms to the ceasefire agreement. This infiltration offensive was carried out in conjunction with the Peshmerga. Iranian airborne commandos landed behind the Iraqi lines and Iranian helicopters hit Iraqi tanks with TOW missiles. The Iraqis were taken by surprise, and Iranian F-5E Tiger fighter jets even damaged the Kirkuk oil refinery. Iraq carried out executions of multiple officers for these failures in March–April 1988, including Colonel Jafar Sadeq. The Iranians used infiltration tactics in the Kurdish mountains, captured the town of Halabja, and began to fan out across the province. Though the Iranians advanced to within sight of Dukan and captured territory and around 4,000 Iraqi troops, the offensive failed due to the Iraqi use of chemical warfare. The Iraqis launched the deadliest chemical weapons attacks of the war: the Republican Guard launched 700 chemical shells, while the other artillery divisions launched 200–300 chemical shells each, unleashing a chemical cloud over the Iranians and killing or wounding 60% of them; the blow was felt particularly by the Iranian 84th infantry division and 55th paratrooper division. The Iraqi special forces then stopped the remains of the Iranian force. In retaliation for Kurdish collaboration with the Iranians, Iraq launched a massive poison gas attack against Kurdish civilians in Halabja, recently taken by the Iranians, killing thousands of civilians. Iran airlifted foreign journalists to the ruined city, and the images of the dead were shown throughout the world, but Western mistrust of Iran and collaboration with Iraq led the West to blame Iran for the attack as well. Second Battle of al-Faw On 17 April 1988, Iraq launched Operation Ramadan Mubarak (Blessed Ramadan), a surprise attack against the 15,000 Basij troops on the al-Faw peninsula.
The attack was preceded by Iraqi diversionary attacks in northern Iraq, with a massive artillery and air barrage of Iranian front lines. Key areas, such as supply lines, command posts, and ammunition depots, were hit by a storm of mustard gas and nerve gas, as well as by conventional explosives. Helicopters landed Iraqi commandos behind Iranian lines on al-Faw while the main Iraqi force made a frontal assault. Within 48 hours, all of the Iranian forces had been killed or cleared from the al-Faw Peninsula. The day was celebrated in Iraq as Faw Liberation Day throughout Saddam's rule. The Iraqis had planned the offensive well. Prior to the attack, the Iraqi soldiers took poison gas antidotes to shield themselves from the effects of the saturation of gas. The heavy and well-executed use of chemical weapons was the decisive factor in the victory. Iraqi losses were relatively light, especially compared to Iran's casualties. Ra'ad al-Hamdani later recounted that the recapture of al-Faw marked "the highest point of experience and expertise that the Iraqi Army reached." The Iranians eventually managed to halt the Iraqi drive as it pushed towards Khuzestan. To the shock of the Iranians, rather than breaking off the offensive, the Iraqis kept up their drive, and a new force attacked the Iranian positions around Basra. Following this, the Iraqis launched a sustained drive to clear the Iranians out of all of southern Iraq. One of the most successful Iraqi tactics was the "one-two punch" attack using chemical weapons: using artillery, they would saturate the Iranian front line with rapidly dispersing cyanide and nerve gas, while longer-lasting mustard gas was launched via fighter-bombers and rockets against the Iranian rear, creating a "chemical wall" that blocked reinforcement.
Operation Praying Mantis On the same day as Iraq's attack on the al-Faw peninsula, the United States Navy launched Operation Praying Mantis in retaliation against Iran for damaging a warship with a mine. Iran lost oil platforms, destroyers, and frigates in this battle, which ended only when President Reagan decided that the Iranian navy had been damaged enough. In spite of this, the Revolutionary Guard Navy continued its speedboat attacks against oil tankers. The defeats at al-Faw and in the Persian Gulf nudged the Iranian leadership towards quitting the war, especially when facing the prospect of fighting the Americans. Iranian counteroffensive Faced with such losses, Khomeini appointed the cleric Hashemi Rafsanjani as Supreme Commander of the Armed Forces, though he had in actuality occupied that position for months. Rafsanjani ordered a last desperate counter-attack into Iraq, which was launched on 13 June 1988. The Iranians infiltrated through the Iraqi trenches and moved into Iraq, and managed to strike Saddam's presidential palace in Baghdad using fighter aircraft. After three days of fighting, the decimated Iranians were driven back to their original positions as the Iraqis launched 650 helicopter and 300 aircraft sorties. Operation Forty Stars On 18 June, Iraq launched Operation Forty Stars (chehel cheragh) in conjunction with the Mujahideen-e-Khalq (MEK) around Mehran. With 530 aircraft sorties and heavy use of nerve gas, they crushed the Iranian forces in the area, killing 3,500 and nearly destroying a Revolutionary Guard division. Mehran was captured once again and occupied by the MEK. Iraq also launched air raids on Iranian population centres and economic targets, setting 10 oil installations on fire. Tawakalna ala Allah operations On 25 May 1988, Iraq launched the first of five Tawakalna ala Allah operations, consisting of one of the largest artillery barrages in history, coupled with chemical weapons.
The marshes had been dried by drought, allowing the Iraqis to use tanks to bypass Iranian field fortifications and expel the Iranians from the border town of Shalamcheh after less than 10 hours of combat. On 25 June, Iraq launched the second Tawakalna ala Allah operation against the Iranians on Majnoon Island. Iraqi commandos used amphibious craft to block the Iranian rear, then used hundreds of tanks with massed conventional and chemical artillery barrages to recapture the island after 8 hours of combat. Saddam appeared live on Iraqi television to "lead" the charge against the Iranians. The majority of the Iranian defenders were killed during the quick assault. The final two Tawakalna ala Allah operations took place near al-Amarah and Khaneqan. By 12 July, the Iraqis had captured the city of Dehloran, inside Iran, along with 2,500 troops and much armor and materiel, which took four days to transport to Iraq. These losses included more than 570 of the 1,000 remaining Iranian tanks, over 430 armored vehicles, 45 self-propelled artillery pieces, 300 towed artillery pieces, and 320 anti-aircraft guns. These figures only included what Iraq could actually put to use; the total amount of captured materiel was higher. Since March, the Iraqis claimed to have captured 1,298 tanks, 155 infantry fighting vehicles, 512 heavy artillery pieces, 6,196 mortars, 5,550 recoilless rifles and light guns, 8,050 man-portable rocket launchers, 60,694 rifles, 322 pistols, 454 trucks, and 1,600 light vehicles. The Iraqis withdrew from Dehloran soon after, claiming that they had "no desire to conquer Iranian territory". History professor Kaveh Farrokh considered this to be Iran's greatest military disaster during the war. Stephen Pelletier, a journalist, Middle East expert, and author, noted that "Tawakal ala Allah ... resulted in the absolute destruction of Iran's military machine." During the 1988 battles, the Iranians put up little resistance, having been worn out by nearly eight years of war.
They lost large amounts of equipment. On 2 July, Iran belatedly set up a joint central command which unified the Revolutionary Guard, Army, and Kurdish rebels, and dispelled the rivalry between the Army and the Revolutionary Guard. However, this came too late and, following the capture of 570 of their operable tanks and the destruction of hundreds more, Iran was believed to have fewer than 200 remaining operable tanks on the southern front, against thousands of Iraqi ones. The only area where the Iranians were not suffering major defeats was in Kurdistan. Iran accepts the ceasefire Saddam sent a warning to Khomeini in mid-1988, threatening to launch a new and powerful full-scale invasion and attack Iranian cities with weapons of mass destruction. Shortly afterwards, Iraqi aircraft bombed the Iranian town of Oshnavieh with poison gas, immediately killing and wounding over 2,000 civilians. The fear of an all-out chemical attack against Iran's largely unprotected civilian population weighed heavily on the Iranian leadership, which realized that the international community had no intention of restraining Iraq. Daily life for Iran's civilian population was severely disrupted, with a third of the urban population evacuating major cities in fear of the seemingly imminent chemical war. Meanwhile, Iraqi conventional bombs and missiles continuously hit towns and cities, destroying vital civilian and military infrastructure and increasing the death toll. Iran replied with missile and air attacks, but not sufficiently to deter the Iraqis. With the threat of a new and even more powerful invasion, Commander-in-Chief Rafsanjani ordered the Iranians to retreat from Haj Omran, Kurdistan, on 14 July. The Iranians did not publicly describe this as a retreat, instead calling it a "temporary withdrawal". By July, Iran's army inside Iraq had largely disintegrated.
Iraq put up a massive display of captured Iranian weapons in Baghdad, claiming to have captured 1,298 tanks, 5,550 recoilless rifles, and thousands of other weapons. However, Iraq had taken heavy losses as well, and the battles were very costly. In July 1988, Iraqi aircraft dropped bombs on the Iranian Kurdish village of Zardan. Dozens of villages, such as Sardasht, and some larger towns, such as Marivan, Baneh and Saqqez, were once again attacked with poison gas, resulting in even heavier civilian casualties. On 3 July 1988, the USS Vincennes shot down Iran Air Flight 655, killing 290 passengers and crew. The lack of international sympathy disturbed the Iranian leadership, and they came to the conclusion that the United States was on the verge of waging a full-scale war against them, and that Iraq was on the verge of unleashing its entire chemical arsenal upon their cities. At this point, elements of the Iranian leadership, led by Rafsanjani (who had initially pushed for the extension of the war), persuaded Khomeini to accept a ceasefire. They stated that in order to win the war, Iran's military budget would have to be increased eightfold and the war would last until 1993. On 20 July 1988, Iran accepted Resolution 598, showing its willingness to accept a ceasefire. A statement from Khomeini was read out in a radio address, in which he expressed deep displeasure and reluctance about accepting the ceasefire: Happy are those who have departed through martyrdom. Happy are those who have lost their lives in this convoy of light. Unhappy am I that I still survive and have drunk the poisoned chalice... The news of the end of the war was greeted with celebration in Baghdad, with people dancing in the streets; in Tehran, however, the end of the war was greeted with a sombre mood. Operation Mersad and end of the war Operation Mersad ("ambush") was the last big military operation of the war.
Both Iran and Iraq had accepted Resolution 598, but despite the ceasefire, after seeing Iraqi victories in the previous months, the Mujahideen-e-Khalq (MEK) decided to launch an attack of its own, wishing to advance all the way to Tehran. Saddam and the Iraqi high command decided on a two-pronged offensive across the border into central Iran and Iranian Kurdistan. Shortly after Iran accepted the ceasefire, the MEK army began its offensive, attacking into Ilam province under cover of Iraqi air power. In the north, Iraq also launched an attack into Iranian Kurdistan, which was blunted by the Iranians. On 26 July 1988, the MEK started their campaign in central Iran, Operation Forough Javidan (Eternal Light), with the support of the Iraqi army. The Iranians had withdrawn their remaining soldiers to Khuzestan in fear of a new Iraqi invasion attempt, allowing the MEK to advance rapidly towards Kermanshah, seizing Qasr-e Shirin, Sarpol-e Zahab, Kerend-e Gharb, and Islamabad-e-Gharb. The MEK expected the Iranian population to rise up and support their advance; the uprising never materialised, but they reached deep into Iran. In response, the Iranian military launched its counter-attack, Operation Mersad, under Lieutenant General Ali Sayyad Shirazi. Iranian paratroopers landed behind the MEK lines while the Iranian Air Force and helicopters launched an air attack, destroying much of the enemy columns. The Iranians defeated the MEK in the city of Kerend-e Gharb on 29 July 1988. On 31 July, Iran drove the MEK out of Qasr-e Shirin and Sarpol-e Zahab, though the MEK claimed to have "voluntarily withdrawn" from the towns. Iran estimated that 4,500 MEK fighters were killed, while 400 Iranian soldiers died. The last notable combat actions of the war took place on 3 August 1988, in the Persian Gulf, when the Iranian navy fired on a freighter and Iraq launched chemical attacks on Iranian civilians, killing an unknown number of them and wounding 2,300.
Iraq came under international pressure to curtail further offensives. Resolution 598 became effective on 8 August 1988, ending all combat operations between the two countries. By 20 August 1988, peace with Iran was restored. UN peacekeepers belonging to the UNIIMOG mission took the field, remaining on the Iran–Iraq border until 1991. The majority of Western analysts believe that the war had no winners, while some believe that Iraq emerged as the victor, based on Iraq's overwhelming successes between April and July 1988. While the war was now over, Iraq spent the rest of August and early September clearing the Kurdish resistance. Using 60,000 troops along with helicopter gunships, chemical weapons (poison gas), and mass executions, Iraq hit 15 villages, killing rebels and civilians, and forced tens of thousands of Kurds to relocate to settlements. Many Kurdish civilians fled to Iran. By 3 September 1988, the anti-Kurd campaign had ended, and all resistance had been crushed. 400 Iraqi soldiers and 50,000–100,000 Kurdish civilians and soldiers had been killed. At the war's conclusion, it took several weeks for the Armed Forces of the Islamic Republic of Iran to evacuate Iraqi territory and honour the pre-war international borders set by the 1975 Algiers Agreement. The last prisoners of war were exchanged in 2003. The Security Council did not identify Iraq as the aggressor of the war until 11 December 1991, some 11 years after Iraq invaded Iran and 16 months following Iraq's invasion of Kuwait. Aftermath Casualties The Iran–Iraq War was the deadliest conventional war ever fought between regular armies of developing countries. Encyclopædia Britannica states: "Estimates of total casualties range from 1,000,000 to twice that number. The number killed on both sides was perhaps 500,000, with Iran suffering the greatest losses." Iraqi casualties are estimated at 105,000–200,000 killed, while about 400,000 were wounded and some 70,000 taken prisoner.
Thousands of civilians on both sides died in air raids and ballistic missile attacks. Prisoners taken by both countries began to be released in 1990, though some were not released until more than 10 years after the end of the conflict. Cities on both sides had also been considerably damaged. While revolutionary Iran had been bloodied, Iraq was left with a large military and was a regional power, albeit with severe debt, financial problems, and labour shortages. According to Iranian government sources, the war cost Iran an estimated 200,000–220,000 killed, or up to 262,000 according to conservative Western estimates. This includes 123,220 combatants, 60,711 missing in action (MIA), and 11,000–16,000 civilians. The combatants include 79,664 members of the Revolutionary Guard Corps and an additional 35,170 soldiers from the regular military. In addition, 42,875 Iranian prisoners of war are counted among the casualties; they were captured and kept in Iraqi detention centres for between 2.5 and more than 15 years after the war was over. According to the Janbazan Affairs Organization, 398,587 Iranians sustained injuries that required prolonged medical and health care following primary treatment, including 52,195 (13%) injured due to exposure to chemical warfare agents. From 1980 to 2012, 218,867 Iranians died of war injuries, and the mean age of the combatants was 23 years. This figure includes 33,430 civilians, mostly women and children. More than 144,000 Iranian children were orphaned as a consequence of these deaths. Other estimates put Iranian casualties at up to 600,000. Both Iraq and Iran manipulated loss figures to suit their purposes. At the same time, Western analysts accepted improbable estimates. By April 1988, casualties were estimated at between 150,000 and 340,000 Iraqis dead, and 450,000 to 730,000 Iranians. Shortly after the end of the war, it was thought that Iran had suffered more than a million dead.
Considering the style of fighting on the ground and the fact that neither side penetrated deeply into the other's territory, USMC analysts believe events do not substantiate the high casualties claimed. The Iraqi government has claimed 800,000 Iranians were killed in action, four times more than Iranian official figures, whereas Iraqi intelligence privately put the number at 228,000–258,000 as of August 1986. Iraqi losses were also revised downwards over time. Peace talks and postwar situation With the ceasefire in place, and UN peacekeepers monitoring the border, Iran and Iraq sent their representatives to Geneva, Switzerland, to negotiate a peace agreement on the terms of the ceasefire. However, peace talks stalled. Iraq, in violation of the UN ceasefire, refused to withdraw its troops from the disputed territory at the border area unless the Iranians accepted Iraq's full sovereignty over the Shatt al-Arab waterway. Foreign powers continued to support Iraq, which wanted to gain at the negotiating table what it had failed to achieve on the battlefield, and Iran was portrayed as the one not wanting peace. In response, Iran refused to release 70,000 Iraqi prisoners of war (compared to 40,000 Iranian prisoners of war held by Iraq). It also continued to carry out a naval blockade of Iraq, although its effects were mitigated by Iraqi use of ports in friendly neighbouring Arab countries. Iran also began to improve relations with many of the states that had opposed it during the war. Because of these Iranian actions, by 1990, Saddam had become more conciliatory, and in a letter to Rafsanjani, the future fourth president of Iran, he became more open to the idea of a peace agreement, although he still insisted on full sovereignty over the Shatt al-Arab. By 1990, Iran was undergoing military rearmament and reorganization, and purchased $10 billion worth of heavy weaponry from the USSR and China, including aircraft, tanks, and missiles.
Rafsanjani reversed Iran's self-imposed ban on chemical weapons, and ordered the manufacture and stockpiling of them (Iran destroyed them in 1993 after ratifying the Chemical Weapons Convention). As war with the Western powers loomed, Iraq became concerned about the possibility of Iran mending its relations with the West in order to attack Iraq. Iraq had lost its support from the West, and its position in Iran was increasingly untenable. Saddam realized that if Iran attempted to expel the Iraqis from the disputed territories in the border area, it was likely they would succeed. Shortly after his invasion of Kuwait, Saddam wrote a letter to Rafsanjani stating that Iraq recognised Iranian rights over the eastern half of the Shatt al-Arab, a reversion to the status quo ante bellum that he had repudiated a decade earlier, and that he would accept Iran's demands and withdraw Iraq's military from the disputed territories. A peace agreement was signed finalizing the terms of the UN resolution, diplomatic relations were restored, and by late 1990 and early 1991, the Iraqi military had withdrawn. The UN peacekeepers withdrew from the border shortly afterward. Most of the prisoners of war were released in 1990, although some remained as late as 2003. Iranian politicians declared it to be the "greatest victory in the history of the Islamic Republic of Iran". Most historians and analysts consider the war to be a stalemate. Certain analysts believe that Iraq won, on the basis of the successes of its 1988 offensives, which thwarted Iran's major territorial ambitions in Iraq and persuaded Iran to accept the ceasefire. Iranian analysts believe that they won the war because although they did not succeed in overthrowing the Iraqi government, they thwarted Iraq's major territorial ambitions in Iran, and that, two years after the war had ended, Iraq permanently gave up its claim of ownership over the entire Shatt al-Arab as well.
On 9 December 1991, Javier Pérez de Cuéllar, UN Secretary General at the time, reported that Iraq's initiation of the war was unjustified, as was its occupation of Iranian territory and use of chemical weapons against civilians: That [Iraq's] explanations do not appear sufficient or acceptable to the international community is a fact...[the attack] cannot be justified under the charter of the United Nations, any recognized rules and principles of international law, or any principles of international morality, and entails the responsibility for conflict. Even if before the outbreak of the conflict there had been some encroachment by Iran on Iraqi territory, such encroachment did not justify Iraq's aggression against Iran—which was followed by Iraq's continuous occupation of Iranian territory during the conflict—in violation of the prohibition of the use of force, which is regarded as one of the rules of jus cogens...On one occasion I had to note with deep regret the experts' conclusion that "chemical weapons ha[d] been used against Iranian civilians in an area adjacent to an urban center lacking any protection against that kind of attack." He also stated that had the UN accepted this fact earlier, the war would almost certainly not have lasted as long as it did. Iran, encouraged by the announcement, sought reparations from Iraq, but never received any. Throughout the 1990s and early 2000s, Iran–Iraq relations remained balanced between a cold war and a cold peace. Despite renewed and somewhat thawed relations, both sides continued to have low-level conflicts. Iraq continued to host and support the Mujahedeen-e-Khalq, which carried out multiple attacks throughout Iran up until the 2003 invasion of Iraq (including the assassination of Iranian general Ali Sayyad Shirazi in 1999, cross-border raids, and mortar attacks).
Iran carried out several airstrikes and missile attacks against Mujahedeen targets inside Iraq (the largest taking place in 2001, when Iran fired 56 Scud missiles at Mujahedeen targets). In addition, according to General Hamdani, Iran continued to carry out low-level infiltrations of Iraqi territory, using Iraqi dissidents and anti-government activists rather than Iranian troops, in order to incite revolts. After the fall of Saddam in 2003, Hamdani claimed that Iranian agents infiltrated and created numerous militias in Iraq and built an intelligence system operating within the country. In 2005, the new government of Iraq apologised to Iran for starting the war. The Iraqi government also commemorated the war with various monuments, including the Hands of Victory and the al-Shaheed Monument, both in Baghdad. The war also helped to create a forerunner for the Coalition of the Gulf War, when the Gulf Arab states banded together early in the war to form the Gulf Cooperation Council to help Iraq fight Iran. Economic situation The economic loss at the time was believed to exceed $500 billion for each country ($1.2 trillion total). In addition, economic development stalled and oil exports were disrupted. Iraq had accrued more than $130 billion of international debt, excluding interest, and was also weighed down by slowed GDP growth. Iraq's debt to the Paris Club amounted to $21 billion, 85% of which had originated from the combined inputs of Japan, the USSR, France, Germany, the United States, Italy and the United Kingdom. The largest portion of Iraq's debt, amounting to $130 billion, was to its former Arab backers, with $67 billion loaned by Kuwait, Saudi Arabia, Qatar, UAE, and Jordan.
After the war, Iraq accused Kuwait of slant drilling and stealing oil, which helped incite its invasion of Kuwait and in turn worsened Iraq's financial situation: the United Nations Compensation Commission mandated Iraq to pay reparations of more than $200 billion to victims of the invasion, including Kuwait and the United States. To enforce payment, Iraq was put under a complete international embargo, which further strained the Iraqi economy and pushed its external debt to private and public sectors to more than $500 billion by the end of Saddam's rule. Combined with Iraq's negative economic growth after prolonged international sanctions, this produced a debt-to-GDP ratio of more than 1,000%, making Iraq the most indebted developing country in the world. The unsustainable economic situation compelled the new Iraqi government to request that a considerable portion of the debt incurred during the Iran–Iraq War be written off. Much of the oil industry in both countries was damaged in air raids. Science and technology The war had its impact on medical science: Iranian physicians treating wounded soldiers developed a surgical intervention for comatose patients with penetrating brain injuries, later establishing neurosurgery guidelines for treating civilians who had suffered blunt or penetrating skull injuries. Iranian physicians' experience in the war informed the medical care of U.S. congresswoman Gabby Giffords after the 2011 Tucson shooting. In addition to helping trigger the Persian Gulf War, the Iran–Iraq War also contributed to Iraq's defeat in the Persian Gulf War. Iraq's military was accustomed to fighting the slow-moving Iranian infantry formations with artillery and static defences, while using mostly unsophisticated tanks to gun down and shell the infantry and overwhelm the smaller Iranian tank force, and it depended on weapons of mass destruction to help secure victories.
Therefore, they were rapidly overwhelmed by the high-tech, quick-maneuvering Coalition forces using modern doctrines such as AirLand Battle. Domestic situation Iraq At first, Saddam attempted to ensure that the Iraqi population suffered from the war as little as possible. There was rationing, but civilian projects begun before the war continued. At the same time, the already extensive personality cult around Saddam reached new heights while the regime tightened its control over the military. After the Iranian victories of the spring of 1982 and the Syrian closure of Iraq's main pipeline, Saddam did a volte-face on his policy towards the home front: a policy of austerity and total war was introduced, with the entire population being mobilised for the war effort. All Iraqis were ordered to donate blood and around 100,000 Iraqi civilians were ordered to clear the reeds in the southern marshes. Mass demonstrations of loyalty towards Saddam became more common. Saddam also began implementing a policy of discrimination against Iraqis of Iranian origin. In the summer of 1982, Saddam began a campaign of terror. More than 300 Iraqi Army officers were executed for their failures on the battlefield. In 1983, a major crackdown was launched on the leadership of the Shia community. Ninety members of the al-Hakim family, an influential family of Shia clerics whose leading members were the émigrés Mohammad Baqir al-Hakim and Abdul Aziz al-Hakim, were arrested, and six were hanged. A crackdown on the Kurds saw 8,000 members of the Barzani clan, whose leader (Massoud Barzani) also led the Kurdistan Democratic Party, similarly executed. From 1983 onwards, a campaign of increasingly brutal repression was started against the Iraqi Kurds, characterised by Israeli historian Efraim Karsh as having "assumed genocidal proportions" by 1988. The al-Anfal Campaign was intended to "pacify" Iraqi Kurdistan permanently.
By 1983, the Barzanis had entered an alliance with Iran in defence against Saddam Hussein. Gaining civilian support To secure the loyalty of the Shia population, Saddam allowed more Shias into the Ba'ath Party and the government, and improved Shia living standards, which had been lower than those of the Iraqi Sunnis. Saddam had the state pay for restoring Imam Ali's tomb with white marble imported from Italy. At the same time, the Baathists increased their repression of the Shia. The most infamous event was the massacre of 148 civilians of the Shia town of Dujail. Despite the costs of the war, the Iraqi regime made generous contributions to Shia waqf (religious endowments) as part of the price of buying Iraqi Shia support. The importance of winning Shia support was such that welfare services in Shia areas were expanded during a time in which the Iraqi regime was pursuing austerity in all other non-military fields. During the first years of the war in the early 1980s, the Iraqi government tried to accommodate the Kurds in order to focus on the war against Iran. In 1983, the Patriotic Union of Kurdistan (PUK) agreed to cooperate with Baghdad, but the Kurdistan Democratic Party (KDP) remained opposed. That year, Saddam signed an autonomy agreement with PUK leader Jalal Talabani, though he later reneged on it. By 1985, the PUK and KDP had joined forces, and Iraqi Kurdistan saw widespread guerrilla warfare up to the end of the war. Iran Israeli-British historian Efraim Karsh argued that the Iranian government saw the outbreak of war as a chance to strengthen its position and consolidate the Islamic revolution, noting that government propaganda presented it domestically as a glorious jihad and a test of Iranian national character. The Iranian regime followed a policy of total war from the beginning, and attempted to mobilise the nation as a whole.
They established a group known as the Reconstruction Campaign, whose members were exempted from conscription and were instead sent into the countryside to work on farms to replace the men serving at the front. Iranian workers had a day's pay deducted from their pay cheques every month to help finance the war, and mass campaigns were launched to encourage the public to donate food, money, and blood. To further help finance the war, the Iranian government banned the import of all non-essential items, and launched a major effort to rebuild the damaged oil plants. According to former Iraqi general Ra'ad al-Hamdani, the Iraqis believed that in addition to the Arab revolts, the Revolutionary Guards would be drawn out of Tehran, leading to a counter-revolution in Iran that would cause Khomeini's government to collapse and thus ensure Iraqi victory. However, rather than turning against the revolutionary government as experts had predicted, Iran's people (including Iranian Arabs) rallied in support of the country and put up a stiff resistance. Civil unrest In June 1981, street battles broke out between the Revolutionary Guard and the left-wing Mujahideen-e-Khalq (MEK), continuing for several days and killing hundreds on both sides. In September, more unrest broke out on the streets of Iran as the MEK attempted to seize power. Thousands of left-wing Iranians (many of whom were not associated with the MEK) were shot and hanged by the government. The MEK began an assassination campaign that killed hundreds of regime officials by the fall of 1981. On 28 June 1981, they assassinated the secretary-general of the Islamic Republican Party, Mohammad Beheshti, and on 30 August killed Iran's president, Mohammad-Ali Rajai. The government responded with mass executions of suspected MEK members, a practice that lasted until 1985.
In addition to the open civil conflict with the MEK, the Iranian government was faced with Iraqi-supported rebellions in Iranian Kurdistan, which were gradually put down through a campaign of systematic repression. The year 1985 also saw student anti-war demonstrations, which were crushed by government forces. Economy The NEDSA commander announced in September 2020 that Iran had spent $19.6 billion on the war. The war furthered the decline of the Iranian economy that had begun with the revolution in 1978–79. Between 1979 and 1981, foreign exchange reserves fell from $14.6 billion to $1 billion. As a result of the war, living standards dropped dramatically, and Iran was described by British journalists John Bulloch and Harvey Morris as "a dour and joyless place" ruled by a harsh regime that "seemed to have nothing to offer but endless war". Though Iran was becoming bankrupt, Khomeini interpreted Islam's prohibition of usury to mean that Iran could not borrow against future oil revenues to meet war expenses. As a result, Iran funded the war with the income from oil exports after its cash had run out. The revenue from oil dropped from $20 billion in 1982 to $5 billion in 1988. French historian Pierre Razoux argued that this sudden drop in economic and industrial potential, in conjunction with the increasing aggression of Iraq, left Iran with little leeway other than to accept Iraq's conditions of peace. In January 1985, former prime minister and anti-war Islamic Liberation Movement co-founder Mehdi Bazargan criticised the war in a telegram to the United Nations, calling it un-Islamic and illegitimate and arguing that Khomeini should have accepted Saddam's truce offer in 1982 instead of attempting to overthrow the Ba'ath. In a public letter to Khomeini sent in May 1988, he added "Since 1986, you have not stopped proclaiming victory, and now you are calling upon the population to resist until victory. Is that not an admission of failure on your part?"
Khomeini was annoyed by Bazargan's telegram, and issued a lengthy public rebuttal in which he defended the war as both Islamic and just. By 1987, Iranian morale had begun to crumble, reflected in the failure of government campaigns to recruit "martyrs" for the front. Israeli historian Efraim Karsh points to the decline in morale in 1987–88 as being a major factor in Iran's decision to accept the ceasefire of 1988. Not all saw the war in negative terms. The Islamic Revolution of Iran was strengthened and radicalised. The Iranian government-owned Etelaat newspaper wrote, "There is not a single school or town that is excluded from the happiness of 'holy defence' of the nation, from drinking the exquisite elixir of martyrdom, or from the sweet death of the martyr, who dies in order to live forever in paradise." Comparison of Iraqi and Iranian military strength Iran's regular Army had been purged after the 1979 Revolution, with most high-ranking officers either having deserted (fled the country) or been executed. At the beginning of the war, Iraq held a clear advantage in armour, while both nations were roughly equal in terms of artillery. The gap only widened as the war went on. Iran started with a stronger air force, but over time, the balance of power reversed in Iraq's favour (as Iraq was constantly expanding its military, while Iran was under arms sanctions). Estimates for 1980 and 1987 were: The conflict has been compared to World War I in terms of the tactics used, including large-scale trench warfare with barbed wire stretched across trenches, manned machine gun posts, bayonet charges, human wave attacks across a no man's land, and extensive use of chemical weapons such as sulfur mustard by the Iraqi government against Iranian troops, civilians, and Kurds. The world powers, the United States and the Soviet Union, together with many Western and Arab countries, provided military, intelligence, economic, and political support for Iraq.
On average, Iraq imported about $7 billion in weapons during every year of the war, accounting for fully 12% of global arms sales in the period. The value of Iraqi arms imports increased to between $12 billion and $14 billion during 1984–1987, whereas the value of Iranian arms imports fell from $14 billion in 1985 to $5.89 billion in 1986 and an estimated $6 billion to $8 billion in 1987. Iran was constrained by the price of oil during the 1980s oil glut, as foreign countries were largely unwilling to extend credit to it. Iraq, by contrast, financed its continued massive military expansion by taking on vast quantities of debt, which allowed it to win a number of victories against Iran near the end of the war but left the country bankrupt. Despite its larger population, by 1988 Iran's ground forces numbered only 600,000 whereas the Iraqi army had grown to include 1 million soldiers. Foreign support to Iraq and Iran During the war, Iraq was regarded by the West and the Soviet Union as a counterbalance to post-revolutionary Iran. The Soviet Union, Iraq's main arms supplier during the war, did not wish for the end of its alliance with Iraq, and was alarmed by Saddam's threats to find new arms suppliers in the West and China if the Kremlin did not provide him with the weapons he wanted. The Soviet Union hoped to use the threat of reducing arms supplies to Iraq as leverage for forming a Soviet-Iranian alliance. During the early years of the war, the United States lacked meaningful relations with either Iran or Iraq, the former due to the Iranian Revolution and the Iran hostage crisis and the latter because of Iraq's alliance with the Soviet Union and hostility towards Israel. Following Iran's success in repelling the Iraqi invasion and Khomeini's refusal to end the war in 1982, the United States reached out to Iraq, beginning with the restoration of diplomatic relations in 1984.
The United States wished to both keep Iran away from Soviet influence and protect other Gulf states from any threat of Iranian expansion. As a result, it began to provide limited support to Iraq. In 1982, Henry Kissinger, former Secretary of State, outlined U.S. policy towards Iran: The focus of Iranian pressure at this moment is Iraq. There are few governments in the world less deserving of our support and less capable of using it. Had Iraq won the war, the fear in the Gulf and the threat to our interest would be scarcely less than it is today. Still, given the importance of the balance of power in the area, it is in our interests to promote a ceasefire in that conflict; though not at a cost that will preclude an eventual rapprochement with Iran either if a more moderate regime replaces Khomeini's or if the present rulers wake up to the geopolitical reality that the historic threat to Iran's independence has always come from the country with which it shares a border: the Soviet Union. A rapprochement with Iran, of course, must await at a minimum Iran's abandonment of hegemonic aspirations in the Gulf. Richard Murphy, Assistant Secretary of State during the war, testified to Congress in 1984 that the Reagan administration believed a victory for either Iran or Iraq was "neither militarily feasible nor strategically desirable". Support to Iraq was given via technological aid, intelligence, the sale of dual-use chemical and biological warfare related technology and military equipment, and satellite intelligence. While there was direct combat between Iran and the United States, it is not universally agreed that this fighting was intended specifically to benefit Iraq rather than arising from separate disputes between the U.S. and Iran. American official ambiguity towards which side to support was summed up by Henry Kissinger when he remarked, "It's a pity they both can't lose."
The Americans and the British also either blocked or watered down UN resolutions that condemned Iraq for using chemical weapons against the Iranians and against its own Kurdish citizens. More than 30 countries provided support to Iraq, Iran, or both; most of the aid went to Iraq. Iran had a complex clandestine procurement network to obtain munitions and critical materials. Iraq had an even larger clandestine purchasing network, involving 10–12 allied countries, to maintain ambiguity over its arms purchases and to circumvent "official restrictions". Arab mercenaries and volunteers from Egypt and Jordan formed the Yarmouk Brigade and participated in the war alongside Iraqis.

Iraq

According to the Stockholm International Peace Research Institute, the Soviet Union, France, and China together accounted for over 90% of the value of Iraq's arms imports between 1980 and 1988. The United States pursued policies in favour of Iraq by reopening diplomatic channels, lifting restrictions on the export of dual-use technology, overseeing the transfer of third-party military hardware, and providing operational intelligence on the battlefield. France, which from the 1970s had been one of Iraq's closest allies, was a major supplier of military hardware. The French sold weapons worth $5 billion, comprising well over a quarter of Iraq's total arms stockpile. Citing the French magazine Le Nouvel Observateur as its primary source, but also quoting French officials, the New York Times reported that France had been sending precursors of chemical weapons to Iraq since 1986. China, which had no direct stake in the victory of either side and whose interests in the war were entirely commercial, freely sold arms to both sides. Iraq also made extensive use of front companies, middlemen, secret ownership of all or part of companies all over the world, forged end-user certificates, and other methods to hide what it was acquiring.
Some transactions may have involved people, shipping, and manufacturing in as many as 10 countries. Support from Great Britain exemplified the methods by which Iraq would circumvent export controls: Iraq bought at least one British company with operations in the United Kingdom and the United States, and had a complex relationship with France and the Soviet Union, its major suppliers of actual weapons. Turkey took action against the Kurds in 1986, citing attacks by the Kurdistan Workers' Party (PKK); this prompted a harsh diplomatic intervention by Iran, which was planning a new offensive against Iraq at the time and was counting on the support of Kurdish factions. Sudan supported Iraq directly during the war, sending a contingent to fight at the frontlines. The Sudanese unit consisted largely of Ugandan refugees from the West Nile Region, recruited by Juma Oris. The United Nations Security Council initially called for a cease-fire after a week of fighting, while Iraq was occupying Iranian territory, and renewed the call on later occasions. However, the UN did not come to Iran's aid to repel the Iraqi invasion, and the Iranians thus interpreted the UN as subtly biased in favour of Iraq.

Financial support

Iraq's main financial backers were the oil-rich Persian Gulf states, most notably Saudi Arabia ($30.9 billion), Kuwait ($8.2 billion), and the United Arab Emirates ($8 billion). In all, Iraq received $35 billion in loans from the West and between $30 billion and $40 billion from the Persian Gulf states during the 1980s. The Iraqgate scandal revealed
electronic articles or books (often dozens or hundreds of them) and reading parts of several articles in each session. Articles in the reading list are prioritized by the user. In the course of reading, key points of articles are broken up into flashcards, which are then learned and reviewed over an extended period of time with the help of a spaced repetition algorithm. This use of flashcards at later stages of the process is based on the spacing effect (the phenomenon whereby learning is greater when studying is spread out over time) and the testing effect (the finding that long-term memory is increased when some of the learning period is devoted to retrieving the to-be-remembered information through testing). It is targeted at people who are trying to learn a large amount of information for life, particularly if that information comes from various sources.

History

The method itself is often credited to the Polish software developer Piotr Wozniak. He implemented the first version of incremental reading in 1999 in SuperMemo 99, providing the essential tools of the method: a prioritized reading list, and the possibility to extract portions of articles and to create cloze deletions. The term "incremental reading" itself appeared the next year with SuperMemo 2000. Later SuperMemo programmes enhanced the tools and techniques involved, such as webpage imports, material overload handling, etc. Limited incremental reading support for the text editor Emacs appeared in 2007. An Anki add-on for incremental reading was published in 2011; for Anki 2.0 and 2.1, another add-on is available. Incremental reading was the first of a series of related concepts invented by Piotr Wozniak: incremental image learning, incremental video, incremental audio, incremental mail processing, incremental problem solving, and incremental writing. "Incremental learning" is the term Wozniak uses to refer to those concepts as a whole.

Method

When reading an electronic article, the user extracts the most important parts (similar to underlining or highlighting a paper article) and gradually distills them into flashcards. Flashcards present information in a question-answer format (making active recall possible). Cloze deletions are often used in incremental reading, as they are easy to create out of text. Both extracts and flashcards are scheduled independently from the original article. With time
determining intelligence via craniometry, arguing that both are based on the fallacy of reification, "our tendency to convert abstract concepts into entities". Gould's argument sparked a great deal of debate, and the book is listed as one of Discover Magazine's "25 Greatest Science Books of All Time". Along the same lines, critics such as Keith Stanovich do not dispute the capacity of IQ test scores to predict some kinds of achievement, but argue that basing a concept of intelligence on IQ test scores alone neglects other important aspects of mental ability. Robert Sternberg, another significant critic of IQ as the main measure of human cognitive abilities, argued that reducing the concept of intelligence to the measure of g does not fully account for the different skills and knowledge types that produce success in human society. Despite these objections, clinical psychologists generally regard IQ scores as having sufficient statistical validity for many clinical purposes.

Test bias or differential item functioning

Differential item functioning (DIF), sometimes referred to as measurement bias, is a phenomenon in which participants from different groups (e.g. gender, race, disability) with the same latent ability give different answers to specific questions on the same IQ test. DIF analysis examines responses to such specific items while gauging participants' latent ability on the other, similar questions. A consistently different response by one group to a specific question, among otherwise similar questions, can indicate an effect of DIF. It does not count as differential item functioning if both groups have an equally valid chance of giving different responses to the same questions. Such bias can be a result of culture, educational level and other factors that are independent of group traits. DIF is only considered if test-takers from different groups with the same underlying latent ability level have a different chance of giving specific responses.
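As a rough illustration of how such screening works, the Mantel-Haenszel procedure, one of the standard techniques for DIF analysis, compares the odds of answering an item correctly across groups within matched ability strata. The sketch below uses hypothetical counts and is not the procedure of any particular test publisher:

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel common odds ratio across ability strata.

    Each stratum is a 2x2 table (a, b, c, d):
      a = reference group correct,  b = reference group incorrect,
      c = focal group correct,      d = focal group incorrect.
    Test-takers are first matched on overall ability (e.g. total score),
    so a pooled odds ratio far from 1.0 flags the item for possible DIF.
    """
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# Hypothetical item: both groups perform alike within each ability band,
# so the pooled odds ratio is 1.0 and the item shows no sign of DIF.
no_dif = mantel_haenszel_or([(30, 10, 30, 10), (20, 20, 20, 20)])
```

In practice, flagged items are inspected and, as noted below, usually removed, rather than being deleted mechanically on the basis of the statistic alone.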
Such questions are usually removed in order to make the test equally fair for both groups. Common techniques for analyzing DIF are item response theory (IRT) based methods, Mantel-Haenszel, and logistic regression. A 2005 study found that "differential validity in prediction suggests that the WAIS-R test may contain cultural influences that reduce the validity of the WAIS-R as a measure of cognitive ability for Mexican American students", indicating a weaker positive correlation relative to sampled white students. Other recent studies have questioned the culture-fairness of IQ tests when used in South Africa. Standard intelligence tests, such as the Stanford-Binet, are often inappropriate for autistic children; the alternative measures of developmental or adaptive skills are relatively poor measures of intelligence in autistic children and may have resulted in incorrect claims that a majority of autistic children are of low intelligence.

Flynn effect

Since the early 20th century, raw scores on IQ tests have increased in most parts of the world. When a new version of an IQ test is normed, the standard scoring is set so that performance at the population median results in a score of IQ 100. The phenomenon of rising raw-score performance means that if test-takers are scored by a constant standard scoring rule, IQ test scores have been rising at an average rate of around three IQ points per decade. The phenomenon was named the Flynn effect in the book The Bell Curve after James R. Flynn, the author who did the most to bring it to the attention of psychologists. Researchers have been exploring whether the Flynn effect is equally strong on performance of all kinds of IQ test items, whether the effect may have ended in some developed nations, whether there are social subgroup differences in the effect, and what the possible causes of the effect might be. A 2011 textbook, IQ and Human Intelligence, by N. J.
Mackintosh, noted that the Flynn effect demolishes fears that IQ is decreasing. He also asked whether it represents a real increase in intelligence beyond IQ scores. A 2011 psychology textbook, lead-authored by the Harvard psychologist Daniel Schacter, noted that humans' inherited intelligence could be going down while acquired intelligence goes up. Research has revealed that the Flynn effect has slowed or reversed course in several Western countries beginning in the late 20th century; the phenomenon has been termed the negative Flynn effect. A study of Norwegian military conscripts' test records found that IQ scores have been falling for generations born after 1975, and that the underlying nature of both the initial increasing and the subsequent falling trends appears to be environmental rather than genetic.

Age

IQ can change to some degree over the course of childhood. In one longitudinal study, the mean IQ scores of tests at ages 17 and 18 were correlated at r=0.86 with the mean scores of tests at ages five, six, and seven, and at r=0.96 with the mean scores of tests at ages 11, 12, and 13. For decades, practitioners' handbooks and textbooks on IQ testing have reported IQ declines with age after the beginning of adulthood. However, later researchers pointed out that this phenomenon is related to the Flynn effect and is in part a cohort effect rather than a true aging effect. A variety of studies of IQ and aging have been conducted since the norming of the first Wechsler Intelligence Scale drew attention to IQ differences in different age groups of adults. The current consensus is that fluid intelligence generally declines with age after early adulthood, while crystallized intelligence remains intact. Both cohort effects (the birth year of the test-takers) and practice effects (test-takers taking the same form of IQ test more than once) must be controlled to gain accurate data.
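One practical consequence of the drift of roughly three points per decade described above is that scores obtained against aging norms overstate IQ relative to current norms, which is part of why cohort effects must be controlled. A crude norm-obsolescence correction can be sketched as follows; the rate is the approximate figure quoted above, not a universal constant:

```python
def flynn_adjusted_score(observed_iq, years_since_norming, rate=3.0):
    """Subtract the Flynn-effect drift accumulated since the test was
    normed (about `rate` IQ points per decade, per the estimate above)."""
    return observed_iq - rate * (years_since_norming / 10.0)

# A score of 100 on norms that are 20 years old corresponds to roughly
# 94 against up-to-date norms under this approximation.
adjusted = flynn_adjusted_score(100, 20)
```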
It is unclear whether any lifestyle intervention can preserve fluid intelligence into older ages. The exact peak age of fluid intelligence or crystallized intelligence remains elusive. Cross-sectional studies usually show that fluid intelligence in particular peaks at a relatively young age (often in early adulthood), while longitudinal data mostly show that intelligence is stable until mid-adulthood or later. Subsequently, intelligence seems to decline slowly.

Genetics and environment

Environmental and genetic factors play a role in determining IQ. Their relative importance has been the subject of much research and debate.

Heritability

The general figure for the heritability of IQ, according to an American Psychological Association report, is 0.45 for children, rising to around 0.75 for late adolescents and adults. Heritability measures for the g factor are as low as 0.2 in infancy, around 0.4 in middle childhood, and as high as 0.9 in adulthood. One proposed explanation is that people with different genes tend to reinforce the effects of those genes, for example by seeking out different environments.

Shared family environment

Family members have aspects of environments in common (for example, characteristics of the home). This shared family environment accounts for 0.25–0.35 of the variation in IQ in childhood. By late adolescence, it is quite low (zero in some studies). The effect for several other psychological traits is similar. These studies have not looked at the effects of extreme environments, such as abusive families.

Non-shared family environment and environment outside the family

Although parents treat their children differently, such differential treatment explains only a small amount of non-shared environmental influence. One suggestion is that children react differently to the same environment because of different genes. More likely influences may be the impact of peers and other experiences outside the family.
Individual genes

A very large proportion of the over 17,000 human genes are thought to have an effect on the development and functionality of the brain. While a number of individual genes have been reported to be associated with IQ, none have a strong effect. Deary and colleagues (2009) reported that no finding of a strong single-gene effect on IQ has been replicated. Recent findings of gene associations with normally varying intellectual differences in adults and children continue to show weak effects for any one gene.

Gene-environment interaction

David Rowe reported an interaction of genetic effects with socioeconomic status, such that heritability was high in high-SES families but much lower in low-SES families. In the US, this has been replicated in infants, children, adolescents, and adults. Outside the US, studies show no link between heritability and SES; some effects may even reverse sign outside the US. Dickens and Flynn (2001) have argued that genes for high IQ initiate an environment-shaping feedback cycle, with genetic effects causing bright children to seek out more stimulating environments that then further increase their IQ. In Dickens' model, environmental effects are modeled as decaying over time. In this model, the Flynn effect can be explained by an increase in environmental stimulation independent of its being sought out by individuals. The authors suggest that programs aiming to increase IQ would be most likely to produce long-term IQ gains if they enduringly raised children's drive to seek out cognitively demanding experiences.

Interventions

In general, educational interventions, such as those described below, have shown short-term effects on IQ, but long-term follow-up is often missing. For example, in the US, very large intervention programs such as the Head Start Program have not produced lasting gains in IQ scores.
Even when students improve their scores on standardized tests, they do not always improve their cognitive abilities, such as memory, attention and speed. More intensive but much smaller projects, such as the Abecedarian Project, have reported lasting effects, often on socioeconomic status variables rather than on IQ. Recent studies have shown that training in using one's working memory may increase IQ. A study on young adults published in April 2008 by a team from the Universities of Michigan and Bern supports the possibility of the transfer of fluid intelligence from specifically designed working memory training. Further research will be needed to determine the nature, extent and duration of the proposed transfer. Among other questions, it remains to be seen whether the results extend to other kinds of fluid intelligence tests than the matrix test used in the study, and if so, whether, after training, fluid intelligence measures retain their correlation with educational and occupational achievement or whether the value of fluid intelligence for predicting performance on other tasks changes. It is also unclear whether the training effects are durable over extended periods of time.

Music

Musical training in childhood correlates with higher-than-average IQ. However, a study of 10,500 twins found no effects on IQ, suggesting that the correlation was caused by genetic confounders. A meta-analysis concluded that "Music training does not reliably enhance children and young adolescents' cognitive or academic skills, and that previous positive findings were probably due to confounding variables." It is popularly thought that listening to classical music raises IQ. However, multiple attempted replications have shown that this is at best a short-term effect (lasting no longer than 10 to 15 minutes) and is not related to an increase in IQ.
Brain anatomy

Several neurophysiological factors have been correlated with intelligence in humans, including the ratio of brain weight to body weight and the size, shape, and activity level of different parts of the brain. Specific features that may affect IQ include the size and shape of the frontal lobes, the amount of blood and chemical activity in the frontal lobes, the total amount of gray matter in the brain, the overall thickness of the cortex, and the glucose metabolic rate.

Health

Health is important in understanding differences in IQ test scores and other measures of cognitive ability. Several factors can lead to significant cognitive impairment, particularly if they occur during pregnancy and childhood, when the brain is growing and the blood–brain barrier is less effective. Such impairment may sometimes be permanent, or may sometimes be partially or wholly compensated for by later growth. Since about 2010, researchers such as Eppig, Hassel, and MacKenzie have found a very close and consistent link between IQ scores and infectious diseases, especially in the infant and preschool populations and the mothers of these children. They have postulated that fighting infectious diseases strains the child's metabolism and prevents full brain development. Hassel postulated that it is by far the most important factor in determining population IQ. However, they also found that subsequent factors such as good nutrition and regular quality schooling can offset early negative effects to some extent. Developed nations have implemented several health policies regarding nutrients and toxins known to influence cognitive function. These include laws requiring fortification of certain food products and laws establishing safe levels of pollutants (e.g. lead, mercury, and organochlorides). Improvements in nutrition, and in public policy in general, have been implicated in worldwide IQ increases.
Cognitive epidemiology is a field of research that examines the associations between intelligence test scores and health. Researchers in the field argue that intelligence measured at an early age is an important predictor of later health and mortality differences.

Social correlations

School performance

The American Psychological Association's report Intelligence: Knowns and Unknowns states that wherever it has been studied, children with high scores on tests of intelligence tend to learn more of what is taught in school than their lower-scoring peers. The correlation between IQ scores and grades is about .50, which means that the explained variance is 25%. Achieving good grades depends on many factors other than IQ, such as "persistence, interest in school, and willingness to study" (p. 81). It has been found that the correlation of IQ scores with school performance depends on the IQ measurement used. For undergraduate students, the Verbal IQ as measured by WAIS-R has been found to correlate significantly (0.53) with the grade point average (GPA) of the last 60 hours (credits); in the same study, the correlation of Performance IQ with the same GPA was only 0.22. Some measures of educational aptitude correlate highly with IQ tests; for instance, one study reported a correlation of 0.82 between g (the general intelligence factor) and SAT scores, and another found a correlation of 0.81 between g and GCSE scores, with the explained variance ranging "from 58.6% in Mathematics and 48% in English to 18.1% in Art and Design".

Job performance

According to Schmidt and Hunter, "for hiring employees without previous experience in the job the most valid predictor of future performance is general mental ability." The validity of IQ as a predictor of job performance is above zero for all work studied to date, but varies with the type of job and across different studies, ranging from 0.2 to 0.6.
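The "explained variance" figures quoted above follow directly from squaring the correlation coefficient; a trivial helper makes the arithmetic explicit:

```python
def explained_variance(r):
    """Proportion of variance in one variable statistically accounted
    for by another, given their correlation r (i.e. r squared)."""
    return r ** 2

# The .50 IQ-grades correlation corresponds to 25% explained variance,
# and the 0.82 g-SAT correlation to about 67%.
```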
The correlations were higher when the unreliability of measurement methods was controlled for. While IQ is more strongly correlated with reasoning and less so with motor function, IQ-test scores predict performance ratings in all occupations. That said, for highly qualified activities (research, management), low IQ scores are more likely to be a barrier to adequate performance, whereas for minimally skilled activities, athletic strength (manual strength, speed, stamina, and coordination) is more likely to influence performance. The prevailing view among academics is that it is largely through the quicker acquisition of job-relevant knowledge that higher IQ mediates job performance. This view has been challenged by Byington & Felps (2010), who argued that "the current applications of IQ-reflective tests allow individuals with high IQ scores to receive greater access to developmental resources, enabling them to acquire additional capabilities over time, and ultimately perform their jobs better." On the question of the causal direction of the link between IQ and work performance, longitudinal studies by Watkins and others suggest that IQ exerts a causal influence on future academic achievement, whereas academic achievement does not substantially influence future IQ scores. Treena Eileen Rohde and Lee Anne Thompson write that general cognitive ability, but not specific ability scores, predicts academic achievement, with the exception that processing speed and spatial ability predict performance on the SAT math section beyond the effect of general cognitive ability. The US military has minimum enlistment standards at about the IQ 85 level. There have been two experiments with lowering this to 80, but in both cases these men could not master soldiering well enough to justify their costs.
Income

It has been suggested that "in economic terms it appears that the IQ score measures something with decreasing marginal value" and that it "is important to have enough of it, but having lots and lots does not buy you that much". However, large-scale longitudinal studies indicate that an increase in IQ translates into an increase in performance at all levels of IQ: that is, ability and job performance are monotonically linked at all IQ levels. The link from IQ to wealth is much weaker than that from IQ to job performance, and some studies indicate that IQ is unrelated to net worth. The American Psychological Association's 1995 report Intelligence: Knowns and Unknowns stated that IQ scores accounted for about a quarter of the variance in social status and one-sixth of the variance in income. Statistical controls for parental SES eliminate about a quarter of this predictive power. Psychometric intelligence appears as only one of a great many factors that influence social outcomes. Charles Murray (1998) showed a more substantial effect of IQ on income independent of family background. In a meta-analysis, Strenze (2006) reviewed much of the literature and estimated the correlation between IQ and income to be about 0.23. Some studies assert that IQ accounts for (explains) only a sixth of the variation in income because many studies are based on young adults, many of whom have not yet reached their peak earning capacity, or even their education. On page 568 of The g Factor, Arthur Jensen says that although the
human population by excluding people and groups judged to be inferior and promoting those judged to be superior, played a significant role in the history and culture of the United States during the Progressive Era, from the late 19th century until US involvement in World War II. The American eugenics movement was rooted in the biological determinist ideas of the British scientist Sir Francis Galton. In 1883, Galton first used the word eugenics to describe the biological improvement of human genes and the concept of being "well-born". He believed that differences in a person's ability were acquired primarily through genetics and that eugenics could be implemented through selective breeding in order for the human race to improve in its overall quality, thereby allowing humans to direct their own evolution. Goddard was a eugenicist. In 1908, he published his own version, The Binet and Simon Test of Intellectual Capacity, and promoted the test. He quickly extended the use of the scale to the public schools (1913), to immigration (Ellis Island, 1914) and to a court of law (1914). Unlike Galton, who promoted eugenics through selective breeding for positive traits, Goddard aligned with the US eugenics movement's aim of eliminating "undesirable" traits. Goddard used the term "feeble-minded" to refer to people who did not perform well on the test. He argued that "feeble-mindedness" was caused by heredity, and thus feeble-minded people should be prevented from giving birth, whether through institutional isolation or surgical sterilization. At first, sterilization targeted the disabled but was later extended to poor people. Goddard's intelligence test was endorsed by the eugenicists to push for laws for forced sterilization. Different states adopted the sterilization laws at different paces. These laws, whose constitutionality was upheld by the Supreme Court in its 1927 ruling Buck v. Bell, forced the sterilization of over 60,000 people in the United States.
California's sterilization program was so effective that the Nazis turned to its government for advice on how to prevent the birth of the "unfit". While the US eugenics movement lost much of its momentum in the 1940s in view of the horrors of Nazi Germany, advocates of eugenics (including the Nazi geneticist Otmar Freiherr von Verschuer) continued to work and promote their ideas in the United States. In later decades, some eugenic principles have made a resurgence as a voluntary means of selective reproduction, with some calling them "new eugenics". As it becomes possible to test for and correlate genes with IQ (and its proxies), ethicists and embryonic genetic testing companies are attempting to understand the ways in which the technology can be ethically deployed.

Cattell–Horn–Carroll theory

Raymond Cattell (1941) proposed two types of cognitive abilities in a revision of Spearman's concept of general intelligence. Fluid intelligence (Gf) was hypothesized as the ability to solve novel problems by using reasoning, and crystallized intelligence (Gc) was hypothesized as a knowledge-based ability that was very dependent on education and experience. In addition, fluid intelligence was hypothesized to decline with age, while crystallized intelligence was largely resistant to the effects of aging. The theory was almost forgotten, but was revived by his student John L. Horn (1966), who later argued that Gf and Gc were only two among several factors, and who eventually identified nine or ten broad abilities. The theory continued to be called Gf-Gc theory. John B. Carroll (1993), after a comprehensive reanalysis of earlier data, proposed the three-stratum theory, a hierarchical model with three levels. The bottom stratum consists of narrow abilities that are highly specialized (e.g., induction, spelling ability). The second stratum consists of broad abilities; Carroll identified eight second-stratum abilities.
Carroll accepted Spearman's concept of general intelligence, for the most part, as a representation of the uppermost, third stratum. In 1999, the merging of Cattell and Horn's Gf-Gc theory with Carroll's three-stratum theory led to the Cattell–Horn–Carroll theory (CHC theory), with g at the top of the hierarchy, ten broad abilities below it, and seventy narrow abilities on the third stratum. CHC theory has greatly influenced many of the current broad IQ tests. Modern tests do not necessarily measure all of these broad abilities. For example, quantitative knowledge and reading and writing ability may be seen as measures of school achievement rather than of IQ, and decision speed may be difficult to measure without special equipment. g was earlier often subdivided into only Gf and Gc, which were thought to correspond to the nonverbal or performance subtests and the verbal subtests in earlier versions of the popular Wechsler IQ test. More recent research has shown the situation to be more complex. Modern comprehensive IQ tests do not stop at reporting a single IQ score. Although they still give an overall score, they now also give scores for many of these more restricted abilities, identifying particular strengths and weaknesses of an individual.

Other theories

An alternative to standard IQ tests, meant to test the proximal development of children, originated in the writings of the psychologist Lev Vygotsky (1896–1934) during the last two years of his life. According to Vygotsky, the maximum level of complexity and difficulty of problems that a child is capable of solving under some guidance indicates their level of potential development. The difference between this level of potential and the lower level of unassisted performance indicates the child's zone of proximal development.
The combination of the two indexes, the level of actual development and the zone of proximal development, provides, according to Vygotsky, a significantly more informative indicator of psychological development than the assessment of the level of actual development alone. His ideas on the zone of development were later developed in a number of psychological and educational theories and practices, most notably under the banner of dynamic assessment, which seeks to measure developmental potential (for instance, in the work of Reuven Feuerstein and his associates, who have criticized standard IQ testing for its putative assumption or acceptance of "fixed and immutable" characteristics of intelligence or cognitive functioning). Dynamic assessment was further elaborated in the work of Ann Brown and John D. Bransford and in the theories of multiple intelligences of Howard Gardner and Robert Sternberg. J. P. Guilford's Structure of Intellect (1967) model of intelligence used three dimensions which, when combined, yielded a total of 120 types of intelligence. It was popular in the 1970s and early 1980s, but faded owing to both practical problems and theoretical criticisms. Alexander Luria's earlier work on neuropsychological processes led to the PASS theory (1997), which argued that looking only at one general factor was inadequate for researchers and clinicians who worked with learning disabilities, attention disorders, intellectual disability, and interventions for such disabilities. The PASS model covers four kinds of processes: planning, attention/arousal, simultaneous processing, and successive processing. The planning processes involve decision making, problem solving, and performing activities, and require goal setting and self-monitoring. The attention/arousal process involves selectively attending to a particular stimulus, ignoring distractions, and maintaining vigilance.
Simultaneous processing involves the integration of stimuli into a group and requires the observation of relationships. Successive processing involves the integration of stimuli into serial order. The planning and attention/arousal components come from structures located in the frontal lobe, and the simultaneous and successive processes come from structures located in the posterior region of the cortex. It has influenced some recent IQ tests, and been seen as a complement to the Cattell–Horn–Carroll theory described above. Current tests There are a variety of individually administered IQ tests in use in the English-speaking world. The most commonly used individual IQ test series is the Wechsler Adult Intelligence Scale (WAIS) for adults and the Wechsler Intelligence Scale for Children (WISC) for school-age test-takers. Other commonly used individual IQ tests (some of which do not label their standard scores as "IQ" scores) include the current versions of the Stanford–Binet Intelligence Scales, Woodcock–Johnson Tests of Cognitive Abilities, the Kaufman Assessment Battery for Children, the Cognitive Assessment System, and the Differential Ability Scales. IQ tests that measure intelligence also include: Raven's Progressive Matrices, Cattell Culture Fair III, Reynolds Intellectual Assessment Scales, Thurstone's Primary Mental Abilities, Kaufman Brief Intelligence Test, Multidimensional Aptitude Battery II, Das–Naglieri cognitive assessment system, Naglieri Nonverbal Ability Test, and Wide Range Intelligence Test. IQ scales are ordinally scaled. The raw score of the norming sample is usually (rank order) transformed to a normal distribution with mean 100 and standard deviation 15. While one standard deviation is 15 points and two SDs are 30 points, and so on, this does not imply that mental ability is linearly related to IQ, such that IQ 50 would mean half the cognitive ability of IQ 100. In particular, IQ points are not percentage points.
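The rank-order transform described above (raw scores mapped onto a normal distribution with mean 100 and SD 15) can be sketched in a few lines. This is an illustrative sketch only, not a published norming procedure; the function name `iq_from_rank` and the midrank convention are assumptions, and real norms come from large stratified samples.

```python
from statistics import NormalDist

def iq_from_rank(raw_scores, score):
    """Rank-order transform a raw score onto the IQ scale (mean 100, SD 15).

    Illustrative sketch: the percentile rank of `score` within a norming
    sample is mapped through the inverse normal CDF.
    """
    sorted_scores = sorted(raw_scores)
    # Percentile rank using the midrank convention for ties.
    below = sum(s < score for s in sorted_scores)
    equal = sum(s == score for s in sorted_scores)
    pct = (below + 0.5 * equal) / len(sorted_scores)
    # Clamp away from 0 and 1, where the inverse CDF diverges.
    pct = min(max(pct, 1e-6), 1 - 1e-6)
    return NormalDist(mu=100, sigma=15).inv_cdf(pct)
```

Because the transform is rank-based, it preserves only the ordering of raw scores, which is exactly why equal IQ-point differences do not correspond to equal differences in underlying ability.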
Reliability and validity Reliability Psychometricians generally regard IQ tests as having high statistical reliability. Reliability represents the measurement consistency of a test. A reliable test produces similar scores upon repetition. On aggregate, IQ tests exhibit high reliability, although test-takers may have varying scores when taking the same test on differing occasions, and may have varying scores when taking different IQ tests at the same age. Like all statistical quantities, any particular estimate of IQ has an associated standard error that measures uncertainty about the estimate. For modern tests, the confidence interval can be approximately 10 points and the reported standard error of measurement can be as low as about three points. The reported standard error may be an underestimate, as it does not account for all sources of error. Outside influences such as low motivation or high anxiety can occasionally lower a person's IQ test score. For individuals with very low scores, the 95% confidence interval may be greater than 40 points, potentially complicating the accuracy of diagnoses of intellectual disability. By the same token, high IQ scores are also significantly less reliable than those near the population median. Reports of IQ scores much higher than 160 are considered dubious. Validity as a measure of intelligence Reliability and validity are very different concepts. While reliability reflects reproducibility, validity refers to the lack of bias. A biased test does not measure what it purports to measure. While IQ tests are generally considered to measure some forms of intelligence, they may fail to serve as an accurate measure of broader definitions of human intelligence inclusive of creativity and social intelligence. For this reason, psychologist Wayne Weiten argues that their construct validity must be carefully qualified, and not be overstated.
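The standard-error arithmetic in the reliability discussion above is simple to make concrete: an observed score is best read as a band of plausible true scores. The sketch below (the function name is an assumption) uses the cited standard error of measurement of about three points, giving a 95% interval of roughly ±6 points, on the order of the ~10-point interval mentioned for modern tests.

```python
def iq_confidence_interval(observed_iq: float, sem: float = 3.0, z: float = 1.96):
    """95% confidence interval for the true score, given an observed IQ
    and the test's standard error of measurement (SEM)."""
    half_width = z * sem
    return observed_iq - half_width, observed_iq + half_width

# With SEM = 3, an observed IQ of 100 corresponds to roughly (94.1, 105.9).
```

As the text notes, this is a lower bound on the real uncertainty: the reported SEM does not capture all sources of error, and intervals widen considerably at the extremes of the scale.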
According to Weiten, "IQ tests are valid measures of the kind of intelligence necessary to do well in academic work. But if the purpose is to assess intelligence in a broader sense, the validity of IQ tests is questionable." Some scientists have disputed the value of IQ as a measure of intelligence altogether. In The Mismeasure of Man (1981, expanded edition 1996), evolutionary biologist Stephen Jay Gould compared IQ testing with the now-discredited practice of determining intelligence via craniometry, arguing that both are based on the fallacy of reification, "our tendency to convert abstract concepts into entities". Gould's argument sparked a great deal of debate, and the book is listed as one of Discover Magazine's "25 Greatest Science Books of All Time". Along these same lines, critics such as Keith Stanovich do not dispute the capacity of IQ test scores to predict some kinds of achievement, but argue that basing a concept of intelligence on IQ test scores alone neglects other important aspects of mental ability. Robert Sternberg, another significant critic of IQ as the main measure of human cognitive abilities, argued that reducing the concept of intelligence to the measure of g does not fully account for the different skills and knowledge types that produce success in human society. Despite these objections, clinical psychologists generally regard IQ scores as having sufficient statistical validity for many clinical purposes. Test bias or differential item functioning Differential item functioning (DIF), sometimes referred to as measurement bias, is a phenomenon in which participants from different groups (e.g. gender, race, disability) with the same latent abilities give different answers to specific questions on the same IQ test. DIF analysis measures such specific items on a test alongside measuring participants' latent abilities on other similar questions.
A consistent different group response to a specific question among similar types of questions can indicate an effect
|
2011 onward as per the Academic Review Committee's recommendations. Postgraduate Postgraduate courses in Engineering offer Master of Technology (MTech), MS (R) and PhD degrees. The institute also offers two-tier MSc courses in areas of basic sciences, to which students are admitted through the Joint Admission Test for MSc (JAM) exam. The institute also offers M.Des. (2 years), M.B.A. (2 years) and MSc (2 years) degrees. Admissions to MTech are made once a year through the Graduate Aptitude Test in Engineering. Admissions to M.Des. are made once a year through both the Graduate Aptitude Test in Engineering (GATE) and the Common Entrance Exam for Design (CEED). Until 2011, admissions to the M.B.A. program were made through the Joint Management Entrance Test (JMET), held yearly, followed by a Group Discussion/Personal Interview process. In 2011, JMET was replaced by the Common Admission Test (CAT). Admissions Undergraduate admissions until 2012 were conducted through the national-level Indian Institute of Technology Joint Entrance Examination (IIT-JEE). Following the Ministry of Human Resource Development's decision to replace IIT-JEE with a common engineering entrance examination, IIT Kanpur's admissions are now based on the JEE (Joint Entrance Examination) Advanced, along with the other IITs. Postgraduate admissions are made through the Graduate Aptitude Test in Engineering and the Common Admission Test. Rankings Internationally, IIT Kanpur was ranked 277 in the QS World University Rankings for 2022. It was ranked 65 in the QS Asia Rankings 2020 and 25 among BRICS nations in 2019. The Times Higher Education World University Rankings ranked it 601–800 globally in 2020, 125 in Asia, and 77 in the Emerging Economies University Rankings 2020. In India, IIT Kanpur was ranked third among engineering colleges by India Today in 2021. It was ranked fourth among engineering colleges in India by the National Institutional Ranking Framework (NIRF) in 2020, and sixth overall.
The Department of Industrial and Management Engineering was ranked 16 among management schools in India by NIRF in 2020. Laboratories and other facilities The campus is spread over an area of . Facilities include the National Wind Tunnel Facility. Other large research centres include the Advanced Centre for Material Science, a Bio-technology centre, the Advanced Centre for Electronic Systems, the Samtel Centre for Display Technology, the Centre for Mechatronics, the Centre for Laser Technology, the Prabhu Goel Research Centre for Computer and Internet Security, and the Facility for Ecological and Analytical Testing. The departments have their own libraries. The institute has its own airfield, for flight testing and gliding. PK Kelkar Library (formerly Central Library) is an academic library of the institute with a collection of more than 300,000 volumes and subscriptions to more than 1,000 periodicals. The library was renamed in 2003 after Dr. P K Kelkar, the first director of the institute. It is housed in a three-storey building, with a total floor area of 6973 square metres. Abstracting and indexing periodicals, microform and CD-ROM databases, technical reports, standards and theses are held in the library. Each year, about 4,500 books and journal volumes are added to the library. The New Core Labs (NCL) is a 3-storey building with state-of-the-art physics and chemistry laboratories for courses in the first year. The New Core Labs also has Linux and Windows computer labs for the use of first-year courses and a Mathematics department laboratory housing machines with high computing power. IIT Kanpur has set up the Startup Innovation and Incubation Centre (SIIC) (previously known as the "SIDBI" Innovation and Incubation Centre) in collaboration with the Small Industries Development Bank of India (SIDBI), aiming to aid innovation, research, and entrepreneurial activities in technology-based areas.
SIIC helps business start-ups develop their ideas into commercially viable products. A team of students, working under the guidance of faculty members of the institute and scientists of the Indian Space Research Organisation (ISRO), designed and built India's first nano satellite, Jugnu, which was successfully launched into orbit on 12 Oct 2011 by ISRO's PSLV-C18. Computer Centre The Computer Centre is one of the most advanced computing service centres among academic institutions in India. It hosts the IIT Kanpur website and provides personal web space for students and faculty. It also provides a spam-filtered email server and high-speed fibre-optic Internet to all the hostels and the academic area. Users can choose among various interfaces to access the mail service. It has Linux and Windows laboratories equipped with high-end software such as MATLAB, AutoCAD, Ansys and Abaqus for use by students. Apart from departmental computer labs, the Computer Centre hosts more than 300 Linux terminals and more than 100 Windows terminals, continuously available to students for academic work and recreation. The Computer Centre has recently adopted an open-source software policy for its infrastructure and computing. Various high-end compute and GPU servers are remotely available from the data centre for user computation. The Computer Centre has multiple supercomputing clusters for research and teaching activity. In June 2014 IIT Kanpur launched its second supercomputer, then India's fifth most powerful. The new supercomputer, 'Cluster Platform SL230s Gen8', manufactured by Hewlett-Packard, has 15,360 cores and a theoretical peak (Rpeak) of 307.2 TFlop/s, and was the world's 192nd most powerful supercomputer as of June 2015. Students' research related activity Research is controlled by the Office of the Dean of Research and Development.
Under the aegis of the Office, the students publish the quarterly NERD Magazine (Notes on Engineering Research and Development), which publishes scientific and technical content created by students. Articles may be original work done by students in the form of hobby projects, term projects, internships, or theses. Articles of general interest which are informative but do not reflect original work are also accepted. The institute has been part of the European Research and Education Collaboration with Asia (EURECA) programme since 2008. Alongside the magazine, a student research organisation, PoWER (Promotion of Work Experience and Research), has been started. Under it, several independent student groups are working on projects like the Lunar Rover for ISRO, alternative energy solutions under the Group for Environment and Energy Engineering, ICT solutions through the Young Engineers group, solutions for diabetes, and green community solutions through ideas like zero-water, zero-waste and clean-air approaches. Through BRaIN (Biological Research and Innovation Network), students interested in solving biological problems get involved in research projects like genetically modifying fruit flies to study molecular systems and developing bio-sensors to detect alcohol levels. A budget of Rs 1.5 to 2 crore has been envisaged to support student projects that demonstrate technology. Defence The institute assists the Indian Ordnance Factories in not only upgrading existing products, but also developing new weapon platforms. Jugnu The students of IIT Kanpur made a nano satellite called Jugnu, which was given by President Pratibha Patil to ISRO for launch. Jugnu is a remote sensing satellite operated by the Indian Institute of Technology Kanpur. It is a nanosatellite used to provide data for agriculture and disaster monitoring. It is a 3-kilogram (6.6 lb) spacecraft, which measures 34 centimetres (13 in) in length by 10 centimetres (3.9 in) in height and width.
Its development programme cost around 25 million rupees. It has a design life of one year. Jugnu's primary instrument is the Micro Imaging System, a near-infrared camera used to observe vegetation. It also carries a GPS receiver to aid tracking, and is intended to demonstrate a microelectromechanical inertial measurement unit. IITK Motorsports IITK Motorsports is the biggest and most comprehensive student initiative of the college, founded in January 2011. It is a group of students from varied disciplines who aim to design and fabricate a Formula-style race car for international Formula SAE (Society of Automotive Engineers) events. Most of the components of the car, except the engine, tyres and wheel rims, are designed and manufactured by the team members themselves. The car is designed to provide maximum performance under the constraints of
the event, while ensuring the driveability, reliability, driver safety and aesthetics of the car are not compromised. Maraal UAVs Researchers at IIT Kanpur have developed a series of solar-powered UAVs named MARAAL-1 and MARAAL-2. The development of Maraal is notable as it is the first solar-powered UAV developed in India. Maraal-2 is fully indigenous. Student life National events Antaragni: Antaragni is a non-profit organisation run by the students of IIT Kanpur. It was funded entirely by the Student Gymkhana of the university when it began; today the budget is almost Rs 1 crore, raised through sponsorship. It began as an inter-collegiate cultural event in 1964, and now draws in over 1,00,000 visitors from 300 colleges in India. The annual cultural festival is held over four days in October and includes music, drama, literary games, a fashion show and quizzing. There is a YouTube channel dedicated to the festival with 1,000+ subscribers. Techkriti: It was started in 1995 with an aim to encourage interest and innovation in technology among students and to provide a platform for industry and academia to interact. Megabucks (a business and entrepreneurship festival) used to be held independently but was merged with Techkriti in 2010.
Notable speakers at Techkriti have included APJ Abdul Kalam, Vladimir Voevodsky, Douglas Osheroff, Oliver Smithies, Rakesh Sharma, David Griffiths and Richard Stallman. Udghosh: Udghosh is IIT Kanpur's annual sports festival, usually held in September. It started in 2004 as an inter-college sports meet organised by the institute. UDGHOSH involves students from all over India competing in the university's sports facilities. The festival includes motivational talks, a mini marathon, gymnastic shows and sports quizzes in addition to the various sports events. Vivekananda Youth Leadership Convention: Vivekananda Samiti, under the Students' Gymkhana, on behalf of IIT Kanpur, undertook the celebration of the 150th Birth Anniversary of Swami Vivekananda from 2011 to 2015. The convention has included Kiran Bedi, Bana Singh, Yogendra Singh Yadav, Raju Narayana Swamy, Arunima Sinha, Rajendra Singh and other personalities from different fields in previous years. E-Summit: It started in 2013. The first E-Summit was scheduled for 16–18 Aug 2013. This three-day festival by the Entrepreneurship Cell, IIT Kanpur, on the theme "Emerge on the Radar", included talks by eminent personalities, workshops and competitions. Students' Gymkhana The Students' Gymkhana is the students' government organization of IIT Kanpur, established in 1962. The Students' Gymkhana functions mainly through the Students' Senate, an elected student representative body composed of senators elected from each batch and six elected executives: President, Students' Gymkhana; General Secretary, Media and Culture; General Secretary, Games and Sports; General Secretary, Science and Technology; UG Secretary, Academics and Career; PG Secretary, Academics and Career. The number of senators in the Students' Senate is around 50–55. A senator is elected for every 150 students of IIT Kanpur. The meetings of the Students' Senate are chaired by the chairperson, Students' Senate, who is elected by the Senate.
The Senate lays down the guidelines for the functions of the executives, their associated councils, the Gymkhana Festivals and other matters pertaining to the Student body at large. The Students' Senate has a say in the
|
have originated more than a billion years ago. The molecular origins of insulin go at least as far back as the simplest unicellular eukaryotes. Apart from animals, insulin-like proteins are also known to exist in the Fungi and Protista kingdoms. Insulin is produced by beta cells of the pancreatic islets in most vertebrates and by the Brockmann body in some teleost fish. Cone snails Conus geographus and Conus tulipa, venomous sea snails that hunt small fish, use modified forms of insulin in their venom cocktails. The insulin toxin, closer in structure to fishes' native insulin than to the snails', slows down the prey fish by lowering their blood glucose levels. Production Insulin is produced exclusively in the beta cells of the pancreatic islets in mammals, and in the Brockmann body in some fish. Human insulin is produced from the INS gene, located on chromosome 11. Rodents have two functional insulin genes; one is the homolog of most mammalian genes (Ins2), and the other is a retroposed copy that includes the promoter sequence but is missing an intron (Ins1). Transcription of the insulin gene increases in response to elevated blood glucose. This is primarily controlled by transcription factors that bind enhancer sequences in the ~400 base pairs before the gene's transcription start site. The major transcription factors influencing insulin gene transcription are PDX1, NeuroD1, and MafA. During a low-glucose state, PDX1 (pancreatic and duodenal homeobox protein 1) is located in the nuclear periphery as a result of interaction with HDAC1 and HDAC2, which results in downregulation of insulin secretion. An increase in blood glucose levels causes phosphorylation of PDX1, which leads it to undergo nuclear translocation and bind the A3 element within the insulin promoter. Upon translocation it interacts with the coactivators HAT p300 and SETD7. PDX1 affects histone modifications through acetylation and deacetylation as well as methylation. It is also said to suppress glucagon.
NeuroD1, also known as β2, regulates insulin exocytosis in pancreatic β cells by directly inducing the expression of genes involved in exocytosis. It is localized in the cytosol, but in response to high glucose it becomes glycosylated by OGT and/or phosphorylated by ERK, which causes translocation to the nucleus. In the nucleus, β2 heterodimerizes with E47, binds to the E1 element of the insulin promoter and recruits the co-activator p300, which acetylates β2. It is able to interact with other transcription factors as well in activation of the insulin gene. MafA is degraded by proteasomes under low blood glucose levels. Increased levels of glucose cause glycosylation of an unknown protein, which acts as a transcription factor for MafA in an unknown manner; MafA is then translocated into the nucleus, where it binds the C1 element of the insulin promoter. These transcription factors work synergistically and in a complex arrangement. Prolonged increases in blood glucose can destroy the binding capacities of these proteins, and therefore reduce the amount of insulin secreted, causing diabetes. The decreased binding activities can be mediated by glucose-induced oxidative stress, and antioxidants are said to prevent the decreased insulin secretion in glucotoxic pancreatic β cells. Stress signalling molecules and reactive oxygen species inhibit the insulin gene by interfering with the cofactors binding the transcription factors and with the transcription factors themselves. Several regulatory sequences in the promoter region of the human insulin gene bind to transcription factors. In general, the A-boxes bind to Pdx1 factors, E-boxes bind to NeuroD, C-boxes bind to MafA, and cAMP response elements to CREB. There are also silencers that inhibit transcription. Synthesis Insulin is synthesized as an inactive precursor molecule, a 110 amino acid-long protein called "preproinsulin".
Preproinsulin is translated directly into the rough endoplasmic reticulum (RER), where its signal peptide is removed by signal peptidase to form "proinsulin". As the proinsulin folds, opposite ends of the protein, called the "A-chain" and the "B-chain", are fused together with three disulfide bonds. Folded proinsulin then transits through the Golgi apparatus and is packaged into specialized secretory vesicles. In the granule, proinsulin is cleaved by proprotein convertase 1/3 and proprotein convertase 2, removing the middle part of the protein, called the "C-peptide". Finally, carboxypeptidase E removes two pairs of amino acids from the protein's ends, resulting in active insulin – the insulin A- and B-chains, now connected with two disulfide bonds. The resulting mature insulin is packaged inside mature granules, waiting for metabolic signals (such as leucine, arginine, glucose and mannose) and vagal nerve stimulation to be exocytosed from the cell into the circulation. Insulin and its related proteins have been shown to be produced inside the brain, and reduced levels of these proteins are linked to Alzheimer's disease. Insulin release is also stimulated by beta-2 receptor stimulation and inhibited by alpha-1 receptor stimulation. In addition, cortisol, glucagon and growth hormone antagonize the actions of insulin during times of stress. Insulin also inhibits fatty acid release by hormone-sensitive lipase in adipose tissue. Structure Contrary to an initial belief that hormones would generally be small chemical molecules, insulin, the first peptide hormone whose structure was determined, was found to be quite large. A single protein (monomer) of human insulin is composed of 51 amino acids and has a molecular mass of 5808 Da. The molecular formula of human insulin is C257H383N65O77S6. It is a combination of two peptide chains (dimer) named an A-chain and a B-chain, which are linked together by two disulfide bonds.
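The maturation steps above can be cross-checked arithmetically. The individual segment lengths used below (signal peptide 24 aa, C-peptide 31 aa, and two dibasic cleavage pairs of 2 aa each) are standard human values supplied here for illustration; they are not stated in the text, which gives only the 110-residue precursor and the 51-residue mature monomer.

```python
# Segment lengths of human preproinsulin (assumed standard values, in amino acids):
SIGNAL, B_CHAIN, PAIR1, C_PEPTIDE, PAIR2, A_CHAIN = 24, 30, 2, 31, 2, 21

# Full precursor, before any processing:
preproinsulin = SIGNAL + B_CHAIN + PAIR1 + C_PEPTIDE + PAIR2 + A_CHAIN  # 110 aa

# Signal peptidase removes the signal peptide in the RER:
proinsulin = preproinsulin - SIGNAL

# Convertases excise the C-peptide; carboxypeptidase E trims the two
# dibasic pairs, leaving the disulfide-linked A- and B-chains:
mature_insulin = proinsulin - (PAIR1 + C_PEPTIDE + PAIR2)  # 51 aa
```

The arithmetic recovers both quoted figures: a 110-residue preproinsulin and a 51-residue (21 + 30) mature hormone.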
The A-chain is composed of 21 amino acids, while the B-chain consists of 30 residues. The linking (interchain) disulfide bonds are formed at cysteine residues between positions A7–B7 and A20–B19. There is an additional (intrachain) disulfide bond within the A-chain between cysteine residues at positions A6 and A11. The A-chain exhibits two α-helical regions, at A1–A8 and A12–A19, which are antiparallel, while the B-chain has a central α-helix (covering residues B9–B19) flanked by a disulfide bond on either side and two β-sheets (covering B7–B10 and B20–B23). The amino acid sequence of insulin is strongly conserved and varies only slightly between species. Bovine insulin differs from human insulin in only three amino acid residues, and porcine insulin in one. Even insulin from some species of fish is similar enough to human insulin to be clinically effective in humans. Insulin in some invertebrates is quite similar in sequence to human insulin, and has similar physiological effects. The strong homology seen in the insulin sequences of diverse species suggests that it has been conserved across much of animal evolutionary history. The C-peptide of proinsulin, however, differs much more among species; it is also a hormone, but a secondary one. Insulin is produced and stored in the body as a hexamer (a unit of six insulin molecules), while the active form is the monomer. The hexamer is about 36000 Da in size. The six molecules are linked together as three dimeric units to form a symmetrical molecule. An important feature is the presence of zinc atoms (Zn2+) on the axis of symmetry, which are surrounded by three water molecules and three histidine residues at position B10. The hexamer is an inactive form with long-term stability, which serves as a way to keep the highly reactive insulin protected, yet readily available. The hexamer-monomer conversion is one of the central aspects of insulin formulations for injection.
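The hexamer mass quoted above is consistent with the monomer mass: six 5808 Da monomers account for almost all of the ~36000 Da, with the remainder reflecting the coordinated zinc ions and ordered water of the assembly (and rounding in the quoted figure). A quick check:

```python
# Mass check of the quoted figures (both from the text above):
MONOMER_DA = 5808          # mass of one 51-residue insulin monomer
QUOTED_HEXAMER_DA = 36000  # approximate quoted hexamer mass

protein_only = 6 * MONOMER_DA   # 34848 Da from the six protein chains alone
# The ~1150 Da gap is plausibly zinc, water, and rounding in "about 36000".
gap = QUOTED_HEXAMER_DA - protein_only
```

The point of the check is qualitative: the hexamer is essentially six monomers plus a small zinc/water core, so its stability comes from packing, not from extra covalent mass.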
The hexamer is far more stable than the monomer, which is desirable for practical reasons; however, the monomer is a much faster-reacting drug because diffusion rate is inversely related to particle size. A fast-reacting drug means insulin injections do not have to precede mealtimes by hours, which in turn gives people with diabetes more flexibility in their daily schedules. Insulin can aggregate and form fibrillar interdigitated beta-sheets. This can cause injection amyloidosis, and prevents the storage of insulin for long periods. Function Secretion Beta cells in the islets of Langerhans release insulin in two phases. The first-phase release is rapidly triggered in response to increased blood glucose levels, and lasts about 10 minutes. The second phase is a sustained, slow release of newly formed vesicles triggered independently of sugar, peaking in 2 to 3 hours. The two phases of insulin release suggest that insulin granules are present in distinct populations, or "pools". During the first phase of insulin exocytosis, most of the granules predisposed for exocytosis are released after calcium entry into the cell. This pool is known as the Readily Releasable Pool (RRP). The RRP granules represent 0.3-0.7% of the total insulin-containing granule population, and they are found immediately adjacent to the plasma membrane. During the second phase of exocytosis, insulin granules must be mobilized to the plasma membrane and undergo prior preparation before their release. Thus, the second phase of insulin release is governed by the rate at which granules get ready for release. This pool is known as the Reserve Pool (RP). The RP is released more slowly than the RRP (RRP: 18 granules/min; RP: 6 granules/min). Reduced first-phase insulin release may be the earliest detectable beta cell defect predicting onset of type 2 diabetes. First-phase release and insulin sensitivity are independent predictors of diabetes.
The description of first phase release is as follows: Glucose enters the β-cells through the glucose transporters, GLUT2. These glucose transporters have a relatively low affinity for glucose, ensuring that the rate of glucose entry into the β-cells is proportional to the extracellular glucose concentration (within the physiological range). At low blood sugar levels very little glucose enters the β-cells; at high blood glucose concentrations large quantities of glucose enter these cells. The glucose that enters the β-cell is phosphorylated to glucose-6-phosphate (G-6-P) by glucokinase (hexokinase IV), which is not inhibited by G-6-P in the way that the hexokinases in other tissues (hexokinase I-III) are affected by this product. This means that the intracellular G-6-P concentration remains proportional to the blood sugar concentration. Glucose-6-phosphate enters the glycolytic pathway and then, via the pyruvate dehydrogenase reaction, the Krebs cycle, where multiple high-energy ATP molecules are produced by the oxidation of acetyl CoA (the Krebs cycle substrate), leading to a rise in the ATP:ADP ratio within the cell. An increased intracellular ATP:ADP ratio closes the ATP-sensitive SUR1/Kir6.2 potassium channel (see sulfonylurea receptor). This prevents potassium ions (K+) from leaving the cell by facilitated diffusion, leading to a buildup of intracellular potassium ions. As a result, the inside of the cell becomes less negative with respect to the outside, leading to the depolarization of the cell surface membrane. Upon depolarization, voltage-gated calcium ion (Ca2+) channels open, allowing calcium ions to move into the cell by facilitated diffusion. The cytosolic calcium ion concentration can also be increased by calcium release from intracellular stores via activation of ryanodine receptors.
The calcium ion concentration in the cytosol of the beta cells can also, or additionally, be increased through the activation of phospholipase C resulting from the binding of an extracellular ligand (hormone or neurotransmitter) to a G protein-coupled membrane receptor. Phospholipase C cleaves the membrane phospholipid, phosphatidyl inositol 4,5-bisphosphate, into inositol 1,4,5-trisphosphate and diacylglycerol. Inositol 1,4,5-trisphosphate (IP3) then binds to receptor proteins in the plasma membrane of the endoplasmic reticulum (ER). This allows the release of Ca2+ ions from the ER via IP3-gated channels, which raises the cytosolic concentration of calcium ions independently of the effects of a high blood glucose concentration. Parasympathetic stimulation of the pancreatic islets operates via this pathway to increase insulin secretion into the blood. The significantly increased amount of calcium ions in the cells' cytoplasm causes the release into the blood of previously synthesized insulin, which has been stored in intracellular secretory vesicles. This is the primary mechanism for release of insulin. Other substances known to stimulate insulin release include the amino acids arginine and leucine, parasympathetic release of acetylcholine (acting via the phospholipase C pathway), sulfonylurea, cholecystokinin (CCK, also via phospholipase C), and the gastrointestinally derived incretins, such as glucagon-like peptide-1 (GLP-1) and glucose-dependent insulinotropic peptide (GIP). Release of insulin is strongly inhibited by norepinephrine (noradrenaline), which leads to increased blood glucose levels during stress. It appears that release of catecholamines by the sympathetic nervous system has conflicting influences on insulin release by beta cells, because insulin release is inhibited by α2-adrenergic receptors and stimulated by β2-adrenergic receptors. 
The net effect of norepinephrine from sympathetic nerves and epinephrine from adrenal glands on insulin release is inhibition, due to dominance of the α-adrenergic receptors. When the glucose level comes down to the usual physiologic value, insulin release from the β-cells slows or stops. If the blood glucose level drops lower than this, especially to dangerously low levels, release of hyperglycemic hormones (most prominently glucagon from the alpha cells of the islets of Langerhans) forces release of glucose into the blood from the liver glycogen stores, supplemented by gluconeogenesis if the glycogen stores become depleted. By increasing blood glucose, the hyperglycemic hormones prevent or correct life-threatening hypoglycemia. Evidence of impaired first-phase insulin release can be seen in the glucose tolerance test, demonstrated by a substantially elevated blood glucose level at 30 minutes after the ingestion of a glucose load (75 or 100 g of glucose), followed by a slow drop over the next 100 minutes, with the level remaining above 120 mg/100 ml two hours after the start of the test. In a normal person the blood glucose level is corrected (and may even be slightly over-corrected) by the end of the test. An insulin spike is a 'first response' to a blood glucose increase; this response is individual- and dose-specific, although it was previously assumed to be specific to food type only. Oscillations Even during digestion, in general one or two hours following a meal, insulin release from the pancreas is not continuous, but oscillates with a period of 3–6 minutes, with the blood insulin concentration swinging from more than about 800 pmol/L to less than 100 pmol/L (in rats). This is thought to avoid downregulation of insulin receptors in target cells, and to assist the liver in extracting insulin from the blood.
This oscillation is important to consider when administering insulin-stimulating medication, since it is an oscillating blood insulin concentration that should ideally be achieved, not a constant high concentration. This may be achieved by delivering insulin rhythmically to the portal vein, by light-activated delivery, or by islet cell transplantation to the liver. Blood insulin level The blood insulin level can be measured in international units, such as µIU/mL, or in molar concentration, such as pmol/L, where 1 µIU/mL equals 6.945 pmol/L. A typical blood level between meals is 8–11 μIU/mL (57–79 pmol/L). Signal transduction The effects of insulin are initiated by its binding to a receptor, the insulin receptor (IR), present in the cell membrane. The receptor molecule contains α- and β-subunits. Two molecules are joined to form what is known as a homodimer. Insulin binds to the α-subunits of the homodimer, which face the extracellular side of the cells. The β-subunits have tyrosine kinase enzyme activity which is triggered by insulin binding. This activity provokes the autophosphorylation of the β-subunits and subsequently the phosphorylation of proteins inside the cell known as insulin receptor substrates (IRS). The phosphorylation of the IRS activates a
the liver never leaves the liver); decrease of insulin causes glucose production by the liver from assorted substrates.
Decreased proteolysis – decreasing the breakdown of protein.
Decreased autophagy – decreased level of degradation of damaged organelles. Postprandial levels inhibit autophagy completely.
Increased amino acid uptake – forces cells to absorb circulating amino acids; decrease of insulin inhibits absorption.
Arterial muscle tone – forces arterial wall muscle to relax, increasing blood flow, especially in microarteries; decrease of insulin reduces flow by allowing these muscles to contract.
Increased secretion of hydrochloric acid by parietal cells in the stomach.
Increased potassium uptake – forces cells synthesizing glycogen (a very spongy, "wet" substance that increases the content of intracellular water, and its accompanying K+ ions) to absorb potassium from the extracellular fluids; lack of insulin inhibits absorption. Insulin's increase in cellular potassium uptake lowers potassium levels in blood plasma, possibly via insulin-induced translocation of the Na+/K+-ATPase to the surface of skeletal muscle cells.
Decreased renal sodium excretion.
Insulin also influences other body functions, such as vascular compliance and cognition. Once insulin enters the human brain, it enhances learning and memory and benefits verbal memory in particular. Enhancing brain insulin signaling by means of intranasal insulin administration also enhances the acute thermoregulatory and glucoregulatory response to food intake, suggesting that central nervous insulin contributes to the co-ordination of a wide variety of homeostatic or regulatory processes in the human body. Insulin also has stimulatory effects on gonadotropin-releasing hormone from the hypothalamus, thus favoring fertility.
Degradation Once an insulin molecule has docked onto the receptor and effected its action, it may be released back into the extracellular environment, or it may be degraded by the cell. The two primary sites for insulin clearance are the liver and the kidney. The liver clears most insulin during first-pass transit, whereas the kidney clears most of the insulin in systemic circulation. Degradation normally involves endocytosis of the insulin-receptor complex, followed by the action of insulin-degrading enzyme. An insulin molecule produced endogenously by the beta cells is estimated to be degraded within about one hour after its initial release into circulation (insulin half-life ~ 4–6 minutes). Regulator of endocannabinoid metabolism Insulin is a major regulator of endocannabinoid (EC) metabolism, and insulin treatment has been shown to reduce the intracellular ECs 2-arachidonoylglycerol (2-AG) and anandamide (AEA), which correspond with insulin-sensitive expression changes in enzymes of EC metabolism. In insulin-resistant adipocytes, patterns of insulin-induced enzyme expression are disturbed in a manner consistent with elevated EC synthesis and reduced EC degradation. Findings suggest that insulin-resistant adipocytes fail to regulate EC metabolism and to decrease intracellular EC levels in response to insulin stimulation; accordingly, obese insulin-resistant individuals exhibit increased concentrations of ECs. This dysregulation contributes to excessive visceral fat accumulation and reduced adiponectin release from abdominal adipose tissue, and further to the onset of several cardiometabolic risk factors that are associated with obesity and type 2 diabetes. Hypoglycemia Hypoglycemia, also known as "low blood sugar", is when blood sugar decreases to below normal levels. This may result in a variety of symptoms including clumsiness, trouble talking, confusion, loss of consciousness, seizures or death.
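The half-life and one-hour degradation figures above are mutually consistent: with a half-life of 4–6 minutes, almost nothing remains after an hour. A minimal sketch of the exponential-decay arithmetic:

```python
# Fraction of circulating insulin remaining after t minutes,
# given a half-life between 4 and 6 minutes (figures from the text).
def fraction_remaining(minutes: float, half_life: float) -> float:
    # Exponential decay: each half-life halves the remaining amount.
    return 0.5 ** (minutes / half_life)

for t_half in (4, 6):
    left = fraction_remaining(60, t_half)
    print(f"half-life {t_half} min: {left:.2e} remaining after 1 h")
```

Even at the longer 6-minute half-life, under 0.1% of a released pulse survives an hour in circulation.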
A feeling of hunger, sweating, shakiness and weakness may also be present. Symptoms typically come on quickly. The most common cause of hypoglycemia is medications used to treat diabetes mellitus, such as insulin and sulfonylureas. Risk is greater in diabetics who have eaten less than usual, exercised more than usual, or have drunk alcohol. Other causes of hypoglycemia include kidney failure, certain tumors such as insulinoma, liver disease, hypothyroidism, starvation, inborn errors of metabolism, severe infections, reactive hypoglycemia and a number of drugs, including alcohol. Low blood sugar may occur in otherwise healthy babies who have not eaten for a few hours. Diseases and syndromes There are several conditions in which insulin disturbance is pathologic: Diabetes mellitus – general term referring to all states characterized by hyperglycemia. It can be of the following types: Type 1 – autoimmune-mediated destruction of insulin-producing β-cells in the pancreas, resulting in absolute insulin deficiency. Type 2 – either inadequate insulin production by the β-cells, or insulin resistance, or both, for reasons not completely understood. There is correlation with diet, with sedentary lifestyle, with obesity, with age and with metabolic syndrome. Causality has been demonstrated in multiple model organisms, including mice and monkeys; importantly, non-obese people also develop Type 2 diabetes due to diet, sedentary lifestyle and unknown risk factors, though this may not be a causal relationship. It is likely that there is genetic susceptibility to developing Type 2 diabetes under certain environmental conditions. Other types of impaired glucose tolerance (see Diabetes). Insulinoma – a tumor of beta cells producing excess insulin, or reactive hypoglycemia. Metabolic syndrome – a poorly understood condition first called syndrome X by Gerald Reaven.
It is not clear whether the syndrome has a single, treatable cause, or is the result of body changes leading to type 2 diabetes. It is characterized by elevated blood pressure, dyslipidemia (disturbances in blood cholesterol forms and other blood lipids), and increased waist circumference (at least in populations in much of the developed world). The basic underlying cause may be the insulin resistance that precedes type 2 diabetes, which is a diminished capacity for insulin response in some tissues (e.g., muscle, fat). It is common for morbidities such as essential hypertension, obesity, type 2 diabetes, and cardiovascular disease (CVD) to develop. Polycystic ovary syndrome – a complex syndrome in women in the reproductive years where anovulation and androgen excess are commonly displayed as hirsutism. In many cases of PCOS, insulin resistance is present. Medical uses Biosynthetic human insulin (insulin human rDNA, INN) for clinical use is manufactured by recombinant DNA technology. Biosynthetic human insulin has increased purity compared with extractive animal insulin, and this enhanced purity reduces antibody formation. Researchers have succeeded in introducing the gene for human insulin into plants as another method of producing insulin ("biopharming") in safflower. This technique is anticipated to reduce production costs. Several analogs of human insulin are available. These insulin analogs are closely related to the human insulin structure, and were developed for specific aspects of glycemic control in terms of fast action (prandial insulins) and long action (basal insulins). The first biosynthetic insulin analog, Humalog (insulin lispro), was developed for clinical use at mealtime (prandial insulin); it is more rapidly absorbed after subcutaneous injection than regular insulin, with an effect 15 minutes after injection. Other rapid-acting analogues are NovoRapid and Apidra, with similar profiles.
All are rapidly absorbed due to amino acid sequences that will reduce formation of dimers and hexamers (monomeric insulins are more rapidly absorbed). Fast acting insulins do not require the injection-to-meal interval previously recommended for human insulin and animal insulins. The other type is long acting insulin; the first of these was Lantus (insulin glargine). These have a steady effect for an extended period from 18 to 24 hours. Likewise, another protracted insulin analogue (Levemir) is based on a fatty acid acylation approach. A myristic acid molecule is attached to this analogue, which associates the insulin molecule to the abundant serum albumin, which in turn extends the effect and reduces the risk of hypoglycemia. Both protracted analogues need to be taken only once daily, and are used for type 1 diabetics as the basal insulin. A combination of a rapid acting and a protracted insulin is also available, making it more likely for patients to achieve an insulin profile that mimics that of the body's own insulin release. Insulin is also used in many cell lines, such as CHO-s, HEK 293 or Sf9, for the manufacturing of monoclonal antibodies, virus vaccines, and gene therapy products. Insulin is usually taken as subcutaneous injections by single-use syringes with needles, via an insulin pump, or by repeated-use insulin pens with disposable needles. Inhaled insulin is also available in the U.S. market. Unlike many medicines, insulin cannot be taken by mouth because, like nearly all other proteins introduced into the gastrointestinal tract, it is reduced to fragments, whereupon all activity is lost. There has been some research into ways to protect insulin from the digestive tract, so that it can be administered orally or sublingually. In 2021, the World Health Organization added insulin to its model list of essential medicines. 
History of study Discovery In 1869, while studying the structure of the pancreas under a microscope, Paul Langerhans, a medical student in Berlin, identified some previously unnoticed tissue clumps scattered throughout the bulk of the pancreas. The function of the "little heaps of cells", later known as the islets of Langerhans, initially remained unknown, but Édouard Laguesse later suggested they might produce secretions that play a regulatory role in digestion. Paul Langerhans' son, Archibald, also helped to understand this regulatory role. In 1889, the physician Oskar Minkowski, in collaboration with Joseph von Mering, removed the pancreas from a healthy dog to test its assumed role in digestion. On testing the urine, they found sugar, establishing for the first time a relationship between the pancreas and diabetes. In 1901, another major step was taken by the American physician and scientist Eugene Lindsay Opie, when he isolated the role of the pancreas to the islets of Langerhans: "Diabetes mellitus when the result of a lesion of the pancreas is caused by destruction of the islands of Langerhans and occurs only when these bodies are in part or wholly destroyed". Over the next two decades researchers made several attempts to isolate the islets' secretions. In 1906 George Ludwig Zuelzer achieved partial success in treating dogs with pancreatic extract, but he was unable to continue his work. Between 1911 and 1912, E.L. Scott at the University of Chicago tried aqueous pancreatic extracts and noted "a slight diminution of glycosuria", but was unable to convince his director of his work's value; it was shut down. Israel Kleiner demonstrated similar effects at Rockefeller University in 1915, but World War I interrupted his work and he did not return to it. In 1916, Nicolae Paulescu developed an aqueous pancreatic extract which, when injected into a diabetic dog, had a normalizing effect on blood sugar levels. 
He had to interrupt his experiments because of World War I, and in 1921 he wrote four papers about his work carried out in Bucharest and his tests on a diabetic dog. Later that year, he published "Research on the Role of the Pancreas in Food Assimilation". The name "insulin" was coined by Edward Albert Sharpey-Schafer in 1916 for a hypothetical molecule produced by pancreatic islets of Langerhans (Latin insula for islet or island) that controls glucose metabolism. Unbeknown to Sharpey-Schafer, Jean de Meyer had introduced the very similar word "insuline" in 1909 for the same molecule. Extraction and purification In October 1920, Canadian Frederick Banting concluded that the digestive secretions that Minkowski had originally studied were breaking down the islet secretion, thereby making it impossible to extract successfully. A surgeon by training, Banting knew that blockages of the pancreatic duct would lead most of the pancreas to atrophy, while leaving the islets of Langerhans intact. He reasoned that a relatively pure extract could be made from the islets once most of the rest of the pancreas was gone. He jotted a note to himself: "Ligate pancreatic ducts of the dog. Keep dogs alive till acini degenerate leaving islets. Try to isolate internal secretion of these and relieve glycosuria." In the spring of 1921, Banting traveled to Toronto to explain his idea to J.J.R. Macleod, Professor of Physiology at the University of Toronto. Macleod was initially skeptical, since Banting had no background in research and was not familiar with the latest literature, but he agreed to provide lab space for Banting to test out his ideas. Macleod also arranged for two undergraduates to be Banting's lab assistants that summer, but Banting required only one lab assistant. Charles Best and Clark Noble flipped a coin; Best won the coin toss and took the first shift. 
This proved unfortunate for Noble, as Banting kept Best for the entire summer and eventually shared half his Nobel Prize money and credit for the discovery with Best. On 30 July 1921, Banting and Best successfully isolated an extract ("isletin") from the islets of a duct-tied dog and injected it into a diabetic dog, finding that the extract reduced its blood sugar by 40% in 1 hour. Banting and Best presented their results to Macleod on his return to Toronto in the fall of 1921, but Macleod pointed out flaws with the experimental design, and suggested the experiments be repeated with more dogs and better equipment. He moved Banting and Best into a better laboratory and began paying Banting a salary from his research grants. Several weeks later, the second round of experiments was also a success, and Macleod helped publish their results privately in Toronto that November. Bottlenecked by the time-consuming task of duct-tying dogs and waiting several weeks to extract insulin, Banting hit upon the idea of extracting insulin from the fetal calf pancreas, which had not yet developed digestive glands. By December, they had also succeeded in extracting insulin from the adult cow pancreas. Macleod discontinued all other research in his laboratory to concentrate on the purification of insulin. He invited biochemist James Collip to help with this task, and the team felt ready for a clinical test within a month. On January 11, 1922, Leonard Thompson, a 14-year-old diabetic who lay dying at the Toronto General Hospital, was given the first injection of insulin. However, the extract was so impure that Thompson suffered a severe allergic reaction, and further injections were cancelled. Over the next 12 days, Collip worked day and night to improve the ox-pancreas extract. A second dose was injected on January 23, completely eliminating the glycosuria that was typical of diabetes without causing any obvious side-effects.
The first American patient was Elizabeth Hughes, the daughter of U.S. Secretary of State Charles Evans Hughes. The first patient treated in the U.S. was future woodcut artist James D. Havens; Dr. John Ralston Williams imported insulin from Toronto to Rochester, New York, to treat Havens. Banting and Best never worked well with Collip, regarding him as something of an interloper, and Collip left the project soon after. Over the spring of 1922, Best managed to improve his techniques to the point where large quantities of insulin could be extracted on demand, but the preparation remained impure. The drug firm Eli Lilly and Company had offered assistance not long after the first publications in 1921, and they took Lilly up on the offer in April. In November, Lilly's head chemist, George B. Walden discovered isoelectric precipitation and was able to produce large quantities of highly refined insulin. Shortly thereafter, insulin was offered for sale to the general public. Patent Toward the end of January 1922, tensions mounted between the four "co-discoverers" of insulin and Collip briefly threatened to separately patent his purification process. John G. FitzGerald, director of the non-commercial public health institution Connaught Laboratories, therefore stepped in as peacemaker. The resulting agreement of 25 January 1922 established two key conditions: 1) that the collaborators would sign a contract agreeing not to take out a patent with a commercial pharmaceutical firm during an initial working period with Connaught; and 2) that no changes in research policy would be allowed unless first discussed among FitzGerald and the four collaborators. It helped contain disagreement and tied the research to Connaught's public mandate. Initially, Macleod and Banting were particularly reluctant to patent their process for insulin on grounds of medical ethics. 
However, concerns remained that a private third-party would hijack and monopolize the research (as Eli Lilly and Company had hinted), and that safe distribution would be difficult to guarantee without capacity for quality control. To this end, Edward Calvin Kendall gave valuable advice. He had isolated thyroxin at the Mayo Clinic in 1914 and patented the process through an arrangement between himself, the brothers Mayo, and the University of Minnesota, transferring the patent to the public university. On April 12, Banting, Best, Collip, Macleod, and FitzGerald wrote jointly to the president of the University of Toronto to propose a similar arrangement with the aim of assigning a patent to the Board of Governors of the University. The letter emphasized that:The patent would not be used for any other purpose than to prevent the taking out of a patent by other persons. When the details of the method of preparation are published anyone would be free to prepare the extract, but no one could secure a profitable monopoly.The assignment to the University of Toronto Board of Governors was completed on 15 January 1923, for the token payment of $1.00. The arrangement was congratulated in The World's Work in 1923 as "a step forward in medical ethics". It has also received much media attention in the 2010s regarding the issue of healthcare and drug affordability. Following further concern regarding Eli Lilly's attempts to separately patent parts of the manufacturing process, Connaught's Assistant Director and Head of the Insulin Division Robert Defries established a patent pooling policy which would require producers to freely share any improvements to the manufacturing process without compromising affordability. Structural analysis and synthesis Purified animal-sourced insulin was initially the only type of insulin available for experiments and diabetics. John Jacob Abel was the first to produce the crystallised form in 1926. 
Evidence of the protein nature was first given by Michael Somogyi, Edward A. Doisy, and Philip A. Shaffer in 1924. It was fully proven when Hans Jensen and Earl A. Evans Jr. isolated the amino acids phenylalanine and proline in 1935. The amino acid structure of insulin was first characterized in 1951 by Frederick Sanger, and the first synthetic insulin was produced simultaneously in the labs of Panayotis Katsoyannis at the University of Pittsburgh and Helmut Zahn at RWTH Aachen University in the mid-1960s. Synthetic crystalline bovine insulin was achieved by Chinese researchers in 1965. The complete 3-dimensional structure of insulin was determined by X-ray crystallography in Dorothy Hodgkin's laboratory in 1969. Dr. Hans E. Weber discovered preproinsulin while working as a research fellow at the University of California Los Angeles in 1974. In 1973-1974, Weber learned the techniques of how to isolate, purify, and translate messenger RNA. To further investigate insulin, he obtained pancreatic tissues from a slaughterhouse in Los Angeles and then later from animal stock at UCLA. He isolated and purified total messenger RNA from pancreatic islet cells which was then translated in oocytes from Xenopus laevis and precipitated using anti-insulin antibodies. When total translated protein was run on an SDS-polyacrylamide gel electrophoresis and sucrose gradient, peaks corresponding to insulin and proinsulin were isolated. However, to the surprise of Dr. Weber a third peak was isolated corresponding to a molecule larger than proinsulin. After reproducing the experiment several times, he consistently noted this large peak prior to proinsulin that he determined must be a larger precursor molecule upstream of proinsulin. In May of 1975, at the American Diabetes Association meeting in New York, Weber gave an oral presentation of his work where he was the first to name this precursor molecule "preproinsulin". 
Following this oral presentation, Weber was invited to dinner to discuss his paper and findings by Dr. Donald Steiner, a researcher who contributed to the characterization of proinsulin. A year later in April 1976, this molecule was further
the capacitance C. Circuit equivalence at short-time limit and long-time limit In a circuit, an inductor can behave differently at different instants in time. However, it is usually easy to think about the short-time limit and long-time limit: In the long-time limit, after the magnetic flux through the inductor has stabilized, no voltage would be induced between the two sides of the inductor; therefore, the long-time equivalence of an inductor is a wire (i.e. short circuit, or 0 V battery). In the short-time limit, if the inductor starts with a certain current I, since the current through the inductor is known at this instant, we can replace it with an ideal current source of current I. Specifically, if I = 0 (no current goes through the inductor at the initial instant), the short-time equivalence of an inductor is an open circuit (i.e. 0 A current source). Lenz's law The polarity (direction) of the induced voltage is given by Lenz's law, which states that the induced voltage will be such as to oppose the change in current. For example, if the current through an inductor is increasing, the induced voltage will be positive at the current's entrance point and negative at the exit point, tending to oppose the additional current. The energy from the external circuit necessary to overcome this potential "hill" is stored in the magnetic field of the inductor. If the current is decreasing, the induced voltage will be negative at the current's entrance point and positive at the exit point, tending to maintain the current. In this case energy from the magnetic field is returned to the circuit. Energy stored in an inductor One intuitive explanation as to why a potential difference is induced on a change of current in an inductor goes as follows: When there is a change in current through an inductor there is a change in the strength of the magnetic field. For example, if the current is increased, the magnetic field increases. This, however, does not come without a price.
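The two limiting equivalences above can be checked against the analytic step response of a series RL circuit, i(t) = (V/R)(1 − e^(−t/τ)) with τ = L/R. The component values below are arbitrary examples, not from the text:

```python
# Illustrative sketch: step response of a hypothetical series RL circuit
# (assumed example values), showing the short-time and long-time limits.
import math

V, R, L = 10.0, 100.0, 0.5   # source voltage (V), resistance (ohm), inductance (H)
tau = L / R                  # time constant of the circuit

def current(t: float) -> float:
    """Analytic solution i(t) = (V/R) * (1 - exp(-t/tau)) for zero initial current."""
    return (V / R) * (1.0 - math.exp(-t / tau))

print(current(0.0))          # 0.0 A: inductor acts as an open circuit at t = 0
print(current(100 * tau))    # ~0.1 A = V/R: inductor acts as a plain wire for t >> tau
```

At t = 0 the inductor blocks all current (open-circuit equivalence); after many time constants the current is set entirely by the resistor, as if the inductor were a wire.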
The magnetic field contains potential energy, and increasing the field strength requires more energy to be stored in the field. This energy comes from the electric current through the inductor. The increase in the magnetic potential energy of the field is provided by a corresponding drop in the electric potential energy of the charges flowing through the windings. This appears as a voltage drop across the windings as long as the current increases. Once the current is no longer increased and is held constant, the energy in the magnetic field is constant and no additional energy must be supplied, so the voltage drop across the windings disappears. Similarly, if the current through the inductor decreases, the magnetic field strength decreases, and the energy in the magnetic field decreases. This energy is returned to the circuit in the form of an increase in the electrical potential energy of the moving charges, causing a voltage rise across the windings. Derivation The work done per unit charge on the charges passing the inductor is −v. The negative sign indicates that the work is done against the emf, and is not done by the emf. The current i is the charge per unit time passing through the inductor. Therefore the rate of work done by the charges against the emf, that is the rate of change of energy of the current, is given by dW/dt = v i. From the constitutive equation for the inductor, v = L di/dt, so dW/dt = L i di/dt. In a ferromagnetic core inductor, when the magnetic field approaches the level at which the core saturates, the inductance will begin to change; it will be a function of the current, L(i). Neglecting losses, the energy stored by an inductor with a current I passing through it is equal to the amount of work required to establish the current through the inductor. This is given by E = ∫ (from 0 to I) L_d(i) i di, where L_d(i) is the so-called "differential inductance", defined as L_d(i) = dΦ/di.
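As a numerical check on the derivation above, integrating the power dW/dt = L·i·di/dt over a current ramp reproduces the closed form W = ½LI². The values below are assumed for illustration:

```python
# Sketch: numerically integrate dW = L * i * di as the current rises
# from 0 to I_final, and compare with the closed form W = 1/2 * L * I^2.
# L and I_final are assumed example values.
L = 0.5        # henries
I_final = 2.0  # amperes
N = 100000     # integration steps

w = 0.0
di = I_final / N
i = 0.0
for _ in range(N):
    w += L * (i + di / 2) * di  # midpoint rule for each step dW = L*i*di
    i += di

closed_form = 0.5 * L * I_final**2  # = 1.0 J for these values
print(w, closed_form)
```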
In an air core inductor or a ferromagnetic core inductor below saturation, the inductance is constant (and equal to the differential inductance), so the stored energy is E = ½ L I². For inductors with magnetic cores, the above equation is only valid for linear regions of the magnetic flux, at currents below the saturation level of the inductor, where the inductance is approximately constant. Where this is not the case, the integral form must be used, with L_d(i) variable. Ideal and real inductors The constitutive equation describes the behavior of an ideal inductor with inductance L, and without resistance, capacitance, or energy dissipation. In practice, inductors do not follow this theoretical model; real inductors have a measurable resistance due to the resistance of the wire and energy losses in the core, and parasitic capacitance due to electric potentials between turns of the wire. The effect of a real inductor's parasitic capacitance grows with frequency, and at a certain frequency the inductor will behave as a resonant circuit. Above this self-resonant frequency, the capacitive reactance is the dominant part of the inductor's impedance. At higher frequencies, resistive losses in the windings increase due to the skin effect and proximity effect. Inductors with ferromagnetic cores experience additional energy losses due to hysteresis and eddy currents in the core, which increase with frequency. At high currents, magnetic core inductors also show a sudden departure from ideal behavior due to nonlinearity caused by magnetic saturation of the core. Inductors radiate electromagnetic energy into surrounding space and may absorb electromagnetic emissions from other circuits, resulting in potential electromagnetic interference. An early solid-state electrical switching and amplifying device called a saturable reactor exploits saturation of the core as a means of stopping the inductive transfer of current via the core.
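When the core approaches saturation and the differential inductance varies with current, the integral form has to be evaluated instead of ½LI². A minimal sketch, using a hypothetical saturation model for the differential inductance (the model and its constants are illustrative assumptions, not from the article):

```python
# Sketch of E = integral from 0 to I of L_d(i) * i di, for a core whose
# differential inductance falls off as the current approaches saturation.
# The saturation model below is a hypothetical illustration.

def L_d(i, L0=1.0, i_sat=5.0):
    """Assumed differential inductance: L0 at low current, dropping near i_sat."""
    return L0 / (1.0 + (i / i_sat) ** 2)

def stored_energy(I, steps=100000):
    """Midpoint-rule evaluation of the energy integral."""
    di = I / steps
    return sum(L_d((k + 0.5) * di) * (k + 0.5) * di * di for k in range(steps))

# Well below saturation, the constant-L formula 1/2 * L0 * I^2 is a close match:
print(stored_energy(0.5), 0.5 * 1.0 * 0.5**2)  # about 0.1244 vs 0.125
```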
Q factor The winding resistance appears as a resistance in series with the inductor; it is referred to as DCR (DC resistance). This resistance dissipates some of the reactive energy. The quality factor (or Q) of an inductor is the ratio of its inductive reactance to its resistance at a given frequency, and is a measure of its efficiency. The higher the Q factor of the inductor, the closer it approaches the behavior of an ideal inductor. High Q inductors are used with capacitors to make resonant circuits in radio transmitters and receivers. The higher the Q is, the narrower the bandwidth of the resonant circuit. The Q factor of an inductor is defined as Q = ωL / R, where L is the inductance, R is the DCR, and the product ωL is the inductive reactance. Q increases linearly with frequency if L and R are constant. Although they are constant at low frequencies, the parameters vary with frequency. For example, skin effect, proximity effect, and core losses increase R with frequency; winding capacitance and variations in permeability with frequency affect L. At low frequencies and within limits, increasing the number of turns N improves Q because L varies as N² while R varies linearly with N. Similarly, increasing the radius r of an inductor improves (or increases) Q because L varies as r² while R varies linearly with r. So high Q air core inductors often have large diameters and many turns. Both of those examples assume the diameter of the wire stays the same, so both examples use proportionally more wire. If the total mass of wire is held constant, then there would be no advantage to increasing the number of turns or the radius of the turns, because the wire would have to be proportionally thinner. Using a high permeability ferromagnetic core can greatly increase the inductance for the same amount of copper, so the core can also increase the Q. Cores however also introduce losses that increase with frequency. The core material is chosen for best results for the frequency band.
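The definition Q = ωL/R translates directly into code; the coil values below are illustrative assumptions:

```python
# Sketch: quality factor Q = omega * L / R of an inductor,
# with assumed example component values.
import math

def q_factor(f_hz, L, R):
    """Inductive reactance over series (DC) resistance at frequency f_hz."""
    return 2 * math.pi * f_hz * L / R

# A 10 uH coil with 0.5 ohm of winding resistance, evaluated at 10 MHz:
print(q_factor(10e6, 10e-6, 0.5))  # about 1257
```

With L and R held constant, doubling the frequency doubles Q, matching the "increases linearly with frequency" statement above.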
High Q inductors must avoid saturation; one way is by using a (physically larger) air core inductor. At VHF or higher frequencies an air core is likely to be used. A well designed air core inductor may have a Q of several hundred. Applications Inductors are used extensively in analog circuits and signal processing. Applications range from the use of large inductors in power supplies, which in conjunction with filter capacitors remove ripple which is a multiple of the mains frequency (or the switching frequency for switched-mode power supplies) from the direct current output, to the small inductance of the ferrite bead or torus installed around a cable to prevent radio frequency interference from being transmitted down the wire. Inductors are used as the energy storage device in many switched-mode power supplies to produce DC current. The inductor supplies energy to the circuit to keep current flowing during the "off" switching periods and enables topologies where the output voltage is higher than the input voltage. A tuned circuit, consisting of an inductor connected to a capacitor, acts as a resonator for oscillating current. Tuned circuits are widely used in radio frequency equipment such as radio transmitters and receivers, as narrow bandpass filters to select a single frequency from a composite signal, and in electronic oscillators to generate sinusoidal signals. Two (or more) inductors in proximity that have coupled magnetic flux (mutual inductance) form a transformer, which is a fundamental component of every electric utility power grid. The efficiency of a transformer may decrease as the frequency increases due to eddy currents in the core material and skin effect on the windings. The size of the core can be decreased at higher frequencies. For this reason, aircraft use 400 hertz alternating current rather than the usual 50 or 60 hertz, allowing a great saving in weight from the use of smaller transformers.
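The tuned circuit mentioned above resonates at f0 = 1/(2π√(LC)). A quick sketch with assumed component values:

```python
# Sketch: resonant frequency of an LC tuned circuit,
# f0 = 1 / (2 * pi * sqrt(L * C)). Component values are assumed examples.
import math

def resonant_frequency(L, C):
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

# 100 uH with 250 pF resonates near 1.007 MHz, in the AM broadcast band:
print(resonant_frequency(100e-6, 250e-12))
```

Halving the capacitance raises the resonant frequency by √2, which is how a variable capacitor tunes such a circuit across a band.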
Transformers enable switched-mode power supplies that isolate the output from the input. Inductors are also employed in electrical transmission systems, where they are used to limit switching currents and fault currents. In this field, they are more commonly referred to as reactors. Inductors have parasitic effects which cause them to depart from ideal behavior. They create and suffer from electromagnetic interference (EMI). Their physical size prevents them from being integrated on semiconductor chips. So the use of inductors is declining in modern electronic devices, particularly compact portable devices. Real inductors are increasingly being replaced by active circuits such as the gyrator which can synthesize inductance using capacitors. Inductor construction An inductor usually consists of a coil of conducting material, typically insulated copper wire, wrapped around a core either of plastic (to create an air-core inductor) or of a ferromagnetic (or ferrimagnetic) material; the latter is called an "iron core" inductor. The high permeability of the ferromagnetic core increases the magnetic field and confines it closely to the inductor, thereby increasing the inductance. Low frequency inductors are constructed like transformers, with cores of electrical steel laminated to prevent eddy currents. 'Soft' ferrites are widely used for cores above audio frequencies, since they do not cause the large energy losses at high frequencies that ordinary iron alloys do. Inductors come in many shapes. Some inductors have an adjustable core, which enables changing of the inductance. Inductors used to block very high frequencies are sometimes made by stringing a ferrite bead on a wire. Small inductors can be etched directly onto a printed circuit board by laying out the trace in a spiral pattern. Some such planar inductors use a planar core. Small value inductors can also be built on integrated circuits using the same processes that are used to make interconnects. 
Aluminium interconnect is typically used, laid out in a spiral coil pattern. However, the small dimensions limit the inductance, and it is far more common to use a circuit called a gyrator that uses a capacitor and active components to behave similarly to an inductor. Regardless of the design, because of the low inductances and low power dissipation on-die
inductors allow, they are currently only commercially used for high frequency RF circuits. Shielded inductors Inductors used in power regulation systems, lighting, and other systems that require low-noise operating conditions, are often partially or fully shielded.
In telecommunication circuits employing induction coils and repeating transformers shielding of inductors in close proximity reduces circuit cross-talk. Types Air-core inductor The term air core coil describes an inductor that does not use a magnetic core made of a ferromagnetic material. The term refers to coils wound on plastic, ceramic, or other nonmagnetic forms, as well as those that have only air inside the windings. Air core coils have lower inductance than ferromagnetic core coils, but are often used at high frequencies because they are free from energy losses called core losses that occur in ferromagnetic cores, which increase with frequency. A side effect that can occur in air core coils in which the winding is not rigidly supported on a form is 'microphony': mechanical vibration of the windings can cause variations in the inductance. Radio-frequency inductor At high frequencies, particularly radio frequencies (RF), inductors have higher resistance and other losses. In addition to causing power loss, in resonant circuits this can reduce the Q factor of the circuit, broadening the bandwidth. In RF inductors, which are mostly air core types, specialized construction techniques are used to minimize these losses. The losses are due to these effects: Skin effect The resistance of a wire to high frequency current is higher than its resistance to direct current because of skin effect. Radio frequency alternating current does not penetrate far into the body of a conductor but travels along its surface. For example, at 6 MHz the skin depth of copper wire is about 0.001 inches (25 µm); most of the current is within this depth of the surface. Therefore, in a solid wire, the interior portion of the wire may carry little current, effectively increasing its resistance. Proximity effect Another similar effect that also increases the resistance of the wire at high frequencies is proximity effect, which occurs in parallel wires that lie close to each other. 
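The skin-depth figure quoted above (about 25 µm, or 0.001 inches, for copper at 6 MHz) follows from the standard formula δ = √(ρ/(πfμ)). A short check, using the textbook resistivity of copper:

```python
# Sketch: skin depth delta = sqrt(rho / (pi * f * mu)) for a good conductor,
# reproducing the ~25 um figure quoted for copper at 6 MHz.
import math

RHO_CU = 1.68e-8           # resistivity of copper, ohm*m
MU_0 = 4 * math.pi * 1e-7  # permeability of free space (for copper, mu_r ~ 1)

def skin_depth(f_hz, rho=RHO_CU, mu=MU_0):
    return math.sqrt(rho / (math.pi * f_hz * mu))

print(skin_depth(6e6) * 1e6)  # about 26.6 micrometres
```

Since δ falls as 1/√f, quadrupling the frequency halves the conducting layer, which is why RF windings favor strip, tubing, or litz wire over solid conductors.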
The individual magnetic field of adjacent turns induces eddy currents in the wire of the coil, which causes the current in the conductor to be concentrated in a thin strip on the side near the adjacent wire. Like skin effect, this reduces the effective cross-sectional area of the wire conducting current, increasing its resistance. Dielectric losses The high frequency electric field near the conductors in a tank coil can cause the motion of polar molecules in nearby insulating materials, dissipating energy as heat. So coils used for tuned circuits are often not wound on coil forms but are suspended in air, supported by narrow plastic or ceramic strips. Parasitic capacitance The capacitance between individual wire turns of the coil, called parasitic capacitance, does not cause energy losses but can change the behavior of the coil. Each turn of the coil is at a slightly different potential, so the electric field between neighboring turns stores charge on the wire, so the coil acts as if it has a capacitor in parallel with it. At a high enough frequency this capacitance can resonate with the inductance of the coil forming a tuned circuit, causing the coil to become self-resonant. To reduce parasitic capacitance and proximity effect, high Q RF coils are constructed to avoid having many turns lying close together, parallel to one another. The windings of RF coils are often limited to a single layer, and the turns are spaced apart. To reduce resistance due to skin effect, in high-power inductors such as those used in transmitters the windings are sometimes made of a metal strip or tubing which has a larger surface area, and the surface is silver-plated. Basket-weave coils To reduce proximity effect and parasitic capacitance, multilayer RF coils are wound in patterns in which successive turns are not parallel but criss-crossed at an angle; these are often called honeycomb or basket-weave coils. 
These are occasionally wound on a vertical insulating support with dowels or slots, with the wire weaving in and out through the slots. Spiderweb coils Another construction technique with similar advantages is flat spiral coils. These are often wound on a flat insulating support with radial spokes or slots, with the wire weaving in and out through the slots; these are called spiderweb coils. The form has an odd number of slots, so successive turns of the spiral lie on opposite sides of the form, increasing separation. Litz wire To reduce skin effect losses, some coils are wound with a special type of radio frequency wire called litz wire. Instead of a single solid conductor, litz wire consists of a number of smaller wire strands that carry the current. Unlike ordinary stranded wire, the strands are insulated from each other, to prevent skin effect from forcing the current to the surface, and are twisted or braided together. The twist pattern ensures that each wire strand spends the same amount of its length on the outside of the wire bundle, so skin effect distributes the current equally between the strands, resulting in a larger cross-sectional conduction area than an equivalent single wire. Axial inductor Small inductors for low current and low power are made in molded cases resembling resistors. These may be either plain (phenolic) core or ferrite core. An ohmmeter readily distinguishes them from similar-sized resistors by showing the low resistance of the inductor. Ferromagnetic-core inductor Ferromagnetic-core or iron-core inductors use a magnetic core made of a ferromagnetic or ferrimagnetic material such as iron or ferrite to increase the inductance. A magnetic core can increase the inductance of a coil by a factor of several thousand, by increasing the magnetic field due to its higher magnetic permeability.
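The permeability scaling just described can be seen in the long-solenoid formula L = μ0·μr·N²·A/l; the coil dimensions and relative permeability below are assumed examples:

```python
# Sketch: inductance of a long solenoid, L = mu0 * mu_r * N^2 * A / l,
# showing how core permeability scales the inductance.
# Dimensions and mu_r are assumed example values.
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def solenoid_inductance(N, area_m2, length_m, mu_r=1.0):
    return MU_0 * mu_r * N**2 * area_m2 / length_m

air = solenoid_inductance(100, 1e-4, 0.05)                  # air core, ~25 uH
ferrite = solenoid_inductance(100, 1e-4, 0.05, mu_r=2000)   # same coil on ferrite
print(air, ferrite, ferrite / air)  # the ratio equals mu_r
```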
However the magnetic properties of the core material cause several side effects which alter the behavior of the inductor and require special construction: Laminated-core inductor Low-frequency inductors are often made with laminated cores to prevent eddy currents, using construction similar to transformers. The core is made of stacks of thin steel sheets or laminations oriented parallel to the field, with an insulating coating on the surface. The insulation prevents eddy currents between the sheets, so any remaining currents must be within the cross sectional area of the individual laminations, reducing the area of the loop and thus reducing the energy losses greatly. The laminations are made of low-conductivity silicon steel to further reduce eddy current losses. Ferrite-core inductor For higher frequencies, inductors are made with cores of ferrite. Ferrite is a ceramic ferrimagnetic material that is nonconductive, so eddy currents cannot flow within it. The formulation of ferrite is xxFe2O4 where xx represents various metals. For inductor cores soft ferrites are used, which have low coercivity and thus low hysteresis losses. Powdered-iron-core inductor Another material is powdered iron cemented with a binder. Toroidal-core inductor In an inductor wound on a straight rod-shaped core, the magnetic field lines emerging from one end of the core must pass through the air to re-enter the core at the other end. This reduces the field, because much of the magnetic field path is in air rather than the higher permeability core material and is a source of electromagnetic interference. A higher magnetic field and inductance can be achieved by forming the core in a closed magnetic circuit. The magnetic field lines form closed loops within the core without leaving the core material. The shape often used is a toroidal or doughnut-shaped ferrite core. 
Because of their symmetry, toroidal cores allow a minimum of the magnetic flux to escape outside the core (called leakage flux), so they radiate less electromagnetic interference than other shapes. Toroidal core coils are manufactured of various materials, primarily ferrite, powdered iron and laminated cores. Variable inductor Probably the most common type of variable inductor today is one with a moveable ferrite magnetic core, which can be slid or screwed in or out of the coil. Moving the core farther into the coil increases the permeability, increasing the magnetic field and the inductance. Many inductors used in radio applications (usually less than 100 MHz) use adjustable cores in order to tune such inductors to their desired value, since manufacturing processes have certain tolerances (inaccuracy). Sometimes such cores
|
Users may experience scar tissue buildup around the inserted cannula, resulting in a hard bump under the skin after the cannula is removed. The scar tissue does not heal particularly fast, so years of wearing the pump and changing the infusion site will cause the user to start running out of viable "spots" to wear the pump. In addition, the areas with scar tissue buildup generally have lower insulin sensitivity and may affect basal rates and bolus amounts. In some extreme cases the insulin delivery will appear to have no or little effect on lowering blood glucose levels and the site must be changed. Users may experience allergic reactions and other skin irritation from the adhesive on the back of an infusion set. Experience may vary according to the individual, the pump manufacturer, and the type of infusion set used. A larger supply of insulin may be required in order to use the pump. Many units of insulin can be "wasted" while refilling the pump's reservoir or changing an infusion site. This may affect prescription and dosage information. Accessibility Use of insulin pumps is increasing because of: Easy delivery of multiple insulin injections for those using intensive insulin therapy. Accurate delivery of very small boluses, helpful for infants. Growing support among doctors and insurance companies due to the benefits contributing to reducing the incidence of long-term complications. Improvements in blood glucose monitoring. New meters require smaller drops of blood, and the corresponding lancet poke in the fingers is smaller and less painful. These meters also support alternate site testing for the most routine tests for practically painless testing. History In 1974 the first insulin pump was created and was named the Biostator. The first pump was so large that it was worn as a backpack. It also had the capability of monitoring blood glucose levels, so it also doubled as the first continuous glucose monitor.
Today insulin pumps are so small that they can fit in a pocket or a purse. In 1984 an Infusaid implantable infusion device was used to treat a 22-year-old diabetic female successfully. The insulin pump was first endorsed in the United Kingdom in 2003, by the National Institute for Health and Care Excellence (NICE). Developments New insulin pumps are becoming "smart" as new features are added to their design. These simplify the tasks involved in delivering an insulin bolus. insulin on board: This calculation is based on the size of a bolus, the time elapsed since the completion of the bolus, and a programmable metabolic rate. The pump software will estimate the insulin remaining in the bloodstream and relay it to the user. This supports the process of performing a new bolus before the effects of the last bolus are complete and, thereby, helps prevent the user from overcompensating for high blood sugar with unnecessary correction boluses. bolus calculators: Pump software helps by calculating the dose for the next insulin bolus. The user enters the grams of carbohydrates to be consumed, and the bolus "wizard" calculates the units of insulin needed. It adjusts for the most recent blood glucose level and the insulin on board, and then suggests the best insulin dose to the user to approve and deliver. custom alarms: The pump can monitor for activities during specific times of day and alarm the user if an expected activity did not occur. Examples include a missed lunch bolus, a missed blood glucose test, a new blood glucose test 15 minutes after a low blood glucose test, etc. The alarms are customized for each user. touch bolus: For persons with visual impairments, this button on the pump can be used to bolus for insulin without using the display. This works with a system of beeps to confirm the bolus parameters to the pump user. This feature is described as 'touch', 'audio', or 'easy' bolus depending on brand. The feature was first introduced in the mid- to late 1990s. 
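The bolus "wizard" arithmetic described above (a carbohydrate term, plus a correction toward a target glucose, minus the insulin still on board) can be sketched as follows. All parameter names and values are hypothetical illustrations, not any manufacturer's algorithm; real pumps apply additional certified safety rules:

```python
# Hedged sketch of the kind of calculation a pump's bolus "wizard" performs.
# carb_ratio, correction_factor, and target_bg are illustrative assumptions.
def suggest_bolus(carbs_g, bg_mgdl, iob_units,
                  carb_ratio=10.0,         # grams of carbohydrate per unit
                  correction_factor=50.0,  # mg/dL drop per unit of insulin
                  target_bg=100.0):        # desired blood glucose, mg/dL
    meal = carbs_g / carb_ratio
    # correct only when above target; never suggest a negative correction
    correction = max(0.0, (bg_mgdl - target_bg) / correction_factor)
    # subtract insulin still active from earlier boluses, never below zero,
    # which is how the "insulin on board" feature prevents overcorrection
    return max(0.0, meal + correction - iob_units)

# 60 g meal, glucose 150 mg/dL, 1 unit on board: 6 + 1 - 1 = 6 units
print(suggest_bolus(60, 150, 1.0))
```

The user would still review and approve the suggested dose before delivery, as the text above describes.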
interface to personal computers: Since the late 1990s, most pumps can interface with personal computers for managing and documenting pump programming and/or to upload data from the pump. This simplifies record keeping and can be interfaced with diabetes management software. integration with blood glucose meters: Blood glucose data can be manually entered into the pump to support the bolus wizard for calculation of the next insulin bolus. Some pumps support an interface between the insulin pump and a blood glucose meter. The Medtronic Diabetes Minimed Paradigm series of insulin pumps allow for radio frequency (RF) communication. This enables the pump to receive data from a Lifescan (in the US) or Bayer (in other countries) blood glucose meter. The Animas Ping is a pump and blood glucose meter combo; the two devices connect to each other using radio frequency. Both can work independently of each other, and each has its own history storage. The main purpose of the connection between the pump and the meter is that it allows boluses to be made from the meter or the pump. This is particularly useful when correcting for a high blood sugar, as the meter remembers readings and automatically enters them in correction boluses if they are less than 15 minutes old. The DANA Diabecare IISG insulin pump has a blood glucose meter in it. After a blood glucose check with the integrated glucometer, the user can use the bolus wizard to deliver a required bolus. The Insulet OmniPod has a separate remote, also known as a Personal Diabetes Monitor (PDM), that features a built-in meter that uses Freestyle test strips. This eliminates the need to carry and manage a separate meter or transfer blood glucose results from device to device. integration with continuous glucose monitoring systems: Some insulin pumps can be used as a display for interstitial glucose values obtained from a continuous glucose monitoring system or sensor.
The Minimed Paradigm series' RF link also supports a continuous blood glucose sensor known as the Paradigm REAL-Time Continuous Glucose Monitor that wirelessly provides an interstitial glucose value every 5 minutes on the pump screen. The Medtronic REAL-Time System was the first to link a continuous monitor with an insulin pump system. In the Minimed 530G with Enlite (in the US) or Paradigm Veo (in other countries), the pump can enter a low glucose suspend mode stopping all insulin delivery (bolus and basal insulin) if interstitial glucose values fall below the hypoglycemia threshold. In the Minimed 640G insulin pump series, low glucose suspend mode can also be entered based on predicted hypoglycemia. The Animas Vibe is an insulin pump that is fully integrated with the Dexcom G4 Continuous Glucose Monitor. The two connect wirelessly to monitor and track blood glucose levels and detect patterns. The Dexcom G4 has the advantage of being designed to monitor glucose levels every five minutes throughout 7 days of continuous wear. The Animas Vibe was approved for use in Europe in 2011, and in Canada and the United States in January and December 2014, respectively. NOTE: Animas insulin pumps are not available due to Johnson & Johnson's decision to cease operations at their Animas subsidiary. The Tandem Diabetes Care t:Slim X2 was approved by the U.S. Food and Drug Administration in 2019 and is the first insulin pump to be designated as an alternate controller enabled (ACE) insulin pump. ACE insulin pumps allow users to integrate continuous glucose monitors, automated insulin dosing (AID) systems, and other diabetes management devices with the pump to create a personalized diabetes therapy system. Many users of the t:slim X2 integrate the pump with the Dexcom G6, a continuous glucose monitor approved by the FDA in 2018. It was the first CGM authorized for use in an integrated therapy system.
The device does not require users to provide fingerstick calibrations and lasts for up to ten days. Other options may include a remote control, a tubeless pod, a touch screen interface, a rechargeable battery, or a pre-filled insulin cartridge. The MiniMed 670G is an insulin pump and sensor system created by Medtronic. It was approved by the US FDA in September 2016 and was the first approved hybrid closed loop system, which senses a diabetic person's basal insulin requirement and automatically adjusts its delivery to the body. Omnipod 5: On January 28, 2022, Insulet Corporation announced that the FDA had approved the Omnipod 5, the first tubeless closed loop insulin pump with smartphone control, working with the Dexcom G6 Continuous Glucose Monitor. The Omnipod 5 will have a feature named SmartAdjust technology that allows for the increase, decrease, or suspension of insulin based on the user's custom blood glucose targets. Future developments When insulin pump technology is combined with a continuous blood glucose monitoring system, the technology seems promising for real-time control of the blood sugar level. Currently there are no mature algorithms to automatically control insulin delivery based on feedback of the blood glucose level. When the loop is closed, the system may function as an artificial pancreas. Insulin pumps are being used for infusing pramlintide (brand name Symlin, or synthetic amylin) with insulin for improved postprandial glycemic control compared to insulin alone. Dual-hormone insulin pumps would infuse either insulin or glucagon. In the event of hypoglycemia, the
glucagon could be triggered to increase the blood glucose. This would be particularly valuable in a closed loop system under the control of a glucose sensor. The Artificial Pancreas, currently in clinical trials for FDA approval, is a recently developed device designed with this technology in mind. Ultrafast insulins: these insulins are absorbed more quickly than the currently available Humalog, Novolog, and Apidra, which have a peak at about 60 minutes. Faster insulin uptake would theoretically coordinate with meals better, and allow faster recovery from hyperglycemia if the insulin infusion is suspended. Ultrafast insulins are in development by Biodel, Halozyme, and Novo Nordisk. 
Dosing An insulin pump allows the replacement of slow-acting insulin for basal needs with a continuous infusion of rapid-acting insulin. The insulin pump delivers a single type of rapid-acting insulin in two ways: a bolus dose that is pumped to cover food eaten or to correct a high blood glucose level. a basal dose that is pumped continuously at an adjustable basal rate to deliver insulin needed between meals and at night. Bolus shape An insulin pump user can influence the profile of the rapid-acting insulin by shaping the bolus. Users can experiment with bolus shapes to determine what is best for any given food, which means that they can improve control of blood sugar by adapting the bolus shape to their needs. A standard bolus is an infusion of insulin pumped completely at the onset of the bolus. It is the most similar to an injection. By pumping with a "spike" shape, the expected action is the fastest possible bolus for that type of insulin. The standard bolus is most appropriate when eating high-carb, low-protein, low-fat meals because it will return blood sugar to normal levels quickly. An extended bolus is a slow infusion of insulin spread out over time. By pumping with a "square wave" shape, the bolus avoids a high initial dose of insulin that may enter the blood and cause low blood sugar before digestion can facilitate sugar entering the blood. The extended bolus also extends the action of insulin well beyond that of the insulin alone. The extended bolus is appropriate when covering high-fat, high-protein meals such as steak, which will be raising blood sugar for many hours past the onset of the bolus. The extended bolus is also useful for those with slow digestion (such as with gastroparesis or coeliac disease). A combination bolus/multiwave bolus is the combination of a standard bolus spike with an extended bolus square wave. This shape provides a large dose of insulin up front, and then also extends the tail of the insulin action. 
The combination bolus is appropriate for high-carb, high-fat meals such as pizza, pasta with heavy cream sauce, and chocolate cake. A super bolus is a method of increasing the spike of the standard bolus. Since the action of the bolus insulin in the blood stream will extend for several hours, the basal insulin could be stopped or reduced during this time. This facilitates the "borrowing" of the basal insulin and including it in the bolus spike to deliver the same total insulin with faster action than can be achieved with the spike and basal rate together. The super bolus is useful for certain foods (like sugary breakfast cereals) which cause a large post-prandial peak of blood sugar. It attacks the blood sugar peak with the fastest delivery of insulin that can be practically achieved by pumping. Bolus timing Since the pump user is responsible for manually starting a bolus, this provides an opportunity to pre-bolus to improve upon the insulin pump's capability to prevent post-prandial hyperglycemia. A pre-bolus is simply a bolus of insulin given before it is actually needed to cover carbohydrates eaten. There are two situations where a pre-bolus is helpful: A pre-bolus of insulin will mitigate a spike in blood sugar that results from eating high-glycemic foods. Infused insulin analogs such as NovoLog and Apidra typically begin to reduce blood sugar levels 15 or 20 minutes after infusion. As a result, easily digested sugars often hit the bloodstream much faster than the infused insulin intended to cover them, and the blood sugar level spikes upward as a result. If the bolus were infused 20 minutes before eating, then the pre-bolused insulin would hit the bloodstream simultaneously with the digested sugars to control the magnitude of the spike. A pre-bolus of insulin can combine a meal bolus and a correction bolus when the blood sugar is above the target range before a meal. 
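The bolus shapes discussed above (standard, extended, combination, and super bolus) can be sketched as per-interval delivery schedules. Units, interval counts, and the 50/50 split below are invented for illustration only.

```python
# Sketch of bolus shapes as lists of units delivered per interval.
# All numbers here are illustrative, not clinical recommendations.

def standard_bolus(units):
    return [units]                           # everything in the first interval ("spike")

def extended_bolus(units, intervals):
    return [units / intervals] * intervals   # "square wave" spread over time

def combination_bolus(units, now_fraction, intervals):
    now = units * now_fraction               # spike up front...
    return [now] + extended_bolus(units - now, intervals)  # ...then a square wave

def super_bolus(units, basal_per_interval, intervals_suspended):
    """Borrow the suspended basal insulin and add it to the up-front spike."""
    borrowed = basal_per_interval * intervals_suspended
    return [units + borrowed]

print(combination_bolus(8, 0.5, 4))   # [4.0, 1.0, 1.0, 1.0, 1.0]
print(super_bolus(6, 0.5, 4))         # [8.0]: 6 U bolus + 2 U borrowed basal
```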
The timing of the bolus is a controllable variable to bring down the blood sugar level before eating again causes it to increase. Similarly, a low blood sugar level or a low glycemic food might be best treated with a bolus after a meal is begun. The blood sugar level, the type of food eaten, and a person's individual response to food and insulin affect the ideal time to bolus with the pump. Basal rate patterns The pattern for delivering basal insulin throughout the day can also be customized with a pattern to suit the pump user. A reduction of basal at night to prevent low blood sugar in infants and toddlers. An increase of basal at night to counteract high blood sugar levels due to growth hormone in teenagers. A pre-dawn increase to prevent high blood sugar due to the dawn effect in adults and teens. In a proactive plan before regularly scheduled exercise times such as morning gym for elementary school children or after-school basketball practice
|
France American National Standards Institute (ANSI) – United States British Standards Institution (BSI) – United Kingdom Deutsches Institut für Normung (DIN) – Germany Japanese Industrial Standards Committee (JISC) – Japan Standards Australia (SA) – Australia Kenya Bureau of Standards (KEBS) – Kenya Standardization Administration of China (SAC) – China Swedish Standards Institute (SIS) – Sweden The other six are representatives of major United Nations agencies or other international organizations who are all users of ISO 3166-1: International Atomic Energy Agency (IAEA) International Civil Aviation Organization (ICAO) International Telecommunication Union (ITU) Internet Corporation for Assigned Names and Numbers (ICANN) Universal Postal Union (UPU) United Nations Economic Commission for Europe (UNECE) The ISO 3166/MA has further associated members who do not participate in the votes but who, through their expertise, have significant influence on the decision-taking procedure in the maintenance agency. Codes beginning with "X" Country codes beginning with "X" are used for private custom use (reserved), never for official codes. Despite the words "private custom", the use may include other public standards. Examples: The ISO 3166-based NATO country codes (STANAG 1059, 9th edition) use "X" codes for imaginary exercise countries ranging from XXB for "Brownland" to XXY for "Yellowland", as well as for major commands such as XXE for SHAPE or XXS for SACLANT. X currencies defined in ISO 4217. 
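The "X" codes mentioned above belong to ISO 3166-1's user-assigned range for alpha-2 codes (AA, QM–QZ, XA–XZ, and ZZ), which will never be assigned officially. A short sketch of checking whether a code falls in that range:

```python
# Check whether an ISO 3166-1 alpha-2 code is in the user-assigned range.
# The range (AA, QM-QZ, XA-XZ, ZZ) is defined by the standard itself.

def is_user_assigned(alpha2):
    code = alpha2.upper()
    if len(code) != 2 or not code.isalpha():
        raise ValueError("expected a two-letter code")
    return (code in ("AA", "ZZ")
            or code.startswith("X")                      # XA through XZ
            or (code[0] == "Q" and "M" <= code[1] <= "Z"))  # QM through QZ

print(is_user_assigned("XK"))  # True  (used unofficially, e.g. for Kosovo)
print(is_user_assigned("FR"))  # False (officially assigned to France)
```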
Current country codes See also International Organization for Standardization ISO 3166 ISO 3166-1 ISO 3166-2 ISO 3166-3 List of ISO 3166 country codes Country code International vehicle registration code Lists of countries and territories Sovereign state List of sovereign states List of states with limited recognition Dependent territory United Nations Member states of the United Nations United Nations list of Non-Self-Governing Territories References External links ISO 3166 Maintenance Agency, International
|
names of countries and their subdivisions – Part 1: Country codes, defines codes for the names of countries, dependent territories, and special areas of geographical interest. It defines three sets of country codes: ISO 3166-1 alpha-2 – two-letter country codes which are the most widely used of the three, and used most prominently for the Internet's country code top-level domains (with a few exceptions). ISO 3166-1 alpha-3 – three-letter country codes which allow a better visual association between the codes and the country names than the alpha-2 codes. ISO 3166-1 numeric – three-digit country codes which are identical to those developed and maintained by the United Nations Statistics Division, with the advantage of script (writing system) independence, and hence useful for people or systems using non-Latin scripts. ISO 3166-2, Codes for the representation of names of countries and their subdivisions – Part 2: Country subdivision code, defines codes for the names of the principal subdivisions (e.g., provinces, states, departments, regions) of all countries coded in ISO 3166-1. ISO 3166-3, Codes for the representation of names of countries and their subdivisions – Part 3: Code for formerly used names of countries, defines codes for country names which have been deleted from ISO 3166-1 since its first publication in 1974. Editions The first edition of ISO 3166, which included only alphabetic country codes, was published in 1974. The second edition, published in 1981, also included numeric country codes, with the third and fourth editions published in 1988 and 1993 respectively. The fifth edition, published between 1997 and 1999, was expanded into three parts to include codes for subdivisions
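A few official assignments illustrate how the three code sets described above run in parallel:

```python
# Official ISO 3166-1 assignments for a handful of countries, showing the
# parallel alpha-2, alpha-3, and numeric code sets.

COUNTRIES = {
    "France":        ("FR", "FRA", "250"),
    "Germany":       ("DE", "DEU", "276"),
    "Japan":         ("JP", "JPN", "392"),
    "United States": ("US", "USA", "840"),
}

# The numeric code is script-independent: "392" identifies Japan whether
# the surrounding text uses Latin, Cyrillic, or Japanese script.
for name, (a2, a3, num) in COUNTRIES.items():
    print(f"{name}: alpha-2={a2} alpha-3={a3} numeric={num}")
```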
|
However, the clinical literature is very clear that patients whose basal insulin requirements do not vary much throughout the day, or who do not require dosage precision finer than 0.5 IU, are much less likely to realize a significant advantage from pump therapy. Another perceived advantage of pumps is freedom from syringes and injections; however, infusion sets still require occasional needle insertions to place the cannula into the subcutaneous tissue. Intensive/flexible insulin therapy requires frequent blood glucose checking. To achieve the best balance of blood sugar with either intensive/flexible method, a patient must check his or her blood glucose level with a meter several times a day. This allows optimization of the basal insulin and meal coverage as well as correction of high glucose episodes. Advantages and disadvantages The two primary advantages of intensive/flexible therapy over more traditional two- or three-injection regimens are: greater flexibility of meal times, carbohydrate quantities, and physical activities, and better glycemic control to reduce the incidence and severity of the complications of diabetes. Major disadvantages of intensive/flexible therapy are that it requires greater amounts of education and effort to achieve the goals, and it increases the daily cost of monitoring glucose four or more times a day. This cost can substantially increase when the therapy is implemented with an insulin pump and/or continuous glucose monitor. It is a common notion that more frequent hypoglycemia is a disadvantage of intensive/flexible regimens. The frequency of hypoglycemia increases with increasing effort to achieve normal blood glucoses with most insulin regimens, but hypoglycemia can be minimized with appropriate glucose targets and control strategies. 
The difficulties lie in remembering to test, estimating meal size, taking the meal bolus and eating within the prescribed time, and being aware of snacks and meals that are not the expected size. When implemented correctly, flexible regimens offer greater ability to achieve good glycemic control with easier accommodation to variations of eating and physical activity. Semantics of changing care: why "flexible" is replacing "intensive" therapy Over the last two decades, the evidence that better glycemic control (i.e., keeping blood glucose and HbA1c levels as close to normal as possible) reduces the rates of many complications of diabetes has become overwhelming. As a result, diabetes specialists have expended increasing effort to help most people with diabetes achieve blood glucose levels as close to normal as achievable. It takes about the same amount of effort to achieve good glycemic control with a traditional two or three injection regimen as it does with flexible therapy: frequent glucose monitoring, attention to timing and amounts of meals. Many diabetes specialists no longer think of flexible insulin therapy
|
is central to the development of complications of diabetes. This evidence convinced most physicians who specialize in diabetes care that an important goal of treatment is to make the biochemical profile of the diabetic patient (blood lipids, HbA1c, etc.) as close to the values of non-diabetic people as possible. This is especially true for young patients with many decades of life ahead. General description A working pancreas continually secretes small amounts of insulin into the blood to maintain normal glucose levels, which would otherwise rise from glucose release by the liver, especially during the early morning dawn phenomenon. This insulin is referred to as basal insulin secretion, and constitutes almost half the insulin produced by the normal pancreas. Bolus insulin is produced during the digestion of meals. Insulin levels rise immediately as we begin to eat, remaining higher than the basal rate for 1 to 4 hours. This meal-associated (prandial) insulin production is roughly proportional to the amount of carbohydrate in the meal. Intensive or flexible therapy involves supplying a continual supply of insulin to serve as the basal insulin, supplying meal insulin in doses proportional to the nutritional load of the meals, and supplying extra insulin when needed to correct high glucose levels. These three components of the insulin regimen are commonly referred to as basal insulin, bolus insulin, and high glucose correction insulin. Two common regimens: pens, injection ports, and pumps One method of intensive insulinotherapy is based on multiple daily injections (sometimes referred to in medical literature as MDI). Meal insulin is supplied by injection of rapid-acting insulin before each meal in an amount proportional to the meal. Basal insulin is provided as a once- or twice-daily injection of a dose of a long-acting insulin. In an MDI regimen, long-acting insulins are preferred for basal use. 
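The three regimen components named above (basal, meal bolus, and high-glucose correction) can be sketched as simple arithmetic. All doses, ratios, and factors below are invented for illustration and are not dosing advice.

```python
# Hypothetical sketch of the three components of an intensive regimen.
# Every number here is invented for illustration only.

def daily_insulin(basal_units, meals_carbs_g, carb_ratio,
                  bg_mgdl, target_mgdl, correction_factor):
    """Total daily insulin = basal + per-meal boluses + one correction."""
    meal_boluses = sum(carbs / carb_ratio for carbs in meals_carbs_g)
    correction = max(0.0, (bg_mgdl - target_mgdl) / correction_factor)
    return basal_units + meal_boluses + correction

# 24 U basal, three meals, one correction for a reading of 160 mg/dL:
print(daily_insulin(24, [45, 60, 75], 15, 160, 120, 40))  # → 37.0
```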
An older insulin used for this purpose is ultralente, and beef ultralente in particular was considered for decades to be the gold standard of basal insulin. Long-acting insulin analogs such as insulin glargine (brand name Lantus, made by Sanofi-Aventis) and insulin detemir (brand name Levemir, made by Novo Nordisk) are also used, with insulin glargine used more than insulin detemir. Rapid-acting insulin analogs such as lispro (brand name Humalog, made by Eli Lilly and Company), aspart (brand name NovoLog/NovoRapid, made by Novo Nordisk), and glulisine (brand name Apidra, made by Sanofi-Aventis) are preferred by many clinicians over older regular insulin for meal coverage and high correction. Many people on MDI regimens carry insulin pens to inject their rapid-acting insulins instead of traditional syringes. Some people on an MDI regimen also use injection ports such as the I-port to minimize the number of daily skin punctures. The other method of intensive/flexible insulin therapy is an insulin pump. It is a small mechanical device about the size of a deck of cards. It contains a syringe-like reservoir with about three days' insulin supply. This is connected by thin, disposable, plastic tubing to a needle-like cannula inserted into the patient's skin and held in place by an adhesive patch. The infusion tubing and cannula must be removed and replaced every few days. An insulin pump can be programmed to infuse a steady amount of rapid-acting insulin under the skin. This steady infusion is termed the basal rate and is designed to supply the background insulin needs. Each
|
unchanged. In category theory, this statement is used as the definition of an inverse morphism. Considering function composition helps to understand the notation . Repeatedly composing a function with itself is called iteration. If is applied times, starting with the value , then this is written as ; so , etc. Since , composing and yields , "undoing" the effect of one application of . Notation While the notation might be misunderstood, certainly denotes the multiplicative inverse of and has nothing to do with the inverse function of . In keeping with the general notation, some English authors use expressions like to denote the inverse of the sine function applied to (actually a partial inverse; see below). Other authors feel that this may be confused with the notation for the multiplicative inverse of , which can be denoted as . To avoid any confusion, an inverse trigonometric function is often indicated by the prefix "arc" (for Latin ). For instance, the inverse of the sine function is typically called the arcsine function, written as . Similarly, the inverse of a hyperbolic function is indicated by the prefix "ar" (for Latin ). For instance, the inverse of the hyperbolic sine function is typically written as . Note that the expressions like can still be useful to distinguish the multivalued inverse from the partial inverse: . Other inverse special functions are sometimes prefixed with the prefix "inv", if the ambiguity of the notation should be avoided. Examples Squaring and square root functions The function given by is not injective because for all . Therefore, is not invertible. If the domain of the function is restricted to the nonnegative reals, that is, we take the function with the same rule as before, then the function is bijective and so, invertible. The inverse function here is called the (positive) square root function and is denoted by . 
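The squaring example can be checked numerically. In the sketch below the domain restriction to the nonnegative reals is enforced explicitly; the function name is ours.

```python
import math

# The squaring map is not injective on all of R: two inputs share one output.
assert (-3) ** 2 == 3 ** 2

# Restricted to the nonnegative reals it becomes a bijection onto [0, inf),
# and the (positive) square root is its inverse.
def square(x):
    if x < 0:
        raise ValueError("domain restricted to the nonnegative reals")
    return x * x

for x in [0.0, 1.5, 9.0]:
    assert math.isclose(math.sqrt(square(x)), x)
print("sqrt inverts squaring on the restricted domain")
```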
Standard inverse functions The following table shows several standard functions and their inverses: Formula for the inverse Many functions given by algebraic formulas possess a formula for their inverse. This is because the inverse of an invertible function has an explicit description as . This allows one to easily determine inverses of many functions that are given by algebraic formulas. For example, if is the function then to determine for a real number , one must find the unique real number such that . This equation can be solved: Thus the inverse function is given by the formula Sometimes, the inverse of a function cannot be expressed by a closed-form formula. For example, if is the function then is a bijection, and therefore possesses an inverse function . The formula for this inverse has an expression as an infinite sum: Properties Since a function is a special type of binary relation, many of the properties of an inverse function correspond to properties of converse relations. Uniqueness If an inverse function exists for a given function , then it is unique. This follows since the inverse function must be the converse relation, which is completely determined by . Symmetry There is a symmetry between a function and its inverse. Specifically, if is an invertible function with domain and codomain , then its inverse has domain and image , and the inverse of is the original function . In symbols, for functions and , and This statement is a consequence of the implication that for to be invertible it must be bijective. The involutory nature of the inverse can be concisely expressed by The inverse of a composition of functions is given by Notice that the order of and have been reversed; to undo followed by , we must first undo , and then undo . For example, let and let . Then the composition is the function that first multiplies by three and then adds five, To reverse this process, we must first subtract five, and then divide by three, This is the composition . 
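The multiply-by-three-then-add-five example above can be verified numerically, including the reversal of order in the inverse of a composition:

```python
import math

# Numeric check of the composition rule: the inverse of "first multiply by
# three, then add five" is "first subtract five, then divide by three" --
# the individual inverses are applied in the reversed order.

f = lambda x: 3 * x              # multiply by three
g = lambda x: x + 5              # add five
f_inv = lambda y: y / 3
g_inv = lambda y: y - 5

def compose(p, q):
    return lambda x: p(q(x))

gf = compose(g, f)               # x -> 3x + 5
gf_inv = compose(f_inv, g_inv)   # x -> (x - 5) / 3, i.e. f^-1 after g^-1

for x in [-2.0, 0.0, 7.0]:
    assert math.isclose(gf_inv(gf(x)), x, abs_tol=1e-12)
    assert math.isclose(gf(gf_inv(x)), x, abs_tol=1e-12)
print("inverse of the composition = composition of inverses, reversed")
```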
Self-inverses If is a set, then the identity function on is its own inverse: More generally, a function is equal to its own inverse, if and only if the composition is equal to . Such a function is called an involution. Graph of the inverse If is invertible, then the graph of the function is the same as the graph of the equation This is identical to the equation that defines the graph of , except that the roles of and have been reversed. Thus the graph of can be obtained from the graph of by switching the positions of the and axes. This is equivalent to reflecting the graph across the line . Inverses and derivatives The inverse function theorem states that a continuous function is invertible on its range (image) if and only if it is either strictly increasing or decreasing (with no local maxima or minima). For example, the function is invertible, since the derivative is always positive. If the function is differentiable on an interval and for each , then the inverse is differentiable on . If , the derivative of the inverse is given by the inverse function theorem, Using Leibniz's notation the formula above can be written as This result follows from the chain rule (see the article on inverse functions and differentiation). The inverse function theorem can be generalized to functions of several variables. Specifically, a differentiable multivariable function is invertible in a neighborhood of a point as long as the Jacobian matrix of at is invertible. In this case, the Jacobian of at is the matrix inverse of the Jacobian of at . Real-world examples Let be the function that converts a temperature in degrees Celsius to a temperature in degrees Fahrenheit, then its inverse function converts degrees Fahrenheit to degrees
Celsius, since Suppose assigns each child in a family its birth year. An inverse function would output which child was born in a given year. However, if the family has children born in the same year (for instance, twins or triplets, etc.) 
then the output cannot be known when the input is the common birth year. As well, if a year is given in which no child was born then a child cannot be named. But if each child was born in a separate year, and if we restrict attention to the three years in which a child was born, then we do have an inverse function. For example, Let be the function that leads to an percentage rise of some quantity, and be the function producing an percentage fall. Applied to $100 with = 10%, we find that applying the first function followed by the second does not restore the original value of $100, demonstrating the fact that, despite appearances, these two functions are not inverses of each other. The formula to calculate the pH of a solution is . In many cases we need to find the concentration of acid from a pH measurement. The inverse function is used. Generalizations Partial inverses Even if a function is not one-to-one, it may be possible to define a partial inverse of by restricting the domain. For example, the function is not one-to-one, since . However, the function becomes one-to-one if we restrict to the domain , in which case (If we instead restrict to the domain , then the inverse is the negative of the square root of .) Alternatively, there is no need to restrict the domain if we are content with the inverse being a multivalued function: Sometimes, this multivalued inverse is called the full inverse of , and the portions (such as and −) are called branches. The most important branch of a multivalued function (e.g. the positive square root) is called the principal branch, and its value at is called the principal value of . For a continuous function on the real line, one branch is required between each pair of local extrema. For example, the inverse of a cubic function with a local maximum and a local minimum has three branches (see the adjacent picture). These considerations are particularly important for defining the inverses of trigonometric functions. 
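Two of the examples above can be verified numerically. The percentage figures follow the text ($100, 10%); the pH relation pH = -log10[H+] and its inverse [H+] = 10^(-pH) are the standard definitions. Function names below are ours.

```python
import math

# A percentage rise followed by the same percentage fall does not restore
# the original amount, so the two operations are not inverses.

def rise(amount, pct):
    return amount * (1 + pct / 100)

def fall(amount, pct):
    return amount * (1 - pct / 100)

after = fall(rise(100, 10), 10)
print(round(after, 2))   # 99.0, not 100.0

# pH and its inverse: pH = -log10[H+], so [H+] = 10**(-pH).
def pH(h_conc):
    return -math.log10(h_conc)

def h_from_pH(ph):
    return 10 ** (-ph)

assert math.isclose(h_from_pH(pH(1e-7)), 1e-7)   # recovering [H+] from pH
```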
For example, the sine function is not one-to-one, since for every real (and more generally for every integer ). However, the sine is one-to-one on the interval , and the corresponding partial inverse is called the arcsine. This is considered the principal branch of the inverse sine, so the principal value of the inverse sine is always between − and . The following table describes the principal branch of each inverse trigonometric function: Left and right inverses Left and right inverses are not necessarily the same. If is a left inverse for , then may or may not be a right inverse for ; and if is a right inverse for , then is not necessarily a left inverse for . For example, let denote the squaring map, such that for all in , and let denote the square root map, such that for all . Then for all in ; that is, is a right inverse to . However, is not a left inverse to , since, e.g., . Left inverses If , a left inverse for (or retraction of ) is a function such that composing with from the left gives the identity function: That is, the function satisfies the rule If , then Thus, must equal the inverse of on the image of , but may take any values for elements of not in the image. A function is injective if and only if it has a left inverse or is the empty function. If is the left inverse of , then is injective. If , then . If is injective, either is the empty function () or has a left inverse (, which can be constructed as follows: for all , if is in the image of (there exists such that ), let ( is unique because is injective); otherwise, let be an arbitrary element of . For all , is in the image of , so by above, so is a left inverse of . In classical mathematics, every injective function with a nonempty domain necessarily has a left inverse; however, this may fail in constructive mathematics. For instance, a left inverse of the inclusion of the two-element set in the reals
been; if placed in movement towards the west (for example), it will maintain itself in that movement." This notion, which is termed "circular inertia" or "horizontal circular inertia" by historians of science, is a precursor to, but distinct from, Newton's notion of rectilinear inertia. For Galileo, a motion is "horizontal" if it does not carry the moving body towards or away from the center of the earth, and for him, "a ship, for instance, having once received some impetus through the tranquil sea, would move continually around our globe without ever stopping." Galileo later (in 1632) concluded that, based on this initial premise of inertia, it is impossible to tell the difference between a moving object and a stationary one without some outside reference to compare it against. This observation ultimately became the basis for Albert Einstein's development of the theory of special relativity. The first physicist to completely break away from the Aristotelian model of motion was Isaac Beeckman in 1614. Concepts of inertia in Galileo's writings would later be refined, modified, and codified by Isaac Newton as the first of his Laws of Motion (first published in Newton's work Philosophiae Naturalis Principia Mathematica in 1687): Every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed thereon. Since initial publication, Newton's Laws of Motion (and by inclusion, this first law) have come to form the basis for the branch of physics known as classical mechanics. The term "inertia" was first introduced by Johannes Kepler in his Epitome Astronomiae Copernicanae (published in three parts from 1617 to 1621); however, the meaning of Kepler's term (which he derived from the Latin word for "idleness" or "laziness") was not quite the same as its modern interpretation.
Kepler defined inertia only in terms of a resistance to movement, once again based on the presumption that rest was a natural state which did not need explanation. It was not until the later work of Galileo and Newton, which unified rest and motion in one principle, that the term "inertia" could be applied to these concepts as it is today. Nevertheless, despite defining the concept so elegantly in his laws of motion, even Newton did not actually use the term "inertia" to refer to his First Law. In fact, Newton originally viewed the phenomenon he described in his First Law of Motion as being caused by "innate forces" inherent in matter, which resisted any acceleration. Given this perspective, and borrowing from Kepler, Newton used the term "inertia" to mean "the innate force possessed by an object which resists changes in motion"; thus, Newton defined "inertia" to mean the cause of the phenomenon, rather than the phenomenon itself. However, Newton's original ideas of an "innate resistive force" were ultimately problematic for a variety of reasons, and thus most physicists no longer think in these terms. As no alternative mechanism has been readily accepted, and it is now generally understood that there may not be one we can know, the term "inertia" has come to mean simply the phenomenon itself, rather than any inherent mechanism. Thus, ultimately, "inertia" in modern classical
surface of the Earth, inertia is often masked by gravity and the effects of friction and air resistance, both of which tend to decrease the speed of moving objects (commonly to the point of rest). This misled the philosopher Aristotle into believing that objects would move only as long as force was applied to them. The principle of inertia is one of the fundamental principles in classical physics that are still used today to describe the motion of objects and how they are affected by the forces applied to them. History and development of the concept Early understanding of motion The sinologist Joseph Needham credits The Mozi (a Chinese text from the Warring States period, 475–221 BCE) with the first description of inertia. Before the European Renaissance, the prevailing theory of motion in western philosophy was that of Aristotle (384–322 BCE). Aristotle said that all moving objects (on Earth) eventually come to rest unless an external power (force) continued to move them. Aristotle explained the continued motion of projectiles, after being separated from their projector, as an (in itself unexplained) action of the surrounding medium continuing to move the projectile. Despite its general acceptance, Aristotle's concept of motion was disputed on several occasions by notable philosophers over nearly two millennia. For example, Lucretius (following, presumably, Epicurus) stated that the "default state" of matter was motion, not stasis. In the 6th century, John Philoponus criticized the inconsistency between Aristotle's discussion of projectiles, where the medium keeps projectiles going, and his discussion of the void, where the medium would hinder a body's motion. Philoponus proposed that motion was not maintained by the action of a surrounding medium, but by some property imparted to the object when it was set in motion.
Although this was not the modern concept of inertia, for there was still the need for a power to keep a body in motion, it proved a fundamental step in that direction. This view was strongly opposed by Averroes and by many scholastic philosophers who supported Aristotle. However, this view did not go unchallenged in the Islamic world, where Philoponus had several supporters who further developed his ideas. In the 11th century, Persian polymath Ibn Sina (Avicenna) claimed that a projectile in a vacuum would not stop unless acted upon. Theory of impetus In the 14th century, Jean Buridan rejected the notion that a motion-generating property, which he named impetus, dissipated spontaneously. Buridan's position was that a moving object would be arrested by the resistance of the air and the weight of the body which would oppose its impetus. Buridan also maintained that impetus increased with speed; thus, his initial idea of impetus was similar in many ways to the modern concept of momentum. Despite the obvious similarities to more modern ideas of inertia, Buridan saw his theory as only a modification to Aristotle's basic philosophy, maintaining many other peripatetic views, including the belief that there was still a fundamental difference between an object in motion and an object at rest. Buridan also believed that impetus could be not only linear but also circular in nature, causing objects (such as celestial bodies) to move in a circle. Buridan's thought was followed up by his pupil Albert of Saxony (1316–1390) and the Oxford Calculators, who performed various experiments which further undermined the Aristotelian model. Their work in turn was elaborated by Nicole Oresme who pioneered the practice of illustrating the laws of motion with graphs. 
Shortly before Galileo's theory of inertia, Giambattista Benedetti modified the growing theory of impetus to involve linear motion alone: Benedetti cites the motion of a rock in a sling as an example of the inherent linear motion of objects, forced into circular motion. Classical inertia According to historian of science Charles Coulston Gillispie, inertia "entered science as a physical consequence of Descartes'
finishes. Guitars Sub-brands Ibanez J. Custom The J. Custom series are the most exclusive and high-end custom shop guitars Ibanez offers. They are "Envisioned to be the finest Japanese-made guitar in history". Built by some of the most skilled luthiers Ibanez has to offer, they "represent every advance in design and technology Ibanez has developed over the last 20 years". They feature aftermarket pickups (Seymour Duncan Jazz & Custom 5 in the 6-string model and DiMarzio PAF-7 pickups in the 7-string model), 5-piece maple/wenge necks with titanium reinforcement rods, an ebony fingerboard with a tree-of-life fretboard inlay, and Edge Zero tremolo systems. Ibanez Prestige The Prestige guitars are Ibanez's top-of-the-line models that are built in Japan. They feature higher-quality materials, high craftsmanship, and higher-quality bridges compared to other models. Ibanez Premium The Premium guitars are similar to other models but are built in Ibanez's Indonesian premium factory to premium quality standards. Ibanez Gio The Ibanez Gio are Ibanez's budget guitars, designed for high playability at low cost. Many high-end Ibanez guitars are recreated in the more affordable Gio form, such as the RGA and ART models. U.S.A. custom USA custom range, late 1980s to mid-1990s. Also known as Ibanez LACS (L.A. Custom Shop), it services only Ibanez's endorsed artists today. Solid body electric guitars Ibanez RG The main characteristics common to all Ibanez RG guitars (RG stands for Roadstar Guitar) are that they feature 24 frets and use thin necks, known as "Wizard" necks, which allow for faster playing. The RG features a line-up of guitars with both floating tremolo systems and fixed bridge systems. Ibanez RGA The Ibanez RGA was introduced at a time when the Ibanez RG series only had tremolo bridges. Since then, the RG series has introduced fixed bridge models, but Ibanez still produces the RGA series with an arched top to differentiate it from the RG series.
The arched top allows for added comfort while playing the guitar. Ibanez RGD The Ibanez RGD guitar was developed for heavy metal guitar players. The RGD features a 26.5" scale, which allows for lower-than-standard tuning while retaining standard string tension without the use of thicker-gauge strings. It also features an extra-deep scoop cut on the lower horn for easy high-fret access. Ibanez currently makes two Ibanez RGD Prestige models. Ibanez S The Ibanez S (Saber) guitar has an extremely thin body made of mahogany, and is available in 6-, 7- and 8-string models. They may come with either 22 or 24 frets, depending on the year of manufacture. The standard line currently has Wizard III necks that are slightly wider and thicker than the original Wizard. All S models have bodies that are thicker in the middle where the pickups are, and taper off towards the outer edges. The guitars use ZR (Zero Resistance), Lo-TRS, and variants of the Edge bridge system as well as fixed bridges. Ibanez currently makes 8 Prestige S-Series guitars. Ibanez DN The Ibanez DN guitar (DN stands for Darkstone) was developed for heavy metal guitar players. The main features of the DN are a set-in neck for speed and playing comfort, medium frets, and coil-tapped pickups. This guitar is currently discontinued. Ibanez X The Ibanez X guitars are Ibanez guitars that feature unconventional and unique body designs. An example is the Ibanez Xiphos, which is stylized to look like the letter X. As of 2013, variations included the Halberd XH300, the Glaive XG300, and the Mick Thomson Signature MTM100 and MTM10. Ibanez Artist (AR) The Ibanez Artist guitars were designed for heavy playing such as heavy metal or traditional rock. The Artist ARZ is a single-cutaway, 24-fret, 25" scale guitar that features a wide variety of bridges and pickups depending on the specific model.
The Artist ART is a single-cutaway, 22-fret, 24.75" scale guitar that features a hardtail bridge. The Ibanez AR is a reissued series originating from the 1970s. The AR series features a set-in neck and a double cutaway, with 22 frets on a 24.75" scale. Ibanez FR The Ibanez FR is a simple body type guitar that is designed to be played in many genres. Ibanez Mikro The Ibanez Mikro series are small-form-factor guitars designed for children, beginners, or guitar players looking for a guitar that is easy to transport. Hollow body electric guitars Ibanez Artcore series The first Ibanez Artcore models were released in mid-2002; their goal was to offer an affordable range of full-hollow and semi-hollow body guitars that appealed to entry-level guitarists who were unable or unwilling to spend big money on high-priced guitars. Ibanez Artcore Custom The Artcore Custom is Ibanez's flagship model for the Artcore series. The bodies of the guitars are made of maple, the neck has a set-in construction, and the line features wood control knobs and hand-rolled frets. Ibanez AK The Ibanez AK is a guitar designed for jazz and blues playing. It features a slim set-in neck with a body designed for easy access to the higher frets. The AK is easily distinguishable by its sharper lower body horn (a Florentine cutaway) that other Artcore guitars do not have. Production signature guitars JEM, Universe and Pia Series – Steve Vai Signature JS – Joe Satriani Signature PGM – Paul Gilbert Signature APEX – Munky Signature E-Gen – Herman Li Signature NDM4 – Noodles Signature PWM – Paul Waggoner Signature KIKO – Kiko Loureiro Signature STM2 – Sam Totman Signature ORM – Omar Rodriguez Signature MBM – Matt Bachand Signature HRG – H. R.
Giger Signature GB – George Benson Signature K7 – Head and Munky Signature PM – Pat Metheny Signature PS10 – Paul Stanley Signature JSM – John Scofield Signature AT – Andy Timmons Signature TAM – Tosin Abasi Signature RBM – Reb Beach Signature JBM – Jake Bowen Signature BBM – Ben Bruce Signature JIVA – Nita Strauss Signature THBB – Tim Henson Signature SLM – Scott LePage Signature MAR – Mario Camarena Signature EH – Erick Hansel Signature YY – Yvette Young Signature M8M – Mårten Hagström Signature FTM – Fredrik Thordendal Signature ICHI – Ichika Nito Signature LB – Lari Basilio Signature Discontinued guitars Ibanez R series, also known as the Radius series, are famous for having lightweight aerofoil-profiled basswood bodies. The main endorser was Joe Satriani before he was given his own Signature JS series. The Radius series is now discontinued. RT series – Superstrat design with 24 frets. Discontinued in 1994. RX series – Superstrat design but with 22 frets instead. Discontinued in 1998, and currently only exists as the GRX (budget model of the RX series). Axstar (a.k.a. Axstar by Ibanez) – discontinued EDR/EXR – Ergodyne series – discontinued MC – Musician series – discontinued – neck-through construction (except for the MC-100, which has a bolt-on neck), with 24 frets (two octaves) – as with the Artist models of the late 1970s, some of these guitars were equipped with trisound switches, and some models (MC 400 and MC 500) were equipped with active electronics. ST – Studio series, 1977–82, offset double cutaway ranging from bolt-on to fixed and through necks with pairs of V2 distortion humbuckers. 24 frets and 25.5" scale. CN – Concert range, 1977–79, like a bolt-on-neck Artist with slightly offset cutaways. IC – Iceman, a radical shape endorsed and used by Paul Stanley, with various pickup combinations.
SB70 – Studio & Blazer spot build: Mixing Studio series double cutaway, ash bodies with Blazer series 21 fret bolt on maple necks, and sporting a fixed brass bridge, 2 Super 70 Humbuckers, 1 vol, 2 tone knobs, a pickup selector switch, and a phase mini-toggle switch (which gives a unique strat-like quack sound), an estimated 300-400 of these were assembled, mostly in 1982. A cult following has emerged, as these guitars are rare, and sell for 3x-4x their original price. Learn more at The Unofficial SB70 Registry: https://www.ibanezcollectors.com/forum/index.php?topic=20623.0 BL – Blazer series 1980–82 – fixed bridge strat-like with maple necks and mahogany or ash bodies sporting 3 single coil pickups (Super 6 or BL) or 2 Super 70 humbuckers. ARC-100/300 (Retro Series) ARX-100/300 (Retro Series) AR-100/200 (black vintage top) V Series – Flying V's – discontinued Ibanez Artcore Series – Ibanez's full and semi-hollow guitar line, with some models discontinued since their debut in 2002. Ibanez Jet King 2 and Jet King 1 – A modern remake of the Ibanez Rhythm maker, vintage looking and sounding guitars. Radius series – discontinued, a modified version is now taken over by the Joe Satriani signature series which features a multi-radius neck. EX Series – Manufactured in Korea and Japan (rare). PL – Pro Line series
in any society. In Norse mythology, there are themes of brother-sister marriage, a prominent example being between Njörðr and his unnamed sister (perhaps Nerthus), parents of Freyja and Freyr. Loki in turn also accuses Freyja and Freyr of having a sexual relationship. Biblical references The earliest Biblical reference to incest involved Cain. It was cited that he knew his wife and she conceived and bore Enoch. Since the only women recorded in this period were Eve and possibly an unnamed sister, this would mean that Cain had an incestuous relationship with either his mother or his sister. According to the Book of Jubilees, Cain married his sister Awan. Later, in Genesis 20 of the Hebrew Bible, the Patriarch Abraham married his half-sister Sarah. Other references include the passage in Samuel where Amnon, King David's son, raped his half-sister, Tamar. According to Michael D. Coogan, it would have been perfectly all right for Amnon to have married her, the Bible being inconsistent about prohibiting incest. In Genesis 19:30–38, living in an isolated area after the destruction of Sodom and Gomorrah, Lot's two daughters conspired to inebriate and rape their father due to the lack of available partners to continue his line of descent. Because of intoxication, Lot "perceived not" when his firstborn, and the following night his younger daughter, lay with him. Moses was also born to an incestuous marriage. Exodus 6 details how his father Amram was the nephew of his mother Jochebed. An account noted that the incestuous relations did not suffer the fate of childlessness, which was the punishment for such couples in levitical law. It stated, however, that the incest exposed Moses "to the peril of wild beasts, of the weather, of the water, and more." From the Middle Ages onward Many European monarchs were related due to political marriages, sometimes resulting in distant cousins – and even first cousins – being married.
This was especially true in the Habsburg, Hohenzollern, Savoy and Bourbon royal houses. However, relations between siblings, which may have been tolerated in other cultures, were considered abhorrent. For example, the accusation that Anne Boleyn and her brother George Boleyn had committed incest was one of the reasons that both siblings were executed in May 1536. Incestuous marriages were also seen in the royal houses of ancient Japan and Korea, Inca Peru, Ancient Hawaii, and, at times, Central Africa, Mexico, and Thailand. Like the kings of ancient Egypt, the Inca rulers married their sisters. Huayna Capac, for instance, was the son of Topa Inca Yupanqui and the Inca's sister and wife. The ruling Inca king was expected to marry his full sister. If he had no children by his eldest sister, he married the second and third until they had children. Preservation of the purity of the Sun's blood was one of the reasons for the brother-sister marriage of the Inca king. The Inca kings claimed divine descent from celestial bodies, and emulated the behavior of their celestial ancestor, the Sun, who married his sister, the Moon. Another reason the princes and kings married their sisters was so the heir might inherit the kingdom as much as through his mother as through his father. Therefore, the prince could invoke both principles of inheritance. Half-sibling marriages were found in ancient Japan such as the marriage of Emperor Bidatsu and his half-sister Empress Suiko. Japanese Prince Kinashi no Karu had sexual relationships with his full sister Princess Karu no Ōiratsume, although the action was regarded as foolish. In order to prevent the influence of the other families, a half-sister of Korean Goryeo Dynasty monarch Gwangjong became his wife in the 10th century. Her name was Daemok. Marriage with a family member not related by blood was also regarded as contravening morality and was therefore incest. 
One example of this is the 14th-century Chunghye of Goryeo, who raped one of his deceased father's concubines, who was thus regarded to be his mother. In India, the largest proportion of women aged 13 to 49 who marry a close relative is found in Tamil Nadu, followed by Andhra Pradesh, Karnataka, and Maharashtra. While uncle–niece marriages are rare overall, they are more common in Andhra Pradesh and Tamil Nadu. Others In some Southeast Asian cultures, stories of incest being common among certain ethnicities are sometimes told as expressions of contempt for those ethnicities. Marriages between younger brothers and their older sisters were common among the early Udegei people. In the Hawaiian Islands, high ali'i chiefs were obligated to marry their older sisters in order to increase their mana. These copulations were thought to maintain the purity of the royal blood. Another reason for these familial unions was to maintain a limited size of the ruling ali'i group. As per the priestly regulations of Kanalu, put in place after multiple disasters, "chiefs must increase their numbers and this can be done if a brother marries his older sister." Prevalence and statistics Incest between an adult and a person under the age of consent is considered a form of child sexual abuse that has been shown to be one of the most extreme forms of childhood abuse; it often results in serious and long-term psychological trauma, especially in the case of parental incest. Its prevalence is difficult to generalize, but research has estimated that 10–15% of the general population has had at least one such sexual contact, with less than 2% involving intercourse or attempted intercourse. Among women, research has yielded estimates as high as 20%. Father–daughter incest was for many years the most commonly reported and studied form of incest.
|
taboo against incest in ancient Rome is demonstrated by the fact that politicians would use charges of incest (often false charges) as insults and means of political disenfranchisement. However, scholars agree that during the first two centuries A.D., in Roman Egypt, full sibling marriage occurred with some frequency among commoners, with both Egyptians and Romans announcing weddings between full siblings. This is the only evidence for brother-sister marriage among commoners in any society. In Norse mythology, there are themes of brother-sister marriage, a prominent example being between Njörðr and his unnamed sister (perhaps Nerthus), parents of Freyja and Freyr. Loki in turn also accuses Freyja and Freyr of having a sexual relationship. Biblical references The earliest Biblical reference to incest involved Cain, who is said to have known his wife, who then conceived and bore Enoch. As the only women recorded at that time were Eve and, possibly, an unnamed sister, Cain's wife must have been either his mother or his sister. According to the Book of Jubilees, Cain married his sister Awan. Later, in Genesis 20 of the Hebrew Bible, the Patriarch Abraham married his half-sister Sarah. Other references include the passage in Samuel where Amnon, King David's son, raped his half-sister, Tamar. According to Michael D. Coogan, it would have been perfectly all right for Amnon to have married her, the Bible being inconsistent about prohibiting incest. In Genesis 19:30-38, living in an isolated area after the destruction of Sodom and Gomorrah, Lot's two daughters conspired to inebriate and rape their father due to the lack of available partners to continue his line of descent. Because of intoxication, Lot "perceived not" when his firstborn, and the following night his younger daughter, lay with him. Moses was also born of an incestuous marriage: Exodus 6 details how his father Amram was the nephew of his mother Jochebed.
An account noted that this incestuous couple did not suffer the fate of childlessness, which was the punishment for such couples in levitical law. It stated, however, that the incest exposed Moses "to the peril of wild beasts, of the weather, of the water, and more." From the Middle Ages onward Many European monarchs were related due to political marriages, sometimes resulting in distant cousins – and even first cousins – being married. This was especially true in the Habsburg, Hohenzollern, Savoy and Bourbon royal houses. However, relations between siblings, which may have been tolerated in other cultures, were considered abhorrent. For example, the accusation that Anne Boleyn and her brother George Boleyn had committed incest was one of the reasons that both siblings were executed in May 1536. Incestuous marriages were also seen in the royal houses of ancient Japan and Korea, Inca Peru, Ancient Hawaii, and, at times, Central Africa, Mexico, and Thailand. Like the kings of ancient Egypt, the Inca rulers married their sisters. Huayna Capac, for instance, was the son of Topa Inca Yupanqui and the Inca's sister and wife. The ruling Inca king was expected to marry his full sister. If he had no children by his eldest sister, he married the second and third until they had children. Preservation of the purity of the Sun's blood was one of the reasons for the brother-sister marriage of the Inca king. The Inca kings claimed divine descent from celestial bodies, and emulated the behavior of their celestial ancestor, the Sun, who married his sister, the Moon. Another reason the princes and kings married their sisters was so the heir might inherit the kingdom as much through his mother as through his father; the prince could thus invoke both principles of inheritance. Half-sibling marriages were found in ancient Japan, such as the marriage of Emperor Bidatsu and his half-sister Empress Suiko.
Japanese Prince Kinashi no Karu had a sexual relationship with his full sister Princess Karu no Ōiratsume, although the action was regarded as foolish. In the 10th century, in order to limit the influence of other families, the Korean Goryeo Dynasty monarch Gwangjong took his half-sister Daemok as his wife. Marriage with a family member not related by blood could also be regarded as contravening morality and therefore as incest. One example of this is the 14th-century Chunghye of Goryeo, who raped one of his deceased father's concubines, who was thus regarded as his mother. In India, the largest proportion of women aged 13 to 49 who marry a close relative is found in Tamil Nadu, followed by Andhra Pradesh, Karnataka, and Maharashtra. Uncle-niece marriages are rare overall but are more common in Andhra Pradesh and Tamil Nadu. Others In some Southeast Asian cultures, stories of incest being common among certain ethnicities are sometimes told as expressions of contempt for those ethnicities. Marriages between younger brothers and their older sisters were common among the early Udegei people. In the Hawaiian Islands, high ali'i chiefs were obligated to marry their older sisters in order to increase their mana. These unions were thought to maintain the purity of the royal blood. Another reason for these familial unions was to maintain a limited size of the ruling ali'i group. As per the priestly regulations of Kanalu, put in place after multiple disasters, "chiefs must increase their numbers and this can be done if a brother marries his older sister." Prevalence and statistics Incest between an adult and a person under the age of consent is considered a form of child sexual abuse that has been shown to be one of the most extreme forms of childhood abuse; it often results in serious and long-term psychological trauma, especially in the case of parental incest.
Its prevalence is difficult to generalize, but research has estimated 10–15% of the general population as having at least one such sexual contact, with less than 2% involving intercourse or attempted intercourse. Among women, research has yielded estimates as high as 20%. Father–daughter incest was for many years the most commonly reported and studied form of incest. More recently, studies have suggested that sibling incest, particularly older brothers having sexual relations with younger siblings, is the most common form of incest, with some studies finding sibling incest occurring more frequently than other forms of incest. Some studies suggest that adolescent perpetrators of sibling abuse choose younger victims, abuse victims over a lengthier period, use violence more frequently and severely than adult perpetrators, and that sibling abuse has a higher rate of penetrative acts than father or stepfather incest, with father and older brother incest resulting in greater reported distress than stepfather incest. Saudi Arabia, Pakistan, Sudan, Mauritania and Nigeria are some of the countries with the most incest through consanguineous marriage. Types Between adults and children Sex between an adult family member and a child is usually considered a form of child sexual abuse, also known as child incestuous abuse, and for many years has been the most reported form of incest. Father–daughter and stepfather–stepdaughter sex is the most commonly reported form of adult–child incest, with most of the remaining involving a mother or stepmother. Many studies found that stepfathers tend to be far more likely than biological fathers to engage in this form of incest. One study of adult women in San Francisco estimated that 17% of women were abused by stepfathers and 2% were abused by biological fathers. Father–son incest is reported less often, but it is not known how close the frequency is to heterosexual incest because it is likely more under-reported. 
Prevalence of incest between parents and their children is difficult to estimate due to secrecy and privacy. In a 1999 news story, BBC reported, "Close-knit family life in India masks an alarming amount of sexual abuse of children and teenage girls by family members, a new report suggests. Delhi organisation RAHI said 76% of respondents to its survey had been abused when they were children—40% of those by a family member." According to the National Center for Victims of Crime a large proportion of rape committed in the United States is perpetrated by a family member: A study of victims of father–daughter incest in the 1970s showed that there were "common features" within families before the occurrence of incest: estrangement between the mother and the daughter, extreme paternal dominance, and reassignment of some of the mother's traditional major family responsibility to the daughter. Oldest and only daughters were more likely to be the victims of incest. It was also stated that the incest experience was psychologically harmful to the woman in later life, frequently leading to feelings of low self-esteem, very unhealthy sexual activity, contempt for other women, and other emotional problems. Adults who as children were incestuously victimized by adults often suffer from low self-esteem, difficulties in interpersonal relationships, and sexual dysfunction, and are at an extremely high risk of many mental disorders, including depression, anxiety disorders, phobic avoidance reactions, somatoform disorder, substance abuse, borderline personality disorder, and complex post-traumatic stress disorder. The Goler clan in Nova Scotia is a specific instance in which child sexual abuse in the form of forced adult/child and sibling/sibling incest took place over at least three generations. A number of Goler children were victims of sexual abuse at the hands of fathers, mothers, uncles, aunts, sisters, brothers, cousins, and each other. 
During interrogation by police, several of the adults openly admitted to engaging in many forms of sexual activity, up to and including full intercourse, multiple times with the children. Sixteen adults (both men and women) were charged with hundreds of allegations of incest and sexual abuse of children as young as five. In July 2012, twelve children were removed from the 'Colt' family (a pseudonym) in New South Wales, Australia, after the discovery of four generations of incest. Child protection workers and psychologists said interviews with the children indicated "a virtual sexual free-for-all". In Japan, there is a popular misconception that mother-son incestuous contact is common, due to the manner in which it is depicted in the press and popular media. According to Hideo Tokuoka, "When Americans think of incest, they think of fathers and daughters; in Japan one thinks of mothers and sons" due to the extensive media coverage of mother-son incest there. Some western researchers assumed that mother-son incest is common in Japan, but research into victimization statistics from police and health-care systems discredits this; it shows that the vast majority of sexual abuse, including incest, in Japan is perpetrated by men against young girls. While incest between adults and children generally involves the adult as the perpetrator of abuse, there are rare instances of sons sexually assaulting their mothers. These sons are typically mid adolescent to young adult, and, unlike parent-initiated incest, the incidents involve some kind of physical force. Although the mothers may be accused of being seductive with their sons and inviting the sexual contact, this is contrary to evidence. Such accusations can parallel other forms of rape, where, due to victim blaming, a woman is accused of somehow being at fault for the rape. In some cases, mother-son incest is best classified as acquaintance rape of the mother by the adolescent son. 
Between children Childhood sibling–sibling incest is considered to be widespread but rarely reported. Sibling–sibling incest becomes child-on-child sexual abuse when it occurs without consent, without equality, or as a result of coercion. In this form, it is believed to be the most common form of intrafamilial abuse. The most commonly reported form of abusive sibling incest is abuse of a younger sibling by an older sibling. A 2006 study showed that a large portion of adults who experienced sibling incest abuse had "distorted" or "disturbed" beliefs (such as that the act was "normal"), both about their own experience and about the subject of sexual abuse in general. Abusive sibling incest is most prevalent in families where one or both parents are often absent or emotionally unavailable, with the abusive siblings using incest as a way to assert their power over a weaker sibling. Absence of the father in particular has been found to be a significant element of most cases of sexual abuse of female children by a brother. The damaging effects on both childhood development and adult symptoms resulting from brother–sister sexual abuse are similar to those of father–daughter abuse, including substance abuse, depression, suicidality, and eating disorders. Between adults Proponents of incest between consenting adults draw clear boundaries between the behavior of consenting adults on one hand and rape, child molestation, and abusive incest on the other. However, even consensual relationships such as these are still legally classified as incest, and criminalized in many jurisdictions (although there are certain exceptions). James Roffee, a senior lecturer in criminology at Monash University and former worker on legal responses to familial sexual activity in England and Wales, and Scotland, discussed how the European Convention on Human Rights deems all familial sexual acts to be criminal, even if all parties give their full consent and are knowledgeable of all possible consequences.
He also argues that the use of particular language tools in the legislation manipulates the reader to deem all familial sexual activities as immoral and criminal, even if all parties are consenting adults. In Slate, William Saletan drew a legal connection between gay sex and incest between consenting adults. As he described in his article, in 2003, U.S. Senator Rick Santorum commented on a pending U.S. Supreme Court case involving sodomy laws (primarily as a matter of constitutional rights to privacy and equal protection under the law): Saletan argued that, legally and morally, there is essentially no difference between the two, and went on to support incest between consenting adults being covered by a legal right to privacy. UCLA law professor Eugene Volokh has made similar arguments. In a more recent article, Saletan said that incest is wrong because it introduces the possibility of irreparably damaging family units by introducing "a notoriously incendiary dynamic—sexual tension—into the mix". Aunts, uncles, nieces or nephews In the Netherlands, marrying one's nephew or niece is legal, but only with the explicit permission of the Dutch Government, due to the possible risk of genetic defects among the offspring. Nephew-niece marriages predominantly occur among foreign immigrants. In November 2008, the Christian Democratic (CDA) party's Scientific Institute announced that it wanted a ban on marriages to nephews and nieces. Consensual sex between adults (persons of 18 years and older) is always lawful in the Netherlands and Belgium, even among closely related family members. Sexual acts between an adult family member and a minor are illegal, though they are not classified as incest, but as abuse of the authority such an adult has over a minor, comparable to that of a teacher, coach or priest. In Florida, consensual adult sexual intercourse with someone known to be your aunt, uncle, niece or nephew constitutes a felony of the third degree. 
Other states also commonly prohibit marriages between such kin. The legality of sex with a half-aunt or half-uncle varies state by state. In the United Kingdom, incest includes only sexual intercourse with a parent, grandparent, child or sibling, but the more recently introduced offence of "sex with an adult relative" extends also as far as half-siblings, uncles, aunts, nephews and nieces. However, the term 'incest' remains widely used in popular culture to describe any form of sexual activity with a relative. In Canada, marriage between uncles and nieces and between aunts and nephews is legal. Between adult siblings The most public case of consensual adult sibling incest in recent years is the case of a brother-sister couple from Germany, Patrick Stübing and Susan Karolewski. Because of violent behavior on the part of his father, Patrick was taken in at the age of 3 by foster parents, who adopted him later. At the age of 23 he learned about his biological parents, contacted his mother, and met her and his then 16-year-old sister Susan for the first time. The now-adult Patrick moved in with his birth family shortly thereafter. After their mother died suddenly six months later, the siblings became intimately close, and had their first child together in 2001. By 2004, they had four children together: Eric, Sarah, Nancy, and Sofia. The public nature of their relationship, and the repeated prosecutions and even jail time they have served as a result, has caused some in Germany to question whether incest between consenting adults should be punished at all. An article about them in Der Spiegel states that the couple are happy together. According to court records, the first three children have mental and physical disabilities, and have been placed in foster care. In April 2012, at the European Court of Human Rights, Patrick Stübing lost his case that the conviction violated his right to a private and family life. 
On September 24, 2014, the German Ethics Council recommended that the government abolish laws criminalizing incest between siblings, arguing that such bans impinge upon citizens' rights. Some societies differentiate between full sibling and half sibling relations. Cousin relationships Marriages and sexual relationships between first cousins are stigmatized as incest in some cultures, but tolerated in much of the world. Currently, 24 US states prohibit marriages between first cousins, and another seven permit them only under special circumstances. The United Kingdom permits both marriage and sexual relations between first cousins. In some non-Western societies, marriages between close biological relatives account for 20% to 60% of all marriages. First- and second-cousin marriages are rare in the West, accounting for less than 1% of marriages in Western Europe, North America and Oceania, but reach 9% in South America, East Asia and South Europe and about 50% in regions of the Middle East, North Africa and South Asia. Communities such as the Dhond and the Bhittani of Pakistan clearly prefer marriages between cousins, in the belief that they ensure purity of the descent line, provide intimate knowledge of the spouses, and ensure that patrimony will not pass into the hands of "outsiders". Cross-cousin marriages are preferred among the Yanomami of Brazilian Amazonia, among many other tribal societies identified by anthropologists. There are some cultures in Asia which stigmatize cousin marriage, in some instances even marriages between second cousins or more remotely related people. This is notably true in the culture of Korea. In South Korea, before 1997, anyone with the same last name and clan was prohibited from marrying. After this law was held unconstitutional, South Korea now prohibits only marriages up to third cousins (see Article 809 of the Korean Civil Code).
Hmong culture prohibits the marriage of anyone with the same last name – to do so would result in being shunned by the entire community, and the couple are usually stripped of their last name. Some Hindu communities in India prohibit cousin marriages. In a review of 48 studies of children of cousin couples, the rate of birth defects was twice that for children of unrelated couples: 4% for cousin couples as opposed to 2% for the general population. Defined through marriage Some cultures include relatives by marriage in incest prohibitions; these relationships are called affinity rather than consanguinity. For example, the question of the legality and morality of a widower who wished to marry his deceased wife's sister was the subject of long and fierce debate in the United Kingdom in the 19th century, involving, among others, Matthew Boulton and Charles La Trobe. The marriages were entered into in Scotland and Switzerland respectively, where they were legal. In medieval Europe, standing as a godparent to a child also created a bond of affinity. But in other societies, a deceased spouse's sibling was considered the ideal person to marry. The Hebrew Bible forbids a man from marrying his brother's widow, with the exception that, if his brother died childless, the man is instead required to marry his brother's widow so as to "raise up seed to him". Some societies have long practiced sororal polygyny, a form of polygamy in which a man marries multiple wives who are sisters to each other (though not closely related to him). In Islamic law, marriage among close blood relations like parents, stepparents, parents-in-law, siblings, stepsiblings, the children of siblings, aunts and uncles is forbidden, while first or second cousins may marry. Marrying the widow of a brother, or the sister of a deceased or divorced wife, is also allowed. Inbreeding Offspring of biologically related parents are subject to the possible impact of inbreeding.
Such offspring have a higher possibility of congenital birth defects (see Coefficient of relationship) because inbreeding increases the proportion of zygotes that are homozygous for deleterious recessive alleles that produce such disorders (see Inbreeding depression). Because most such alleles are rare in populations, it is unlikely that two unrelated marriage partners will both be heterozygous carriers. However, because close relatives share a large fraction of their alleles, the probability that any such rare deleterious allele present in the common ancestor will be inherited from both related parents is increased dramatically with respect to non-inbred couples. Contrary to common belief, inbreeding does not in itself alter allele frequencies, but rather increases the relative proportion of homozygotes to heterozygotes. This has two contrary effects. In the short term, because incestuous reproduction increases homozygosity, deleterious recessive alleles will express themselves more frequently, leading to increases in spontaneous abortions of zygotes, perinatal deaths, and postnatal offspring with birth defects. In the long run, however, because of this increased exposure of deleterious recessive alleles to natural selection, their frequency decreases more rapidly in an inbred population, leading to a "healthier" population (with fewer deleterious recessive alleles). The closer two persons are related, the higher the homozygosity, and thus the more severe the biological costs of inbreeding. This fact likely explains why inbreeding between close relatives, such as siblings, is less common than inbreeding between cousins. There may also be other deleterious effects besides those caused by recessive diseases. Thus, similar immune systems may be more vulnerable to
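The effect of inbreeding on homozygosity described above can be made concrete with a short calculation. This is an illustrative sketch, not from the source: it uses the standard population-genetics relation P(homozygous) = q²(1 − F) + qF, where q is the allele frequency and F is the inbreeding coefficient; the allele frequency chosen is hypothetical.

```python
def homozygote_prob(q: float, F: float) -> float:
    # Probability that an offspring is homozygous for a recessive allele
    # of frequency q, given the parents' inbreeding coefficient F:
    #   q^2 * (1 - F) + q * F
    return q * q * (1 - F) + q * F

q = 0.005  # hypothetical frequency of a rare deleterious recessive allele

p_unrelated = homozygote_prob(q, 0.0)     # unrelated parents: F = 0
p_cousins = homozygote_prob(q, 1 / 16)    # first cousins:     F = 1/16
p_siblings = homozygote_prob(q, 1 / 4)    # full siblings:     F = 1/4

print(p_unrelated, p_cousins, p_siblings)
```

For a rare allele the qF term dominates, which is why the risk rises sharply with relatedness even though the allele frequency q itself is unchanged, matching the point that inbreeding shifts the homozygote/heterozygote proportions rather than the allele frequencies.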
|
in 1770. It was the first practical spinning frame with multiple spindles. The jenny worked in a similar manner to the spinning wheel, by first clamping down on the fibres, then by drawing them out, followed by twisting. It was a simple, wooden-framed machine that only cost about £6 for a 40-spindle model in 1792, and was used mainly by home spinners. The jenny produced a lightly twisted yarn only suitable for weft, not warp. The spinning frame or water frame was developed by Richard Arkwright who, along with two partners, patented it in 1769. The design was partly based on a spinning machine built for Thomas Highs by clockmaker John Kay, who was hired by Arkwright. For each spindle the water frame used a series of four pairs of rollers, each operating at a successively higher rotating speed, to draw out the fibre, which was then twisted by the spindle. The roller spacing was slightly longer than the fibre length. Too close a spacing caused the fibres to break, while too distant a spacing caused uneven thread. The top rollers were leather-covered, and loading on the rollers was applied by a weight; the weights kept the twist from backing up before the rollers. The bottom rollers were wood and metal, with fluting along the length. The water frame was able to produce a hard, medium-count thread suitable for warp, finally allowing 100% cotton cloth to be made in Britain. A horse powered the first factory to use the spinning frame. Arkwright and his partners used water power at a factory in Cromford, Derbyshire in 1771, giving the invention its name. Samuel Crompton's Spinning Mule was introduced in 1779. Mule implies a hybrid because it was a combination of the spinning jenny and the water frame: the spindles were placed on a carriage, which went through an operational sequence during which the rollers stopped while the carriage moved away from the drawing roller to finish drawing out the fibres as the spindles started rotating.
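The drawing action of the water frame's successive roller pairs can be sketched numerically: each pair runs faster than the one before it, and the attenuation (the "draft") of the fibre strand is the product of the stage speed ratios. The speeds below are hypothetical, chosen purely for illustration; the source gives no figures.

```python
# Hypothetical surface speeds (m/min) for four successive roller pairs.
surface_speeds = [10.0, 15.0, 25.0, 45.0]

# Draft at each stage = faster pair's surface speed / slower pair's.
stage_drafts = [fast / slow for slow, fast in zip(surface_speeds, surface_speeds[1:])]

# Total draft (how much the strand is thinned) is the product of the
# stage drafts, which telescopes to last speed / first speed.
total_draft = 1.0
for d in stage_drafts:
    total_draft *= d

print(stage_drafts, total_draft)
```

The telescoping product also shows why roller spacing mattered: each stage stretches the strand by its own modest ratio, so faults at any one nip (too close, too far) propagate into the final thread.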
Crompton's mule was able to produce finer thread than hand spinning, and at a lower cost. Mule-spun thread was of suitable strength to be used as warp, and finally allowed Britain to produce highly competitive yarn in large quantities. Realising that the expiration of the Arkwright patent would greatly increase the supply of spun cotton and lead to a shortage of weavers, Edmund Cartwright developed a vertical power loom which he patented in 1785. In 1786 he patented a two-man operated loom which was more conventional. Cartwright built two factories; the first burned down and the second was sabotaged by his workers. Cartwright's loom design had several flaws, the most serious being thread breakage. Samuel Horrocks patented a fairly successful loom in 1813. Horrocks's loom was improved by Richard Roberts in 1822, and these looms were produced in large numbers by Roberts, Hill & Co. The demand for cotton presented an opportunity to planters in the Southern United States, who thought upland cotton would be a profitable crop if a better way could be found to remove the seed. Eli Whitney responded to the challenge by inventing the inexpensive cotton gin. A man using a cotton gin could remove seed from as much upland cotton in one day as a woman could previously have processed in two months, working at the rate of one pound of cotton per day. These advances were capitalised on by entrepreneurs, of whom the best known is Richard Arkwright. He is credited with a list of inventions, but these were actually developed by such people as Thomas Highs and John Kay; Arkwright nurtured the inventors, patented the ideas, financed the initiatives, and protected the machines. He created the cotton mill which brought the production processes together in a factory, and he developed the use of power, first horse power and then water power, which made cotton manufacture a mechanised industry.
Other inventors increased the efficiency of the individual steps of spinning (carding, twisting and spinning, and rolling) so that the supply of yarn increased greatly. Before long steam power was applied to drive textile machinery. Manchester acquired the nickname Cottonopolis during the early 19th century owing to its sprawl of textile factories. Although mechanization dramatically decreased the cost of cotton cloth, by the mid-19th century machine-woven cloth still could not equal the quality of hand-woven Indian cloth, in part due to the fineness of thread made possible by the type of cotton used in India, which allowed high thread counts. However, the high productivity of British textile manufacturing allowed coarser grades of British cloth to undersell hand-spun and woven fabric in low-wage India, eventually destroying the industry. Wool The earliest European attempts at mechanized spinning were with wool; however, wool spinning proved more difficult to mechanize than cotton. Productivity improvement in wool spinning during the Industrial Revolution was significant, but far less than that of cotton. Silk Arguably the first highly mechanised factory was John Lombe's water-powered silk mill at Derby, operational by 1721. Lombe learned silk thread manufacturing by taking a job in Italy and acting as an industrial spy; however, because the Italian silk industry guarded its secrets closely, the state of the industry at that time is unknown. Although Lombe's factory was technically successful, the supply of raw silk from Italy was cut off to eliminate competition. In order to promote manufacturing, the Crown paid for models of Lombe's machinery which were exhibited in the Tower of London. Iron industry UK iron production statistics Bar iron was the commodity form of iron used as the raw material for making hardware goods such as nails, wire, hinges, horseshoes, wagon tires, chains, etc., as well as structural shapes. 
A small amount of bar iron was converted into steel. Cast iron was used for pots, stoves, and other items where its brittleness was tolerable. Most cast iron was refined and converted to bar iron, with substantial losses. Bar iron was also made by the bloomery process, which was the predominant iron smelting process until the late 18th century. In the UK in 1720, there were 20,500 tons of cast iron produced with charcoal and 400 tons with coke. In 1750 charcoal iron production was 24,500 tons and coke iron was 2,500 tons. In 1788 the production of charcoal cast iron was 14,000 tons while coke iron production was 54,000 tons. In 1806 charcoal cast iron production was 7,800 tons and coke cast iron was 250,000 tons. In 1750 the UK imported 31,200 tons of bar iron and produced, either by refining cast iron or directly, 18,800 tons of bar iron using charcoal and 100 tons using coke. In 1796 the UK was making 125,000 tons of bar iron with coke and 6,400 tons with charcoal; imports were 38,000 tons and exports were 24,600 tons. In 1806 the UK did not import bar iron but exported 31,500 tons. Iron process innovations A major change in the iron industries during the Industrial Revolution was the replacement of wood and other bio-fuels with coal. For a given amount of heat, mining coal required much less labour than cutting wood and converting it to charcoal, and coal was much more abundant than wood, supplies of which were becoming scarce before the enormous increase in iron production that took place in the late 18th century. By 1750 coke had generally replaced charcoal in the smelting of copper and lead, and was in widespread use in glass production. In the smelting and refining of iron, coal and coke produced inferior iron to that made with charcoal because of the coal's sulfur content. Low sulfur coals were known, but they still contained harmful amounts. Conversion of coal to coke only slightly reduces the sulfur content, and only a minority of coals are suitable for coking.
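The charcoal-to-coke transition can be read directly from the cast-iron tonnages quoted above; a short calculation gives coke's share of total cast-iron output in each year:

```python
# Cast iron output in tons, from the figures quoted above.
cast_iron = {
    1720: {"charcoal": 20_500, "coke": 400},
    1750: {"charcoal": 24_500, "coke": 2_500},
    1788: {"charcoal": 14_000, "coke": 54_000},
    1806: {"charcoal": 7_800, "coke": 250_000},
}

# Coke's share of total cast iron output per year.
coke_share = {
    year: v["coke"] / (v["coke"] + v["charcoal"])
    for year, v in cast_iron.items()
}

for year, share in coke_share.items():
    print(year, f"{share:.0%}")
```

The shares rise from under 2% in 1720 to roughly 9% in 1750, about 79% in 1788, and about 97% in 1806, tracing the near-total displacement of charcoal described in the surrounding text.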
Another factor limiting the iron industry before the Industrial Revolution was the scarcity of water power to power blast bellows. This limitation was overcome by the steam engine. Use of coal in iron smelting started somewhat before the Industrial Revolution, based on innovations by Sir Clement Clerke and others from 1678, using coal reverberatory furnaces known as cupolas. These were operated by the flames playing on the ore and charcoal or coke mixture, reducing the oxide to metal. This has the advantage that impurities (such as sulphur ash) in the coal do not migrate into the metal. This technology was applied to lead from 1678 and to copper from 1687. It was also applied to iron foundry work in the 1690s, but in this case the reverberatory furnace was known as an air furnace. (The foundry cupola is a different, and later, innovation.) By 1709 Abraham Darby made progress using coke to fuel his blast furnaces at Coalbrookdale. However, the coke pig iron he made was not suitable for making wrought iron and was used mostly for the production of cast iron goods, such as pots and kettles. He had the advantage over his rivals in that his pots, cast by his patented process, were thinner and cheaper than theirs. Coke pig iron was hardly used to produce wrought iron until 1755–56, when Darby's son Abraham Darby II built furnaces at Horsehay and Ketley where low sulfur coal was available (and not far from Coalbrookdale). These new furnaces were equipped with water-powered bellows, the water being pumped by Newcomen steam engines. The Newcomen engines were not attached directly to the blowing cylinders because the engines alone could not produce a steady air blast. Abraham Darby III installed similar steam-pumped, water-powered blowing cylinders at the Dale Company when he took control in 1768. The Dale Company used several Newcomen engines to drain its mines and made parts for engines which it sold throughout the country. 
Steam engines made the use of higher-pressure and volume blast practical; however, the leather used in bellows was expensive to replace. In 1757, ironmaster John Wilkinson patented a hydraulically powered blowing engine for blast furnaces. The blowing cylinder for blast furnaces was introduced in 1760 and the first blowing cylinder made of cast iron is believed to be the one used at Carrington in 1768 that was designed by John Smeaton. Cast iron cylinders for use with a piston were difficult to manufacture; the cylinders had to be free of holes and had to be machined smooth and straight to remove any warping. James Watt had great difficulty trying to have a cylinder made for his first steam engine. In 1774 John Wilkinson, who built a cast iron blowing cylinder for his ironworks, invented a precision boring machine for boring cylinders. After Wilkinson bored the first successful cylinder for a Boulton and Watt steam engine in 1776, he was given an exclusive contract for providing cylinders. After Watt developed a rotary steam engine in 1782, steam engines were widely applied to blowing, hammering, rolling and slitting. The solutions to the sulfur problem were the addition of sufficient limestone to the furnace to force sulfur into the slag and the use of low sulfur coal. The use of lime or limestone required higher furnace temperatures to form a free-flowing slag. The increased furnace temperature made possible by improved blowing also increased the capacity of blast furnaces and allowed for increased furnace height. In addition to lower cost and greater availability, coke had other important advantages over charcoal in that it was harder and made the column of materials (iron ore, fuel, slag) flowing down the blast furnace more porous and did not crush in the much taller furnaces of the late 19th century. As cast iron became cheaper and widely available, it began to be used as a structural material for bridges and buildings.
A famous early example was the Iron Bridge built in 1778 with cast iron produced by Abraham Darby III. However, most cast iron was converted to wrought iron. Europe relied on the bloomery for most of its wrought iron until the large-scale production of cast iron. Conversion of cast iron was done in a finery forge, as it long had been. An improved refining process known as potting and stamping was developed, but this was superseded by Henry Cort's puddling process. Cort developed two significant iron manufacturing processes: rolling in 1783 and puddling in 1784. Puddling produced a structural grade iron at a relatively low cost. Puddling was a means of decarburizing molten pig iron by slow oxidation in a reverberatory furnace by manually stirring it with a long rod. The decarburized iron, having a higher melting point than cast iron, was raked into globs by the puddler. When the glob was large enough, the puddler would remove it. Puddling was backbreaking and extremely hot work. Few puddlers lived to be 40. Because puddling was done in a reverberatory furnace, coal or coke could be used as fuel. The puddling process continued to be used until the late 19th century when iron was being displaced by steel. Because puddling required human skill in sensing the iron globs, it was never successfully mechanised. Rolling was an important part of the puddling process because the grooved rollers expelled most of the molten slag and consolidated the mass of hot wrought iron. Rolling was 15 times faster at this than a trip hammer. A different use of rolling, which was done at lower temperatures than that for expelling slag, was in the production of iron sheets, and later structural shapes such as beams, angles, and rails. The puddling process was improved in 1818 by Baldwyn Rogers, who replaced some of the sand lining on the reverberatory furnace bottom with iron oxide. 
In 1838 John Hall patented the use of roasted tap cinder (iron silicate) for the furnace bottom, greatly reducing the loss of iron through increased slag caused by a sand-lined bottom. The tap cinder also tied up some phosphorus, but this was not understood at the time. Hall's process also used iron scale or rust, which reacted with carbon in the molten iron. Hall's process, called wet puddling, reduced losses of iron with the slag from almost 50% to around 8%. Puddling became widely used after 1800. Up to that time, British iron manufacturers had used considerable amounts of iron imported from Sweden and Russia to supplement domestic supplies. Because of the increased British production, imports began to decline in 1785 and by the 1790s Britain eliminated imports and became a net exporter of bar iron. Hot blast, patented by the Scottish inventor James Beaumont Neilson in 1828, was the most important development of the 19th century for saving energy in making pig iron. By using preheated combustion air, the amount of fuel needed to make a unit of pig iron was at first reduced by one-third when using coke, or by two-thirds when using coal; the efficiency gains continued as the technology improved. Hot blast also raised the operating temperature of furnaces, increasing their capacity. Using less coal or coke meant introducing fewer impurities into the pig iron. This meant that lower quality coal or anthracite could be used in areas where coking coal was unavailable or too expensive; however, by the end of the 19th century transportation costs fell considerably. Shortly before the Industrial Revolution, an improvement was made in the production of steel, which was an expensive commodity and used only where iron would not do, such as for cutting edge tools and for springs. Benjamin Huntsman developed his crucible steel technique in the 1740s. The raw material for this was blister steel, made by the cementation process.
The supply of cheaper iron and steel aided a number of industries, such as those making nails, hinges, wire, and other hardware items. The development of machine tools allowed better working of iron, causing it to be increasingly used in the rapidly growing machinery and engine industries.

Steam power

The development of the stationary steam engine was an important element of the Industrial Revolution; however, during the early period of the Industrial Revolution, most industrial power was supplied by water and wind. In Britain, by 1800 an estimated 10,000 horsepower was being supplied by steam. By 1815 steam power had grown to 210,000 hp. The first commercially successful industrial use of steam power was due to Thomas Savery in 1698. He constructed and patented in London a low-lift combined vacuum and pressure water pump that generated about one horsepower (hp) and was used in numerous waterworks and in a few mines (hence its "brand name", The Miner's Friend). Savery's pump was economical in small horsepower ranges but was prone to boiler explosions in larger sizes. Savery pumps continued to be produced until the late 18th century. The first successful piston steam engine was introduced by Thomas Newcomen before 1712. A number of Newcomen engines were installed in Britain for draining hitherto unworkable deep mines, with the engine on the surface; these were large machines, requiring a significant amount of capital to build, and produced upwards of . They were also used to power municipal water supply pumps. They were extremely inefficient by modern standards, but when located where coal was cheap at pit heads, opened up a great expansion in coal mining by allowing mines to go deeper. Despite their disadvantages, Newcomen engines were reliable and easy to maintain and continued to be used in the coalfields until the early decades of the 19th century. By 1729, when Newcomen died, his engines had spread first to Hungary in 1722, and then to Germany, Austria, and Sweden.
A total of 110 are known to have been built by 1733 when the joint patent expired, of which 14 were abroad. In the 1770s the engineer John Smeaton built some very large examples and introduced a number of improvements. A total of 1,454 engines had been built by 1800. A fundamental change in working principles was brought about by Scotsman James Watt. With financial support from his business partner Englishman Matthew Boulton, he had succeeded by 1778 in perfecting his steam engine, which incorporated a series of radical improvements, notably the closing off of the upper part of the cylinder, thereby making the low-pressure steam drive the top of the piston instead of the atmosphere, use of a steam jacket and the celebrated separate steam condenser chamber. The separate condenser did away with the cooling water that had been injected directly into the cylinder, which cooled the cylinder and wasted steam. Likewise, the steam jacket kept steam from condensing in the cylinder, also improving efficiency. These improvements increased engine efficiency so that Boulton and Watt's engines used only 20–25% as much coal per horsepower-hour as Newcomen's. Boulton and Watt opened the Soho Foundry for the manufacture of such engines in 1795. By 1783 the Watt steam engine had been fully developed into a double-acting rotative type, which meant that it could be used to directly drive the rotary machinery of a factory or mill. Both of Watt's basic engine types were commercially very successful, and by 1800, the firm Boulton & Watt had constructed 496 engines, with 164 driving reciprocating pumps, 24 serving blast furnaces, and 308 powering mill machinery; most of the engines generated from . Until about 1800 the most common pattern of steam engine was the beam engine, built as an integral part of a stone or brick engine-house, but soon various patterns of self-contained rotative engines (readily removable, but not on wheels) were developed, such as the table engine. 
Around the start of the 19th century, at which time the Boulton and Watt patent expired, the Cornish engineer Richard Trevithick and the American Oliver Evans began to construct higher-pressure non-condensing steam engines, exhausting against the atmosphere. High pressure yielded an engine and boiler compact enough to be used on mobile road and rail locomotives and steam boats. The development of machine tools, such as the engine lathe, planing, milling and shaping machines powered by these engines, enabled all the metal parts of the engines to be easily and accurately cut and in turn made it possible to build larger and more powerful engines. Small industrial power requirements continued to be provided by animal and human muscle until widespread electrification in the early 20th century. These included crank-powered, treadle-powered, and horse-powered workshop and light industrial machinery.

Machine tools

Pre-industrial machinery was built by various craftsmen: millwrights built water and windmills, carpenters made wooden framing, and smiths and turners made metal parts. Wooden components had the disadvantage of changing dimensions with temperature and humidity, and the various joints tended to rack (work loose) over time. As the Industrial Revolution progressed, machines with metal parts and frames became more common. Other important uses of metal parts were in firearms and threaded fasteners, such as machine screws, bolts, and nuts. There was also the need for precision in making parts. Precision would allow better working machinery, interchangeability of parts, and standardization of threaded fasteners. The demand for metal parts led to the development of several machine tools. They have their origins in the tools developed in the 18th century by makers of clocks and watches and scientific instrument makers to enable them to batch-produce small mechanisms.
Before the advent of machine tools, metal was worked manually using the basic hand tools of hammers, files, scrapers, saws, and chisels. Consequently, the use of metal machine parts was kept to a minimum. Hand methods of production were very laborious and costly and precision was difficult to achieve. The first large precision machine tool was the cylinder boring machine invented by John Wilkinson in 1774. It was used for boring the large-diameter cylinders on early steam engines. Wilkinson's boring machine differed from earlier cantilevered machines used for boring cannon in that the cutting tool was mounted on a beam that ran through the cylinder being bored and was supported outside on both ends. The planing machine, the milling machine and the shaping machine were developed in the early decades of the 19th century. Although the milling machine was invented at this time, it was not developed as a serious workshop tool until somewhat later in the 19th century. Henry Maudslay, who trained a school of machine tool makers early in the 19th century, was a mechanic with superior ability who had been employed at the Royal Arsenal, Woolwich. He worked as an apprentice in the Royal Gun Foundry of Jan Verbruggen. In 1774 Jan Verbruggen had installed a horizontal boring machine in Woolwich which was the first industrial size lathe in the UK. Maudslay was hired away by Joseph Bramah for the production of high-security metal locks that required precision craftsmanship. Bramah patented a lathe that had similarities to the slide rest lathe. Maudslay perfected the slide rest lathe, which could cut machine screws of different thread pitches by using changeable gears between the spindle and the lead screw. Before its invention screws could not be cut to any precision using various earlier lathe designs, some of which copied from a template. The slide rest lathe was called one of history's most important inventions. 
Although it was not entirely Maudslay's idea, he was the first person to build a functional lathe using a combination of known innovations of the lead screw, slide rest, and change gears. Maudslay left Bramah's employment and set up his own shop. He was engaged to build the machinery for making ships' pulley blocks for the Royal Navy in the Portsmouth Block Mills. These machines were all-metal and were the first machines for mass production and making components with a degree of interchangeability. The lessons Maudslay learned about the need for stability and precision he adapted to the development of machine tools, and in his workshops, he trained a generation of men to build on his work, such as Richard Roberts, Joseph Clement and Joseph Whitworth. James Fox of Derby had a healthy export trade in machine tools for the first third of the century, as did Matthew Murray of Leeds. Roberts was a maker of high-quality machine tools and a pioneer of the use of jigs and gauges for precision workshop measurement. The effect of machine tools during the Industrial Revolution was not that great because other than firearms, threaded fasteners, and a few other industries there were few mass-produced metal parts. The techniques for making mass-produced metal parts with sufficient precision to be interchangeable are largely attributed to a program of the U.S. Department of War which perfected interchangeable parts for firearms in the early 19th century. In the half-century following the invention of the fundamental machine tools the machine industry became the largest industrial sector of the U.S. economy, by value added.

Chemicals

The large-scale production of chemicals was an important development during the Industrial Revolution. The first of these was the production of sulphuric acid by the lead chamber process invented by the Englishman John Roebuck (James Watt's first partner) in 1746.
He was able to greatly increase the scale of the manufacture by replacing the relatively expensive glass vessels formerly used with larger, less expensive chambers made of riveted sheets of lead. Instead of making a small amount each time, he was able to make around in each of the chambers, at least a tenfold increase. The production of an alkali on a large scale became an important goal as well, and Nicolas Leblanc succeeded in 1791 in introducing a method for the production of sodium carbonate. The Leblanc process was a reaction of sulfuric acid with sodium chloride to give sodium sulfate and hydrochloric acid. The sodium sulfate was heated with limestone (calcium carbonate) and coal to give a mixture of sodium carbonate and calcium sulfide. Adding water separated the soluble sodium carbonate from the calcium sulfide. The process produced a large amount of pollution (the hydrochloric acid was initially vented to the air, and calcium sulfide was a useless waste product). Nonetheless, this synthetic soda ash proved economical compared to that from burning specific plants (barilla) or from kelp, which were the previously dominant sources of soda ash, and also to potash (potassium carbonate) produced from hardwood ashes. These two chemicals were very important because they enabled the introduction of a host of other inventions, replacing many small-scale operations with more cost-effective and controllable processes. Sodium carbonate had many uses in the glass, textile, soap, and paper industries. Early uses for sulfuric acid included pickling (removing rust from) iron and steel, and for bleaching cloth. 
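The two stages of the Leblanc process described above can be written as simplified overall equations (a textbook idealisation; in practice the "black ash" reduction step was messier and is often written with carbon monoxide rather than carbon dioxide as the by-product):

```latex
% Salt cake step: common salt and sulfuric acid give sodium sulfate
% and hydrochloric acid (the gas initially vented to the air).
2\,\mathrm{NaCl} + \mathrm{H_2SO_4} \longrightarrow \mathrm{Na_2SO_4} + 2\,\mathrm{HCl}

% Black ash step: sodium sulfate heated with limestone and coal gives
% sodium carbonate and calcium sulfide (the useless solid waste).
\mathrm{Na_2SO_4} + \mathrm{CaCO_3} + 2\,\mathrm{C} \longrightarrow \mathrm{Na_2CO_3} + \mathrm{CaS} + 2\,\mathrm{CO_2}
```

The soluble sodium carbonate was then leached out with water, leaving the insoluble calcium sulfide behind, which is why the two pollutants named above (hydrochloric acid gas and calcium sulfide waste) fall directly out of these equations.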
The development of bleaching powder (calcium hypochlorite) by Scottish chemist Charles Tennant in about 1800, based on the discoveries of French chemist Claude Louis Berthollet, revolutionised the bleaching processes in the textile industry by dramatically reducing the time required (from months to days) for the traditional process then in use, which required repeated exposure to the sun in bleach fields after soaking the textiles with alkali or sour milk. Tennant's factory at St Rollox, North Glasgow, became the largest chemical plant in the world. After 1860 the focus on chemical innovation was in dyestuffs, and Germany took world leadership, building a strong chemical industry. Aspiring chemists flocked to German universities in the 1860–1914 era to learn the latest techniques. British scientists, by contrast, lacked research universities and did not train advanced students; instead, the practice was to hire German-trained chemists.

Cement

In 1824 Joseph Aspdin, a British bricklayer turned builder, patented a chemical process for making portland cement which was an important advance in the building trades. This process involves sintering a mixture of clay and limestone to about , then grinding it into a fine powder which is then mixed with water, sand and gravel to produce concrete. Portland cement was used by the famous English engineer Marc Isambard Brunel several years later when constructing the Thames Tunnel. Cement was used on a large scale in the construction of the London sewerage system a generation later.

Gas lighting

Another major industry of the later Industrial Revolution was gas lighting. Though others made a similar innovation elsewhere, the large-scale introduction of this was the work of William Murdoch, an employee of Boulton & Watt, the Birmingham steam engine pioneers.
The process consisted of the large-scale gasification of coal in furnaces, the purification of the gas (removal of sulphur, ammonia, and heavy hydrocarbons), and its storage and distribution. The first gas lighting utilities were established in London between 1812 and 1820. They soon became one of the major consumers of coal in the UK. Gas lighting affected social and industrial organisation because it allowed factories and stores to remain open longer than with tallow candles or oil. Its introduction allowed nightlife to flourish in cities and towns as interiors and streets could be lighted on a larger scale than before.

Glass making

Glass was made in ancient Greece and Rome. A new method of producing glass, known as the cylinder process, was developed in Europe during the early 19th century. In 1832 this process was used by the Chance Brothers to create sheet glass. They became the leading producers of window and plate glass. This advancement allowed for larger panes of glass to be created without interruption, thus freeing up the space planning in interiors as well as the fenestration of buildings. The Crystal Palace is the supreme example of the use of sheet glass in a new and innovative structure.

Paper machine

A machine for making a continuous sheet of paper on a loop of wire fabric was patented in 1798 by Nicholas Louis Robert, who worked for the Saint-Léger Didot family in France. The paper machine is known as a Fourdrinier after the financiers, brothers Sealy and Henry Fourdrinier, who were stationers in London. Although greatly improved and with many variations, the Fourdrinier machine is the predominant means of paper production today. The method of continuous production demonstrated by the paper machine influenced the development of continuous rolling of iron and later steel and other continuous production processes.
Agriculture

The British Agricultural Revolution is considered one of the causes of the Industrial Revolution because improved agricultural productivity freed up workers to work in other sectors of the economy. In contrast, per-capita food supply in Europe was stagnant or declining and did not improve in some parts of Europe until the late 18th century. Industrial technologies that affected farming included the seed drill, the Dutch plough, which contained iron parts, and the threshing machine. The English lawyer Jethro Tull invented an improved seed drill in 1701. It was a mechanical seeder that distributed seeds evenly across a plot of land and planted them at the correct depth. This was important because the ratio of seeds harvested to seeds planted at that time was only around four or five. Tull's seed drill was very expensive and not very reliable and therefore did not have much of an effect. Good quality seed drills were not produced until the mid-18th century. Joseph Foljambe's Rotherham plough of 1730 was the first commercially successful iron plough. The threshing machine, invented by the Scottish engineer Andrew Meikle in 1784, displaced hand threshing with a flail, a laborious job that took about one-quarter of agricultural labour. It took several decades to diffuse and was the final straw for many farm labourers, who faced near starvation, leading to the 1830 agricultural rebellion of the Swing Riots. Machine tools and metalworking techniques developed during the Industrial Revolution eventually resulted in precision manufacturing techniques in the late 19th century for mass-producing agricultural equipment, such as reapers, binders, and combine harvesters.

Mining

Coal mining in Britain, particularly in South Wales, started early. Before the steam engine, pits were often shallow bell pits following a seam of coal along the surface, which were abandoned as the coal was extracted.
In other cases, if the geology was favourable, the coal was mined by means of an adit or drift mine driven into the side of a hill. Shaft mining was done in some areas, but the limiting factor was the problem of removing water. It could be done by hauling buckets of water up the shaft or to a sough (a tunnel driven into a hill to drain a mine). In either case, the water had to be discharged into a stream or ditch at a level where it could flow away by gravity. The introduction of the steam pump by Thomas Savery in 1698 and the Newcomen steam engine in 1712 greatly facilitated the removal of water and enabled shafts to be made deeper, enabling more coal to be extracted. These were developments that had begun before the Industrial Revolution, but the adoption of John Smeaton's improvements to the Newcomen engine followed by James Watt's more efficient steam engines from the 1770s reduced the fuel costs of engines, making mines more profitable. The Cornish engine, developed in the 1810s, was much more efficient than the Watt steam engine. Coal mining was very dangerous owing to the presence of firedamp in many coal seams. Some degree of safety was provided by the safety lamp, invented in 1815 by Sir Humphry Davy and independently by George Stephenson.
Child labour

In England and Scotland in 1788, two-thirds of the workers in 143 water-powered cotton mills were described as children. Child labour existed before the Industrial Revolution but with the increase in population and education, it became more visible. Many children were forced to work in relatively bad conditions for much lower pay than their elders, 10–20% of an adult male's wage. Reports were written detailing some of the abuses, particularly in the coal mines and textile factories, and these helped to popularise the children's plight. The public outcry, especially among the upper and middle classes, helped stir change in the young workers' welfare. Politicians and the government tried to limit child labour by law but factory owners resisted; some felt that they were aiding the poor by giving their children money to buy food to avoid starvation, and others simply welcomed the cheap labour. In 1833 and 1844, the first general laws against child labour, the Factory Acts, were passed in Britain: children younger than nine were not allowed to work, children were not permitted to work at night, and the workday of youth under the age of 18 was limited to twelve hours. Factory inspectors supervised the execution of the law; however, their scarcity made enforcement difficult. About ten years later, the employment of children and women in mining was forbidden. Although laws such as these decreased the number of child labourers, child labour remained significantly present in Europe and the United States until the 20th century.

Organisation of labour

The Industrial Revolution concentrated labour into mills, factories, and mines, thus facilitating the organisation of combinations or trade unions to help advance the interests of working people. By withdrawing all labour and causing a consequent cessation of production, a union could demand better terms.
Employers had to decide between giving in to the union demands at a cost to themselves or suffering the cost of the lost production. Skilled workers were hard to replace, and these were the first groups to successfully advance their conditions through this kind of bargaining. The main method the unions used to effect change was strike action. Many strikes were painful events for both sides, the unions, and the management. In Britain, the Combination Act 1799 forbade workers to form any kind of trade union until its repeal in 1824. Even after this, unions were still severely restricted. One British newspaper in 1834 described unions as "the most dangerous institutions that were ever permitted to take root, under shelter of law, in any country..." In 1832, the Reform Act extended the vote in Britain but did not grant universal suffrage. That year six men from Tolpuddle in Dorset founded the Friendly Society of Agricultural Labourers to protest against the gradual lowering of wages in the 1830s. They refused to work for less than ten shillings a week, although by this time wages had been reduced to seven shillings a week and were due to be further reduced to six. In 1834 James Frampton, a local landowner, wrote to the Prime Minister, Lord Melbourne, to complain about the union, invoking an obscure law from 1797 prohibiting people from swearing oaths to each other, which the members of the Friendly Society had done. James Brine, James Hammett, George Loveless, George's brother James Loveless, George's brother-in-law Thomas Standfield, and Thomas's son John Standfield were arrested, found guilty, and transported to Australia. They became known as the Tolpuddle Martyrs. In the 1830s and 1840s, the Chartist movement was the first large-scale organised working-class political movement that campaigned for political equality and social justice. Its Charter of reforms received over three million signatures but was rejected by Parliament without consideration. 
Working people also formed friendly societies and co-operative societies as mutual support groups against times of economic hardship. Enlightened industrialists, such as Robert Owen, also supported these organisations to improve the conditions of the working class. Unions slowly overcame the legal restrictions on the right to strike. In 1842, a general strike involving cotton workers and colliers was organised through the Chartist movement which stopped production across Great Britain. Eventually, effective political organisation for working people was achieved through the trades unions who, after the extensions of the franchise in 1867 and 1885, began to support socialist political parties that later merged to become the British Labour Party.

Luddites

The rapid industrialisation of the English economy cost many craft workers their jobs. The movement started with lace and hosiery workers near Nottingham and spread to other areas of the textile industry owing to early industrialisation. Many weavers also found themselves suddenly unemployed since they could no longer compete with machines which only required relatively limited (and unskilled) labour to produce more cloth than a single weaver. Many such unemployed workers, weavers, and others turned their animosity towards the machines that had taken their jobs and began destroying factories and machinery. These attackers became known as Luddites, supposedly followers of Ned Ludd, a folklore figure. The first attacks of the Luddite movement began in 1811. The Luddites rapidly gained popularity, and the British government took drastic measures, using the militia or army to protect industry. Those rioters who were caught were tried and hanged, or transported for life. Unrest continued in other sectors as they industrialised, such as with agricultural labourers in the 1830s when large parts of southern Britain were affected by the Captain Swing disturbances.
Threshing machines were a particular target, and hayrick burning was a popular activity. However, the riots led to the first formation of trade unions, and further pressure for reform.

Shift in production's center of gravity

The traditional centers of hand textile production such as India, parts of the Middle East, and later China could not withstand the competition from machine-made textiles, which over a period of decades destroyed the hand-made textile industries and left millions of people without work, many of whom starved. The Industrial Revolution also generated an enormous and unprecedented economic division in the world, as measured by the share of manufacturing output.

Cotton and the expansion of slavery

Cheap cotton textiles increased the demand for raw cotton; previously, it had primarily been consumed in subtropical regions where it was grown, with little raw cotton available for export. Consequently, prices of raw cotton rose. British production grew from 2 million pounds in 1700 to 5 million pounds in 1781 to 56 million in 1800. The invention of the cotton gin by American Eli Whitney in 1792 was the decisive event. It allowed green-seeded cotton to become profitable, leading to the widespread growth of the large slave plantation in the United States, Brazil, and the West Indies. In 1791 American cotton production was about 2 million pounds, soaring to 35 million by 1800, half of which was exported. America's cotton plantations were highly efficient and profitable, and able to keep up with demand. The U.S. Civil War created a "cotton famine" that led to increased production in other areas of the world, including European colonies in Africa.

Effect on environment

The origins of the environmental movement lay in the response to increasing levels of smoke pollution in the atmosphere during the Industrial Revolution.
The emergence of great factories and the concomitant immense growth in coal consumption gave rise to an unprecedented level of air pollution in industrial centers; after 1900, the large volume of industrial chemical discharges added to the growing load of untreated human waste. The first large-scale, modern environmental laws came in the form of Britain's Alkali Acts, passed in 1863 to regulate the deleterious air pollution (gaseous hydrochloric acid) given off by the Leblanc process, used to produce soda ash. An Alkali inspector and four sub-inspectors were appointed to curb this pollution. The responsibilities of the inspectorate were gradually expanded, culminating in the Alkali Order 1958, which placed all major heavy industries that emitted smoke, grit, dust, and fumes under supervision. The manufactured gas industry began in British cities in 1812–1820. The technique used produced highly toxic effluent that was dumped into sewers and rivers. The gas companies were repeatedly sued in nuisance lawsuits; they usually lost and then modified their worst practices. The City of London repeatedly indicted gas companies in the 1820s for polluting the Thames and poisoning its fish. Finally, Parliament wrote company charters to regulate toxicity. The industry reached the US around 1850, causing pollution and lawsuits. In industrial cities, local experts and reformers, especially after 1890, took the lead in identifying environmental degradation and pollution and initiating grass-roots movements to demand and achieve reforms. Typically the highest priority went to water and air pollution. The Coal Smoke Abatement Society was formed in Britain in 1898, making it one of the oldest environmental NGOs. It was founded by the artist Sir William Blake Richmond, frustrated with the pall cast by coal smoke. Although there were earlier pieces of legislation, the Public Health Act 1875 required all furnaces and fireplaces to consume their own smoke.
It also provided for sanctions against factories that emitted large amounts of black smoke. The provisions of this law were extended in 1926 with the Smoke Abatement Act to include other emissions, such as soot, ash, and gritty particles, and to empower local authorities to impose their own regulations. Nations and nationalism In his 1983 book Nations and Nationalism, philosopher Ernest Gellner argues that the industrial revolution and economic modernization spurred the creation of nations. Industrialisation beyond Great Britain Continental Europe The Industrial Revolution in Continental Europe came later than in Great Britain. It started in Belgium and France, then spread to the German states by the middle of the 19th century. In many industries, this involved the application of technology developed in Britain in new places. Typically the technology was purchased from Britain, or British engineers and entrepreneurs moved abroad in search of new opportunities. By 1809, part of the Ruhr Valley in Westphalia was called 'Miniature England' because of its similarities to the industrial areas of Britain. Most European governments provided state funding to the new industries. In some cases (such as iron), the different availability of resources locally meant that only some aspects of the British technology were adopted. Austria-Hungary The Habsburg realms, which became Austria-Hungary in 1867, included 23 million inhabitants in 1800, growing to 36 million by 1870. Nationally, the per capita rate of industrial growth averaged about 3% between 1818 and 1870. However, there were strong regional differences. The railway system was built between 1850 and 1873. Before the railways arrived, transportation was very slow and expensive. In the Alpine and Bohemian (modern-day Czech Republic) regions, proto-industrialization began by 1750, and these regions became the center of the first phases of the industrial revolution after 1800.
The textile industry was the main factor, utilizing mechanization, steam engines, and the factory system. In the Czech lands, the "first mechanical loom followed in Varnsdorf in 1801," with the first steam engines appearing in Bohemia and Moravia just a few years later. Textile production flourished particularly in Prague and Brno (German: Brünn), which was considered the 'Moravian Manchester'. The Czech lands, especially Bohemia, became the center of industrialization due to their natural and human resources. The iron industry had developed in the Alpine regions after 1750, with smaller centers in Bohemia and Moravia. Hungary, the eastern half of the Dual Monarchy, was heavily rural with little industry before 1870. In 1791, Prague, in Bohemia (modern-day Czech Republic), organized the first industrial exhibition, often counted among the forerunners of the world's fairs. It was held on the occasion of the coronation of Leopold II as king of Bohemia, took place in the Clementinum, and celebrated the considerable sophistication of manufacturing methods in the Czech lands during that time period. Technological change accelerated industrialization and urbanization. The GNP per capita grew roughly 1.76% per year from 1870 to 1913. That level of growth compared very favorably to that of other European nations such as Britain (1%), France (1.06%), and Germany (1.51%). However, in comparison with Germany and Britain, the Austro-Hungarian economy as a whole still lagged considerably, as sustained modernization had begun much later. Belgium Belgium was the second country in which the Industrial Revolution took place and the first in continental Europe: Wallonia (French-speaking southern Belgium) took the lead. Starting in the middle of the 1820s, and especially after Belgium became an independent nation in 1830, numerous works comprising coke blast furnaces as well as puddling and rolling mills were built in the coal mining areas around Liège and Charleroi.
The leader was a transplanted Englishman, John Cockerill. His factories at Seraing integrated all stages of production, from engineering to the supply of raw materials, as early as 1825. Wallonia exemplified the radical evolution of industrial expansion. Thanks to coal (the French word "houille" was coined in Wallonia), the region geared up to become the second industrial power in the world after Britain. Many researchers have also pointed to its Sillon industriel: 'Especially in the Haine, Sambre and Meuse valleys, between the Borinage and Liège...there was a huge industrial development based on coal-mining and iron-making...'. Philippe Raxhon wrote about the period after 1830: "It was not propaganda but a reality the Walloon regions were becoming the second industrial power all over the world after Britain." "The sole industrial centre outside the collieries and blast furnaces of Walloon was the old cloth-making town of Ghent." Professor Michel De Coster stated: "The historians and the economists say that Belgium was the second industrial power of the world, in proportion to its population and its territory [...] But this rank is the one of Wallonia where the coal-mines, the blast furnaces, the iron and zinc factories, the wool industry, the glass industry, the weapons industry... were concentrated." Many of the 19th-century coal mines in Wallonia are now protected as World Heritage sites. Wallonia was also the birthplace of a strong Socialist party and strong trade unions in a particular sociological landscape. The Sillon industriel runs from Mons in the west to Verviers in the east (part of northern Flanders industrialised only in a later period, after 1920). Although Belgium was the second industrial country after Britain, the effect of the industrial revolution there was very different.
In 'Breaking stereotypes', Muriel Neven and Isabelle Devious say: The industrial revolution changed a mainly rural society into an urban one, but with a strong contrast between northern and southern Belgium. During the Middle Ages and the Early Modern Period, Flanders was characterised by the presence of large urban centres [...] at the beginning of the nineteenth century this region (Flanders), with an urbanisation degree of more than 30 percent, remained one of the most urbanised in the world. By comparison, this proportion reached only 17 percent in Wallonia, barely 10 percent in most West European countries, 16 percent in France, and 25 percent in Britain. Nineteenth-century industrialisation did not affect the traditional urban infrastructure, except in Ghent...Also, in Wallonia, the traditional urban network was largely unaffected by the industrialisation process, even though the proportion of city-dwellers rose from 17 to 45 percent between 1831 and 1910. Especially in the Haine, Sambre and Meuse valleys, between the Borinage and Liège, where there was a huge industrial development based on coal-mining and iron-making, urbanisation was fast. During these eighty years, the number of municipalities with more than 5,000 inhabitants increased from only 21 to more than one hundred, concentrating nearly half of the Walloon population in this region. Nevertheless, industrialisation remained quite traditional in the sense that it did not lead to the growth of modern and large urban centres, but to a conurbation of industrial villages and towns developed around a coal mine or a factory. Communication routes between these small centres only became populated later and created a much less dense urban morphology than, for instance, the area around Liège where the old town was there to direct migratory flows. France The industrial revolution in France followed a particular course as it did not correspond to the main model followed by other countries. 
Notably, most French historians argue France did not go through a clear take-off. Instead, France's economic growth and industrialisation process was slow and steady through the 18th and 19th centuries. However, some stages were identified by Maurice Lévy-Leboyer: the French Revolution and Napoleonic wars (1789–1815); industrialisation along with Britain (1815–1860); an economic slowdown (1860–1905); and a renewal of growth after 1905. Germany Based on its leadership in chemical research in the universities and industrial laboratories, Germany, which was unified in 1871, became dominant in the world's chemical industry in the late 19th century. At first the production of dyes based on aniline was critical. Germany's political disunity, with three dozen states, and a pervasive conservatism made it difficult to build railways in the 1830s. However, by the 1840s, trunk lines linked the major cities; each German state was responsible for the lines within its own borders. Lacking a technological base at first, the Germans imported their engineering and hardware from Britain, but quickly learned the skills needed to operate and expand the railways. In many cities, the new railway shops were the centres of technological awareness and training, so that by 1850, Germany was self-sufficient in meeting the demands of railroad construction, and the railways were a major impetus for the growth of the new steel industry. Observers found that even as late as 1890, their engineering was inferior to Britain's. However, German unification in 1871 stimulated consolidation, nationalisation into state-owned companies, and further rapid growth. Unlike the situation in France, the goal was the support of industrialisation, and so heavy lines crisscrossed the Ruhr and other industrial districts and provided good connections to the major ports of Hamburg and Bremen. By 1880, Germany had 9,400 locomotives pulling 43,000 passengers and 30,000 tons of freight, and pulled ahead of France.
Sweden During the period 1790–1815, Sweden experienced two parallel economic movements: an agricultural revolution, with larger agricultural estates, new crops, new farming tools, and the commercialisation of farming; and a proto-industrialisation, with small industries being established in the countryside and with workers switching between agricultural work in summer and industrial production in winter. This led to economic growth benefiting large sections of the population and leading up to a consumption revolution starting in the 1820s. Between 1815 and 1850, the proto-industries developed into more specialised and larger industries. This period witnessed increasing regional specialisation, with mining in Bergslagen, textile mills in Sjuhäradsbygden, and forestry in Norrland. Several important institutional changes took place in this period, such as free and mandatory schooling, introduced in 1842 (making Sweden the first country in the world to do so), the abolition of the national monopoly on trade in handicrafts in 1846, and a stock company law in 1848. From 1850 to 1890, Sweden experienced its "first" Industrial Revolution, with a veritable explosion in exports dominated by crops, wood, and steel. Sweden abolished most tariffs and other barriers to free trade in the 1850s and joined the gold standard in 1873. Large infrastructural investments were made during this period, mainly in the expanding railroad network, which was financed in part by the government and in part by private enterprises. From 1890 to 1930, new industries developed with their focus on the domestic market: mechanical engineering, power utilities, papermaking, and textiles. Japan The industrial revolution began about 1870, as Meiji period leaders decided to catch up with the West. The government built railroads, improved roads, and inaugurated a land reform program to prepare the country for further development.
It inaugurated a new Western-based education system for all young people, sent thousands of students to the United States and Europe, and hired more than 3,000 Westerners (the so-called foreign government advisors of Meiji Japan) to teach modern science, mathematics, technology, and foreign languages in Japan. In 1871, a group of Japanese politicians known as the Iwakura Mission toured Europe and the United States to learn Western ways. The result was a deliberate state-led industrialisation policy to enable Japan to quickly catch up. The Bank of Japan, founded in 1882, used taxes to fund model steel and textile factories. Education was expanded and Japanese students were sent to study in the West. Modern industry first appeared in textiles, including cotton and especially silk, which was based in home workshops in rural areas. United States During the late 18th and early 19th centuries, when the UK and parts of Western Europe began to industrialise, the US was primarily an agricultural and natural-resource producing and processing economy. The building of roads and canals, the introduction of steamboats, and the building of railroads were important for handling agricultural and natural resource products in the large and sparsely populated country of the period. Important American technological contributions during the period of the Industrial Revolution were the cotton gin and the development of a system for making interchangeable parts, the latter aided by the development of the milling machine in the US. The development of machine tools and the system of interchangeable parts was the basis for the rise of the US as the world's leading industrial nation in the late 19th century. Oliver Evans invented an automated flour mill in the mid-1780s that used control mechanisms and conveyors so that no labour was needed from the time grain was loaded into the elevator buckets until the flour was discharged into a wagon.
This is considered to be the first modern materials handling system, an important advance in the progress toward mass production. The United States originally used horse-powered machinery for small-scale applications such as grain milling, but eventually switched to water power after textile factories began being built in the 1790s. As a result, industrialisation was concentrated in New England and the Northeastern United States, which have fast-moving rivers. The newer water-powered production lines proved more economical than horse-drawn production. In the late 19th century, steam-powered manufacturing overtook water-powered manufacturing, allowing the industry to spread to the Midwest. Thomas Somers and the Cabot Brothers founded the Beverly Cotton Manufactory in 1787, the first cotton mill in America, the largest cotton mill of its era, and a significant milestone in the research and development of later cotton mills. This mill was designed to use horsepower, but the operators quickly learned that the horse-drawn platform was economically unstable and suffered losses for years. Despite the losses, the Manufactory served as a playground of innovation, both in processing a large amount of cotton and in developing the water-powered milling structure used in Slater's Mill. In 1793, Samuel Slater (1768–1835) founded the Slater Mill at Pawtucket, Rhode Island. He had learned of the new textile technologies as a boy apprentice in Derbyshire, England, and defied laws against the emigration of skilled workers by leaving for New York in 1789, hoping to make money with his knowledge. After founding Slater's Mill, he went on to own 13 textile mills. Daniel Day established a wool carding mill in the Blackstone Valley at Uxbridge, Massachusetts in 1809, the third woollen mill established in the US (the first was in Hartford, Connecticut, and the second at Watertown, Massachusetts). The John H.
Chafee Blackstone River Valley National Heritage Corridor retraces the history of "America's Hardest-Working River", the Blackstone. The Blackstone River and its tributaries, which run from Worcester, Massachusetts to Providence, Rhode Island, were the birthplace of America's Industrial Revolution. At its peak, over 1,100 mills operated in this valley, including Slater's Mill, and with it came the earliest beginnings of America's industrial and technological development. Merchant Francis Cabot Lowell from Newburyport, Massachusetts memorised the design of textile machines on his tour of British factories in 1810. Realising that the War of 1812 had ruined his import business but that demand for domestic finished cloth was emerging in America, on his return to the United States he set up the Boston Manufacturing Company. Lowell and his partners built America's second cotton-to-cloth textile mill at Waltham, Massachusetts, second to the Beverly Cotton Manufactory. After his death in 1817, his associates built America's first planned factory town, which they named after him. This enterprise was capitalised in a public stock offering, one of the first such offerings in the United States. Lowell, Massachusetts, with its network of canals and the water power delivered by the Merrimack River, is considered by some as a major contributor to the success of the American Industrial Revolution. The short-lived, utopia-like Waltham-Lowell system was formed as a direct response to the poor working conditions in Britain. However, by 1850, especially following the Great Famine of Ireland, the system had been replaced by poor immigrant labour. A major U.S. contribution to industrialisation was the development of techniques to make interchangeable parts from metal. Precision metal machining techniques were developed by the U.S. Department of War to make interchangeable parts for small firearms. The development work took place at the Federal Arsenals at Springfield Armory and Harpers Ferry Armory.
Techniques for precision machining using machine tools included fixtures to hold the parts in the proper position, jigs to guide the cutting tools, and precision blocks and gauges to measure accuracy. The milling machine, a fundamental machine tool, is believed to have been invented by Eli Whitney, a government contractor who built firearms as part of this program. Another important invention was the Blanchard lathe, invented by Thomas Blanchard. The Blanchard lathe, or pattern tracing lathe, was actually a shaper that could produce copies of wooden gun stocks. The use of machinery and the techniques for producing standardised and interchangeable parts became known as the American system of manufacturing. Precision manufacturing techniques made it possible to build machines that mechanised the shoe industry and the watch industry. The industrialisation of the watch industry started in 1854, also in Waltham, Massachusetts, at the Waltham Watch Company, with the development of machine tools, gauges, and assembling methods adapted to the micro precision required for watches. Second Industrial Revolution Steel is often cited as the first of several new areas for industrial mass-production which are said to characterise a "Second Industrial Revolution", beginning around 1850, although a method for the mass manufacture of steel was not invented until 1856, when Sir Henry Bessemer invented a new furnace which could convert molten pig iron into steel in large quantities. However, it only became widely available in the 1870s, after the process was modified to produce more uniform quality. Bessemer steel was being displaced by the open hearth furnace near the end of the 19th century.
This Second Industrial Revolution gradually grew to include the chemical industries, petroleum (refining and distribution), and, in the 20th century, the automotive industry, and was marked by a transition of technological leadership from Britain to the United States and Germany. The increasing availability of economical petroleum products also reduced the importance of coal and further widened the potential for industrialisation. A new revolution began with electricity and electrification in the electrical industries. The introduction of hydroelectric power generation in the Alps enabled the rapid industrialisation of coal-deprived northern Italy, beginning in the 1890s. By the 1890s, industrialisation in these areas had created the first giant industrial corporations with burgeoning global interests, as companies like U.S. Steel, General Electric, Standard Oil and Bayer AG joined the railroad and shipping companies on the world's stock markets. New Industrialism The New Industrialist movement advocates for increasing domestic manufacturing while reducing emphasis on a financial-based economy that relies on real estate and trading speculative assets. New Industrialism has been described as "supply-side progressivism" or embracing the idea of "Building More Stuff." New Industrialism developed after the China Shock that resulted in lost manufacturing jobs in the U.S. after China joined the World Trade Organization in 2001. The movement strengthened after the reduction of manufacturing jobs during the Great Recession and when the U.S. was not able to manufacture enough tests or facemasks during the COVID-19 pandemic. New Industrialism calls for building enough housing to satisfy demand in order to reduce the profit in land speculation, to invest in infrastructure, and to develop advanced technology to manufacture green energy for the world.
New Industrialists believe that the United States isn't building enough productive capital and should invest more into economic growth. Causes The causes of the Industrial Revolution were complicated and remain a topic for debate. Geographic factors include Britain's vast mineral resources. In addition to metal ores, Britain had the highest quality coal reserves known at the time, as well as abundant water power, highly productive agriculture, and numerous seaports and navigable waterways. Some historians believe the Industrial Revolution was an outgrowth of social and institutional changes brought by the end of feudalism in Britain after the English Civil War in the 17th century, although feudalism had begun to break down after the Black Death of the mid-14th century, followed by other epidemics, until the population reached a low in the 15th century. This created labour shortages and led to falling food prices and a peak in real wages around 1500, after which population growth began reducing wages. Inflation caused by coinage debasement after 1540, followed by an increasing supply of precious metals from the Americas, caused land rents (often long-term leases that transferred to heirs on death) to fall in real terms. The Enclosure movement and the British Agricultural Revolution made food production more efficient and less labour-intensive, forcing the farmers who could no longer be self-sufficient in agriculture into cottage industry, for example weaving, and in the longer term into the cities and the newly developed factories. The colonial expansion of the 17th century, with the accompanying development of international trade, creation of financial markets, and accumulation of capital, is also cited as a factor, as is the scientific revolution of the 17th century. A shift towards later marriage allowed people to accumulate more human capital during their youth, thereby encouraging economic development.
Until the 1980s, it was universally believed by academic historians that technological innovation was the heart of the Industrial Revolution and that the key enabling technology was the invention and improvement of the steam engine. Marketing professor Ronald Fullerton suggested that innovative marketing techniques, business practices, and competition also influenced changes in the manufacturing industry. Lewis Mumford proposed that the Industrial Revolution had its origins in the Early Middle Ages, much earlier than most estimates. He explains that the model for standardised mass production was the printing press and that "the archetypal model for the industrial era was the clock". He also cites the monastic emphasis on order and time-keeping, as well as the fact that medieval cities had at their centre a church with bells ringing at regular intervals, as necessary precursors to the greater synchronisation required for later, more physical manifestations such as the steam engine. The presence of a large domestic market should also be considered an important driver of the Industrial Revolution, particularly in explaining why it occurred in Britain. In other nations, such as France, markets were split up by local regions, which often imposed tolls and tariffs on goods traded among them. Internal tariffs were abolished by Henry VIII of England; they survived in Russia until 1753, in France until 1789, and in Spain until 1839. Governments' grant of limited monopolies to inventors under a developing patent system (the Statute of Monopolies in 1623) is considered an influential factor. The effects of patents, both good and ill, on the development of industrialisation are clearly illustrated in the history of the steam engine, the key enabling technology.
In return for publicly revealing the workings of an invention the patent system rewarded inventors such as James Watt by allowing them to monopolise the production of the first steam engines, thereby rewarding inventors and increasing the pace of technological development. However, monopolies bring with them their own inefficiencies which may counterbalance, or even overbalance, the beneficial effects of publicising ingenuity and rewarding inventors. Watt's monopoly prevented other inventors, such as Richard Trevithick, William Murdoch, or Jonathan Hornblower, whom Boulton and Watt sued, from introducing improved steam engines, thereby retarding the spread of steam power. Causes in Europe One question of active interest to historians is why the Industrial Revolution occurred in Europe and not in other parts of the world in the 18th century, particularly China, India, and the Middle East (which pioneered in shipbuilding, textile production, water mills, and much more in the period between 750 and 1100), or at other times like in Classical Antiquity or the Middle Ages. A recent account argued that Europeans have been characterized for thousands of years by a freedom-loving culture originating from the aristocratic societies of early Indo-European invaders. Many historians, however, have challenged this explanation as being not only Eurocentric, but also ignoring historical context. In fact, before the Industrial Revolution, "there existed something of a global economic parity between the most advanced regions in the world economy." These historians have suggested a number of other factors, including education, technological changes (see Scientific Revolution in Europe), "modern" government, "modern" work attitudes, ecology, and culture. 
China was the world's most technologically advanced country for many centuries; however, China stagnated economically and technologically and was surpassed by Western Europe before the Age of Discovery, by which time China had banned imports and denied entry to foreigners. China was also a totalitarian society. China also heavily taxed transported goods. Modern estimates of per capita income in Western Europe in the late 18th century are of roughly 1,500 dollars in purchasing power parity (and Britain had a per capita income of nearly 2,000 dollars), whereas China, by comparison, had only 450 dollars. India was essentially feudal, politically fragmented, and not as economically advanced as Western Europe. Historians such as David Landes and sociologists Max Weber and Rodney Stark credit the different belief systems in Asia and Europe with dictating where the revolution occurred. The religion and beliefs of Europe were largely products of Judaeo-Christianity and Greek thought. Conversely, Chinese society was founded on men like Confucius, Mencius, Han Feizi (Legalism), Lao Tzu (Taoism), and Buddha (Buddhism), resulting in very different worldviews. Other factors include China's coal deposits, which, though large, lay at a considerable distance from its cities.
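The growth-rate and income figures quoted above lend themselves to a quick compound-interest sanity check. This is an illustrative sketch only; the function name is ours, and the inputs simply restate the numbers given in the text (1.76% annual growth for Austria-Hungary over 1870–1913, 1% for Britain, and the late-18th-century per capita income estimates):

```python
# Compound growth: value_end = value_start * (1 + r) ** years

def compound_multiplier(rate: float, years: int) -> float:
    """Cumulative growth multiplier implied by a constant annual rate."""
    return (1 + rate) ** years

# Austria-Hungary: ~1.76% per year from 1870 to 1913 (43 years)
ah = compound_multiplier(0.0176, 1913 - 1870)
print(f"Austro-Hungarian GNP per capita multiplier, 1870-1913: {ah:.2f}")

# Britain at ~1% per year over the same span, for comparison
uk = compound_multiplier(0.01, 43)
print(f"British multiplier at 1%/year over 43 years: {uk:.2f}")

# Late-18th-century income gap in PPP dollars, as quoted above
print(f"Britain/China per capita income ratio: {2000 / 450:.1f}")
```

The exercise shows why a seemingly modest rate difference matters: at 1.76% per year, Austro-Hungarian per capita GNP roughly doubled over the period, while Britain's grew by only about half as much in relative terms, even though Britain started from a far higher base.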
of a Permanent Court of International Justice (PCIJ), which would be responsible for adjudicating any international dispute submitted to it by the contesting parties, as well as for providing an advisory opinion upon any dispute or question referred to it by the League of Nations. In December 1920, following several drafts and debates, the Assembly of the League unanimously adopted the Statute of the PCIJ, which was signed and ratified the following year by a majority of members. Among other things, the new Statute resolved the contentious issue of selecting judges by providing that the judges be elected by both the Council and the Assembly of the League concurrently but independently. The makeup of the PCIJ would reflect the "main forms of civilization and the principal legal systems of the world". The PCIJ would be permanently placed at the Peace Palace in The Hague, alongside the Permanent Court of Arbitration. The PCIJ represented a major innovation in international jurisprudence in several ways: unlike previous international arbitral tribunals, it was a permanent body governed by its own statutory provisions and rules of procedure; it had a permanent registry that served as a liaison with governments and international bodies; its proceedings were largely public, including pleadings, oral arguments, and all documentary evidence; it was accessible to all states and could be declared by states to have compulsory jurisdiction over disputes; its Statute was the first to list the sources of law it would draw upon, which in turn became sources of international law; and its judges were more representative of the world and its legal systems than any prior international judicial body. As a permanent body, the PCIJ would, over time, make a series of decisions and rulings that would develop international law. Unlike the ICJ, the PCIJ was not part of the League, nor were members of the League automatically a party to its Statute.
The United States, which played a key role in both the second Hague Peace Conference and the Paris Peace Conference, was notably not a member of the League, although several of its nationals served as judges of the Court. From its first session in 1922 until 1940, the PCIJ dealt with 29 interstate disputes and issued 27 advisory opinions. The Court's widespread acceptance was reflected by the fact that several hundred international treaties and agreements conferred jurisdiction upon it over specified categories of disputes. In addition to helping resolve several serious international disputes, the PCIJ helped clarify several ambiguities in international law that contributed to its development. The United States played a major role in setting up the World Court but never joined. Presidents Wilson, Harding, Coolidge, Hoover, and Roosevelt all supported membership, but it was impossible to get a two-thirds majority in the Senate for a treaty. Establishment of the International Court of Justice Following a peak of activity in 1933, the PCIJ began to decline in its activities due to the growing international tension and isolationism that characterized the era. The Second World War effectively put an end to the Court, which held its last public session in December 1939 and issued its last orders in February 1940. In 1942, the United States and United Kingdom jointly declared support for establishing or re-establishing an international court after the war, and in 1943, the U.K. chaired a panel of jurists from around the world, the "Inter-Allied Committee", to discuss the matter.
Its 1944 report recommended that: the statute of any new international court should be based on that of the PCIJ; the new court should retain an advisory jurisdiction; acceptance of the new court's jurisdiction should be voluntary; and the court should deal only with judicial and not political matters. Several months later, a conference of the major Allied Powers—China, the USSR, the U.K., and the U.S.—issued a joint declaration recognizing the necessity "of establishing at the earliest practicable date a general international organization, based on the principle of the sovereign equality of all peace-loving States, and open to membership by all such States, large and small, for the maintenance of international peace and security". The subsequent Allied conference at Dumbarton Oaks, in the United States, published a proposal in October 1944 that called for the establishment of an intergovernmental organization that would include an international court. A meeting was subsequently convened in Washington, D.C. in April 1945, involving 44 jurists from around the world to draft a statute for the proposed court. The draft statute was substantially similar to that of the PCIJ, and it was questioned whether a new court should even be created. During the San Francisco Conference, which took place from 25 April to 26 June 1945 and involved 50 countries, it was decided that an entirely new court should be established as a principal organ of the new United Nations. The statute of this court would form an integral part of the United Nations Charter, which, to maintain continuity, expressly held that the Statute of the International Court of Justice (ICJ) was based upon that of the PCIJ. Consequently, the PCIJ convened for the last time in October 1945 and resolved to transfer its archives to its successor, which would take its place at the Peace Palace. 
The judges of the PCIJ all resigned on 31 January 1946, with the election of the first members of the ICJ taking place the following February at the First Session of the United Nations General Assembly and Security Council. In April 1946, the PCIJ was formally dissolved, and the ICJ, in its first meeting, elected as President José Gustavo Guerrero of El Salvador, who had served as the last President of the PCIJ. The Court also appointed members of its Registry, drawn largely from that of the PCIJ, and held an inaugural public sitting later that month. The first case was submitted in May 1947 by the United Kingdom against Albania concerning incidents in the Corfu Channel. Activities Established in 1945 by the UN Charter, the court began work in 1946 as the successor to the Permanent Court of International Justice. The Statute of the International Court of Justice, similar to that of its predecessor, is the main constitutional document constituting and regulating the court. The court's workload covers a wide range of judicial activity. After the court ruled that the United States' covert war against Nicaragua was in violation of international law (Nicaragua v. United States), the United States withdrew from compulsory jurisdiction in 1986, accepting the court's jurisdiction only on a discretionary basis. Chapter XIV of the United Nations Charter authorizes the UN Security Council to enforce Court rulings. However, such enforcement is subject to the veto power of the five permanent members of the council, which the United States used in the Nicaragua case. Composition The ICJ is composed of fifteen judges elected to nine-year terms by the UN General Assembly and the UN Security Council from a list of people nominated by the national groups in the Permanent Court of Arbitration. The election process is set out in Articles 4–19 of the ICJ Statute. Elections are staggered, with five judges elected every three years to ensure continuity within the court. 
Should a judge die in office, the practice has generally been to elect a judge in a special election to complete the term. Judges of the International Court of Justice are entitled to the style of His/Her Excellency. No two judges may be nationals of the same country. According to Article 9, the membership of the court is supposed to represent the "main forms of civilization and of the principal legal systems of the world". That has meant common law, civil law and socialist law (now post-communist law). There is an informal understanding that the seats will be distributed by geographic regions so that there are five seats for Western countries, three for African states (including one judge of francophone civil law, one of Anglophone common law and one Arab), two for Eastern European states, three for Asian states and two for Latin American and Caribbean states. For most of the court's history, the five permanent members of the United Nations Security Council (France, USSR, China, the United Kingdom, and the United States) have always had a judge serving, thereby occupying three of the Western seats, one of the Asian seats and one of the Eastern European seats. Exceptions have been China not having a judge on the court from 1967 to 1985, during which time it did not put forward a candidate, and British judge Sir Christopher Greenwood being withdrawn as a candidate for election for a second nine-year term on the bench in 2017, leaving no judges from the United Kingdom on the court. Greenwood had been supported by the UN Security Council but failed to get a majority in the UN General Assembly. Indian judge Dalveer Bhandari took the seat instead. Article 6 of the Statute provides that all judges should be "elected regardless of their nationality among persons of high moral character" who are either qualified for the highest judicial office in their home states or known as lawyers with sufficient competence in international law. 
Judicial independence is dealt with specifically in Articles 16–18. Judges of the ICJ are not able to hold any other post or act as counsel. In practice, members of the court interpret these rules as allowing them to be involved in outside arbitration and to hold professional posts as long as there is no conflict of interest. A judge can be dismissed only by a unanimous vote of the other members of the court. Despite these provisions, the independence of ICJ judges has been questioned. For example, during the Nicaragua case, the United States issued a communiqué suggesting that it could not present sensitive material to the court because of the presence of judges from the Soviet bloc. Judges may deliver joint judgments or give their own separate opinions. Decisions and advisory opinions are by majority, and, in the event of an equal division, the President's vote becomes decisive, which occurred in the Legality of the Use by a State of Nuclear Weapons in Armed Conflict (Opinion requested by WHO), [1996] ICJ Reports 66. Judges may also deliver separate dissenting opinions. Ad hoc judges Article 31 of the statute sets out a procedure whereby ad hoc judges sit on contentious cases before the court. The system allows any party to a contentious case (if it otherwise does not have one of that party's nationals sitting on the court) to select one additional person to sit as a judge on that case only. It is thus possible that as many as seventeen judges may sit on one case. The system may seem strange when compared with domestic court processes, but its purpose is to encourage states to submit cases. For example, if a state knows that it will have a judicial officer who can participate in deliberation and offer other judges local knowledge and an understanding of the state's perspective, it may be more willing to submit to the jurisdiction of the court. 
Although this system does not sit well with the judicial nature of the body, it is usually of little practical consequence. Ad hoc judges usually (but not always) vote in favour of the state that appointed them and thus cancel each other out. Chambers Generally, the court sits as a full bench, but in the last fifteen years, it has on occasion sat as a chamber. Articles 26–29 of the statute allow the court to form smaller chambers, usually of three or five judges, to hear cases. Two types of chambers are contemplated by Article 26: first, chambers for special categories of cases, and second, the formation of ad hoc chambers to hear particular disputes. In 1993, a special chamber was established, under Article 26(1) of the ICJ statute, to deal specifically with environmental matters (although it has never been used). Ad hoc chambers are more frequently convened. For example, chambers were used to hear the Gulf of Maine Case (Canada/US). In that case, the parties made clear they would withdraw the case unless the court appointed judges to the chamber acceptable to the parties. Judgments of chambers may carry less authority than those of the full Court, or may diminish the proper interpretation of universal international law informed by a variety of cultural and legal perspectives. On the other hand, the use of chambers might encourage greater recourse to the court and thus enhance international dispute resolution. Current composition The composition of the court is as follows: Presidents Jurisdiction As stated in Article 93 of the UN Charter, all UN members are automatically parties to the court's statute. Non-UN members may also become parties to the court's statute under the Article 93(2) procedure, which was used by Switzerland in 1948 and Nauru in 1988, prior to either joining the UN. Once a state is a party to the court's statute, it is entitled to participate in cases before the court. 
However, being a party to the statute does not automatically give the court jurisdiction over disputes involving those parties. The issue of jurisdiction is considered in the three types of ICJ cases: contentious issues, incidental jurisdiction, and advisory opinions. Contentious issues In contentious cases (adversarial proceedings seeking to settle a dispute), the ICJ produces a binding ruling between states that agree to submit to the ruling of the court. Only states may be parties in contentious cases; individuals, corporations, component parts of a federal state, NGOs, UN organs, and self-determination groups are excluded from direct participation, although the court may receive information from public international organizations. However, this does not preclude non-state interests from being the subject of proceedings; for example, a state may bring a case on behalf of one of its nationals or corporations, such as in matters concerning diplomatic protection. Jurisdiction is often a crucial question for the court in contentious cases. The key principle is that the ICJ has jurisdiction only on the basis of consent. Under Article 36, there are four foundations for the Court's jurisdiction: Compromis or "special agreement", in which parties provide explicit consent to the Court's jurisdiction by referring cases to it. While not true compulsory jurisdiction, this is perhaps the most effective jurisdictional basis, because the parties concerned have a desire for the dispute to be resolved by the Court, and are thus more likely to comply with the Court's judgment. Compromissory clauses in a binding treaty. Most modern treaties contain such clauses to provide for dispute resolution by the ICJ. Cases founded on compromissory clauses have not been as effective as cases founded on special agreement, since a state may have no interest in having the matter examined by the Court and may refuse to comply with a judgment. 
For example, during the Iran hostage crisis, Iran refused to participate in a case brought by the US based on a compromissory clause contained in the Vienna Convention on Diplomatic Relations and did not comply with the judgment. Since the 1970s, the use of such clauses has declined; many modern treaties set out their own dispute resolution regime, often based on forms of arbitration. Optional clause declarations accepting the court's jurisdiction. Also known as Article 36(2) jurisdiction, it is sometimes misleadingly labeled "compulsory", though such declarations are voluntary. Many such declarations contain reservations that exclude from jurisdiction certain types of disputes (ratione materiae). The principle of reciprocity may further limit jurisdiction, as Article 36(2) holds that such a declaration may be made "in relation to any other State accepting the same obligation...". As of January 2018, seventy-four states had a declaration in force, up from sixty-six in February 2011; of the permanent Security Council members, only the United Kingdom has a declaration. In the court's early years, most declarations were made by industrialized countries. Since the 1986 Nicaragua case, declarations made by developing countries have increased, reflecting a growing confidence in the Court. However, even those industrialized countries that have invoked optional declarations have sometimes increased exclusions or rescinded them altogether. Notable examples include the United States in the Nicaragua case, and Australia, which modified its declaration in 2002 to exclude disputes on maritime boundaries, most likely to prevent an impending challenge from East Timor, which gained independence two months later. Article 36(5) provides for jurisdiction on the basis of declarations made under the Statute of the Permanent Court of International Justice. Article 37 similarly transfers jurisdiction under any compromissory clause in a treaty that gave jurisdiction to the PCIJ. 
Additionally, the court may have jurisdiction on the basis of tacit consent (forum prorogatum). In the absence of clear jurisdiction under Article 36, jurisdiction is established if the respondent accepts ICJ jurisdiction explicitly or simply pleads on the merits. This arose in the 1949 Corfu Channel Case (U.K. v. Albania), in which the court held that a letter from Albania stating that it submitted to the jurisdiction of the ICJ was sufficient to grant the court jurisdiction. Incidental jurisdiction Until rendering a final judgment, the court has competence to order interim measures for the protection of the rights of a party to a dispute. One or both parties to a dispute may apply to the ICJ for interim measures. In the Frontier Dispute Case, both parties to the dispute, Burkina Faso and Mali, submitted an application to the court to indicate interim measures. The court's incidental jurisdiction derives from Article 41 of its Statute. Like the final judgment, an order for interim measures is binding on the state parties to the dispute. The ICJ has competence to indicate interim measures only if prima facie jurisdiction is satisfied. Advisory opinions An advisory opinion is a function of the court open only to specified United Nations bodies and agencies. The UN Charter grants the General Assembly and the Security Council the power to request the court to issue an advisory opinion on any legal question. Organs of the UN other than the General Assembly and the Security Council may not request an advisory opinion of the ICJ unless the General Assembly authorizes them, and then only regarding matters falling within the scope of their activities. On receiving a request, the court decides which states and organizations might provide useful information and gives them an opportunity to present written or oral statements. 
Advisory opinions were intended as a means by which UN agencies could seek the court's help in deciding complex legal issues that might fall under their respective mandates. In principle, the court's advisory opinions are only consultative in character but they are influential and widely respected. Certain instruments or regulations can provide in advance that the advisory opinion shall be specifically binding on particular agencies or states, but inherently, they are non-binding under the Statute of the Court. This non-binding character does not mean that advisory opinions are without legal effect, because the legal reasoning embodied in them reflects the court's authoritative views on important issues of international law. In arriving at them, the court follows essentially the same rules and procedures that govern its binding judgments delivered in contentious cases submitted to it by sovereign states. An advisory opinion derives its status and authority from the fact that it is the official pronouncement of the principal judicial organ of the United Nations. Advisory opinions have often been controversial because the questions asked are controversial or the case was pursued as an indirect way of bringing what is really a contentious case before the court. Examples of advisory opinions can be found in the section advisory opinions in the List of International Court of Justice cases article. One such well-known advisory opinion is the Nuclear Weapons Case. Examples of contentious cases A complaint by the United States in 1980 that Iran was detaining American diplomats in Tehran in violation of international law. A dispute between Tunisia and Libya over the delimitation of the continental shelf between them. A complaint by Iran after the shooting down of Iran Air Flight 655 by a United States Navy guided missile cruiser. A dispute over the course of the maritime boundary dividing the U.S. and Canada in the Gulf of Maine area. 
A complaint by the Federal Republic of Yugoslavia against the member states of the North Atlantic Treaty Organization regarding their actions in the Kosovo War. This was denied on 15 December 2004 because of lack of jurisdiction, the FRY not being a party to the ICJ statute at the time it made the application. A complaint by the Republic of North Macedonia (former Yugoslav Republic of Macedonia) that Greece's vetoing of its accession to NATO violated the Interim Accord of 13 September 1995 between the two countries. The complaint was decided in favour of North Macedonia on 5 December 2011. A complaint by the Democratic Republic of the Congo that its sovereignty had been violated by Uganda and that the DRC had lost billions of dollars worth of resources was decided in favour of the DRC. A complaint by the Republic of India regarding a death penalty verdict against an Indian citizen, Kulbhushan Jadhav, by a Pakistani military court (based on alleged espionage and subversive activities). Relationship with UN Security Council Article 94 establishes the duty of all UN members to comply with decisions of the court involving them. If parties do not comply, the issue may be taken before the Security Council for enforcement action. There are obvious problems with such a method of enforcement. If the judgment is against one of the permanent five members of the Security Council or its allies, any resolution on enforcement would then be vetoed. That occurred, for example, after the Nicaragua case, when Nicaragua brought the issue of the United States' noncompliance with
ISBN-10 check digit (which is the last digit of the 10-digit ISBN) must range from 0 to 10 (the symbol 'X' is used for 10), and must be such that the sum of the ten digits, each multiplied by its (integer) weight, descending from 10 to 1, is a multiple of 11. That is, if x1, x2, ..., x10 are the ten digits, then x10 must be chosen such that 10×x1 + 9×x2 + 8×x3 + 7×x4 + 6×x5 + 5×x6 + 4×x7 + 3×x8 + 2×x9 + 1×x10 is a multiple of 11. For example, for an ISBN-10 of 0-306-40615-2: 10×0 + 9×3 + 8×0 + 7×6 + 6×4 + 5×0 + 4×6 + 3×1 + 2×5 + 1×2 = 132 = 12×11. Formally, using modular arithmetic, this is rendered: (10x1 + 9x2 + 8x3 + 7x4 + 6x5 + 5x6 + 4x7 + 3x8 + 2x9 + x10) mod 11 = 0. It is also true for ISBN-10s that the sum of all ten digits, each multiplied by its weight in ascending order from 1 to 10, is a multiple of 11. For this example: 1×0 + 2×3 + 3×0 + 4×6 + 5×4 + 6×0 + 7×6 + 8×1 + 9×5 + 10×2 = 165 = 15×11. Formally, this is rendered: (x1 + 2x2 + 3x3 + 4x4 + 5x5 + 6x6 + 7x7 + 8x8 + 9x9 + 10x10) mod 11 = 0. The two most common errors in handling an ISBN (e.g. when typing it or writing it down) are a single altered digit or the transposition of adjacent digits. It can be proven mathematically that all pairs of valid ISBN-10s differ in at least two digits. It can also be proven that there are no pairs of valid ISBN-10s with eight identical digits and two transposed digits. (These proofs are true because the ISBN is less than eleven digits long and because 11 is a prime number.) The ISBN check digit method therefore ensures that it will always be possible to detect these two most common types of error, i.e., if either of these types of error has occurred, the result will never be a valid ISBN – the sum of the digits multiplied by their weights will never be a multiple of 11. However, if the error were to occur in the publishing house and remain undetected, the book would be issued with an invalid ISBN. In contrast, it is possible for other types of error, such as two altered non-transposed digits, or three altered digits, to result in a valid ISBN (although it is still unlikely). ISBN-10 check digit calculation Each of the first nine digits of the 10-digit ISBN—excluding the check digit itself—is multiplied by its (integer) weight, descending from 10 to 2, and the sum of these nine products found. 
The value of the check digit is simply the one number between 0 and 10 which, when added to this sum, means the total is a multiple of 11. For example, the check digit for an ISBN-10 of 0-306-40615-? is calculated as follows: 10×0 + 9×3 + 8×0 + 7×6 + 6×4 + 5×0 + 4×6 + 3×1 + 2×5 = 130. Adding 2 to 130 gives a multiple of 11 (because 132 = 12×11) – this is the only number between 0 and 10 which does so. Therefore, the check digit has to be 2, and the complete sequence is ISBN 0-306-40615-2. If the value required to satisfy this condition is 10, then an 'X' should be used. Alternatively, modular arithmetic is convenient for calculating the check digit using modulus 11. The remainder of this sum when it is divided by 11 (i.e. its value modulo 11) is computed. This remainder plus the check digit must equal either 0 or 11. Therefore, the check digit is (11 minus the remainder of the sum of the products modulo 11) modulo 11. Taking the remainder modulo 11 a second time accounts for the possibility that the first remainder is 0. Without the second modulo operation, the calculation could result in a check digit value of 11 − 0 = 11, which is invalid. (Strictly speaking, the first "modulo 11" is not needed, but it may be considered to simplify the calculation.) For example, the check digit for the ISBN-10 of 0-306-40615-? is calculated as follows: 130 mod 11 = 9, and (11 − 9) mod 11 = 2. Thus the check digit is 2. It is possible to avoid the multiplications in a software implementation by using two accumulators. Repeatedly adding t into s computes the necessary multiples:

// Returns ISBN error syndrome, zero for a valid ISBN, non-zero for an invalid one.
// digits[i] must be between 0 and 10.
int CheckISBN(int const digits[10])
{
    int i, s = 0, t = 0;

    for (i = 0; i < 10; i++) {
        t += digits[i];
        s += t;
    }
    return s % 11;
}

The modular reduction can be done once at the end, as shown above (in which case s could hold a value as large as 496, for the invalid ISBN 99999-999-9-X), or s and t could be reduced by a conditional subtract after each addition. 
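The check-digit computation itself can be sketched in the same style. This is a minimal illustration, not code from the ISBN specification; the function name isbn10_check_digit is ours:

```c
/* Computes the ISBN-10 check digit from the first nine digits, using the
 * formula described above: (11 - (weighted sum mod 11)) mod 11.
 * Returns a value from 0 to 10, where 10 stands for the symbol 'X'. */
int isbn10_check_digit(int const digits[9])
{
    int i, sum = 0;

    for (i = 0; i < 9; i++)
        sum += (10 - i) * digits[i];   /* weights 10 down to 2 */
    return (11 - sum % 11) % 11;
}
```

For 0-306-40615-?, the nine digits sum (weighted) to 130, giving a check digit of 2, in agreement with the worked example above.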
ISBN-13 check digit calculation
Appendix 1 of the International ISBN Agency's official user manual describes how the 13-digit ISBN check digit is calculated. The ISBN-13 check digit, which is the last digit of the ISBN, must range from 0 to 9 and must be such that the sum of all the thirteen digits, each multiplied by its (integer) weight, alternating between 1 and 3, is a multiple of 10. As ISBN-13 is a subset of EAN-13, the algorithm for calculating the check digit is exactly the same for both. Formally, using modular arithmetic, this is rendered:

(x1 + 3x2 + x3 + 3x4 + x5 + 3x6 + x7 + 3x8 + x9 + 3x10 + x11 + 3x12 + x13) ≡ 0 (mod 10).

The calculation of an ISBN-13 check digit begins with the first twelve digits of the 13-digit ISBN (thus excluding the check digit itself). Each digit, from left to right, is alternately multiplied by 1 or 3, then those products are summed modulo 10 to give a value ranging from 0 to 9. Subtracted from 10, that leaves a result from 1 to 10. A zero replaces a ten, so, in all cases, a single check digit results. For example, the ISBN-13 check digit of 978-0-306-40615-? is calculated as follows:

s = 9×1 + 7×3 + 8×1 + 0×3 + 3×1 + 0×3 + 6×1 + 4×3 + 0×1 + 6×3 + 1×1 + 5×3
  = 9 + 21 + 8 + 0 + 3 + 0 + 6 + 12 + 0 + 18 + 1 + 15
  = 93
93 / 10 = 9 remainder 3
10 − 3 = 7

Thus, the check digit is 7, and the complete sequence is ISBN 978-0-306-40615-7. In general, the ISBN-13 check digit is calculated as follows. Let

r = 10 − ((x1 + 3x2 + x3 + 3x4 + ... + x11 + 3x12) mod 10).

Then x13 = r, unless r = 10, in which case x13 = 0.

This check system – similar to the UPC check digit formula – does not catch all errors of adjacent digit transposition. Specifically, if the difference between two adjacent digits is 5, the check digit will not catch their transposition. For instance, the above example allows this situation with the 6 followed by a 1. The correct order contributes 3×6 + 1×1 = 19 to the sum; while, if the digits are transposed (1 followed by a 6), the contribution of those two digits will be 3×1 + 1×6 = 9. However, 19 and 9 are congruent modulo 10, and so produce the same final result: both ISBNs will have a check digit of 7.
The ISBN-10 formula uses the prime modulus 11 which avoids this blind spot, but requires more than the digits 0–9 to express the check digit. Additionally, if the sum of the 2nd, 4th, 6th, 8th, 10th, and 12th digits is tripled then added to the remaining digits (1st, 3rd, 5th, 7th, 9th, 11th, and 13th), the total will always be divisible by 10 (i.e., end in 0).

ISBN-10 to ISBN-13 conversion
An ISBN-10 is converted to ISBN-13 by prepending "978" to the ISBN-10 and recalculating the final checksum digit using the ISBN-13 algorithm. The reverse process can also be performed, but not for numbers commencing with a prefix other than 978, which have no 10-digit equivalent.

Errors in usage
Publishers and libraries have varied policies about the use of the ISBN check digit. Publishers sometimes fail to check the correspondence of a book title and its ISBN before publishing it; that failure causes book identification problems for libraries, booksellers, and readers. For example, one ISBN is shared by two books – Ninja gaiden: a novel based on the best-selling game by Tecmo (1990) and Wacky laws (1997), both published by Scholastic. Most libraries and booksellers display the book record for an invalid ISBN issued by the publisher. The Library of Congress catalogue contains books published with invalid ISBNs, which it usually tags with the phrase "Cancelled ISBN". However, book-ordering systems will not search for a book if an invalid ISBN is entered into its search engine. OCLC often indexes by invalid ISBNs, if the book is indexed in that way by a member library.

eISBN
Only the term "ISBN" should be used; the terms "eISBN" and "e-ISBN" have historically been sources of confusion and should be avoided. If a book exists in one or more digital (e-book) formats, each of those formats must have its own ISBN. In other words, each of the three separate EPUB, Amazon Kindle, and PDF formats of a particular book will have its own specific ISBN.
They should not share the ISBN of the paper version, and there is no generic "eISBN" which encompasses all the e-book formats for a title.

EAN format used in barcodes, and upgrading
Currently the barcodes on
|
Rammohun Roy National Agency for ISBN (Book Promotion and Copyright Division), under the Department of Higher Education, a constituent of the Ministry of Human Resource Development
Iceland – Landsbókasafn (National and University Library of Iceland)
Israel – The Israel Center for Libraries
Italy – EDISER srl, owned by Associazione Italiana Editori (Italian Publishers Association)
Maldives – The National Bureau of Classification (NBC)
Malta – The National Book Council
Morocco – The National Library of Morocco
New Zealand – The National Library of New Zealand
Pakistan – National Library of Pakistan
Philippines – National Library of the Philippines
South Africa – National Library of South Africa
Spain – Spanish ISBN Agency – Agencia del ISBN
Turkey – General Directorate of Libraries and Publications, a branch of the Ministry of Culture
United Kingdom and Republic of Ireland – Nielsen Book Services Ltd, part of Nielsen Holdings N.V.
United States – R. R. Bowker

Registration group element
The ISBN registration group element is a 1- to 5-digit number that is valid within a single prefix element (i.e. one of 978 or 979), and can be separated between hyphens. Registration groups have primarily been allocated within the 978 prefix element. The single-digit registration groups within the 978 prefix element are: 0 or 1 for English-speaking countries; 2 for French-speaking countries; 3 for German-speaking countries; 4 for Japan; 5 for Russian-speaking countries; and 7 for People's Republic of China. An example 5-digit registration group is 99936, for Bhutan. The allocated registration groups are: 0–5, 600–625, 65, 7, 80–94, 950–989, 9917–9989, and 99901–99983. Books published in rare languages typically have longer group elements. Within the 979 prefix element, the registration group 0 is reserved for compatibility with International Standard Music Numbers (ISMNs), but such material is not actually assigned an ISBN.
The registration groups within prefix element 979 that have been assigned are 8 for the United States of America, 10 for France, 11 for the Republic of Korea, and 12 for Italy. The original 9-digit standard book number (SBN) had no registration group identifier, but prefixing a zero to a 9-digit SBN creates a valid 10-digit ISBN.

Registrant element
The national ISBN agency assigns the registrant element and an accompanying series of ISBNs within that registrant element to the publisher; the publisher then allocates one of the ISBNs to each of its books. In most countries, a book publisher is not legally required to assign an ISBN, although most large bookstores only handle publications that have ISBNs assigned to them. A listing of more than 900,000 assigned publisher codes is published, and can be ordered in book form. The website of the ISBN agency does not offer any free method of looking up publisher codes. Partial lists have been compiled (from library catalogs) for the English-language groups: identifier 0 and identifier 1. Publishers receive blocks of ISBNs, with larger blocks allotted to publishers expecting to need them; a small publisher may receive ISBNs of one or more digits for the registration group identifier, several digits for the registrant, and a single digit for the publication element. Once that block of ISBNs is used, the publisher may receive another block of ISBNs, with a different registrant element. Consequently, a publisher may have different allotted registrant elements. There also may be more than one registration group identifier used in a country. This might occur once all the registrant elements from a particular registration group have been allocated to publishers. By using variable block lengths, registration agencies are able to customise the allocations of ISBNs that they make to publishers.
For example, a large publisher may be given a block of ISBNs where fewer digits are allocated for the registrant element and many digits are allocated for the publication element; likewise, countries publishing many titles have few allocated digits for the registration group identifier and many for the registrant and publication elements. Here are some sample ISBN-10 codes, illustrating block length variations.

Pattern for English language ISBNs
English-language registration group elements are 0 and 1 (2 of more than 220 registration group elements). These two registration group elements are divided into registrant elements in a systematic pattern, which allows their length to be determined, as follows:

Check digits
A check digit is a form of redundancy check used for error detection, the decimal equivalent of a binary check bit. It consists of a single digit computed from the other digits in the number. The method for the 10-digit ISBN is an extension of that for SBNs, so the two systems are compatible; an SBN prefixed with a zero (the 10-digit ISBN) will give the same check digit as the SBN without the zero. The check digit is base eleven, and can be an integer between 0 and 9, or an 'X'. The system for 13-digit ISBNs is not compatible with SBNs and will, in general, give a different check digit from the corresponding 10-digit ISBN, so does not provide the same protection against transposition. This is because the 13-digit code was required to be compatible with the EAN format, and hence could not contain an 'X'.

ISBN-10 check digits
According to the 2001 edition of the International ISBN Agency's official user manual, the ISBN-10 check digit (which is the last digit of the 10-digit ISBN) must range from 0 to 10 (the symbol 'X' is used for 10), and must be such that the sum of the ten digits, each multiplied by its (integer) weight, descending from 10 to 1, is a multiple of 11.
|
routing prefixes. This resulted in slower growth of routing tables in routers. The smallest possible individual allocation is a subnet for 2^64 hosts, which is the square of the size of the entire IPv4 Internet. At these levels, actual address utilization ratios will be small on any IPv6 network segment. The new design also provides the opportunity to separate the addressing infrastructure of a network segment, i.e. the local administration of the segment's available space, from the addressing prefix used to route traffic to and from external networks. IPv6 has facilities that automatically change the routing prefix of entire networks, should the global connectivity or the routing policy change, without requiring internal redesign or manual renumbering. The large number of IPv6 addresses allows large blocks to be assigned for specific purposes and, where appropriate, to be aggregated for efficient routing. With a large address space, there is no need to have complex address conservation methods as used in CIDR. All modern desktop and enterprise server operating systems include native support for IPv6, but it is not yet widely deployed in other devices, such as residential networking routers, voice over IP (VoIP) and multimedia equipment, and some networking hardware.

Private addresses
Just as IPv4 reserves addresses for private networks, blocks of addresses are set aside in IPv6. In IPv6, these are referred to as unique local addresses (ULAs). The routing prefix fc00::/7 is reserved for this block, which is divided into two blocks with different implied policies. The addresses include a 40-bit pseudorandom number that minimizes the risk of address collisions if sites merge or packets are misrouted. Early practices used a different block for this purpose (fec0::/10), dubbed site-local addresses. However, the definition of what constituted a site remained unclear and the poorly defined addressing policy created ambiguities for routing.
This address type was abandoned and must not be used in new systems. Addresses starting with fe80::, called link-local addresses, are assigned to interfaces for communication on the attached link. The addresses are automatically generated by the operating system for each network interface. This provides instant and automatic communication between all IPv6 hosts on a link. This feature is used in the lower layers of IPv6 network administration, such as for the Neighbor Discovery Protocol. Private and link-local address prefixes may not be routed on the public Internet.

IP address assignment
IP addresses are assigned to a host either dynamically as they join the network, or persistently by configuration of the host hardware or software. Persistent configuration is also known as using a static IP address. In contrast, when a computer's IP address is assigned each time it restarts, this is known as using a dynamic IP address. Dynamic IP addresses are assigned by the network using the Dynamic Host Configuration Protocol (DHCP). DHCP is the most frequently used technology for assigning addresses. It avoids the administrative burden of assigning specific static addresses to each device on a network. It also allows devices to share the limited address space on a network if only some of them are online at a particular time. Typically, dynamic IP configuration is enabled by default in modern desktop operating systems. The address assigned with DHCP is associated with a lease and usually has an expiration period. If the lease is not renewed by the host before expiry, the address may be assigned to another device. Some DHCP implementations attempt to reassign the same IP address to a host, based on its MAC address, each time it joins the network. A network administrator may configure DHCP by allocating specific IP addresses based on MAC address. DHCP is not the only technology used to assign IP addresses dynamically. Bootstrap Protocol is a similar protocol and predecessor to DHCP.
Dialup and some broadband networks use dynamic address features of the Point-to-Point Protocol. Computers and equipment used for the network infrastructure, such as routers and mail servers, are typically configured with static addressing. In the absence or failure of static or dynamic address configurations, an operating system may assign a link-local address to a host using stateless address autoconfiguration.

Sticky dynamic IP address

Address autoconfiguration
Address block 169.254.0.0/16 is defined for the special use of link-local addressing for IPv4 networks. In IPv6, every interface, whether using static or dynamic addresses, also receives a link-local address automatically in the block fe80::/10. These addresses are only valid on the link, such as a local network segment or point-to-point connection, to which a host is connected. These addresses are not routable and, like private addresses, cannot be the source or destination of packets traversing the Internet. When the link-local IPv4 address block was reserved, no standards existed for mechanisms of address autoconfiguration. Filling the void, Microsoft developed a protocol called Automatic Private IP Addressing (APIPA), whose first public implementation appeared in Windows 98. APIPA has been deployed on millions of machines and became a de facto standard in the industry. In May 2005, the IETF defined a formal standard for it.

Addressing conflicts
An IP address conflict occurs when two devices on the same local physical or wireless network claim to have the same IP address. A second assignment of an address generally stops the IP functionality of one or both of the devices. Many modern operating systems notify the administrator of IP address conflicts. When IP addresses are assigned by multiple people and systems with differing methods, any of them may be at fault. If one of the devices involved in the conflict is the default gateway, which provides access beyond the LAN for all devices on the LAN, all devices may be impaired.
Routing
IP addresses are classified into several classes of operational characteristics: unicast, multicast, anycast and broadcast addressing.

Unicast addressing
The most common concept of an IP address is in unicast addressing, available in both IPv4 and IPv6. It normally refers to a single sender or a single receiver, and can be used for both sending and receiving. Usually, a unicast address is associated with a single device or host, but a device or host may have more than one unicast address. Sending the same data to multiple unicast addresses requires the sender to send all the data many times over, once for each recipient.

Broadcast addressing
Broadcasting is an addressing technique available in IPv4 to address data to all possible destinations on a network in one transmission operation as an all-hosts broadcast. All receivers capture the network packet. The address 255.255.255.255 is used for network broadcast. In addition, a more limited directed broadcast uses the all-ones host address with the network prefix. For example, for a network with the prefix 192.0.2.0/24, the destination address used for directed broadcast to devices on the network is 192.0.2.255. IPv6 does not implement broadcast addressing and replaces it with multicast to the specially defined all-nodes multicast address.

Multicast addressing
A multicast address is associated with a group of interested receivers. In IPv4, addresses 224.0.0.0 through 239.255.255.255 (the former Class D addresses) are designated as multicast addresses. IPv6 uses the address block with the prefix ff00::/8 for multicast. In either case, the sender sends a single datagram from its unicast address to the multicast group address and the intermediary routers take care of making copies and sending them to all interested receivers (those that have joined the corresponding multicast group).

Anycast
|
depending on network practices and software features.

Function
An IP address serves two principal functions: it identifies the host, or more specifically its network interface, and it provides the location of the host in the network, and thus the capability of establishing a path to that host. Its role has been characterized as follows: "A name indicates what we seek. An address indicates where it is. A route indicates how to get there." The header of each IP packet contains the IP address of the sending host and that of the destination host.

IP versions
Two versions of the Internet Protocol are in common use on the Internet today. The original version of the Internet Protocol that was first deployed in 1983 in the ARPANET, the predecessor of the Internet, is Internet Protocol version 4 (IPv4). The rapid exhaustion of IPv4 address space available for assignment to Internet service providers and end-user organizations by the early 1990s prompted the Internet Engineering Task Force (IETF) to explore new technologies to expand the addressing capability on the Internet. The result was a redesign of the Internet Protocol which eventually became known as Internet Protocol Version 6 (IPv6) in 1995. IPv6 technology was in various testing stages until the mid-2000s, when commercial production deployment commenced. Today, these two versions of the Internet Protocol are in simultaneous use. Among other technical changes, each version defines the format of addresses differently. Because of the historical prevalence of IPv4, the generic term IP address typically still refers to the addresses defined by IPv4. The gap in version sequence between IPv4 and IPv6 resulted from the assignment of version 5 to the experimental Internet Stream Protocol in 1979, which however was never referred to as IPv5. Other versions v1 to v9 were defined, but only v4 and v6 ever gained widespread use.
v1 and v2 were names for TCP protocols in 1974 and 1977, as there was no separate IP specification at the time. v3 was defined in 1978, and v3.1 is the first version where TCP is separated from IP. v6 is a synthesis of several suggested versions: v6 Simple Internet Protocol, v7 TP/IX: The Next Internet, v8 PIP (The P Internet Protocol), and v9 TUBA (TCP & UDP with Big Addresses).

Subnetworks
IP networks may be divided into subnetworks in both IPv4 and IPv6. For this purpose, an IP address is recognized as consisting of two parts: the network prefix in the high-order bits and the remaining bits called the rest field, host identifier, or interface identifier (IPv6), used for host numbering within a network. The subnet mask or CIDR notation determines how the IP address is divided into network and host parts. The term subnet mask is only used within IPv4. Both IP versions however use the CIDR concept and notation. In this, the IP address is followed by a slash and the number (in decimal) of bits used for the network part, also called the routing prefix. For example, an IPv4 address and its subnet mask may be 192.0.2.1 and 255.255.255.0, respectively. The CIDR notation for the same IP address and subnet is 192.0.2.1/24, because the first 24 bits of the IP address indicate the network and subnet.

IPv4 addresses
An IPv4 address has a size of 32 bits, which limits the address space to 4,294,967,296 (2^32) addresses. Of this number, some addresses are reserved for special purposes such as private networks (~18 million addresses) and multicast addressing (~270 million addresses). IPv4 addresses are usually represented in dot-decimal notation, consisting of four decimal numbers, each ranging from 0 to 255, separated by dots, e.g., 192.0.2.1. Each part represents a group of 8 bits (an octet) of the address. In some cases of technical writing, IPv4 addresses may be presented in various hexadecimal, octal, or binary representations.
Subnetting history
In the early stages of development of the Internet Protocol, the network number was always the highest order octet (most significant eight bits). Because this method allowed for only 256 networks, it soon proved inadequate as additional networks developed that were independent of the existing networks already designated by a network number. In 1981, the addressing specification was revised with the introduction of classful network architecture. Classful network design allowed for a larger number of individual network assignments and fine-grained subnetwork design. The first three bits of the most significant octet of an IP address were defined as the class of the address. Three classes (A, B, and C) were defined for universal unicast addressing. Depending on the class derived, the network identification was based on octet boundary segments of the entire address. Each class used successively more octets in the network identifier, thus reducing the possible number of hosts in the higher-order classes (B and C). The following table gives an overview of this now-obsolete system. Classful network design served its purpose in the startup stage of the Internet, but it lacked scalability in the face of the rapid expansion of networking in the 1990s. The class system of the address space was replaced with Classless Inter-Domain Routing (CIDR) in 1993. CIDR is based on variable-length subnet masking (VLSM) to allow allocation and routing based on arbitrary-length prefixes. Today, remnants of classful network concepts function only in a limited scope as the default configuration parameters of some network software and hardware components (e.g. netmask), and in the technical jargon used in network administrators' discussions.

Private addresses
Early network design, when global end-to-end connectivity was envisioned for communications with all Internet hosts, intended that IP addresses be globally unique.
However, it was found that this was not always necessary as private networks developed and public address space needed to be conserved. Computers not connected to the Internet, such as factory machines that communicate only with each other via TCP/IP, need not have globally unique IP addresses. Today, such private networks are widely used and typically connect to the Internet with network address translation (NAT), when needed. Three non-overlapping ranges of IPv4 addresses for private networks are reserved. These addresses are not routed on the Internet and thus their use need not be coordinated with an IP address registry. Any user may use any of the reserved blocks. Typically, a network administrator will divide a block into subnets; for example, many home routers automatically use a default address range from the 192.168.0.0/16 block.

IPv6 addresses
In IPv6, the address size was increased from 32 bits in IPv4 to 128 bits, thus providing up to 2^128 (approximately 3.4×10^38) addresses. This is deemed sufficient for the foreseeable future. The intent of the new design was not to provide just a sufficient quantity of addresses, but also redesign routing in the Internet by allowing more efficient aggregation of subnetwork routing prefixes.
|
In most logical systems, one proves a statement of the form "P iff Q" by proving either "if P, then Q" and "if Q, then P", or "if P, then Q" and "if not-P, then not-Q". Proving this pair of statements sometimes leads to a more natural proof, since there are no obvious conditions in which one would infer a biconditional directly. An alternative is to prove the disjunction "(P and Q) or (not-P and not-Q)", which itself can be inferred directly from either of its disjuncts—that is, because "iff" is truth-functional, "P iff Q" follows if P and Q have been shown to be both true, or both false.

Origin of iff and pronunciation
Usage of the abbreviation "iff" first appeared in print in John L. Kelley's 1955 book General Topology. Its invention is often credited to Paul Halmos, who wrote "I invented 'iff,' for 'if and only if'—but I could never believe I was really its first inventor." It is somewhat unclear how "iff" was meant to be pronounced. In current practice, the single 'word' "iff" is almost always read as the four words "if and only if". However, in the preface of General Topology, Kelley suggests that it should be read differently: "In some cases where mathematical content requires 'if and only if' and euphony demands something less I use Halmos' 'iff'". The authors of one discrete mathematics textbook suggest: "Should you need to pronounce iff, really hang on to the 'ff' so that people hear the difference from 'if'", implying that "iff" could be pronounced with a lengthened f sound.

Usage in definitions
Technically, definitions are always "if and only if" statements; some texts — such as Kelley's General Topology — follow the strict demands of logic, and use "if and only if" or iff in definitions of new terms. However, this logically correct usage of "if and only if" is relatively uncommon, as the majority of textbooks, research papers and articles (including English Wikipedia articles) follow the special convention to interpret "if" as "if and only if", whenever a mathematical definition is involved (as in "a topological space is compact if every open cover has a finite subcover").

Distinction from "if" and "only if"
"Madison will eat the fruit if it is an apple."
(equivalent to "Only if Madison will eat the fruit, can it be an apple" or "Madison will eat the fruit ← the fruit is an apple") This states that Madison will eat fruits that are apples. It does not, however, exclude the possibility that Madison might also eat bananas or other types of fruit. All that is known for certain is that she will eat any and all apples that she happens upon. That the fruit is an apple is a sufficient condition for Madison to eat the fruit. "Madison will eat the fruit only if it is an apple." (equivalent to "If Madison will eat the fruit, then
|
In most logical systems, one proves a statement of the form "P iff Q" by proving either "if P, then Q" and "if Q, then P", or "if P, then Q" and "if not-P, then not-Q". Proving this pair of statements sometimes leads to a more natural proof, since there are no obvious conditions in which one would infer a biconditional directly. An alternative is to prove the disjunction "(P and Q) or (not-P and not-Q)", which itself can be inferred directly from either of its disjuncts—that is, because "iff" is truth-functional, "P iff Q" follows if P and Q have been shown to be both true, or both false.

Origin of iff and pronunciation

Usage of the abbreviation "iff" first appeared in print in John L. Kelley's 1955 book General Topology. Its invention is often credited to Paul Halmos, who wrote "I invented 'iff,' for 'if and only if'—but I could never believe I was really its first inventor."

It is somewhat unclear how "iff" was meant to be pronounced. In current practice, the single 'word' "iff" is almost always read as the four words "if and only if". However, in the preface of General Topology, Kelley suggests that it should be read differently: "In some cases where mathematical content requires 'if and only if' and euphony demands something less I use Halmos' 'iff'". The authors of one discrete mathematics textbook suggest: "Should you need to pronounce iff, really hang on to the 'ff' so that people hear the difference from 'if'", implying that "iff" could be pronounced as .

Usage in definitions

Technically, definitions are always "if and only if" statements; some texts — such as Kelley's General Topology — follow the strict demands of logic, and use "if and only if" or iff in definitions of new terms.
However, this logically correct usage of "if and only if" is relatively uncommon, as the majority of textbooks, research papers and articles (including English Wikipedia articles) follow the special convention of interpreting "if" as "if and only if" whenever a mathematical definition is involved (as in "a topological space is compact if every open cover has a finite subcover").

Distinction from "if" and "only if"

"Madison will eat the fruit if it is an apple." (equivalent to "Only if Madison will eat the fruit, can it be an apple" or "Madison will eat the fruit ← the fruit is an apple")

This states that Madison will eat fruits that are apples. It does not, however, exclude the possibility that Madison might also eat bananas or other types of fruit. All that is known for certain is that she will eat any and all apples that she happens upon. That the fruit is an apple is a sufficient condition for Madison to eat the fruit.

"Madison will eat the fruit only if it is an apple." (equivalent to "If Madison will eat the fruit, then it is an apple" or "Madison will
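Because "iff" is truth-functional, the equivalence between "P iff Q" and the disjunction "(P and Q) or (not-P and not-Q)" can be verified by simply exhausting the four truth assignments; a minimal sketch in Python (the variable names are illustrative):

```python
from itertools import product

# For booleans, the material biconditional "P iff Q" is just p == q.
# Check that it agrees with the disjunction "(P and Q) or (not-P and not-Q)"
# on every one of the four truth assignments.
for p, q in product((True, False), repeat=2):
    biconditional = (p == q)
    disjunction = (p and q) or (not p and not q)
    assert biconditional == disjunction
print("equivalent on all four truth assignments")
```

This is exactly the proof-by-cases the passage describes: each disjunct ("both true" or "both false") suffices on its own to establish the biconditional.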
|
Sălaj, Romania
Ip (river), a river in Sălaj County, Romania
IP Casino Resort Spa, in Biloxi, Mississippi, US

Science and technology

Biology and medicine
Immunoprecipitation, a molecular biology technique
Incontinentia pigmenti, a genetic disorder
Infundibulopelvic ligament, part of the female pelvis
Interphalangeal joint (disambiguation)
Interventional pulmonology, a less invasive lung treatment than surgery
Intestinal permeability
Intraperitoneal injection (IP injection), the injection of a substance into the peritoneum
Prostacyclin receptor (symbol PTGIR, older synonym IP)

Computing
Internet Protocol, a set of rules for sending data across a network
IP address, a numerical label assigned to each device connected to a computer network
IP (complexity), a class in computational complexity theory
IP core (Intellectual Property core), a reusable design unit owned by one party
Instruction pointer, a processor register
Intelligent Peripheral, a part of a public telecommunications Intelligent Network
Image processing
ip, a Linux command in the iproute2 collection

Other science and technology
IP Code (Ingress Protection code), an equipment protection classification scheme
Identified patient, a psychology term
Identity preserved, an agricultural designation
|
a legal specialist
Industrial policy, a country's effort to encourage the development of certain sectors of the economy
Integrated project (EU), a type of research project
Immunity passport

Other uses
Ip (cuneiform)
Ip (surname)
Inflectional phrase, a functional phrase that has inflectional properties
Innings pitched, a baseball statistic
Integrated Programme, an academic scheme in Singapore
Internationale Politik, a German political journal
Item and Process, a linguistic method to describe phenomena of allomorphy

See also
IP in IP, an IP tunneling protocol
List of IP version numbers
Ip Man (disambiguation)
IP3 (disambiguation)
IP5 (disambiguation)
Independence Party
|
– baked sweet and aromatic pork sausage from Bologna
Pan Pepato – very rich Christmas dried fruit and nut dessert with almonds, candies and a lot of sweet spices
Parmigiano-Reggiano – prized ancient long-aged cheese from Reggio Emilia, Parma, Modena and Bologna
Passatelli – noodles made of breadcrumbs, Parmigiano Reggiano cheese, lemon zest and nutmeg, from Romagna
Pesto di Modena – cured pork back fat pounded with garlic, rosemary and Parmigiano-Reggiano, used to fill borlenghi and baked crescentine
Piadina Fritta – fried Romagna pastry rectangles
Piadina – pancake-shaped flat bread (from Romagna) which can be smaller and thicker or larger and very thin
Pisarei e faśö – small pasta dumplings with beans, from Piacenza
Salame Felino – salami from Parma province
Salamina da Sugo – soft sausage from Ferrara, seasonal
Spalla di San Secondo – gourmet salami from a small town near Parma; it is made with seasoned pork shoulder, stuffed in cow bladders and slowly boiled or steamed
Spongata – very rich Christmas-time thin tart: a soft crust with a flour-sugar dusting, stuffed with finely broken almonds and other nuts, candies and a lot of sweet spices, from Reggio Emilia
Squacquerone – sweet, runny, milky cheese from Romagna
Tagliatelle all'uovo – egg pasta noodles, very popular across Emilia-Romagna; they are made in slightly different thickness, width and length according to local practice (in Bologna the authentic size of Tagliatelle alla Bolognese is officially registered at the local Chamber of Commerce)
Torresani – roasted pigeons popular in Emilia
Torta Barozzi o Torta Nera – Barozzi tart or black tart, a dessert made with a coffee/cocoa and almond filling encased in a fine pastry dough (from Modena)
Tortelli alla Lastra – griddle-baked pasta rectangles filled with potato and pumpkin puree and sausage or bacon bits
Tortelli – usually square, made in all Emilia-Romagna, filled with swiss chard or spinach, ricotta and Parmigiano Reggiano in Romagna, or ricotta, parsley and Parmigiano Reggiano in Bologna (where they are called Tortelloni) and Emilia, or with potatoes and pancetta in the Apennine mountains
Tortellini – small egg pasta navel shapes filled with lean pork, eggs, Parmigiano-Reggiano, Mortadella, Parma ham and nutmeg (from Bologna and Modena: according to a legend, they were invented in Castelfranco Emilia by a peeping innkeeper after the navel of a beautiful guest)
Zampone – stuffed pig's trotter; its stuffing is fat, but leaner than cotechino's; to be boiled (from Modena)

Tuscany
Bistecca alla fiorentina – grilled Florentine T-bone steak, traditionally from the Chianina cattle breed
Crema paradiso – Tuscan cream
Fegatelli di maiale – pig's-liver forcemeat stuffed into pig's stomach and baked in a slow oven with stock and red wine
Ossibuchi alla toscana – osso buco, sliced braised veal shank, "Tuscan-style"
Pinzimonio – fresh seasonal raw or slightly blanched vegetables served with seasoned olive oil for dipping
Ribollita – twice-cooked vegetable soup
Lampredotto – cooked abomasum

Tuscan bread specialties
Carsenta lunigianese – baked on a bed of chestnut leaves and served on Good Friday
Ciaccia – from the Maremma, made from maize
Donzelle – round loaf fried in olive oil
Fiandolone – made with sweet chestnut flour and strewn with rosemary leaves
Filone – classic Tuscan unsalted bread
Pan di granturco – made from maize flour
Pan di ramerino – a rosemary bread seasoned with sugar and salt. The bread was originally served during Holy Week decorated with a cross on top and sold at the church by semellai; it is, however, offered year round now.
Pan maoko – equal parts wheat and maize flour, with pine nuts and raisins added
Pane classico integrale – unsalted bread made with semolina, with a crisp crust
Pane con i grassetti – a bread from the Garfagnana area, with pork cracklings mixed in
Pane con l'uva – in other areas this bread often takes the form of small loaves or rolls, but in Tuscany it is a rolled-out dough with red grapes incorporated into it and sprinkled with sugar. It is a bread often served in the autumn in place of dessert, and often served with figs
Panigaccio – Lunigiana specialty made with flour, water and salt, baked over red-hot coals and served with cheese and olive oil
Panina gialla aretina – an Easter bread with a high fat content, containing raisins, saffron, and spices. It is consecrated in a church before being served with eggs
Panini di Sant'Antonio – sweet rolls eaten on the feast day of St. Anthony
Schiacciata – dough rolled out onto a baking sheet; it can have pork cracklings, herbs, potatoes and/or tomatoes added to the top, along with salt and olive oil
Schiacciatina – made with a fine-flour, salt dough with yeast and olive oil
Panino co' i' lampredotto – lampredotto sandwich

Umbria
Lenticchie di Castelluccio con salsicce – lentil stew with sausages
Minestra di farro – spelt soup
Piccioni allo spiedo – spit-roasted pigeon
Regina in porchetta – carp in fennel sauce

Specialties of the Norcineria (Umbrian butcher)
Barbozzo – cured, matured pig's cheek
Budellacci – smoked, spiced pig intestines eaten raw, spit-roasted, or broiled
Capocollo – sausage highly seasoned with garlic and pepper
Coppa – sausage made from the pig's head
Mazzafegati – sweet or hot pig's-liver sausage, the sweet version containing raisins, orange peel and sugar
Prosciutto di Norcia – a pressed, cured ham made from the legs of pigs fed on a strict diet of acorns

Marche
Brodetto di San Benedetto del Tronto – fish stew, San Benedetto del Tronto-style, with green tomatoes and sweet green pepper
Brodetto di Porto Recanati – fish stew, without tomato, spiced with wild saffron
Olive all'ascolana – fried stoned olives stuffed with pork, beef, chicken, eggs and Parmesan cheese, in Ascoli Piceno
Passatelli all'urbinate – spinach and meat dumplings

Unique ham and sausage specialties
Coppa – coppa in this region refers to a boiling sausage made from pig's head, bacon, orange peel, nutmeg and sometimes pine nuts or almonds. It is meant to be eaten within a month of preparation
Ciauscolo – made from the belly and shoulder of pig with half its weight in pork fat, and seasoned with salt, pepper, orange peel and fennel. It is stuffed into an intestine casing, dried in a smoking chamber and cured for three weeks.
Fegatino – a liver sausage with pork belly and shoulder, where the liver replaces the fat of other sausages
Mazzafegato di Fabriano – mortadella made from fat and lean pork with liver and lung added to the fine-grained emulsification. It is seasoned with salt and pepper, stuffed into casings and smoked. This sausage is often served at festivals.
Prosciutto del Montefeltro – made from free-range black pigs, this is a smoked prosciutto washed with vinegar and ground black pepper
Salame del Montefeltro – made from the leg and loin meat of the black pig, this sausage is highly seasoned with peppercorns and hung to dry
Salame di Fabriano – similar to salame lardellato except that it is made solely from leg of pork with pepper and salt
Salame lardellato – made with lean pork shoulder, or leg meat, along with diced bacon, salt, pepper, and whole peppercorns. It is cased in hog's intestines, dried for one-and-a-half days and then placed in a warm room for 3–4 days, two days in a cold room and then two months in a ventilated storage room
Soppressata di Fabriano – finely emulsified pork flavored with bacon, salt and pepper; the sausage is smoked and then aged

Lazio
Bucatini all'amatriciana – bucatini with guanciale, tomatoes and pecorino
Carciofi alla giudia – artichokes fried in olive oil, typical of Roman Jewish cooking
Carciofi alla romana – artichokes Roman-style; outer leaves removed, stuffed with mint, garlic and breadcrumbs, and braised
Coda di bue alla vaccinara – oxtail ragout
Saltimbocca alla romana – veal cutlet, Roman-style; topped with raw ham and sage and simmered with white wine and butter
Spaghetti alla carbonara – spaghetti with eggs, guanciale and pecorino

Abruzzo and Molise
Agnello casc' e ove – lamb stuffed with grated Pecorino cheese and eggs
Agnello con le olive –
Arrosticini – skewered pieces of meat
Maccheroni alla chitarra – a pasta cut into narrow strips, served with a sauce of tomatoes, bacon and Pecorino cheese
Maccheroni alla molinara – also la pasta alla mugnaia, a long (single) hand-made pasta served with tomato sauce
Mozzarelline allo zafferano – mini mozzarella cheese coated with a batter flavored with saffron
Parrozzo – a cake-like dessert made from a mixture of flour and crushed almonds, and coated in chocolate
Pizza Dolce – a layered sponge cake (with two or three cream fillings: white custard, chocolate or almond) that is soaked with alchermes or rum
Pizzelle (also known as Ferratelle) – a thin cookie made with a waffle-iron device, often flavored with anise
Spaghetti all'aglio, olio e peperoncino
Scripelle 'Mbusse – Abruzzo crêpes (flour, water and eggs), seasoned with Pecorino cheese, rolled and served in chicken broth
Sugo di castrato – mutton sauce made with onion, rosemary, bacon, white wine, and tomatoes
Timballo alla teramana – a "lasagne" made with scripelle (Abruzzo crêpes) layered with a ragout of beef, pork, onion, carrot and celery, also layered with mushrooms, crumbled hard-boiled egg, peas and besciamella

Campania
Babà – Neapolitan rum-dipped dessert
Braciole di maiale – pork loin with tomato sauce, garlic, capers and pine nuts
Caponata di pesce – fish caponata; bread (baked in the shape of a donut), anchovies, tuna, lemon juice, olive oil and pepper
Casatiello – Neapolitan Easter pie with Parmesan cheese, Pecorino cheese, eggs, salame, bacon, and pepper
Gattò – a Neapolitan potato casserole with ham, Parmesan cheese and Pecorino cheese
Graffe – fried Neapolitan "doughnuts" made with flour, potato, yeast and sugar
Insalata caprese – salad of tomatoes, Mozzarella di Bufala (buffalo mozzarella) and basil
Limoncello – lemon liqueur
Maccheroni alla napoletana – macaroni with Neapolitan sauce; a sauce of braised beef, carrot, celery, onion, garlic, white wine, tomato paste and fresh basil
Melanzane a scapece – scapece eggplant; marinated eggplant with red pepper and olive oil
Melanzane al cioccolato – mid-August dessert; eggplants with chocolate and almonds
Mozzarella di Bufala Campana – particular variety of cheese products made exclusively with buffalo milk
Mozzarella in carrozza – fried mozzarella with slices of toasted bread and olive oil
Mustacciuoli – Neapolitan Christmas dessert; cookies with almonds and coffee, covered with chocolate
Parmigiana – sliced eggplant pan-fried in oil, layered with tomato sauce and cheese, and baked in an oven
Pastiera napoletana – Neapolitan ricotta cake
Pepata di cozze – mussel and clam soup with tomato sauce, served with slices of toasted bread
Pizza napoletana – Neapolitan pizza; the most popular is "Pizza Margherita": pizza topped with tomato sauce, mozzarella cheese, Parmesan cheese, basil and olive oil
Polipo alla Luciana – Luciana octopus; octopus with tomato sauce, chopped tomatoes, olives and garlic
Ragù napoletano – Neapolitan ragù; tomato sauce, onions, olive oil, carrots, celery, veal shank, pork ribs, lard, basil, salt and pepper
Roccocò – Neapolitan Christmas dessert; almond crunch cookies
Sartù di riso – rice sartù; rice with mushrooms, onions, tomato paste, beef, peas, Parmesan cheese, Mozzarella cheese and olive oil
Sfogliatelle – Neapolitan ricotta dessert; seashell-shaped pastry with ricotta cheese
Sfogliatella Santarosa – Neapolitan dessert; slightly larger than a traditional sfogliatella, it is filled with crema pasticciera and garnished with crema di amarene (sour black cherry)
Spaghetti alle vongole – spaghetti with clams in a white sauce with garlic, olive oil and pepper
Struffoli – Neapolitan Christmas dessert; honey balls with lemon juice and colored candy
Torta caprese – chocolate cake with almonds
Zeppole di San Giuseppe – fritters for Saint Joseph's Day; cream-filled with crema pasticciera

Apulia (Puglia)
Burrata – an Italian cow-milk cheese (occasionally buffalo milk) made from mozzarella and cream. The outer casing is solid cheese, while the inside contains stracciatella and cream, giving it an unusual, soft texture. It is typical of Apulia.
Caciocavallo podolico – a variety of cheese products made exclusively with Podolica cow milk
Cacioricotta – a cheese produced throughout Apulia
Calzone (in Lecce) or Panzerotto (in Bari and
|
of potato dumpling dough
Montasio – cheese of the Friuli
Palatschinken – pancake filled with apricot jam or chocolate sauce
Polenta – all over the region
Porcina or Porzina – boiled pork served with mustard and horseradish
Prosciutto di San Daniele DOP – famous ham exported all over the world
Scuete fumade – sweet smoked ricotta
Smoked hams of Sauris, of Cormons and of the Carso plateau
Speck friulano of Sauris

Veneto
Bigoli con l'arna – a type of pasta similar to tagliatelle but bigger, with a duck-liver sauce
Galani or Crostoli – pastries
Lesso e pearà – boiled meats with pepper sauce, most common in the Province of Verona
Pasta e fagioli – a soup of pasta and beans
Polenta e osei – polenta accompanied with roasted wild birds
Radicchio e pancetta – raw or cooked radicchio salad with pancetta
Risi e bisi – rice with young peas
Sarde in saor – fried, marinated sardines

Trentino-Alto Adige/Südtirol
Canederli or Knödel – dumplings made with leftover bread and cold cuts
Carne salada e fasoi – aromatized salt beef with beans
Crauti – sauerkraut
Minestrone di orzetto – barley soup
Speck – a type of salume from the historical-geographical region of Tyrol, generally obtained from pork leg subjected to a process of cold-smoking
Strangolapreti – spinach dumplings
Spatzle – typical Trentino-Alto Adige first course, similar to Strangolapreti in flavour, different in form
Smacafam – kind of salty cake, usually eaten during carnival, particularly at the "carneval de Mori"
Fasoi en bronzon – a kind of soup made with beans and tomato sauce
Zelten – a typical dessert of the Christmas tradition of the Trentino-Alto Adige region, made with dried fruit (pine nuts, walnuts, almonds) and candied fruit
Grostoli – in dialect "Grostoi" (Grøśtœį), typical fried dessert from the Trentino-Alto Adige culture
Strauben – Austro-Hungarian culinary artefact, served in every alpine hut with plenty of "currant jam" (Marmelada de ribes) on top
Strudel – a sweet rolled or stuffed pastry that can be sweet or savory, but in its best-known version is sweet, with apples, pine nuts, raisins and cinnamon

Lombardy
Mostarda di Cremona – a sweet/spicy sauce made with candied fruits, meant to be served alongside boiled beef
Nocciolini di Canzo – small sweet amaretto-style biscuits with hazelnut flour
Panettone – a Milanese Christmas traditional sweet bread made with a yeast and egg dough along with candied citrus peel and raisins
Pizzoccheri – buckwheat tagliatelle dressed with potatoes, greens (often Swiss chard or spinach), butter and Bitto cheese: a speciality of the Valtellina
Risotto alla milanese – a stirred rice dish made with Vialone or Carnaroli rice, flavored with saffron and beef marrow
Torrone – a candy made of honey, sugar, and egg white, with toasted almonds or hazelnuts
Tortelli di zucca – ravioli with a squash filling
Salame di Varzi

Val d'Aosta
Tortino di riso alla valdostana – rice cake with ox tongue
Zuppa di Valpelline – savoy cabbage stew thickened with stale bread

Piedmont (Piemonte)
Bagna càuda – a hot dip based on anchovies, olive oil and garlic (sometimes blanched in milk), to accompany vegetables (either raw or cooked), meat or fried polenta sticks
Bollito misto
Brasato al vino – stew made from wine-marinated beef
Carne cruda all'albese – steak tartare with truffles
Gnocchi di semolino alla romana – semolina dumpling
Lepre in civet – jugged hare
Paniscia di Novara – a dish based on rice with borlotti beans, salame sotto grasso and red wine
Panissa di Vercelli – a dish based on rice with borlotti beans, salame sotto grasso and red wine
Panna cotta – sweetened cream set with gelatin
Pere San Martin al vino rosso – winter pears in red wine
Risotto alla piemontese – risotto cooked with meat broth and seasoned with nutmeg, parmesan and truffle
Vitello tonnato – veal in tuna sauce
Rane fritte – fried frogs
Riso e rane – risotto with frogs
Salame sotto grasso – pork salami aged under a thick layer of lard

Liguria
Agliata – the direct ancestor of pesto, it is a spread made from garlic cloves, egg yolk and olive oil pounded in a mortar until creamy
Baccalà fritto – morsels of salt cod dipped in flour batter and fried
Bagnun (literally "big bath" or "big dip") – a soup made with fresh anchovies, onion, olive oil and tomato sauce in which crusty bread is then dipped; originally prepared by fishermen on long fishing expeditions and eaten with hard tack instead of bread
Bianchetti – whitebait of anchovies and sardines, usually boiled and eaten with lemon juice, salt and olive oil as an entrée
Buridda – seafood stew
Cappon magro – a preparation of fish, shellfish and vegetables layered in an aspic
Capra e fagioli – a stew made of goat meat and white beans, a typical dish of the hinterland of Imperia
Cima alla genovese – this cold preparation features an outer layer of beef breast made into a pocket and stuffed with a mix of brain, lard, onion, carrot, peas, eggs and breadcrumbs, then sewn and boiled. It is then sliced and eaten as an entrée or a sandwich filler
Cobeletti – sweet corn tarts
Condigiun – a salad made with tomatoes, bell peppers, cucumber, black olives, basil, garlic, anchovies, hard-boiled egg, oregano and tuna
Farinata di zucca – a preparation similar to chickpea farinata, substituting pumpkin for the legume flour as its main ingredient; the end result is slightly sweeter and thicker than the original
Galantina – similar to Testa in cassetta but with added veal
Latte dolce fritto – a thick milk-based cream left to solidify, then cut in rectangular pieces which are breaded and fried
Maccheroni con la trippa – a traditional Savonese soup uniting maccheroni pasta, tripe, onion, carrot, sausage, "cardo" (cardoon), parsley, and white wine in a base of capon broth, with olive oil to help make it satisfying. Tomato may be added, but that is not the traditional way to make it. (Traditional ingredients: brodo di gallina o cappone, carota, cipolla, prezzemolo, foglie di cardo, trippa di vitello, salsiccia di maiale, maccheroni al torchio, vino bianco, burro, olio d'oliva, formaggio grana, sale.)
Mescciüa – a soup of chickpeas, beans and wheat grains, typical of eastern Liguria and likely of Arab origin
Mosciamme – originally a cut of dolphin meat dried and then made tender again by immersion in olive oil; for several decades tuna has replaced dolphin meat
Pandolce – sweet bread made with raisins, pine nuts and candied orange and citron peel
Panera genovese – a kind of semifreddo rich in cream and eggs, flavoured with coffee; similar to a cappuccino in ice-cream form
Panissa and Farinata – chickpea-based polentas and pancakes, respectively
Pansoti – triangle-shaped stuffed pasta filled with a mix of borage (or spinach) and ricotta cheese; they can be eaten with butter, tomato sauce or a white sauce made with either walnuts or pine nuts (the latter two being the more traditional Ligurian options)
Pesto – probably Liguria's most famous recipe, widely enjoyed beyond regional borders: a green sauce made from basil leaves, sliced garlic, pine nuts, pecorino or parmigiano cheese (or a mix of both) and olive oil. Traditionally used as a pasta dressing (especially with gnocchi or trenette), it is finding wider uses as a sandwich spread and finger-food filler
Pizza all'Andrea – focaccia-style pizza topped with tomato slices (not sauce), onions and anchovies
Scabeggio – fried fish marinated in wine, garlic, lemon juice and sage, typical of Moneglia
Sgabei – fritters made from bread dough (often incorporating some cornmeal)
Stecchi alla genovese – wooden skewers alternating morsels of leftover chicken meats (crests, testicles, livers...) and mushrooms, dipped in white bechamel sauce, left to dry a bit and then breaded and fried
Testa in cassetta – a salami made from all kinds of leftover meats from pork butchering (especially from the head)
Torta di riso – unlike all other rice cakes this preparation is not sweet, but a savoury pie made with rice, caillé, parmigiano and eggs; it can be wrapped in a thin layer of dough or simply baked until firm
Torta pasqualina – savory flan filled with a mixture of green vegetables, ricotta and parmigiano cheese, milk and marjoram; some eggs are then poured into the already-placed filling, so that their yolks will remain whole when cooked
Trenette col pesto – pasta with pesto (olive oil, garlic, basil, Parmigiano and Pecorino Sardo cheese) sauce

Emilia-Romagna
Aceto Balsamico Tradizionale di Modena and Aceto Balsamico Tradizionale di Reggio Emilia (traditional balsamic vinegar) – very precious, expensive and rare dark, sweet and aromatic vinegar, made in small quantities according to elaborate and time-consuming procedures (it takes at least 12 years to brew the youngest Aceto Balsamico) from local grape must (look for the essential "Tradizionale" denomination on the label to avoid confusing it with the cheaper and completely different "Aceto Balsamico di Modena" vinegar, mass-produced from wine and other ingredients)
Borlengo – from the hills south of Modena
Cannelloni, Crespelle and Rosette – pasta filled with bechamel, cream, ham and others
Cappellacci – large-size filled egg pasta with chestnut puree and sweet Mostarda di Bologna, from Romagna
Cappelletti – small egg pasta "hats" filled with ricotta, parsley, Parmigiano Reggiano and nutmeg, sometimes also chicken breast or pork and lemon zest; from Emilia, in particular Reggio
Cappello del prete – "tricorno" hat-shaped bag of pork rind with stuffing similar to zampone's, to be boiled (from Parma, Reggio Emilia and Modena)
Ciccioli – cold meat made with pig's feet and head, from Modena
Coppa – cured pork neck from Piacenza and Parma
Cotechino – big raw spiced pork sausage to be boiled, stuffing rich in pork rind (from the Emilia provinces)
Crescentine baked on Tigelle – (currently known also as Tigelle, the traditional name of the stone dies between which Crescentine were baked) a small round (approx. 8 cm diameter, 1 cm or less thick) flat bread from the Modena Apennine mountains
Crescentine – flat bread from Bologna and Modena: to be fried in pork fat or baked between hot dies (see Tigelle above)
Culatello – a cured ham made with the most tender part of the pork rump: the best is from the small Zibello area in the Parma lowlands
Erbazzone – spinach and cheese filled pie from Reggio Emilia
Fave stufate – broad beans with mortadella
Garganelli – typical Romagna quill-shaped egg pasta, usually dressed with guanciale (cheek bacon), peas, Parmigiano Reggiano and a hint of cream
Gnocco fritto – fried pastry puffs from Modena (Gnocco Fritto was a very local name: until a few decades ago it was unknown even in neighbouring Emilian provinces, where different denominations were used for similar fried puffs, e.g. Crescentine Fritte in Bologna)
Gramigna con salsiccia – typical Bologna short, small-diameter curly pasta pipes with sausage ragù
Lasagne – green or yellow egg pasta layered with Bolognese Ragù (meat sauce) and bechamel
Mortadella – baked sweet and aromatic pork sausage from Bologna
Spongata – very rich Christmas time thin tart: a soft crust with flour sugar dusting, stuffed with finely broken almonds and other nuts, candies and a lot of sweet spices, from Reggio Emilia Squacquerone – sweet, runny, milky cheese from Romagna Tagliatelle all' uovo – egg pasta noodles, very popular across Emilia-Romagna; they are made in slightly different thickness, width and length according to local practise (in Bologna the authentic size of Tagliatelle alla Bolognese is officially registered at the local Chamber of Commerce) Torresani – roasted pigeons popular in Emilia Torta Barozzi o Torta Nera – barozzi tart or black tart (a dessert made with a coffee/cocoa and almond filling encased in a fine pastry dough (from Modena) Tortelli alla Lastra – griddle baked pasta rectangles filed with potato and pumpkin puree and sausage or bacon bits Tortelli – usually square, made in all Emilia-Romagna, filled with swiss chard or spinach, ricotta and Parmigiano Reggiano in Romagna or ricotta, parsley, Parmigiano Reggiano in Bologna (where they are called Tortelloni) and Emilia, or with potatoes and pancetta in the Apennine mountains Tortellini – small egg pasta navel shapes filled with lean pork, eggs, Parmigiano-Reggiano, Mortadella, Parma Ham and nutmeg (from Bologna and Modena: according to a legend, they were invented in Castelfranco Emilia by a peeping innkeeper after the navel of a beautiful guest) Zampone – stuffed pig's trotter, fat, but leaner than cotechino's, stuffing; to be boiled (from Modena) Tuscany Bistecca alla fiorentina – grilled Florentine T-bone steak traditionally from the Chianina cattle breed. 
Crema paradiso – Tuscan cream Fegatelli di maiale – pig's liver forcemeat stuffed into pig's stomach and baked in a slow oven with stock and red wine Ossibuchi alla toscana – osso buco, sliced braised veal shank, "Tuscan-style" Pinzimonio – fresh seasonal raw or slightly blanched vegetables served with seasoned olive oil for dipping Ribollita – twice-cooked vegetable soup Lampredotto – cooked abomasum Tuscan bread specialties Carsenta lunigianese – baked on a bed of chestnut leaves and served on Good Friday Ciaccia – from the Maremma made from maize Donzelle – round loaf fried in olive oil Fiandolone – made with sweet chestnut flour and strewn with rosemary leaves Filone – classic Tuscan unsalted bread Pan di granturco – made from maize flour Pan di ramerino – a rosemary bread seasoned with sugar and salt. The bread was originally served during Holy Week decorated with a cross on top and sold at the Church by semellai; it is, however, offered year round now. Pan maoko – equal parts wheat and maize flour, with pine nuts and raisins added Pane classico integrale – unsalted bread made with semolina with a crisp crust Pane con i grassetti – a bread from the Garfagnana area, with pork cracklings mixed in Pane con l'uva – in other areas this bread often takes the form of small loaves or rolls, but in Tuscany it is a rolled-out dough with red grapes incorporated into it and sprinkled with sugar. It is bread served often in the autumn in place of dessert and often served with figs Panigaccio – Lunigiana specialty made with flour, water and salt baked over red-hot coals and served with cheese and olive oil Panina gialla aretina – an Easter bread with a high fat content, containing raisins, saffron, and spices. It is consecrated in a church before being served with eggs Panini di Sant' Antonio – sweet rolls eaten on the feast day of St. 
Anthony Schiacciata – dough rolled out onto a baking sheet; pork cracklings, herbs, potatoes and/or tomatoes can be added to the top, along with salt and olive oil Schiacciatina – made with fine flour, salt, yeast and olive oil Panino co' i' lampredotto – lampredotto sandwich Umbria Lenticchie di Castelluccio con salsicce – lentil stew with sausages Minestra di farro – spelt soup Piccioni allo spiedo – spit-roasted pigeon Regina in porchetta – carp in fennel sauce Specialties of the Norcineria (Umbrian Butcher) Barbozzo – cured, matured pig's cheek Budellacci – smoked, spiced pig intestines eaten raw, spit-roasted, or broiled Capocollo – sausage highly seasoned with garlic and pepper Coppa – sausage made from the pig's head Mazzafegati – sweet or hot pig's liver sausage, the sweet version containing raisins, orange peel and sugar Prosciutto di Norcia – a pressed, cured ham made from the legs of pigs fed on a strict diet of acorns Marche Brodetto di San Benedetto del Tronto – fish stew, San Benedetto del Tronto-style, with green tomatoes and sweet green pepper. Brodetto di Porto Recanati – fish stew without tomato, spiced with wild saffron. Olive all'ascolana – fried stoned olives stuffed with pork, beef, chicken, eggs and Parmesan cheese, from Ascoli Piceno. Passatelli all'urbinate – spinach and meat dumplings Unique ham and sausage specialties Coppa – coppa in this region refers to a boiling sausage made from pig's head, bacon, orange peel, nutmeg and sometimes pine nuts or almonds. It is meant to be eaten within a month of preparation Ciauscolo – made from the belly and shoulder of the pig with half its weight in pork fat, seasoned with salt, pepper, orange peel and fennel. It is stuffed into an intestine casing, dried in a smoking chamber and cured for three weeks. 
Fegatino – a liver sausage with pork belly and shoulder, where the liver replaces the fat of other sausages Mazzafegato di Fabriano – mortadella made from fat and lean pork with liver and lung added to the fine-grained emulsification. It is seasoned with salt and pepper, stuffed into casings and smoked. This sausage is often served at festivals. Prosciutto del Montefeltro – made from free-range black pigs, this is a smoked Prosciutto washed with vinegar and ground black pepper Salame del Montefeltro – made from the leg and loin meat of the black pig, this sausage is highly seasoned with peppercorns and hung to dry Salame di Fabriano – similar to salame lardellato except that it is made solely from leg of pork with pepper and salt Salame lardellato – made with lean pork shoulder or leg meat, along with diced bacon, salt, pepper, and whole peppercorns. It is cased in hog's intestines, dried for one-and-a-half days and then placed in a warm room for 3–4 days, two days in a cold room and then two months in a ventilated storage room Soppressata di Fabriano – finely emulsified pork flavored with bacon, salt and pepper; the sausage is smoked and then aged Lazio Bucatini all'amatriciana – bucatini with guanciale, tomatoes and pecorino Carciofi alla giudia – artichokes fried in olive oil, typical of Roman Jewish cooking Carciofi alla Romana – artichokes Roman-style; outer leaves removed, stuffed with mint, garlic and breadcrumbs, and braised Coda di bue alla vaccinara – oxtail ragout Saltimbocca alla Romana – veal cutlet, Roman-style; topped with raw ham and sage and simmered with white wine and butter Spaghetti alla carbonara – spaghetti with eggs, guanciale and pecorino Abruzzo and Molise Agnello casc' e ove – lamb stuffed with grated Pecorino cheese and eggs Agnello con le olive – Arrosticini – skewered pieces of meat Maccheroni alla chitarra – a pasta cut into narrow strips, served with a sauce of tomatoes, bacon and Pecorino cheese Maccheroni alla molinara – also la pasta alla
mugnaia is a long (single-strand) hand-made pasta served with tomato sauce Mozzarelline allo zafferano – mini mozzarella cheeses coated in a batter flavored with saffron Parrozzo – a cake-like dessert made from a mixture of flour and crushed almonds, coated in chocolate Pizza Dolce – a layered sponge cake with two or three cream fillings (white custard, chocolate or almond), soaked with alchermes or rum Pizzelle (also known as Ferratelle) – thin cookies made with a waffle-iron-like device, often flavored with anise Spaghetti all'aglio, olio e peperoncino Scripelle 'Mbusse – Abruzzo crêpes (flour, water and eggs), seasoned with Pecorino cheese, rolled and served in chicken broth Sugo di castrato – mutton sauce made with onion, rosemary, bacon, white wine and tomatoes Timballo teramano – a "lasagne" made with scripelle (Abruzzo crêpes) layered with a ragout of beef, pork, onion, carrot and celery, and also layered with mushrooms, crumbled hard-boiled egg, peas and besciamella Campania Babà – Neapolitan rum-dipped dessert Braciole di maiale – pork loin with tomato sauce, garlic, capers and pine nuts Caponata di pesce – fish caponata; bread (baked in the shape of a donut), anchovies, tuna, lemon juice, olive oil and pepper Casatiello – Neapolitan Easter pie with Parmesan cheese, Pecorino cheese, eggs, salame, bacon,
|
Bedford, he was appointed one of the king's itinerant preachers in Lancashire, and after living for a time in Garstang, he was selected by the Lady Margaret Hoghton as vicar of Preston. He associated himself with Presbyterianism, and was on the celebrated committee for the ejection of "scandalous and ignorant ministers and schoolmasters" during the Commonwealth. So long as Ambrose continued at Preston he was favoured with the warm friendship of the Hoghton family, their ancestral woods and the tower near Blackburn affording him sequestered places for those devout meditations and "experiences" that give such a charm to his diary, portions of which are quoted in his Prima, Media and Ultima (1650, 1659). The immense auditory of his sermon (Redeeming the Time) at the funeral of Lady Hoghton was long a living tradition all over the county. On account of the feeling engendered by the civil war, Ambrose left his great church of Preston in 1654 and became minister of Garstang, whence, however, in 1662 he was ejected along with two thousand ministers who refused
|
in 1662 he was ejected along with two thousand ministers who refused to conform (see Great Ejection). His later years were passed among old friends and in quiet meditation at Preston. He died of apoplexy about 20 January 1664. Character assessment As a religious writer Ambrose has a vividness and freshness of imagination possessed by scarcely any of the Puritan Nonconformists. Many who have no love for Puritan doctrine, nor sympathy with Puritan experience, have appreciated the pathos and beauty of his writings, and his Looking unto Jesus long held its own in popular appreciation with the writings of John Bunyan. Dr Edmund Calamy the Elder (1600–1666) wrote about him: In the opinion of John Eglington Bailey (his biographer in the DNB), his character has been misrepresented by Wood. He was of a peaceful disposition; and though he put his name to the fierce "Harmonious Consent", he was not naturally a partisan. He evaded the political controversies of the time. His gentleness of character and earnest presentation of the gospel attached him to his people. He was much given to secluding
|
managed the historical transition from open whale hunting to highly restricted hunting. It has stopped all but the most highly motivated whale-hunting countries. This success has made its life more difficult, since it has left the hardest part of the problem for last." References External links Text of the Convention at the IWC website Ratifications. Environmental treaties Whaling Treaties concluded in 1946 Treaties entered into force in 1948 Whale conservation 1948 in the environment 1946 in Washington, D.C. Treaties of Antigua and Barbuda Treaties of Argentina Treaties of Austria Treaties of Australia Treaties of Belgium Treaties of Belize Treaties of Benin Treaties of the military dictatorship in Brazil Treaties of Bulgaria Treaties of Cambodia Treaties of Cameroon Treaties of Chile Treaties of the Republic of China (1949–1971) Treaties of the People's Republic of China Treaties of Colombia Treaties of the Republic of the Congo Treaties of Costa Rica Treaties of Ivory Coast Treaties of Croatia Treaties of Cyprus Treaties of the Czech Republic Treaties of Denmark Treaties of Dominica Treaties of the Dominican Republic Treaties of Ecuador Treaties of Eritrea Treaties of Estonia Treaties of Finland Treaties of the French Fourth Republic Treaties of Gabon Treaties of the Gambia Treaties of West Germany Treaties of Ghana Treaties of Grenada Treaties of Guatemala Treaties of Guinea Treaties of Guinea-Bissau Treaties of Hungary Treaties of Iceland Treaties of India Treaties of Ireland Treaties of Israel Treaties of Italy Treaties of Japan Treaties of Kenya Treaties of Kiribati Treaties of South Korea Treaties of Laos Treaties of Lithuania Treaties of Luxembourg Treaties of Mali Treaties of the Marshall Islands Treaties of Mauritania Treaties of Mexico Treaties of Monaco Treaties of Mongolia Treaties of Morocco Treaties of Nauru Treaties of the Netherlands Treaties of New Zealand Treaties of Nicaragua Treaties of Norway Treaties of Oman Treaties of Palau Treaties of 
Panama Treaties of Peru Treaties of Poland Treaties of Portugal Treaties of Romania Treaties of the Soviet Union Treaties of Saint Kitts and Nevis Treaties of Saint Lucia Treaties of Saint Vincent and the Grenadines Treaties of San Marino Treaties of Senegal Treaties of Slovakia Treaties of Slovenia Treaties of the Solomon Islands Treaties of the Union of South Africa Treaties of Spain Treaties of Suriname
|
Japan, New Zealand, and Panama have all temporarily withdrawn from the convention but later ratified it a second time; the Netherlands, Norway, and Sweden have each withdrawn from the convention twice, only to accept it a third time. Japan is the most recent member to depart, in January 2019, so as to resume commercial whaling. Effectiveness There has been consistent disagreement over the scope of the convention. The 1946 Convention does not define a 'whale'. Some members of the IWC claim that it has the legal competence to regulate catches of only the great whales (the baleen whales and the sperm whale). Others believe that all cetaceans, including the smaller dolphins and porpoises, fall within IWC jurisdiction. An analysis by the Carnegie Council determined that while the ICRW has had "ambiguous success" owing to its internal divisions, it has nonetheless "successfully managed the historical transition from open whale hunting to highly restricted hunting. It has stopped all but the most highly motivated whale-hunting countries. This success has made its life more difficult, since it has left the hardest part of the problem for last." References External links Text of the Convention at the IWC website Ratifications. Environmental treaties Whaling Treaties concluded in 1946 Treaties entered into force in 1948 Whale conservation 1948 in the environment 1946 in Washington, D.C. Treaties of Antigua and Barbuda Treaties of Argentina Treaties of Austria Treaties of Australia Treaties of Belgium Treaties of Belize Treaties of Benin Treaties of the military dictatorship in Brazil Treaties of Bulgaria Treaties of Cambodia Treaties of Cameroon Treaties of Chile Treaties of the Republic of China (1949–1971) Treaties of the People's Republic of China Treaties of Colombia Treaties of the Republic of the Congo Treaties
|
(e.g., ISO/IEC 13818-1:2007/FDAmd 4) PRF Amd – (e.g., ISO 12639:2004/PRF Amd 1) Amd – Amendment (e.g., ISO/IEC 13818-1:2007/Amd 1:2007) Other abbreviations are: TR – Technical Report (e.g., ISO/IEC TR 19791:2006) DTR – Draft Technical Report (e.g., ISO/IEC DTR 19791) TS – Technical Specification (e.g., ISO/TS 16949:2009) DTS – Draft Technical Specification (e.g., ISO/DTS 11602-1) PAS – Publicly Available Specification TTA – Technology Trends Assessment (e.g., ISO/TTA 1:1994) IWA – International Workshop Agreement (e.g., IWA 1:2005) Cor – Technical Corrigendum (e.g., ISO/IEC 13818-1:2007/Cor 1:2008) Guide – guidance for technical committees on the preparation of standards International Standards are developed by ISO technical committees (TC) and subcommittees (SC) by a process with six steps: Stage 1: Proposal stage Stage 2: Preparatory stage Stage 3: Committee stage Stage 4: Enquiry stage Stage 5: Approval stage Stage 6: Publication stage The TC/SC may set up working groups (WG) of experts for the preparation of working drafts. Subcommittees may have several working groups, which may have several Sub Groups (SG). It is possible to omit certain stages if there is a document with a certain degree of maturity at the start of a standardization project, for example, a standard developed by another organization. ISO/IEC directives also allow the so-called "Fast-track procedure". In this procedure a document is submitted directly for approval as a draft International Standard (DIS) to the ISO member bodies, or as a final draft International Standard (FDIS) if the document was developed by an international standardizing body recognized by the ISO Council. The first step, a proposal of work (New Proposal), is approved at the relevant subcommittee or technical committee (e.g., SC29 and JTC1 respectively in the case of Moving Picture Experts Group – ISO/IEC JTC1/SC29/WG11). A working group (WG) of experts is set up by the TC/SC for the preparation of a working draft. 
When the scope of a new work is sufficiently clarified, some of the working groups (e.g., MPEG) usually make an open request for proposals, known as a "call for proposals". The first document that is produced, for example, for audio and video coding standards, is called a verification model (VM) (previously also called a "simulation and test model"). When sufficient confidence in the stability of the standard under development is reached, a working draft (WD) is produced. This is in the form of a standard, but is kept internal to the working group for revision. When a working draft is sufficiently solid and the working group is satisfied that it has developed the best technical solution to the problem being addressed, it becomes a committee draft (CD). If it is required, it is then sent to the P-members of the TC/SC (national bodies) for ballot. The committee draft becomes a final committee draft (FCD) if the number of positive votes exceeds the quorum. Successive committee drafts may be considered until consensus is reached on the technical content. When consensus is reached, the text is finalized for submission as a draft International Standard (DIS). The text is then submitted to national bodies for voting and comment within a period of five months. It is approved for submission as a final draft International Standard (FDIS) if a two-thirds majority of the P-members of the TC/SC are in favour and if not more than one-quarter of the total number of votes cast are negative. ISO will then hold a ballot with national bodies in which no technical changes are allowed (a yes/no ballot), within a period of two months. It is approved as an International Standard (IS) if a two-thirds majority of the P-members of the TC/SC are in favour and not more than one-quarter of the total number of votes cast are negative. After approval, only minor editorial changes are introduced into the final text. 
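The two numeric thresholds in the approval rule above (a two-thirds majority of P-members in favour, and not more than one-quarter of total votes cast negative) can be sketched as a small check. This is a hypothetical illustration, not part of any official ISO tooling; the function name and parameters are invented for clarity:

```python
def ballot_approved(p_members_in_favour: int, p_members_total: int,
                    negative_votes: int, total_votes_cast: int) -> bool:
    """Approval rule described above: at least two-thirds of the
    P-members in favour, and no more than one-quarter of the total
    votes cast negative. Integer arithmetic avoids rounding issues."""
    two_thirds_in_favour = 3 * p_members_in_favour >= 2 * p_members_total
    at_most_quarter_negative = 4 * negative_votes <= total_votes_cast
    return two_thirds_in_favour and at_most_quarter_negative

# Example: 20 of 30 P-members in favour, 5 negatives out of 28 votes cast
print(ballot_approved(20, 30, 5, 28))   # True: both thresholds met
print(ballot_approved(19, 30, 5, 28))   # False: under two-thirds in favour
print(ballot_approved(20, 30, 8, 28))   # False: more than a quarter negative
```

Note that both conditions must hold independently, so a ballot can fail on negative votes even with a comfortable majority in favour.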
The final text is sent to the ISO central secretariat, which publishes it as the International Standard. International Workshop Agreements International Workshop Agreements (IWAs) follow a slightly different process outside the usual committee system but overseen by the ISO, allowing "key industry players to negotiate in an open workshop environment" in order to shape the IWA standard. Products named after ISO On occasion, the fact that many ISO-created standards are ubiquitous has led to common use of "ISO" to describe a product that conforms to one of them. Some examples of this are: Disk images end in the file extension "ISO" to signify that they are using the ISO 9660 standard file system as opposed to another file system—hence disc images commonly being referred to as "ISOs". The sensitivity of a photographic film to light (its "film speed") is described by ISO 6, ISO 2240 and ISO 5800. Hence, the speed of the film is often referred to by its ISO number. As it was originally defined in ISO 518, the flash hot shoe found on cameras is often called the "ISO shoe". ISO 11783, which is marketed as ISOBUS. ISO 13216, which is marketed as ISOFIX. Criticism With the exception of a small number of isolated standards, ISO standards are normally not available free of charge but for a purchase fee, which some consider unaffordable for small open-source projects. The ISO/IEC JTC1 fast-track procedures ("Fast-track" as used by OOXML and "PAS" as used by OpenDocument) have garnered criticism in relation to the standardization of Office Open XML (ISO/IEC 29500). Martin Bryan, outgoing convenor of ISO/IEC JTC1/SC34 WG1, is quoted as saying: I would recommend my successor that it is perhaps time to pass WG1’s outstanding standards over to OASIS (Organization for the Advancement of Structured Information
|
They are generally issued with the expectation that the affected standard will be updated or withdrawn at its next scheduled review. ISO guides These are meta-standards covering "matters related to international standardization". They are named using the format "ISO[/IEC] Guide N:yyyy: Title". For example: ISO/IEC Guide 2:2004 Standardization and related activities — General vocabulary ISO/IEC Guide 65:1996 General requirements for bodies operating product certification Document copyright ISO documents have strict copyright restrictions and ISO charges for most copies. The typical cost of a copy of an ISO standard is about or more (and electronic copies typically have a single-user license, so they cannot be shared among groups of people). Some standards by ISO and its official U.S. representative (and, via the U.S. National Committee, the International Electrotechnical Commission) are made freely available. Standardization process A standard published by ISO/IEC is the last stage of a long process that commonly starts with the proposal of new work within a committee. 
Some abbreviations used for marking a standard with its status are: PWI – Preliminary Work Item NP or NWIP – New Proposal / New Work Item Proposal (e.g., ISO/IEC NP 23007) AWI – Approved new Work Item (e.g., ISO/IEC AWI 15444-14) WD – Working Draft (e.g., ISO/IEC WD 27032) CD – Committee Draft (e.g., ISO/IEC CD 23000-5) FCD – Final Committee Draft (e.g., ISO/IEC FCD 23000-12) DIS – Draft International Standard (e.g., ISO/IEC DIS 14297) FDIS – Final Draft International Standard (e.g., ISO/IEC FDIS 27003) PRF – Proof of a new International Standard (e.g., ISO/IEC PRF 18018) IS – International Standard (e.g., ISO/IEC 13818-1:2007) Abbreviations used for amendments are: NP Amd – New Proposal Amendment (e.g., ISO/IEC 15444-2:2004/NP Amd 3) AWI Amd – Approved new Work Item Amendment (e.g., ISO/IEC 14492:2001/AWI Amd 4) WD Amd – Working Draft Amendment (e.g., ISO 11092:1993/WD Amd 1) CD Amd / PDAmd – Committee Draft Amendment / Proposed Draft Amendment (e.g., ISO/IEC 13818-1:2007/CD Amd 6) FPDAmd / DAM (DAmd) – Final Proposed Draft Amendment / Draft Amendment (e.g., ISO/IEC 14496-14:2003/FPDAmd 1) FDAM (FDAmd) – Final Draft Amendment (e.g., ISO/IEC 13818-1:2007/FDAmd 4) PRF Amd – (e.g., ISO 12639:2004/PRF Amd 1) Amd – Amendment (e.g., ISO/IEC 13818-1:2007/Amd 1:2007) Other abbreviations are: TR – Technical Report (e.g., ISO/IEC TR 19791:2006) DTR – Draft Technical Report (e.g., ISO/IEC DTR 19791) TS – Technical Specification (e.g., ISO/TS 16949:2009) DTS – Draft Technical Specification (e.g., ISO/DTS 11602-1) PAS – Publicly Available Specification TTA – Technology Trends Assessment (e.g., ISO/TTA 1:1994) IWA – International Workshop Agreement (e.g., IWA 1:2005) Cor – Technical Corrigendum (e.g., ISO/IEC 13818-1:2007/Cor 1:2008) Guide – guidance for technical committees on the preparation of standards International Standards are developed by ISO technical committees (TC) and subcommittees (SC) by a process with six steps: Stage 1: Proposal stage Stage 2: Preparatory 
stage Stage 3: Committee stage Stage 4: Enquiry stage Stage 5: Approval stage Stage 6: Publication stage The TC/SC may set up working groups (WG) of experts for the preparation of working drafts. Subcommittees may have several working groups, which may have several Sub Groups (SG). It is possible to omit certain stages if there is a document with a certain degree of maturity at the start of a standardization project, for example, a standard developed by another organization. ISO/IEC directives also allow the so-called "Fast-track procedure". In this procedure a document is submitted directly for approval as a draft International Standard (DIS) to the ISO member bodies, or as a final draft International Standard (FDIS) if the document was developed by an international standardizing body recognized by the ISO Council. The first step, a proposal of work (New Proposal), is approved at the relevant subcommittee or technical committee (e.g., SC29 and JTC1 respectively in the case of Moving Picture Experts Group – ISO/IEC JTC1/SC29/WG11). A working group (WG) of experts is set up by the TC/SC for the preparation of a working draft. When the scope of a new work is sufficiently clarified, some of the working groups (e.g., MPEG) usually make an open request for proposals, known as a "call for proposals". The first document that is produced, for example, for audio and video coding standards, is called a verification model (VM) (previously also called a "simulation and test model"). When sufficient confidence in the stability of the standard under development is reached, a working draft (WD) is produced. This is in the form of a standard, but is kept internal to the working group for revision. When a working draft is sufficiently solid and the working group is satisfied that it has developed the best technical solution to the problem being addressed, it becomes a committee draft (CD). If it is required, it is then sent to the P-members of the TC/SC (national bodies) for ballot. 
The committee draft becomes a final committee draft (FCD) if the number of positive votes exceeds the quorum. Successive committee drafts may be considered until consensus is reached on the technical content. When consensus is reached, the text is finalized for submission as a draft International Standard (DIS). The text is then submitted to national bodies for voting and comment within a period of five months. It is approved for submission as a final draft International Standard (FDIS) if a two-thirds majority of the P-members of the TC/SC are in favour and if not more than one-quarter of the total number of votes cast are negative. ISO will then hold a ballot with national bodies in which no technical changes are allowed (a yes/no ballot), within a period of two months. It is approved as an International Standard (IS) if a two-thirds majority of the P-members of the TC/SC are in favour and not more than one-quarter of the total number of votes cast are
|
give it away as enlightened altruists. This was to be based on utilitarian principles and he said: "Every man has a right to that, the exclusive possession of which being awarded to him, a greater sum of benefit or pleasure will result than could have arisen from its being otherwise appropriated". Godwin's political views were diverse and do not perfectly agree with any of the ideologies that claim his influence: writers of the Socialist Standard, organ of the Socialist Party of Great Britain, consider Godwin both an individualist and a communist; Murray Rothbard did not regard Godwin as being in the individualist camp at all, referring to him as the "founder of communist anarchism"; and historian Albert Weisbord considers him an individualist anarchist without reservation. Some writers see a conflict between Godwin's advocacy of "private judgement" and utilitarianism, as he says that ethics requires that individuals give their surplus property to each other, resulting in an egalitarian society, while at the same time insisting that all things be left to individual choice. As noted by Kropotkin, many of Godwin's views changed over time. William Godwin influenced the socialism of Robert Owen and Charles Fourier. After the success of his British venture, Owen himself established a cooperative community in the United States at New Harmony, Indiana, in 1825. One member of this commune was Josiah Warren, considered to be the first individualist anarchist. After New Harmony failed, Warren shifted his ideological loyalties from socialism to anarchism. According to anarchist Peter Sabatini, this "was no great leap, given that Owen's socialism had been predicated on Godwin's anarchism". Pierre-Joseph Proudhon Pierre-Joseph Proudhon was the first philosopher to label himself an "anarchist". Some consider Proudhon to be an individualist anarchist while others regard him to be a social anarchist. Knowles, Rob. 
"Political Economy from below: Communitarian Anarchism as a Neglected Discourse in Histories of Economic Thought". History of Economics Review, No. 31, Winter 2000. Some commentators do not identify Proudhon as an individualist anarchist due to his preference for association in large industries, rather than individual control. Nevertheless, he was influential among some of the American individualists—in the 1840s and 1850s, Charles Anderson Dana and William Batchelder Greene introduced Proudhon's works to the United States. Greene adapted Proudhon's mutualism to American conditions and introduced it to Benjamin Tucker. Proudhon opposed government privilege that protects capitalist, banking and land interests and the accumulation or acquisition of property (and any form of coercion that led to it), which he believed hampers competition and keeps wealth in the hands of the few. Proudhon favoured a right of individuals to retain the product of their labour as their own property, but he believed that any property beyond that which an individual produced and could possess was illegitimate. Thus he saw private property as both essential to liberty and a road to tyranny, the former when it resulted from labour and was required for labour, and the latter when it resulted in exploitation (profit, interest, rent and tax). He generally called the former "possession" and the latter "property". For large-scale industry, he supported workers' associations to replace wage labour and opposed the ownership of land. Proudhon maintained that those who labour should retain the entirety of what they produce and that monopolies on credit and land are the forces that prohibit such. He advocated an economic system that included private property as possession and an exchange market, but without profit, which he called mutualism. 
It is Proudhon's philosophy that was explicitly rejected by Joseph Déjacque in the inception of anarcho-communism, with the latter asserting directly to Proudhon in a letter that "it is not the product of his or her labour that the worker has a right to, but to the satisfaction of his or her needs, whatever may be their nature". An individualist rather than anarcho-communist, Proudhon said that "communism [...] is the very denial of society in its foundation" and famously declared that "property is theft" in reference to his rejection of ownership rights to land being granted to a person who is not using that land. After Déjacque and others split from Proudhon due to the latter's support of individual property and an exchange economy, the relationship between the individualists (who continued in relative alignment with the philosophy of Proudhon) and the anarcho-communists was characterised by various degrees of antagonism and harmony. For example, individualists like Tucker on the one hand translated and reprinted the works of collectivists like Mikhail Bakunin while on the other hand rejected the economic aspects of collectivism and communism as incompatible with anarchist ideals. Mutualism Mutualism is an anarchist school of thought which can be traced to the writings of Pierre-Joseph Proudhon, who envisioned a society where each person might possess a means of production, either individually or collectively, with trade representing equivalent amounts of labor in the free market. Integral to the scheme was the establishment of a mutual-credit bank which would lend to producers at a minimal interest rate only high enough to cover the costs of administration. Mutualism is based on a labor theory of value which holds that when labour or its product is sold, in exchange it ought to receive goods or services embodying "the amount of labor necessary to produce an article of exactly similar and equal utility". 
Some mutualists believe that if the state did not intervene, individuals would receive no more income than that proportional to the amount of labor they exert, as a result of increased competition in the marketplace. Carson, Kevin, 2004, Studies in Mutualist Political Economy, chapter 2 (after Meek & Oppenheimer). Mutualists oppose the idea of individuals receiving an income through loans, investments and rent, as they believe these individuals are not labouring. Some of them argue that if state intervention ceased, these types of incomes would disappear due to increased competition in capital. Carson, Kevin, 2004, Studies in Mutualist Political Economy, chapter 2 (after Ricardo, Dobb & Oppenheimer). Although Proudhon opposed this type of income, he expressed that he "never meant to [...] forbid or suppress, by sovereign decree, ground rent and interest on capital. I believe that all these forms of human activity should remain free and optional for all". Mutualists argue for conditional titles to land, whose private ownership is legitimate only so long as it remains in use or occupation (which Proudhon called "possession"). Proudhon's mutualism supports labor-owned cooperative firms and associations, for "we need not hesitate, for we have no choice [...] it is necessary to form an ASSOCIATION among workers [...] because without that, they would remain related as subordinates and superiors, and there would ensue two [...] castes of masters and wage-workers, which is repugnant to a free and democratic society" and so "it becomes necessary for the workers to form themselves into democratic societies, with equal conditions for all members, on pain of a relapse into feudalism". 
As for capital goods (man-made, non-land means of production), mutualist opinion differs on whether these should be common property and commonly managed public assets, or private property in the form of worker cooperatives; as long as they ensure the worker's right to the full product of their labor, mutualists support markets and property in the product of labor, differentiating between capitalist private property (productive property) and personal property (private property). Hargreaves, David H. (2019). Beyond Schooling: An Anarchist Challenge. London: Routledge. pp. 90–91. "Ironically, Proudhon did not mean literally what he said. His boldness of expression was intended for emphasis, and by 'property' he wished to be understood what he later called 'the sum of its abuses'. He was denouncing the property of the man who uses it to exploit the labour of others without any effort on his own part, property distinguished by interest and rent, by the impositions of the non-producer on the producer. Towards property regarded as 'possession' the right of a man to control his dwelling and the land and tools he needs to live, Proudhon had no hostility; indeed, he regarded it as the cornerstone of liberty, and his main criticism of the communists was that they wished to destroy it." Following Proudhon, mutualists are libertarian socialists who consider themselves to be part of the market socialist tradition and the socialist movement. However, some contemporary mutualists outside the classical anarchist tradition abandoned the labor theory of value and prefer to avoid the term socialist due to its association with state socialism throughout the 20th century. Nonetheless, those contemporary mutualists "still retain some cultural attitudes, for the most part, that set them off from the libertarian right. Most of them view mutualism as an alternative to capitalism, and believe that capitalism as it exists is a statist system with exploitative features". 
Mutualists have distinguished themselves from state socialism and do not advocate state ownership over the means of production. Benjamin Tucker said of Proudhon that "though opposed to socializing the ownership of capital, Proudhon aimed nevertheless to socialize its effects by making its use beneficial to all instead of a means of impoverishing the many to enrich the few [...] by subjecting capital to the natural law of competition, thus bringing the price of its own use down to cost".

Max Stirner

Johann Kaspar Schmidt, better known as Max Stirner (the nom de plume he adopted from a schoolyard nickname he had acquired as a child because of his high brow, in German Stirn), was a German philosopher who ranks as one of the literary fathers of nihilism, existentialism, post-modernism and anarchism, especially of individualist anarchism. Stirner's main work is The Ego and Its Own, also known as The Ego and His Own (Der Einzige und sein Eigentum in German, which translates literally as The Only One [individual] and His Property or The Unique Individual and His Property). This work was first published in 1844 in Leipzig and has since appeared in numerous editions and translations.

Egoism

Max Stirner's philosophy, sometimes called egoism, is a form of individualist anarchism. Stirner was a Hegelian philosopher whose "name appears with familiar regularity in historically oriented surveys of anarchist thought as one of the earliest and best-known exponents of individualist anarchism". In 1844, Stirner's work The Ego and Its Own was published and is considered to be "a founding text in the tradition of individualist anarchism". Stirner does not recommend that the individual try to eliminate the state, but simply that they disregard the state when it conflicts with their autonomous choices and go along with it when doing so is conducive to their interests.
Stirner says that the egoist rejects pursuit of devotion to "a great idea, a good cause, a doctrine, a system, a lofty calling", arguing that the egoist has no political calling, but rather "lives themselves out" without regard to "how well or ill humanity may fare thereby". Stirner held that the only limitation on the rights of the individual is that individual's power to obtain what he desires. Stirner proposes that most commonly accepted social institutions, including the notion of the state, property as a right, natural rights in general and the very notion of "society" as a legal and ideal abstraction, were mere spooks in the mind. Stirner wants to "abolish not only the state but also society as an institution responsible for its members". Stirner advocated self-assertion and foresaw the Union of egoists, non-systematic associations which he proposed as a form of organization in place of the state. A Union is understood as a relation between egoists which is continually renewed by all parties' support through an act of will. Even murder is permissible "if it is right for me", although it is claimed by egoist anarchists that egoism will foster genuine and spontaneous unions between individuals. For Stirner, property simply comes about through might; he argues that "[w]hoever knows how to take, to defend, the thing, to him belongs property". He further says that "[w]hat I have in my power, that is my own. So long as I assert myself as holder, I am the proprietor of the thing" and that "I do not step shyly back from your property, but look upon it always as my property, in which I respect nothing. Pray do the like with what you call my property!" His concept of "egoistic property" implies not only a lack of moral restraint on how one obtains and uses things, but includes other people as well. His embrace of egoism is in stark contrast to Godwin's altruism.
Although Stirner was opposed to communism, for the same reasons he opposed capitalism, humanism, liberalism, property rights and nationalism, seeing them as forms of authority over the individual and as spooks in the mind, he has influenced many anarcho-communists and post-left anarchists. The writers of An Anarchist FAQ report that "many in the anarchist movement in Glasgow, Scotland, took Stirner's 'Union of egoists' literally as the basis for their anarcho-syndicalist organising in the 1940s and beyond". Similarly, the noted anarchist historian Max Nettlau states that "[o]n reading Stirner, I maintain that he cannot be interpreted except in a socialist sense". Stirner does not personally oppose the struggles carried out by certain ideologies such as socialism, humanism or the advocacy of human rights. Rather, he opposes their legal and ideal abstraction, a fact that makes him different from the liberal individualists, including the anarcho-capitalists and right-libertarians, but also from the Übermensch theories of fascism, as he places the individual at the center and not the sacred collective. About socialism, Stirner wrote in a letter to Moses Hess that "I am not at all against socialism, but against consecrated socialism; my selfishness is not opposed to love [...] nor is it an enemy of sacrifice, nor of self-denial [...] and least of all of socialism [...] — in short, it is not an enemy of true interests; it rebels not against love, but against sacred love, not against thought, but against sacred thought, not against socialists, but against sacred socialism". This position on property is quite different from the American natural law form of individualist anarchism, which defends the inviolability of private property that has been earned through labor. However, Benjamin Tucker rejected the natural rights philosophy and adopted Stirner's egoism in 1886, with several others joining with him.
This plunged the American individualists into fierce debate, "with the natural rights proponents accusing the egoists of destroying libertarianism itself". Other egoists include James L. Walker, Sidney Parker, Dora Marsden and John Beverley Robinson. In Russia, individualist anarchism inspired by Stirner, combined with an appreciation for Friedrich Nietzsche, attracted a small following of bohemian artists and intellectuals such as Lev Chernyi as well as a few lone wolves who found self-expression in crime and violence. They rejected organizing, believing that only unorganized individuals were safe from coercion and domination and that this kept them true to the ideals of anarchism. This type of individualist anarchism inspired anarcha-feminist Emma Goldman. Although Stirner's philosophy is individualist, it has influenced some libertarian communists and anarcho-communists. "For Ourselves Council for Generalized Self-Management" discusses Stirner and speaks of a "communist egoism", said to be a "synthesis of individualism and collectivism", and says that "greed in its fullest sense is the only possible basis of communist society". Forms of libertarian communism such as Situationism are influenced by Stirner. Anarcho-communist Emma Goldman was influenced by both Stirner and Peter Kropotkin and blended their philosophies together in her own, as shown in books of hers such as Anarchism and Other Essays.

Early individualist anarchism in the United States

Josiah Warren

Josiah Warren is widely regarded as the first American anarchist, and the four-page weekly paper he edited during 1833, The Peaceful Revolutionist, was the first anarchist periodical published, an enterprise for which he built his own printing press, cast his own type and made his own printing plates. Warren was a follower of Robert Owen and joined Owen's community at New Harmony, Indiana.
Warren coined the phrase "Cost the limit of price", with "cost" here referring not to monetary price paid but to the labor one exerted to produce an item. Therefore, "[h]e proposed a system to pay people with certificates indicating how many hours of work they did. They could exchange the notes at local time stores for goods that took the same amount of time to produce". He put his theories to the test by establishing an experimental "labor for labor store" called the Cincinnati Time Store, where trade was facilitated by notes backed by a promise to perform labor. The store proved successful and operated for three years, after which it was closed so that Warren could pursue establishing colonies based on mutualism. These included Utopia and Modern Times. Warren said that Stephen Pearl Andrews' The Science of Society (published in 1852) was the most lucid and complete exposition of Warren's own theories. Catalan historian Xavier Diez reports that the intentional communal experiments pioneered by Warren were influential on European individualist anarchists of the late 19th and early 20th centuries, such as Émile Armand, and on the intentional communities started by them.

Henry David Thoreau

Henry David Thoreau was an important early influence in individualist anarchist thought in the United States and Europe. Thoreau was an American author, poet, naturalist, tax resister, development critic, surveyor, historian, philosopher and leading transcendentalist. He is best known for his book Walden, a reflection upon simple living in natural surroundings, and his essay Civil Disobedience, an argument for individual resistance to civil government in moral opposition to an unjust state.
His thought is an early influence on green anarchism, with an emphasis on the individual experience of the natural world that influenced later naturist currents. Simple living as a rejection of a materialist lifestyle and self-sufficiency were Thoreau's goals, and the whole project was inspired by transcendentalist philosophy. Many have seen in Thoreau one of the precursors of ecologism and anarcho-primitivism, represented today in John Zerzan. For George Woodcock, this attitude can also be motivated by a certain idea of resistance to progress and rejection of the growing materialism which was the nature of American society in the mid-19th century. The essay "Civil Disobedience" (Resistance to Civil Government) was first published in 1849. It argues that people should not permit governments to overrule or atrophy their consciences and that people have a duty to avoid allowing such acquiescence to enable the government to make them the agents of injustice. Thoreau was motivated in part by his disgust with slavery and the Mexican–American War. The essay later influenced Mohandas Gandhi, Martin Luther King Jr., Martin Buber and Leo Tolstoy through its advocacy of nonviolent resistance. It is also the main precedent for anarcho-pacifism. The American version of individualist anarchism has a strong emphasis on the non-aggression principle and individual sovereignty. Some individualist anarchists such as ThoreauEncyclopaedia of the Social Sciences, edited by Edwin Robert Anderson Seligman, Alvin Saunders Johnson, 1937, p. 12. do not speak of economics, but simply of the right of "disunion" from the state and foresee the gradual elimination of the state through social evolution.

Developments and expansion

Anarcha-feminism, free love, freethought and LGBT issues

An important current within individualist anarchism is free love.
Free love advocates sometimes traced their roots back to Josiah Warren and to experimental communities, and viewed sexual freedom as a clear, direct expression of an individual's self-ownership. Free love particularly stressed women's rights, since most sexual laws, such as those governing marriage and use of birth control, discriminated against women. The most important American free love journal was Lucifer the Lightbearer (1883–1907), edited by Moses Harman and Lois Waisbrooker, but there was also Ezra Heywood and Angela Heywood's The Word (1872–1890, 1892–1893). M. E. Lazarus was another important American individualist anarchist who promoted free love. John William Lloyd, a collaborator of Benjamin Tucker's periodical Liberty, published in 1931 a sex manual that he called The Karezza Method or Magnetation: The Art of Connubial Love. In Europe, the main propagandist of free love within individualist anarchism was Émile Armand. He proposed the concept of la camaraderie amoureuse to speak of free love as the possibility of voluntary sexual encounter between consenting adults. He was also a consistent proponent of polyamory. In France, there was also feminist activity inside individualist anarchism, as promoted by individualist feminists Marie Küge, Anna Mahé, Rirette Maîtrejean and Sophia Zaïkovska. The Brazilian individualist anarchist Maria Lacerda de Moura lectured on topics such as education, women's rights, free love and antimilitarism. Her writings and essays garnered her attention not only in Brazil, but also in Argentina and Uruguay. She also wrote for the Spanish individualist anarchist magazine Al Margen alongside Miguel Giménez Igualada. In Germany, the Stirnerists Adolf Brand and John Henry Mackay were pioneering campaigners for the acceptance of male bisexuality and homosexuality.
Freethought as a philosophical position and as activism was important in both North American and European individualist anarchism. In the United States, freethought was basically an anti-Christian, anti-clerical movement whose purpose was to make the individual politically and spiritually free to decide for himself on religious matters. A number of contributors to Liberty were prominent figures in both freethought and anarchism. The individualist anarchist George MacDonald was a co-editor of Freethought and, for a time, The Truth Seeker. E. C. Walker was co-editor of Lucifer, the Light-Bearer. Many of the anarchists were ardent freethinkers; reprints from freethought papers such as Lucifer, the Light-Bearer, Freethought and The Truth Seeker appeared in Liberty. The church was viewed as a common ally of the state and as a repressive force in and of itself. In Europe, a similar development occurred in French and Spanish individualist anarchist circles: "Anticlericalism, just as in the rest of the libertarian movement, is another of the frequent elements which will gain relevance related to the measure in which the (French) Republic begins to have conflicts with the church [...] Anti-clerical discourse, frequently called for by the French individualist André Lorulot, will have its impacts in Estudios (a Spanish individualist anarchist publication). There will be an attack on institutionalized religion for the responsibility that it had in the past on negative developments, for its irrationality which makes it a counterpoint of philosophical and scientific progress. There will be a criticism of proselytism and ideological manipulation which happens on both believers and agnostics". These tendencies continued in French individualist anarchism in the work and activism of Charles-Auguste Bontemps and others.
In the Spanish individualist anarchist magazines Ética and Iniciales, "there is a strong interest in publishing scientific news, usually linked to a certain atheist and anti-theist obsession, philosophy which will also work for pointing out the incompatibility between science and religion, faith and reason. In this way there will be a lot of talk on Darwin's theories or on the negation of the existence of the soul".

Anarcho-naturism

Another important current, especially within French and Spanish"Anarchism and the different Naturist views have always been related." "Anarchism – Nudism, Naturism" by Carlos Ortega at Asociacion para el Desarrollo Naturista de la Comunidad de Madrid. Published on Revista ADN. Winter 2003. individualist anarchist groups, was naturism. Naturism promoted an ecological worldview, small ecovillages and, most prominently, nudism as a way to avoid the artificiality of the industrial mass society of modernity. Naturist individualist anarchists saw the individual in his biological, physical and psychological aspects and avoided and tried to eliminate social determinations. An early influence in this vein was Henry David Thoreau and his famous book Walden. Important promoters were Henri Zisly and Émile Gravelle, who collaborated in La Nouvelle Humanité, followed by Le Naturien, Le Sauvage, L'Ordre Naturel and La Vie Naturelle."Henri Zisly, self-labeled individualist anarchist, is considered one of the forerunners and principal organizers of the naturist movement in France and one of its most able and outspoken defenders worldwide." "Zisly, Henri (1872–1945)" by Stefano Boni. This relationship between anarchism and naturism was quite important at the end of the 1920s in Spain, when "[t]he linking role played by the 'Sol y Vida' group was very important. The goal of this group was to take trips and enjoy the open air. The Naturist athenaeum, 'Ecléctico', in Barcelona, was the base from which the activities of the group were launched.
First Ética and then Iniciales, which began in 1929, were the publications of the group, which lasted until the Spanish Civil War. We must be aware that the naturist ideas expressed in them matched the desires that the libertarian youth had of breaking up with the conventions of the bourgeoisie of the time. That is what a young worker explained in a letter to 'Iniciales'. He writes it under the odd pseudonym of 'silvestre del campo' (wild man in the country). "I find great pleasure in being naked in the woods, bathed in light and air, two natural elements we cannot do without. By shunning the humble garment of an exploited person (garments which, in my opinion, are the result of all the laws devised to make our lives bitter), we feel there are no others left but just the natural laws. Clothes mean slavery for some and tyranny for others. Only the naked man who rebels against all norms stands for anarchism, devoid of the prejudices of outfit imposed by our money-oriented society". The relation between anarchism and naturism "gives way to the Naturist Federation, in July 1928, and to the IV Spanish Naturist Congress, in September 1929, both supported by the Libertarian Movement. However, in the short term, the Naturist and Libertarian movements grew apart in their conceptions of everyday life. The Naturist movement felt closer to the Libertarian individualism of some French theoreticians such as Henri Ner (real name of Han Ryner) than to the revolutionary goals proposed by some Anarchist organisations such as the FAI (Federación Anarquista Ibérica)".

Individualist anarchism and Friedrich Nietzsche

The thought of German philosopher Friedrich Nietzsche has been influential in individualist anarchism, specifically in thinkers such as France's Émile Armand, the Italian Renzo Novatore and the Colombian Biófilo Panclasta. Robert C.
Holub, author of Nietzsche: Socialist, Anarchist, Feminist, posits that "translations of Nietzsche's writings in the United States very likely appeared first in Liberty, the anarchist journal edited by Benjamin Tucker".

Individualist anarchism in the United States

Mutualism and utopianism

For American anarchist historian Eunice Minette Schuster, "[i]t is apparent [...] that Proudhonian Anarchism was to be found in the United States at least as early as 1848 and that it was not conscious of its affinity to the Individualist Anarchism of Josiah Warren and Stephen Pearl Andrews [...] William B. Greene presented this Proudhonian Mutualism in its purest and most systematic form". William Batchelder Greene is best known for the works Mutual Banking (1850), which proposed an interest-free banking system, and Transcendentalism, a critique of the New England philosophical school. He saw mutualism as the synthesis of "liberty and order". His "associationism [...] is checked by individualism. [...] 'Mind your own business,' 'Judge not that ye be not judged.' Over matters which are purely personal, as for example, moral conduct, the individual is sovereign, as well as over that which he himself produces. For this reason he demands 'mutuality' in marriage – the equal right of a woman to her own personal freedom and property". Within some individualist anarchist circles, mutualism came to mean non-communist anarchism. Contemporary American anarchist Hakim Bey reports that "Steven Pearl Andrews [...] was not a fourierist, but he lived through the brief craze for phalansteries in America & adopted a lot of fourierist principles & practices, [...] a maker of worlds out of words. He syncretized Abolitionism, Free Love, spiritual universalism, [Josiah] Warren, & [Charles] Fourier into a grand utopian scheme he called the Universal Pantarchy. [...]
He was instrumental in founding several 'intentional communities,' including the 'Brownstone Utopia' on 14th St. in New York, & 'Modern Times' in Brentwood, Long Island. The latter became as famous as the best-known fourierist communes (Brook Farm in Massachusetts & the North American Phalanx in New Jersey) – in fact, Modern Times became downright notorious (for 'Free Love') & finally foundered under a wave of scandalous publicity. Andrews (& Victoria Woodhull) were members of the infamous Section 12 of the 1st International, expelled by Marx for its anarchist, feminist, & spiritualist tendencies".

Boston anarchists

Another form of individualist anarchism was found in the United States as advocated by the so-called Boston anarchists. By default, American individualists had no difficulty accepting the concepts that "one man employ another" or that "he direct him" in his labor, but rather demanded that "all natural opportunities requisite to the production of wealth be accessible to all on equal terms and that monopolies arising from special privileges created by law be abolished". They believed state monopoly capitalism (defined as a state-sponsored monopoly) prevented labor from being fully rewarded. Voltairine de Cleyre summed up the philosophy by saying that the anarchist individualists "are firm in the idea that the system of employer and employed, buying and selling, banking, and all the other essential institutions of Commercialism, centred upon private property, are in themselves good, and are rendered vicious merely by the interference of the State". Even among the 19th-century American individualists, there was not a monolithic doctrine, as they disagreed amongst each other on various issues including intellectual property rights and possession versus property in land.Watner, Carl (1977). Journal of Libertarian Studies, Vol. 1, No. 4, p. 308.
A major schism occurred later in the 19th century when Tucker and some others abandoned their traditional support of natural rights, as espoused by Lysander Spooner, and converted to an "egoism" modeled upon Max Stirner's philosophy. Besides his individualist anarchist activism, Lysander Spooner was also an important anti-slavery activist and became a member of the First International. Some Boston anarchists, including Benjamin Tucker, identified themselves as socialists, a term which in the 19th century was often used in the sense of a commitment to improving conditions of the working class (i.e. "the labor problem"). The Boston anarchists such as Tucker and his followers continue to be considered socialists due to their opposition to usury; as the modern economist Jim Stanford points out, there are many different kinds of competitive markets, such as market socialism, and capitalism is only one type of market economy. By around the start of the 20th century, the heyday of individualist anarchism had passed.

Individualist anarchism and the labor movement

George Woodcock reports that the American individualist anarchists Lysander Spooner and William B. Greene had been members of the socialist First International. Two individualist anarchists who wrote in Benjamin Tucker's Liberty were also important labor organizers of the time. Joseph Labadie was an American labor organizer, individualist anarchist, social activist, printer, publisher, essayist and poet. In 1883, Labadie embraced a non-violent version of individualist anarchism. Without the oppression of the state, Labadie believed, humans would choose to harmonize with "the great natural laws [...] without robbing [their] fellows through interest, profit, rent and taxes". However, he supported community cooperation, as he supported community control of water utilities, streets and railroads.
Although he did not support the militant anarchism of the Haymarket anarchists, he fought for the clemency of the accused because he did not believe they were the perpetrators. In 1888, Labadie organized the Michigan Federation of Labor, became its first president and forged an alliance with Samuel Gompers. A colleague of Labadie's at Liberty, Dyer Lum was another important individualist anarchist labor activist and poet of the era. A leading anarcho-syndicalist and a prominent left-wing intellectual of the 1880s, he is remembered as the lover and mentor of early anarcha-feminist Voltairine de Cleyre. Lum was a prolific writer who wrote a number of key anarchist texts and contributed to publications including Mother Earth, Twentieth Century, The Alarm (the journal of the International Working People's Association) and The Open Court among others. Lum's political philosophy was a fusion of individualist anarchist economics—"a radicalized form of laissez-faire economics" inspired by the Boston anarchists—with radical labor organization similar to that of the Chicago anarchists of the time. Herbert Spencer and Pierre-Joseph Proudhon influenced Lum strongly in his individualist tendency. He developed a "mutualist" theory of unions and as such was active within the Knights of Labor and later promoted anti-political strategies in the American Federation of Labor. Frustration with abolitionism, spiritualism and labor reform caused Lum to embrace anarchism and radicalize workers. Convinced of the necessity of violence to enact social change he volunteered to fight in the American Civil War, hoping thereby to bring about the end of slavery. Kevin Carson has praised Lum's fusion of individualist laissez-faire economics with radical labor activism as "creative" and described him as "more significant than any in the Boston group". 
Egoist anarchism

Some of the American individualist anarchists later in this era, such as Benjamin Tucker, abandoned natural rights positions and converted to Max Stirner's egoist anarchism. Rejecting the idea of moral rights, Tucker said that there were only two rights, "the right of might" and "the right of contract". He also said after converting to egoist individualism that "[i]n times past [...] it was my habit to talk glibly of the right of man to land. It was a bad habit, and I long ago sloughed it off [...] Man's only right to land is his might over it". In adopting Stirnerite egoism in 1886, Tucker rejected natural rights, which had long been considered the foundation of libertarianism in the United States. This rejection galvanized the movement into fierce debates, with the natural rights proponents accusing the egoists of destroying libertarianism itself. So bitter was the conflict that a number of natural rights proponents withdrew from the pages of Liberty in protest, even though they had hitherto been among its frequent contributors. Thereafter, Liberty championed egoism, although its general content did not change significantly. Several periodicals were undoubtedly influenced by Liberty's presentation of egoism. They included I, published by Clarence Lee Swartz and edited by William Walstein Gordak and J. William Lloyd (all associates of Liberty), and The Ego and The Egoist, both of which were edited by Edward H. Fulton. Among the egoist papers that Tucker followed were the German Der Eigene, edited by Adolf Brand, and The Eagle and The Serpent, issued from London. The latter, the most prominent English-language egoist journal, was published from 1898 to 1900 with the subtitle "A Journal of Egoistic Philosophy and Sociology". American anarchists who adhered to egoism include Benjamin Tucker, John Beverley Robinson, Steven T. Byington, Hutchins Hapgood, James L. Walker, Victor Yarros and Edward H. Fulton.
Robinson wrote an essay called "Egoism" in which he states that "[m]odern egoism, as propounded by Stirner and Nietzsche, and expounded by Ibsen, Shaw and others, is all these; but it is more. It is the realization by the individual that they are an individual; that, as far as they are concerned, they are the only individual". Walker published the work The Philosophy of Egoism, in which he argued that egoism "implies a rethinking of the self-other relationship, nothing less than 'a complete revolution in the relations of mankind' that avoids both the 'archist' principle that legitimates domination and the 'moralist' notion that elevates self-renunciation to a virtue. Walker describes himself as an 'egoistic anarchist' who believed in both contract and cooperation as practical principles to guide everyday interactions". For Walker, "what really defines egoism is not mere self-interest, pleasure, or greed; it is the sovereignty of the individual, the full expression of the subjectivity of the individual ego". Italian anti-organizationalist individualist anarchism was brought to the United States by Italian-born individualists such as Giuseppe Ciancabilla and others who advocated violent propaganda by the deed there. Anarchist historian George Woodcock reports the incident in which the important Italian social anarchist Errico Malatesta became involved "in a dispute with the individualist anarchists of Paterson, who insisted that anarchism implied no organization at all, and that every man must act solely on his impulses. At last, in one noisy debate, the individual impulse of a certain Ciancabilla directed him to shoot Malatesta, who was badly wounded but obstinately refused to name his assailant". Enrico Arrigoni (pseudonym Frank Brand) was an Italian American individualist anarchist lathe operator, house painter, bricklayer, dramatist and political activist influenced by the work of Max Stirner.Paul Avrich.
Anarchist Voices: An Oral History of Anarchism in America. He took the pseudonym Brand from a fictional character in one of Henrik Ibsen's plays. In the 1910s, he started becoming involved in anarchist and anti-war activism around Milan. From the 1910s until the 1920s, he participated in anarchist activities and popular uprisings in various countries including Switzerland, Germany, Hungary, Argentina and Cuba. He lived from the 1920s onwards in New York City, where he edited the eclectic individualist anarchist journal Eresia in 1928. He also wrote for other American anarchist publications such as L'Adunata dei refrattari, Cultura Obrera, Controcorrente and Intesa Libertaria. During the Spanish Civil War, he went to fight with the anarchists, but he was imprisoned and was helped on his release by Emma Goldman. Afterwards, Arrigoni became a longtime member of the Libertarian Book Club in New York City. His written works include The Totalitarian Nightmare (1975), The Lunacy of the Superman (1977), Adventures in the Country of the Monoliths (1981) and Freedom: My Dream (1986).

Post-left anarchy and insurrectionary anarchism

Murray Bookchin has identified post-left anarchy as a form of individualist anarchism in Social Anarchism or Lifestyle Anarchism: An Unbridgeable Chasm, where he identifies "a shift among Euro-American anarchists away from social anarchism and toward individualist or lifestyle anarchism. Indeed, lifestyle anarchism today is finding its principal expression in spray-can graffiti, post-modernist nihilism, antirationalism, neoprimitivism, anti-technologism, neo-Situationist 'cultural terrorism', mysticism, and a 'practice' of staging Foucauldian 'personal insurrections'".
Post-left anarchist Bob Black, in his long critique of Bookchin's philosophy called Anarchy After Leftism, said about post-left anarchy that "[i]t is, unlike Bookchinism, 'individualistic' in the sense that if the freedom and happiness of the individual – i.e., each and every really existing person, every Tom, Dick and Murray – is not the measure of the good society, what is?" A strong relationship exists between post-left anarchism and the work of individualist anarchist Max Stirner. Jason McQuinn says that "when I (and other anti-ideological anarchists) criticize ideology, it is always from a specifically critical, anarchist perspective rooted in the skeptical, individualist-anarchist philosophy of Max Stirner". Bob Black and Feral Faun/Wolfi Landstreicher also strongly adhere to Stirnerist egoist anarchism. Bob Black has humorously suggested the idea of "marxist stirnerism". Hakim Bey has said that "[f]rom Stirner's 'Union of Self-Owning Ones' we proceed to Nietzsche's circle of 'Free Spirits' and thence to Charles Fourier's 'Passional Series', doubling and redoubling ourselves even as the Other multiplies itself in the eros of the group". Bey also wrote that "[t]he Mackay Society, of which Mark & I are active members, is devoted to the anarchism of Max Stirner, Benj. Tucker & John Henry Mackay. [...] The Mackay Society, incidentally, represents a little-known current of individualist thought which never cut its ties with revolutionary labor. Dyer Lum, Ezra & Angela Haywood represent this school of thought; Jo Labadie, who wrote for Tucker's Liberty, made himself a link between the American 'plumb-line' anarchists, the 'philosophical' individualists, & the syndicalist or communist branch of the movement; his influence reached the Mackay Society through his son, Laurance. Like the Italian Stirnerites (who influenced us through our late friend Enrico Arrigoni) we support all anti-authoritarian currents, despite their apparent contradictions".
Among later individualist anarchists, Jason McQuinn for some time used the pseudonym Lev Chernyi in honor of the Russian individualist anarchist of the same name, while Feral Faun has quoted Italian individualist anarchist Renzo Novatore and has translated both Novatore and the young Italian individualist anarchist Bruno Filippi. Egoism has had a strong influence on insurrectionary anarchism, as can be seen in the work of Wolfi Landstreicher. Feral Faun wrote in 1995: In the game of insurgence – a lived guerilla war game – it is strategically necessary to use identities and roles. Unfortunately, the context of social relationships gives these roles and identities the power to define the individual who attempts to use them. So I, Feral Faun, became [...] an anarchist, [...] a writer, [...] a Stirner-influenced, post-situationist, anti-civilization theorist, [...] if not in my own eyes, at least in the eyes of most people who've read my writings. Individualist anarchism in Europe European individualist anarchism proceeded from the roots laid by William Godwin, Pierre-Joseph Proudhon and Max Stirner. Proudhon was an early pioneer of anarchism as well as of the important individualist anarchist current of mutualism. Stirner became a central figure of individualist anarchism through the publication of his seminal work The Ego and Its Own which is considered to be "a founding text in the tradition of individualist anarchism". Another early figure was Anselme Bellegarrigue. Individualist anarchism expanded and diversified through Europe, incorporating influences from North American individualist anarchism. European individualist anarchists include Albert Libertad, Bellegarrigue, Oscar Wilde, Émile Armand, Lev Chernyi, John Henry Mackay, Han Ryner, Adolf Brand, Miguel Giménez Igualada, Renzo Novatore and currently Michel Onfray. Important currents within it include free love, anarcho-naturism and illegalism. 
France From the legacy of Proudhon and Stirner there emerged a strong tradition of French individualist anarchism. An early important individualist anarchist was Anselme Bellegarrigue. He participated in the French Revolution of 1848, was author and editor of Anarchie, Journal de l'Ordre and Au fait ! Au fait ! Interprétation de l'idée démocratique and wrote the important early Anarchist Manifesto in 1850. Catalan historian of individualist anarchism Xavier Diez reports that during his travels in the United States "he at least contacted (Henry David) Thoreau and, probably (Josiah) Warren". Autonomie Individuelle was an individualist anarchist publication that ran from 1887 to 1888. It was edited by Jean-Baptiste Louiche, Charles Schæffer and Georges Deherme. Later, this tradition continued with such intellectuals as Albert Libertad, André Lorulot, Émile Armand, Victor Serge, Zo d'Axa and Rirette Maîtrejean, who from 1905 developed theory in the main individualist anarchist journal in France, L'Anarchie. Outside this journal, Han Ryner wrote Petit Manuel individualiste (1903). In 1891, Zo d'Axa created the journal L'En-Dehors. Anarcho-naturism was promoted by Henri Zisly, Émile Gravelle and Georges Butaud. Butaud was an individualist "partisan of the milieux libres, publisher of 'Flambeau' ('an enemy of authority') in 1901 in Vienna", and most of his energies were devoted to creating anarchist colonies (communautés expérimentales), several of which he participated in. In this sense, "the theoretical positions and the vital experiences of [F]rench individualism are deeply iconoclastic and scandalous, even within libertarian circles. The call of nudist naturism, the strong defence of birth control methods, the idea of 'unions of egoists' with the sole justification of sexual practices, which they will try to put in practice, not without difficulties, will establish a way of thought and action, and will result in sympathy within some, and a strong rejection within others". 
French individualist anarchists grouped behind Émile Armand published L'Unique after World War II. L'Unique ran from 1945 to 1956, with a total of 110 issues. Gérard de Lacaze-Duthiers was a French writer, art critic, pacifist and anarchist. Lacaze-Duthiers, an art critic for the Symbolist review journal La Plume, was influenced by Oscar Wilde, Friedrich Nietzsche and Max Stirner. His L'Ideal Humain de l'Art (1906) helped found the "artistocracy movement"—a movement advocating life in the service of art. His ideal was an anti-elitist aestheticism: "All men should be artists". Together with André Colomer and Manuel Devaldes, in 1913 he founded L'Action d'Art, an anarchist literary journal. After World War II, he contributed to the journal L'Unique. Within the synthesist anarchist organization, the Fédération Anarchiste, there existed an individualist anarchist tendency alongside anarcho-communist and anarcho-syndicalist currents. Individualist anarchists participating inside the Fédération Anarchiste included Charles-Auguste Bontemps, Georges Vincey and André Arru. The new base principles of the francophone Anarchist Federation were written by the individualist anarchist Charles-Auguste Bontemps and the anarcho-communist Maurice Joyeux, which established an organization with a plurality of tendencies and autonomy of federated groups organized around synthesist principles. Charles-Auguste Bontemps was a prolific author, mainly in the anarchist, freethinking, pacifist and naturist press of the time. His view on anarchism was based around his concept of "Social Individualism", on which he wrote extensively. He defended an anarchist perspective which consisted of "a collectivism of things and an individualism of persons". In 2002, Libertad organized a new version of L'EnDehors, collaborating with Green Anarchy and including several contributors, such as Lawrence Jarach, Patrick Mignard, Thierry Lodé, Ron Sakolsky and Thomas Slut. 
Numerous articles about capitalism, human rights, free love and social fights were published. The EnDehors continues now as a website, EnDehors.org. The prolific contemporary French philosopher Michel Onfray has been writing from an individualist anarchist perspective influenced by Nietzsche, French post-structuralist thinkers such as Michel Foucault and Gilles Deleuze, and Greek classical schools of philosophy such as the Cynics and Cyrenaics; as Irène Pereira notes, "Onfray's ethics and politics point toward the individualist anarchism of the Belle Époque, which is moreover one of his explicit references". Among the books which best present Onfray's individualist anarchist perspective are La sculpture de soi : la morale esthétique (The Sculpture of Oneself: Aesthetic Morality), La philosophie féroce : exercices anarchistes, La puissance d'exister and Physiologie de Georges Palante, portrait d'un nietzschéen de gauche, which focuses on French individualist philosopher Georges Palante. Illegalism Illegalism is an anarchist philosophy that developed primarily in France, Italy, Belgium and Switzerland during the early 1900s as an outgrowth of Stirner's individualist anarchism. Illegalists usually did not seek a moral basis for their actions, recognizing only the reality of "might" rather than "right"; for the most part, illegal acts were done simply to satisfy personal desires, not for some greater ideal, although some committed crimes as a form of propaganda of the deed. The illegalists embraced direct action and propaganda of the deed. Influenced by theorist Max Stirner's egoism as well as Pierre-Joseph Proudhon (his view that "property is theft!"), Clément Duval and Marius Jacob proposed the theory of la reprise individuelle (individual reclamation), which justified robbery of the rich and personal direct action against exploiters and the system. 
Illegalism first rose to prominence among a generation of Europeans inspired by the unrest of the 1890s, during which Ravachol, Émile Henry, Auguste Vaillant and Sante Geronimo Caserio committed daring crimes in the name of anarchism in what is known as propaganda of the deed. France's Bonnot Gang was the most famous group to embrace illegalism. Germany In Germany, the Scottish-German John Henry Mackay became the most important propagandist for individualist anarchist ideas. He fused Stirnerist egoism with the positions of Benjamin Tucker and actually translated Tucker into German. Two semi-fictional writings of his own, Die Anarchisten and Der Freiheitsucher, contributed to individualist theory through an updating of egoist themes within a consideration of the anarchist movement. English translations of these works arrived in the United Kingdom and in individualist American circles led by Tucker. Mackay is also known as an important European early activist for gay rights. Using the pseudonym Sagitta, Mackay wrote a series of works for pederastic emancipation, titled Die Buecher der namenlosen Liebe (Books of the Nameless Love). This series was conceived in 1905 and completed in 1913 and included Fenny Skaller, the story of a pederast. Under the same pseudonym, he also published fiction, such as Holland (1924) and a pederastic novel of the Berlin boy-bars, Der Puppenjunge (The Hustler) (1926). Adolf Brand was a German writer, Stirnerist anarchist and pioneering campaigner for the acceptance of male bisexuality and homosexuality. In 1896, Brand published a German homosexual periodical, Der Eigene. This was the first ongoing homosexual publication in the world. The name was taken from writings of egoist philosopher Max Stirner (who had greatly influenced the young Brand) and refers to Stirner's concept of "self-ownership" of the individual. 
Der Eigene concentrated on cultural and scholarly material and may have had an average of around 1,500 subscribers per issue during its lifetime, although the exact numbers are uncertain. Contributors included Erich Mühsam, Kurt Hiller, John Henry Mackay (under the pseudonym Sagitta) and artists Wilhelm von Gloeden, Fidus and Sascha Schneider. Brand contributed many poems and articles himself. Benjamin Tucker followed this journal from the United States. Der Einzige was a German individualist anarchist magazine. It appeared in 1919 as a weekly, then sporadically until 1925, and was edited by cousins Anselm Ruest (pseudonym for Ernst Samuel) and Mynona (pseudonym for Salomo Friedlaender). Its title was adopted from the book Der Einzige und sein Eigentum (The Ego and Its Own) by Max Stirner. Another influence was the thought of German philosopher Friedrich Nietzsche. The publication was connected to the local expressionist artistic current and the transition from it towards Dada. Italy In Italy, individualist anarchism had a strong tendency towards illegalism and violent propaganda by the deed similar to French individualist anarchism, but perhaps more extreme: "At this point, encouraged by the disillusionment that followed the breakdown of the general strike, the terrorist individualists who had always – despite Malatesta's influence – survived as a small minority among Italian anarchists, intervened frightfully and tragically" (George Woodcock, Anarchism: A History of Libertarian Ideas and Movements).
Benjamin Tucker, who proclaimed himself to be an anarchistic socialist in opposition to state socialism, included the full text of a "Socialistic Letter" by Ernest Lesigne in his essay "State Socialism and Anarchism". According to Lesigne, there are two socialisms: "One is dictatorial, the other libertarian". Tucker's two socialisms were the state socialism, which he associated with the Marxist school, and the libertarian socialism that he advocated. What those two schools of socialism had in common was the labor theory of value and the ends, with anarchism pursuing different means. According to Rudolf Rocker, individualist anarchists "all agree on the point that man be given the full reward of his labour and recognised in this right the economic basis of all personal liberty. They regard free competition [...] as something inherent in human nature. [...] They answered the socialists of other schools who saw in free competition one of the destructive elements of capitalistic society that the evil lies in the fact that today we have too little rather than too much competition". Individualist anarchist Joseph Labadie wrote that "the two great sub-divisions of Socialists [Anarchists and State Socialists] agree that the resources of nature — land, mines, and so forth — should not be held as private property and subject to being held by the individual for speculative purposes, that use of these things shall be the only valid title, and that each person has an equal right to the use of all these things. They all agree that the present social system is one composed of a class of slaves and a class of masters, and that justice is impossible under such conditions". The egoist form of individualist anarchism, derived from the philosophy of Max Stirner, supports the individual doing exactly what he pleases—taking no notice of God, state, or moral rules. 
To Stirner, rights were spooks in the mind, and he held that society does not exist but "the individuals are its reality"—he supported property by force of might rather than moral right. Stirner advocated self-assertion and foresaw "associations of egoists" drawn together by respect for each other's ruthlessness. For historian Eunice Minette Schuster, American individualist anarchism "stresses the isolation of the individual – his right to his own tools, his mind, his body, and to the products of his labor. To the artist who embraces this philosophy it is "aesthetic" anarchism, to the reformer, ethical anarchism, to the independent mechanic, economic anarchism. The former is concerned with philosophy, the latter with practical demonstration. The economic anarchist is concerned with constructing a society on the basis of anarchism. Economically he sees no harm whatever in the private possession of what the individual produces by his own labor, but only so much and no more. The aesthetic and ethical type found expression in the transcendentalism, humanitarianism, and Romanticism of the first part of the nineteenth century, the economic type in the pioneer life of the West during the same period, but more favorably after the Civil War". For this reason, it has been suggested that in order to understand individualist anarchism one must take into account "the social context of their ideas, namely the transformation of America from a pre-capitalist to a capitalist society [...] the non-capitalist nature of the early U.S. can be seen from the early dominance of self-employment (artisan and peasant production). At the beginning of the 19th century, around 80% of the working (non-slave) male population were self-employed. The great majority of Americans during this time were farmers working their own land, primarily for their own needs" and "[i]ndividualist anarchism is clearly a form of artisanal socialism [...] 
while communist anarchism and anarcho-syndicalism are forms of industrial (or proletarian) socialism". Liberty insisted on "the abolition of the State and the abolition of usury; on no more government of man by man, and no more exploitation of man by man" and anarchism is "the abolition of the State and the abolition of usury". Those anarchists held that there were "two schools of Socialistic thought, [...] State Socialism and Anarchism" and "liberty insists on Socialism [...] — true Socialism, Anarchistic Socialism: the prevalence on earth of Liberty, Equality, and Solidarity". Individualist anarchists followed Proudhon and other anarchists that "exploitation of man by man and the domination of man over man are inseparable, and each is the condition of the other", that "the bottom claim of Socialism" was "that labour should be put in possession of its own", that "the natural wage of labour is its product" in an "effort to abolish the exploitation of labour by capital" and that anarchists "do not admit the government of man by man any more than the exploitation of man by man", advocating "the complete destruction of the domination and exploitation of man by man". Contemporary individualist anarchist Kevin Carson characterizes American individualist anarchism by saying that "[u]nlike the rest of the socialist movement, the individualist anarchists believed that the natural wage of labor in a free market was its product, and that economic exploitation could only take place when capitalists and landlords harnessed the power of the state in their interests. Thus, individualist anarchism was an alternative both to the increasing statism of the mainstream socialist movement, and to a classical liberal movement that was moving toward a mere apologetic for the power of big business". 
In European individualist anarchism, a different social context helped the rise of European individualist illegalism and as such "[t]he illegalists were proletarians who had nothing to sell but their labour power, and nothing to discard but their dignity; if they disdained waged-work, it was because of its compulsive nature. If they turned to illegality it was due to the fact that honest toil only benefited the employers and often entailed a complete loss of dignity, while any complaints resulted in the sack; to avoid starvation through lack of work it was necessary to beg or steal, and to avoid conscription into the army many of them had to go on the run". A European tendency of individualist anarchism advocated violent individual acts of individual reclamation, propaganda by the deed and criticism of organization. Such individualist anarchist tendencies include French illegalism and Italian anti-organizational insurrectionism. Bookchin reports that at the end of the 19th century and the beginning of the 20th "it was in times of severe social repression and deadening social quiescence that individualist anarchists came to the foreground of libertarian activity – and then primarily as terrorists. In France, Spain, and the United States, individualistic anarchists committed acts of terrorism that gave anarchism its reputation as a violently sinister conspiracy". Another important tendency within individualist anarchist currents emphasizes individual subjective exploration and defiance of social conventions. Individualist anarchist philosophy attracted followers "amongst artists, intellectuals and the well-read, urban middle classes in general". Murray Bookchin describes many individualist anarchists as people who "expressed their opposition in uniquely personal forms, especially in fiery tracts, outrageous behavior and aberrant lifestyles in the cultural ghettos of fin de siècle New York, Paris and London. 
As a credo, individualist anarchism remained largely a bohemian lifestyle, most conspicuous in its demands for sexual freedom ('free love') and enamored of innovations in art, behavior, and clothing". In this way, free love currents and other radical lifestyles such as naturism had popularity among individualist anarchists. For Catalan historian Xavier Diez, "under its iconoclastic, anti-intellectual, anti-theist run, which goes against all sacralized ideas or values, it entailed a philosophy of life which could be considered a reaction against the sacred gods of capitalist society. Against the idea of nation, it opposed its internationalism. Against the exaltation of authority embodied in the military institution, it opposed its antimilitarism. Against the concept of industrial civilization, it opposed its naturist vision". In regard to economic questions, there are diverse positions. There are adherents to mutualism (Proudhon, Émile Armand and the early Tucker), egoistic disrespect for "ghosts" such as private property and markets (Stirner, John Henry Mackay, Lev Chernyi and the later Tucker) and adherents to anarcho-communism (Albert Libertad, illegalism and Renzo Novatore). Anarchist historian George Woodcock finds a tendency in individualist anarchism of a "distrust (of) all co-operation beyond the barest minimum for an ascetic life". On the issue of violence, opinions have ranged from the embrace of violence exemplified mainly by illegalism and insurrectionary anarchism to positions that can be called anarcho-pacifist. Spanish individualist anarchist Miguel Giménez Igualada, for example, went from illegalist practice in his youth towards a pacifist position later in his life. Early influences William Godwin William Godwin can be considered an individualist anarchist and philosophical anarchist who was influenced by the ideas of the Age of Enlightenment, and developed what many consider the first expression of modern anarchist thought. 
According to Peter Kropotkin, Godwin was "the first to formulate the political and economical conceptions of anarchism, even though he did not give that name to the ideas developed in his work" (Peter Kropotkin, "Anarchism", Encyclopædia Britannica, 1910). Godwin himself attributed the first anarchist writing to Edmund Burke's A Vindication of Natural Society. Godwin advocated extreme individualism, proposing that all cooperation in labor be eliminated. Godwin was a utilitarian who believed that all individuals are not of equal value, with some of us "of more worth and importance" than others depending on our utility in bringing about social good. Therefore, he did not believe in equal rights, holding instead that the life which should be favored is the one most conducive to the general good. Godwin opposed government because it infringes on the individual's right to "private judgement" to determine which actions most maximize utility, and he also made a critique of all authority over the individual's judgement. This aspect of Godwin's philosophy, minus the utilitarianism, was developed into a more extreme form later by Stirner. Godwin took individualism to the radical extent of opposing individuals performing together in orchestras, writing in Political Justice that "everything understood by the term co-operation is in some sense an evil". The only apparent exception to this opposition to cooperation is the spontaneous association that may arise when a society is threatened by violent force. One reason he opposed cooperation is that he believed it to interfere with an individual's ability to be benevolent for the greater good. Godwin opposed the idea of government, but regarded a minimal state as a present "necessary evil" that would become increasingly irrelevant and powerless by the gradual spread of knowledge. He believed democracy to be preferable to other forms of government. 
Godwin supported individual ownership of property, defining it as "the empire to which every man is entitled over the produce of his own industry". However, he also advocated that individuals give to each other their surplus property on the occasion that others have a need for it, without involving trade (e.g. gift economy). Thus while people have the right to private property, they should give it away as enlightened altruists. This was to be based on utilitarian principles and he said: "Every man has a right to that, the exclusive possession of which being awarded to him, a greater sum of benefit or pleasure will result than could have arisen from its being otherwise appropriated". Godwin's political views were diverse and do not perfectly agree with any of the ideologies that claim his influence: writers of the Socialist Standard, organ of the Socialist Party of Great Britain, consider Godwin both an individualist and a communist; Murray Rothbard did not regard Godwin as being in the individualist camp at all, referring to him as the "founder of communist anarchism"; and historian Albert Weisbord considers him an individualist anarchist without reservation. Some writers see a conflict between Godwin's advocacy of "private judgement" and utilitarianism, as he says that ethics requires that individuals give their surplus property to each other, resulting in an egalitarian society, but at the same time he insists that all things be left to individual choice. As noted by Kropotkin, many of Godwin's views changed over time. William Godwin influenced the socialism of Robert Owen and Charles Fourier. After the success of his British venture, Owen himself established a cooperative community within the United States at New Harmony, Indiana in 1825. One member of this commune was Josiah Warren, considered to be the first individualist anarchist. After New Harmony failed, Warren shifted his ideological loyalties from socialism to anarchism. 
According to anarchist Peter Sabatini, this "was no great leap, given that Owen's socialism had been predicated on Godwin's anarchism". Pierre-Joseph Proudhon Pierre-Joseph Proudhon was the first philosopher to label himself an "anarchist". Some consider Proudhon to be an individualist anarchist while others regard him to be a social anarchist (Knowles, Rob. "Political Economy from below: Communitarian Anarchism as a Neglected Discourse in Histories of Economic Thought". History of Economics Review, No. 31, Winter 2000). Some commentators do not identify Proudhon as an individualist anarchist due to his preference for association in large industries, rather than individual control. Nevertheless, he was influential among some of the American individualists—in the 1840s and 1850s, Charles Anderson Dana and William Batchelder Greene introduced Proudhon's works to the United States. Greene adapted Proudhon's mutualism to American conditions and introduced it to Benjamin Tucker. Proudhon opposed government privilege that protects capitalist, banking and land interests and the accumulation or acquisition of property (and any form of coercion that led to it), which he believed hampers competition and keeps wealth in the hands of the few. Proudhon favoured a right of individuals to retain the product of their labour as their own property, but he believed that any property beyond that which an individual produced and could possess was illegitimate. Thus he saw private property as both essential to liberty and a road to tyranny, the former when it resulted from labour and was required for labour and the latter when it resulted in exploitation (profit, interest, rent and tax). He generally called the former "possession" and the latter "property". For large-scale industry, he supported workers' associations to replace wage labour and opposed the ownership of land. 
Proudhon maintained that those who labour should retain the entirety of what they produce and that monopolies on credit and land are the forces that prohibit such. He advocated an economic system that included private property as possession and exchange market, but without profit, which he called mutualism. It is Proudhon's philosophy that was explicitly rejected by Joseph Déjacque in the inception of anarcho-communism, with the latter asserting directly to Proudhon in a letter that "it is not the product of his or her labour that the worker has a right to, but to the satisfaction of his or her needs, whatever may be their nature". An individualist rather than anarcho-communist, Proudhon said that "communism [...] is the very denial of society in its foundation" and famously declared that "property is theft" in reference to his rejection of ownership rights to land being granted to a person who is not using that land. After Déjacque and others split from Proudhon due to the latter's support of individual property and an exchange economy, the relationship between the individualists (who continued in relative alignment with the philosophy of Proudhon) and the anarcho-communists was characterised by various degrees of antagonism and harmony. For example, individualists like Tucker on the one hand translated and reprinted the works of collectivists like Mikhail Bakunin while on the other hand rejected the economic aspects of collectivism and communism as incompatible with anarchist ideals. Mutualism Mutualism is an anarchist school of thought which can be traced to the writings of Pierre-Joseph Proudhon, who envisioned a society where each person might possess a means of production, either individually or collectively, with trade representing equivalent amounts of labor in the free market. Integral to the scheme was the establishment of a mutual-credit bank which would lend to producers at a minimal interest rate only high enough to cover the costs of administration. 
Mutualism is based on a labor theory of value which holds that when labour or its product is sold, in exchange it ought to receive goods or services embodying "the amount of labor necessary to produce an article of exactly similar and equal utility". Some mutualists believe that if the state did not intervene, individuals would receive income only in proportion to the amount of labor they exert, as a result of increased competition in the marketplace (Carson, Kevin, 2004, Studies in Mutualist Political Economy, chapter 2, after Meek & Oppenheimer). Mutualists oppose the idea of individuals receiving an income through loans, investments and rent, as they believe these individuals are not labouring. Some of them argue that if state intervention ceased, these types of incomes would disappear due to increased competition in capital (Carson, Kevin, 2004, Studies in Mutualist Political Economy, chapter 2, after Ricardo, Dobb & Oppenheimer). Although Proudhon opposed this type of income, he expressed that he "never meant to [...] forbid or suppress, by sovereign decree, ground rent and interest on capital. I believe that all these forms of human activity should remain free and optional for all". Mutualists argue for conditional titles to land, whose private ownership is legitimate only so long as it remains in use or occupation (which Proudhon called "possession"). Proudhon's mutualism supports labor-owned cooperative firms and associations, for "we need not hesitate, for we have no choice [...] it is necessary to form an ASSOCIATION among workers [...] because without that, they would remain related as subordinates and superiors, and there would ensue two [...] castes of masters and wage-workers, which is repugnant to a free and democratic society" and so "it becomes necessary for the workers to form themselves into democratic societies, with equal conditions for all members, on pain of a relapse into feudalism". 
As for capital goods (man-made, non-land means of production), mutualist opinion differs on whether these should be common property and commonly managed public assets or private property in the form of worker cooperatives; as long as they ensure the worker's right to the full product of their labor, mutualists support markets and property in the product of labor, differentiating between capitalist private property (productive property) and personal property (Hargreaves, David H. (2019). Beyond Schooling: An Anarchist Challenge. London: Routledge. pp. 90–91: "Ironically, Proudhon did not mean literally what he said. His boldness of expression was intended for emphasis, and by 'property' he wished to be understood what he later called 'the sum of its abuses'. He was denouncing the property of the man who uses it to exploit the labour of others without any effort on his own part, property distinguished by interest and rent, by the impositions of the non-producer on the producer. Towards property regarded as 'possession' the right of a man to control his dwelling and the land and tools he needs to live, Proudhon had no hostility; indeed, he regarded it as the cornerstone of liberty, and his main criticism of the communists was that they wished to destroy it."). Following Proudhon, mutualists are libertarian socialists who consider themselves to be part of the market socialist tradition and the socialist movement. However, some contemporary mutualists outside the classical anarchist tradition have abandoned the labor theory of value and prefer to avoid the term socialist due to its association with state socialism throughout the 20th century. Nonetheless, those contemporary mutualists "still retain some cultural attitudes, for the most part, that set them off from the libertarian right. Most of them view mutualism as an alternative to capitalism, and believe that capitalism as it exists is a statist system with exploitative features". 
Mutualists have distinguished themselves from state socialism and do not advocate state ownership over the means of production. Benjamin Tucker said of Proudhon that "though opposed to socializing the ownership of capital, Proudhon aimed nevertheless to socialize its effects by making its use beneficial to all instead of a means of impoverishing the many to enrich the few [...] by subjecting capital to the natural law of competition, thus bringing the price of its own use down to cost". Max Stirner Johann Kaspar Schmidt, better known as Max Stirner (the nom de plume he adopted from a schoolyard nickname he had acquired as a child because of his high brow, in German Stirn), was a German philosopher who ranks as one of the literary fathers of nihilism, existentialism, post-modernism and anarchism, especially of individualist anarchism. Stirner's main work is The Ego and Its Own, also known as The Ego and His Own (Der Einzige und sein Eigentum in German which translates literally as The Only One [individual] and his Property or The Unique Individual and His Property). This work was first published in 1844 in Leipzig and has since appeared in numerous editions and translations. Egoism Max Stirner's philosophy, sometimes called egoism, is a form of individualist anarchism. Stirner was a Hegelian philosopher whose "name appears with familiar regularity in historically oriented surveys of anarchist thought as one of the earliest and best-known exponents of individualist anarchism". In 1844, Stirner's work The Ego and Its Own was published and is considered to be "a founding text in the tradition of individualist anarchism". Stirner does not recommend that the individual try to eliminate the state, but simply that they disregard the state when it conflicts with one's autonomous choices and go along with it when doing so is conducive to one's interests. 
Stirner says that the egoist rejects the pursuit of devotion to "a great idea, a good cause, a doctrine, a system, a lofty calling", arguing that the egoist has no political calling, but rather "lives themselves out" without regard to "how well or ill humanity may fare thereby". Stirner held that the only limitation on the rights of the individual is that individual's power to obtain what he desires. Stirner proposes that most commonly accepted social institutions, including the notion of the state, property as a right, natural rights in general and the very notion of "society" as a legal and ideal abstraction, were mere spooks in the mind. Stirner wants to "abolish not only the state but also society as an institution responsible for its members". Stirner advocated self-assertion and foresaw the Union of egoists, non-systematic associations which he proposed as a form of organization in place of the state. A Union is understood as a relation between egoists which is continually renewed by all parties' support through an act of will. Even murder is permissible "if it is right for me", although it is claimed by egoist anarchists that egoism will foster genuine and spontaneous unions between individuals. For Stirner, property simply comes about through might, arguing that "[w]hoever knows how to take, to defend, the thing, to him belongs property". He further says that "[w]hat I have in my power, that is my own. So long as I assert myself as holder, I am the proprietor of the thing" and that "I do not step shyly back from your property, but look upon it always as my property, in which I respect nothing. Pray do the like with what you call my property!" His concept of "egoistic property" implies not only a lack of moral restraint on how one obtains and uses things, but includes other people as well. His embrace of egoism is in stark contrast to Godwin's altruism.
Although Stirner was opposed to communism, for the same reasons he opposed capitalism, humanism, liberalism, property rights and nationalism, seeing them as forms of authority over the individual and as spooks in the mind, he has influenced many anarcho-communists and post-left anarchists. The writers of An Anarchist FAQ report that "many in the anarchist movement in Glasgow, Scotland, took Stirner's 'Union of egoists' literally as the basis for their anarcho-syndicalist organising in the 1940s and beyond". Similarly, the noted anarchist historian Max Nettlau states that "[o]n reading Stirner, I maintain that he cannot be interpreted except in a socialist sense". Stirner does not personally oppose the struggles carried out by certain ideologies such as socialism, humanism or the advocacy of human rights. Rather, he opposes their legal and ideal abstractness, a fact that makes him different from the liberal individualists, including the anarcho-capitalists and right-libertarians, but also from the Übermensch theories of fascism, as he places the individual at the center and not the sacred collective. About socialism, Stirner wrote in a letter to Moses Hess that "I am not at all against socialism, but against consecrated socialism; my selfishness is not opposed to love [...] nor is it an enemy of sacrifice, nor of self-denial [...] and least of all of socialism [...] — in short, it is not an enemy of true interests; it rebels not against love, but against sacred love, not against thought, but against sacred thought, not against socialists, but against sacred socialism". This position on property is quite different from the native American, natural-law form of individualist anarchism, which defends the inviolability of the private property that has been earned through labor. However, Benjamin Tucker rejected the natural rights philosophy and adopted Stirner's egoism in 1886, with several others joining with him.
This plunged the American individualists into a fierce debate, "with the natural rights proponents accusing the egoists of destroying libertarianism itself". Other egoists include James L. Walker, Sidney Parker, Dora Marsden and John Beverly Robinson. In Russia, individualist anarchism inspired by Stirner combined with an appreciation for Friedrich Nietzsche attracted a small following of bohemian artists and intellectuals such as Lev Chernyi, as well as a few lone wolves who found self-expression in crime and violence. They rejected organizing, believing that only unorganized individuals were safe from coercion and domination and that this kept them true to the ideals of anarchism. This type of individualist anarchism inspired anarcha-feminist Emma Goldman. Although Stirner's philosophy is individualist, it has influenced some libertarian communists and anarcho-communists. "For Ourselves Council for Generalized Self-Management" discusses Stirner and speaks of a "communist egoism", which is said to be a "synthesis of individualism and collectivism", and says that "greed in its fullest sense is the only possible basis of communist society". Forms of libertarian communism such as Situationism are influenced by Stirner. Anarcho-communist Emma Goldman was influenced by both Stirner and Peter Kropotkin and blended their philosophies together in her own, as shown in books of hers such as Anarchism and Other Essays. Early individualist anarchism in the United States Josiah Warren Josiah Warren is widely regarded as the first American anarchist and the four-page weekly paper he edited during 1833, The Peaceful Revolutionist, was the first anarchist periodical published, an enterprise for which he built his own printing press, cast his own type and made his own printing plates. Warren was a follower of Robert Owen and joined Owen's community at New Harmony, Indiana.
Warren coined the phrase "Cost the limit of price", with "cost" here referring not to monetary price paid but to the labor one exerted to produce an item. Therefore, "[h]e proposed a system to pay people with certificates indicating how many hours of work they did. They could exchange the notes at local time stores for goods that took the same amount of time to produce". He put his theories to the test by establishing an experimental "labor for labor store" called the Cincinnati Time Store, where trade was facilitated by notes backed by a promise to perform labor. The store proved successful and operated for three years, after which it was closed so that Warren could pursue establishing colonies based on mutualism. These included Utopia and Modern Times. Warren said that Stephen Pearl Andrews' The Science of Society (published in 1852) was the most lucid and complete exposition of Warren's own theories. Catalan historian Xavier Diez reports that the intentional communal experiments pioneered by Warren were influential among European individualist anarchists of the late 19th and early 20th centuries, such as Émile Armand, and the intentional communities started by them. Henry David Thoreau Henry David Thoreau was an important early influence in individualist anarchist thought in the United States and Europe. Thoreau was an American author, poet, naturalist, tax resister, development critic, surveyor, historian, philosopher and leading transcendentalist. He is best known for his book Walden, a reflection upon simple living in natural surroundings, and his essay Civil Disobedience, an argument for individual resistance to civil government in moral opposition to an unjust state.
His thought is an early influence on green anarchism, with an emphasis on the individual experience of the natural world that influenced later naturist currents. Simple living as a rejection of a materialist lifestyle and self-sufficiency were Thoreau's goals, and the whole project was inspired by transcendentalist philosophy. Many have seen in Thoreau one of the precursors of ecologism and anarcho-primitivism, represented today in John Zerzan. For George Woodcock, this attitude can also be motivated by a certain idea of resistance to progress and rejection of the growing materialism that characterized American society in the mid-19th century. The essay "Civil Disobedience" (Resistance to Civil Government) was first published in 1849. It argues that people should not permit governments to overrule or atrophy their consciences and that people have a duty to avoid allowing such acquiescence to enable the government to make them the agents of injustice. Thoreau was motivated in part by his disgust with slavery and the Mexican–American War. The essay later influenced Mohandas Gandhi, Martin Luther King Jr., Martin Buber and Leo Tolstoy through its advocacy of nonviolent resistance. It is also the main precedent for anarcho-pacifism. The American version of individualist anarchism has a strong emphasis on the non-aggression principle and individual sovereignty. Some individualist anarchists such as Thoreau (Encyclopaedia of the Social Sciences, edited by Edwin Robert Anderson Seligman and Alvin Saunders Johnson, 1937, p. 12) do not speak of economics, but simply of the right of "disunion" from the state, and foresee the gradual elimination of the state through social evolution. Developments and expansion Anarcha-feminism, free love, freethought and LGBT issues An important current within individualist anarchism is free love.
Free love advocates sometimes traced their roots back to Josiah Warren and to experimental communities, and viewed sexual freedom as a clear, direct expression of an individual's self-ownership. Free love particularly stressed women's rights, since most sexual laws, such as those governing marriage and use of birth control, discriminated against women. The most important American free love journal was Lucifer the Lightbearer (1883–1907), edited by Moses Harman and Lois Waisbrooker, but there was also Ezra Heywood and Angela Heywood's The Word (1872–1890, 1892–1893). M. E. Lazarus was also an important American individualist anarchist who promoted free love. John William Lloyd, a collaborator of Benjamin Tucker's periodical Liberty, published in 1931 a sex manual that he called The Karezza Method or Magnetation: The Art of Connubial Love. In Europe, the main propagandist of free love within individualist anarchism was Émile Armand. He proposed the concept of la camaraderie amoureuse to speak of free love as the possibility of voluntary sexual encounter between consenting adults. He was also a consistent proponent of polyamory. In France, there was also feminist activity inside individualist anarchism, as promoted by individualist feminists Marie Küge, Anna Mahé, Rirette Maîtrejean and Sophia Zaïkovska. The Brazilian individualist anarchist Maria Lacerda de Moura lectured on topics such as education, women's rights, free love and antimilitarism. Her writings and essays garnered her attention not only in Brazil, but also in Argentina and Uruguay. She also wrote for the Spanish individualist anarchist magazine Al Margen alongside Miguel Giménez Igualada. In Germany, the Stirnerists Adolf Brand and John Henry Mackay were pioneering campaigners for the acceptance of male bisexuality and homosexuality.
Freethought as a philosophical position and as activism was important in both North American and European individualist anarchism, but in the United States freethought was basically an anti-Christian, anti-clerical movement whose purpose was to make the individual politically and spiritually free to decide for himself on religious matters. A number of contributors to Liberty were prominent figures in both freethought and anarchism. The individualist anarchist George MacDonald was a co-editor of Freethought and, for a time, The Truth Seeker. E.C. Walker was co-editor of Lucifer, the Light-Bearer. Many of the anarchists were ardent freethinkers; reprints from freethought papers such as Lucifer, the Light-Bearer, Freethought and The Truth Seeker appeared in Liberty. The church was viewed as a common ally of the state and as a repressive force in and of itself. In Europe, a similar development occurred in French and Spanish individualist anarchist circles: "Anticlericalism, just as in the rest of the libertarian movement, is another of the frequent elements which will gain relevance related to the measure in which the (French) Republic begins to have conflicts with the church [...] Anti-clerical discourse, frequently called for by the French individualist André Lorulot, will have its impacts in Estudios (a Spanish individualist anarchist publication). There will be an attack on institutionalized religion for the responsibility that it had in the past on negative developments, for its irrationality which makes it a counterpoint of philosophical and scientific progress. There will be a criticism of proselytism and ideological manipulation which happens on both believers and agnostics". These tendencies continued in French individualist anarchism in the work and activism of Charles-Auguste Bontemps and others.
In the Spanish individualist anarchist magazines Ética and Iniciales, "there is a strong interest in publishing scientific news, usually linked to a certain atheist and anti-theist obsession, philosophy which will also work for pointing out the incompatibility between science and religion, faith and reason. In this way there will be a lot of talk on Darwin's theories or on the negation of the existence of the soul". Anarcho-naturism Another important current, especially within French and Spanish individualist anarchist groups, was naturism ("Anarchism and the different Naturist views have always been related": "Anarchism – Nudism, Naturism" by Carlos Ortega at Asociación para el Desarrollo Naturista de la Comunidad de Madrid, published in Revista ADN, Winter 2003). Naturism promoted an ecological worldview, small ecovillages and, most prominently, nudism as a way to avoid the artificiality of the industrial mass society of modernity. Naturist individualist anarchists saw the individual in his biological, physical and psychological aspects and avoided and tried to eliminate social determinations. An early influence in this vein was Henry David Thoreau and his famous book Walden. Important promoters of this were Henri Zisly and Émile Gravelle, who collaborated in La Nouvelle Humanité, followed by Le Naturien, Le Sauvage, L'Ordre Naturel and La Vie Naturelle ("Henri Zisly, self-labeled individualist anarchist, is considered one of the forerunners and principal organizers of the naturist movement in France and one of its most able and outspoken defenders worldwide": "Zisly, Henri (1872–1945)" by Stefano Boni). This relationship between anarchism and naturism was quite important at the end of the 1920s in Spain, when "[t]he linking role played by the 'Sol y Vida' group was very important. The goal of this group was to take trips and enjoy the open air. The Naturist athenaeum, 'Ecléctico', in Barcelona, was the base from which the activities of the group were launched."
First Ética and then Iniciales, which began in 1929, were the publications of the group, which lasted until the Spanish Civil War. We must be aware that the naturist ideas expressed in them matched the desires that the libertarian youth had of breaking up with the conventions of the bourgeoisie of the time. That is what a young worker explained in a letter to Iniciales. He writes it under the odd pseudonym of 'Silvestre del Campo' (wild man in the country): "I find great pleasure in being naked in the woods, bathed in light and air, two natural elements we cannot do without. By shunning the humble garment of an exploited person (garments which, in my opinion, are the result of all the laws devised to make our lives bitter), we feel there are no others left but just the natural laws. Clothes mean slavery for some and tyranny for others. Only the naked man who rebels against all norms stands for anarchism, devoid of the prejudices of outfit imposed by our money-oriented society". The relation between anarchism and naturism "gives way to the Naturist Federation, in July 1928, and to the IV Spanish Naturist Congress, in September 1929, both supported by the Libertarian Movement. However, in the short term, the Naturist and Libertarian movements grew apart in their conceptions of everyday life. The Naturist movement felt closer to the Libertarian individualism of some French theoreticians such as Henri Ner (real name of Han Ryner) than to the revolutionary goals proposed by some Anarchist organisations such as the FAI (Federación Anarquista Ibérica)". Individualist anarchism and Friedrich Nietzsche The thought of German philosopher Friedrich Nietzsche has been influential in individualist anarchism, specifically in thinkers such as France's Émile Armand, the Italian Renzo Novatore and the Colombian Biofilo Panclasta. Robert C.
Holub, author of Nietzsche: Socialist, Anarchist, Feminist, posits that "translations of Nietzsche's writings in the United States very likely appeared first in Liberty, the anarchist journal edited by Benjamin Tucker". Individualist anarchism in the United States Mutualism and utopianism For American anarchist historian Eunice Minette Schuster, "[i]t is apparent [...] that Proudhonian Anarchism was to be found in the United States at least as early as 1848 and that it was not conscious of its affinity to the Individualist Anarchism of Josiah Warren and Stephen Pearl Andrews [...] William B. Greene presented this Proudhonian Mutualism in its purest and most systematic form". William Batchelder Greene is best known for the works Mutual Banking (1850), which proposed an interest-free banking system, and Transcendentalism, a critique of the New England philosophical school. He saw mutualism as the synthesis of "liberty and order". His "associationism [...] is checked by individualism. [...] 'Mind your own business,' 'Judge not that ye be not judged.' Over matters which are purely personal, as for example, moral conduct, the individual is sovereign, as well as over that which he himself produces. For this reason he demands 'mutuality' in marriage – the equal right of a woman to her own personal freedom and property". Within some individualist anarchist circles, mutualism came to mean non-communist anarchism. Contemporary American anarchist Hakim Bey reports that "Steven Pearl Andrews [...] was not a fourierist, but he lived through the brief craze for phalansteries in America & adopted a lot of fourierist principles & practices, [...] a maker of worlds out of words. He syncretized Abolitionism, Free Love, spiritual universalism, [Josiah] Warren, & [Charles] Fourier into a grand utopian scheme he called the Universal Pantarchy. [...]
He was instrumental in founding several 'intentional communities,' including the 'Brownstone Utopia' on 14th St. in New York, & 'Modern Times' in Brentwood, Long Island. The latter became as famous as the best-known fourierist communes (Brook Farm in Massachusetts & the North American Phalanx in New Jersey) – in fact, Modern Times became downright notorious (for 'Free Love') & finally foundered under a wave of scandalous publicity. Andrews (& Victoria Woodhull) were members of the infamous Section 12 of the 1st International, expelled by Marx for its anarchist, feminist, & spiritualist tendencies". Boston anarchists Another form of individualist anarchism was found in the United States as advocated by the so-called Boston anarchists. By default, American individualists had no difficulty accepting the concepts that "one man employ another" or that "he direct him" in his labor, but rather demanded that "all natural opportunities requisite to the production of wealth be accessible to all on equal terms and that monopolies arising from special privileges created by law be abolished". They believed state monopoly capitalism (defined as a state-sponsored monopoly) prevented labor from being fully rewarded. Voltairine de Cleyre summed up the philosophy by saying that the anarchist individualists "are firm in the idea that the system of employer and employed, buying and selling, banking, and all the other essential institutions of Commercialism, centred upon private property, are in themselves good, and are rendered vicious merely by the interference of the State". Even among the 19th-century American individualists, there was not a monolithic doctrine, as they disagreed amongst each other on various issues, including intellectual property rights and possession versus property in land (Watner, Carl (1977). Journal of Libertarian Studies, Vol. 1, No. 4, p. 308).
A major schism occurred later in the 19th century when Tucker and some others abandoned their traditional support of natural rights, as espoused by Lysander Spooner, and converted to an "egoism" modeled upon Max Stirner's philosophy. Lysander Spooner, besides his individualist anarchist activism, was also an important anti-slavery activist and became a member of the First International. Some Boston anarchists, including Benjamin Tucker, identified themselves as socialists, which in the 19th century was often used in the sense of a commitment to improving conditions of the working class (i.e. "the labor problem"). Boston anarchists such as Tucker and his followers continue to be considered socialists due to their opposition to usury, for, as the modern economist Jim Stanford points out, there are many different kinds of competitive markets, such as market socialism, and capitalism is only one type of market economy. By around the start of the 20th century, the heyday of individualist anarchism had passed. Individualist anarchism and the labor movement George Woodcock reports that the American individualist anarchists Lysander Spooner and William B. Greene had been members of the socialist First International. Two individualist anarchists who wrote in Benjamin Tucker's Liberty were also important labor
Italiane (1956; Italian Folktales) on the basis of the question, "Is there an Italian equivalent of the Brothers Grimm?" For two years, Calvino collated tales found in 19th-century collections across Italy, then translated 200 of the finest from various dialects into Italian. Key works he read at this time were Vladimir Propp's Morphology of the Folktale and Historical Roots of Russian Fairy Tales, stimulating his own ideas on the origin, shape and function of the story. In 1952 Calvino wrote with Giorgio Bassani for Botteghe Oscure, a magazine named after the popular name of the party's head offices in Rome. He also worked for Il Contemporaneo, a Marxist weekly. From 1955 to 1958 Calvino had an affair with Italian actress Elsa De Giorgi, a married, older woman. Excerpts of the hundreds of love letters Calvino wrote to her were published in the Corriere della Sera in 2004, causing some controversy. After communism In 1957, disillusioned by the 1956 Soviet invasion of Hungary, Calvino left the Italian Communist Party. In his letter of resignation, published in L'Unità on 7 August, he explained the reasons for his dissent (the violent suppression of the Hungarian uprising and the revelation of Joseph Stalin's crimes) while confirming his "confidence in the democratic perspectives" of world Communism. He withdrew from taking an active role in politics and never joined another party. Ostracized by the PCI party leader Palmiro Togliatti and his supporters on publication of Becalmed in the Antilles (La gran bonaccia delle Antille), a satirical allegory of the party's immobilism, Calvino began writing The Baron in the Trees. Completed in three months and published in 1957, the fantasy is based on the "problem of the intellectual's political commitment at a time of shattered illusions". He found new outlets for his periodic writings in the journals Città aperta and Tempo presente, the magazine Passato e presente, and the weekly Italia Domani.
With Vittorini in 1959, he became co-editor of Il Menabò, a cultural journal devoted to literature in the modern industrial age, a position he held until 1966. Despite severe restrictions in the US against foreigners holding communist views, Calvino was allowed to visit the United States, where he stayed six months from 1959 to 1960 (four of which he spent in New York), after an invitation by the Ford Foundation. Calvino was particularly impressed by the "New World": "Naturally I visited the South and also California, but I always felt a New Yorker. My city is New York." The letters he wrote to Einaudi describing this visit to the United States were first published as "American Diary 1959–1960" in Hermit in Paris in 2003. In 1962 Calvino met Argentinian translator Esther Judith Singer ("Chichita") and married her in 1964 in Havana, during a trip in which he visited his birthplace and was introduced to Ernesto "Che" Guevara. On 15 October 1967, a few days after Guevara's death, Calvino wrote a tribute to him that was published in Cuba in 1968, and in Italy thirty years later. He and his wife settled in Rome in the via Monte Brianzo, where their daughter, Giovanna, was born in 1965. Once again working for Einaudi, Calvino began publishing some of his "Cosmicomics" in Il Caffè, a literary magazine. Later life and work Vittorini's death in 1966 greatly affected Calvino. He went through what he called an "intellectual depression", which the writer himself described as an important passage in his life: "...I ceased to be young. Perhaps it's a metabolic process, something that comes with age, I'd been young for a long time, perhaps too long, suddenly I felt that I had to begin my old age, yes, old age, perhaps with the hope of prolonging it by beginning it early." In the fermenting atmosphere that evolved into 1968's cultural revolution (the French May), he moved with his family to Paris in 1967, setting up home in a villa in the Square de Châtillon.
Nicknamed L'ironique amusé, he was invited by Raymond Queneau in 1968 to join the Oulipo (Ouvroir de littérature potentielle) group of experimental writers, where he met Roland Barthes and Georges Perec, both of whom influenced his later production. That same year, he turned down the Viareggio Prize for Ti con zero (Time and the Hunter) on the grounds that it was an award given by "institutions emptied of meaning". He accepted, however, both the Asti Prize and the Feltrinelli Prize for his writing in 1970 and 1972, respectively. In two autobiographical essays published in 1962 and 1970, Calvino described himself as "atheist" and his outlook as "non-religious". Calvino had more intense contacts with the academic world, with notable experiences at the Sorbonne (with Barthes) and the University of Urbino. His interests included classical studies: Honoré de Balzac, Ludovico Ariosto, Dante, Ignacio de Loyola, Cervantes, Shakespeare, Cyrano de Bergerac, and Giacomo Leopardi. Between 1972 and 1973 Calvino published two short stories, "The Name, the Nose" and the Oulipo-inspired "The Burning of the Abominable House", in the Italian edition of Playboy. He became a regular contributor to the Italian newspaper Corriere della Sera, spending his summer vacations in a house constructed in the pinewood of Roccamare, in Castiglione della Pescaia, Tuscany. In 1975 Calvino was made Honorary Member of the American Academy. Awarded the Austrian State Prize for European Literature in 1976, he visited Mexico, Japan, and the United States, where he gave a series of lectures in several American towns. After his mother died in 1978 at the age of 92, Calvino sold Villa Meridiana, the family home in San Remo. Two years later, he moved to Rome in Piazza Campo Marzio, near the Pantheon, and began editing the work of Tommaso Landolfi for Rizzoli. Awarded the French Légion d'honneur in 1981, he also agreed to serve as jury president of the 29th Venice Film Festival.
During the summer of 1985, Calvino prepared a series of texts on literature for the Charles Eliot Norton Lectures to be delivered at Harvard University in the fall. On 6 September, he was admitted to the ancient hospital of Santa Maria della Scala in Siena, where he died during the night between 18 and 19 September of a cerebral hemorrhage. His lecture notes were published posthumously in Italian in 1988 and in English as Six Memos for the Next Millennium in 1993. Authors he helped publish Mario Rigoni Stern Gianni Celati Andrea De Carlo Daniele Del Giudice Leonardo Sciascia Selected bibliography A selected bibliography of Calvino's writings follows, listing the works that have been translated into and published in English, along with a few major untranslated works. More exhaustive bibliographies can be found in Martin McLaughlin's Italo Calvino and Beno Weiss's Understanding Italo Calvino. Fiction Fiction collections Prima che tu dica 'Pronto' (1993); English translation: Numbers in the Dark and Other Stories (1996), translated by Tim Parks. 37 short stories: The Man Who Shouted Teresa; The Flash; Making Do; Dry River; Conscience; Solidarity; The Black Sheep; Good for Nothing; Like a Flight of Ducks; Love Far from Home; Wind in a City; The Lost Regiment; Enemy Eyes; A General in the Library; The Workshop Hen; Numbers in the Dark; The Queen's Necklace; Becalmed in the Antilles; The Tribe with Its Eyes on the Sky; Nocturnal Soliloquy of a Scottish Nobleman; A Beautiful March Day; World Memory; Beheading the Heads; The Burning of the Abominable House; The Petrol Pump; Neanderthal Man; Montezuma; Before You Say 'Hello'; Glaciation; The Call of the Water; The Mirror, the Target; The Other Eurydice; The Memoirs of Casanova; Henry Ford; The Last Channel; Implosion; Nothing and Not Much.
Essays and other writings Autobiographical works Libretti Translations Selected filmography Boccaccio '70, 1962 (co-wrote screenplay of "Renzo e Luciano" segment directed by Mario Monicelli) L'Amore difficile, 1963 (wrote "L'avventura di un soldato" segment directed by Nino Manfredi) Tiko and the Shark, 1964 (co-wrote screenplay directed by Folco Quilici) Film and television adaptations The Nonexistent Knight by Pino Zac, 1969 (Italian animated film based on the novel) Amores dificiles by Ana Luisa Ligouri, 1983 (13' Mexican short) L'Aventure d'une baigneuse by Philippe Donzelot, 1991 (14' French short based on The Adventure of a Bather in Difficult Loves) Fantaghirò by Lamberto Bava, 1991 (TV adaptation based on Fanta-Ghirò the Beautiful in Italian Folktales) Palookaville by Alan Taylor, 1995 (American film based on Theft in a Cake Shop, Desire in November, and Transit Bed) Solidarity by Nancy Kiang, 2006 (10' American short) Conscience by Yu-Hsiu Camille Chen, 2009 (10' Australian short) "La Luna" by Enrico Casarosa, 2011 (American short) Films on Calvino Damian Pettigrew, Lo specchio di Calvino (Inside Italo, 2012). Co-produced by Arte France, Italy's Ministero per i Beni e le Attività Culturali, and the National Film Board of Canada, the feature-length docufiction stars Neri Marcorè as the Italian writer and critic Pietro Citati. The film also uses in-depth conversations videotaped at Calvino's Rome penthouse a year before his death in 1985 and rare footage from RAI, INA (Institut national de l'audiovisuel), and BBC television archives. The 52-minute French version, titled Dans la peau d'Italo Calvino ("Being Italo Calvino"), was broadcast by Arte France on 19 December 2012 and Sky Arte (Italy) on 14 October 2013. Legacy The Scuola Italiana Italo Calvino, an Italian curriculum school in Moscow, Russia, is named after him. A crater on the planet Mercury, Calvino, and a main belt asteroid, 22370 Italocalvino, are also named after him.
The Salt Hill Journal and the University of Louisville annually award the Italo Calvino Prize "for a work of fiction written in the fabulist experimental style of Italo Calvino".

Awards
1946 – L'Unità Prize (shared with Marcello Venturi) for the short story "Minefield" (Campo di mine)
1947 – Riccione Prize for The Path to the Nest of Spiders
1952 – Saint-Vincent Prize
1957 – Viareggio Prize for The Baron in the Trees
1959 – Bagutta Prize
1960 – Salento Prize for Our Ancestors
1963 – International Charles Veillon Prize for The Watcher
1970 – Asti Prize
1972 – Feltrinelli Prize for Invisible Cities
1976 – Austrian State Prize for European Literature
1981 – Legion of Honour
1982 – World Fantasy Award – Life Achievement

Notes

Sources
Primary sources
Calvino, Italo. Adam, One Afternoon (trans. Archibald Colquhoun, Peggy Wright). London: Minerva, 1992.
—. The Castle of Crossed Destinies (trans. William Weaver). London: Secker & Warburg, 1977.
—. Cosmicomics (trans. William Weaver). London: Picador, 1993.
—. The Crow Comes Last (Ultimo viene il corvo). Turin: Einaudi, 1949.
—. Difficult Loves. Smog. A Plunge into Real Estate (trans. William Weaver, Donald Selwyn Carne-Ross). London: Picador, 1985.
—. Hermit in Paris (trans. Martin McLaughlin). London: Jonathan Cape, 2003.
—. If on a winter's night a traveller (trans. William Weaver). London: Vintage, 1998.
—. Invisible Cities (trans. William Weaver). London: Secker & Warburg, 1974.
—. Italian Fables (trans. Louis Brigante). New York: Collier, 1961. (50 tales)
—. Italian Folk Tales (trans. Sylvia Mulcahy). London: J.M. Dent & Sons, 1975. (24 tales)
—. Italian Folktales (trans. George Martin). Harmondsworth: Penguin, 1980. (complete 200 tales)
—. Marcovaldo or the Seasons in the City (trans. William Weaver). London: Minerva, 1993.
—. Mr. Palomar (trans. William Weaver). London: Vintage, 1999.
—. Our Ancestors (trans. A. Colquhoun). London: Vintage, 1998.
—. The Path to the Nest of Spiders (trans. Archibald Colquhoun). Boston: Beacon, 1957.
—. The Path to the Spiders' Nests (trans. A. Colquhoun, revised by Martin McLaughlin). London: Jonathan Cape, 1993.
—. t zero (trans. William Weaver). New York: Harcourt, Brace & World, 1969.
—. The Road to San Giovanni (trans. Tim Parks). New York: Vintage International, 1993.
—. Six Memos for the Next Millennium (trans. Patrick Creagh). New York: Vintage International, 1993.
—. The Watcher and Other Stories (trans. William Weaver). New York: Harcourt, Brace & Company, 1971.

Secondary sources
Barenghi, Mario, and Bruno Falcetto. Romanzi e racconti di Italo Calvino. Milan: Mondadori, 1991.
Bernardini Napoletano, Francesca. I segni nuovi di Italo Calvino. Rome: Bulzoni, 1977.
Bonura, Giuseppe. Invito alla lettura di Calvino. Milan: U. Mursia, 1972.
Calvino, Italo. Uno scrittore pomeridiano: Intervista sull'arte della narrativa a cura di William Weaver e Damian Pettigrew con un ricordo di Pietro Citati. Rome: minimum fax, 2003.
Corti, Maria. "Intervista: Italo Calvino", Autografo 2 (October 1985): 47–53.
Di Carlo, Franco. Come leggere I nostri antenati. Milan: U. Mursia, 1958 (reprinted 1998).
McLaughlin, Martin. Italo Calvino. Edinburgh: Edinburgh University Press, 1998.
Weiss, Beno. Understanding Italo Calvino. Columbia: University of South Carolina Press, 1993.

Online sources
Italo Calvino at Emory University
Online Resources and Links
Outside the Town of Malbork
A Site for Italo Calvino
The Words that Failed
Calvino on Che Guevara
http://atlantecalvino.unige.ch/ – visualisation of Calvino's work

Further reading
General
Benussi, Cristina (1989). Introduzione a Calvino. Rome: Laterza.
Bartoloni, Paolo (2003). Interstitial Writing: Calvino, Caproni, Sereni and Svevo. Leicester: Troubador.
Bloom, Harold (ed.) (2002). Bloom's Major Short Story Writers: Italo Calvino. Broomall, Pennsylvania: Chelsea House.
Bolongaro, Eugenio (2003). Italo Calvino and the Compass of Literature. Toronto: University of Toronto Press.
Cannon, JoAnn (1981). Italo Calvino: Writer and Critic. Ravenna: Longo Press.
Carter III, Albert Howard (1987). Italo Calvino: Metamorphoses of Fantasy. Ann Arbor, Michigan: UMI Research Press.
Chubb, Stephen (1997). I, Writer, I, Reader: the Concept of the Self in the Fiction of Italo Calvino. Leicester: Troubador.
Gabriele, Tomassina (1994). Italo Calvino: Eros and Language. Teaneck, N.J.: Fairleigh Dickinson University Press.
Jeannet, Angela M. (2000). Under the Radiant Sun and the Crescent Moon. Toronto: University of Toronto Press.
Markey, Constance (1999). Italo Calvino: A Journey Toward Postmodernism. Gainesville: Florida University Press.
Subsequent talks, called SALT II, were held from 1972 to 1979 and actually reduced the number of nuclear warheads held by the US and the Soviets. SALT II was never ratified by the US Senate, but its terms were honored by both sides until 1986, when the Reagan administration "withdrew" after accusing the Soviets of violating the pact. In the 1980s, President Ronald Reagan launched the Strategic Defense Initiative as well as the MX and Midgetman ICBM programs. China developed a minimal independent nuclear deterrent, entering its own cold war after an ideological split with the Soviet Union beginning in the early 1960s. After first testing a domestically built nuclear weapon in 1964, it went on to develop various warheads and missiles. Beginning in the early 1970s, the liquid-fuelled DF-5 ICBM was developed, and it was used as a satellite launch vehicle in 1975. The DF-5, with a range of —long enough to strike the Western United States and the Soviet Union—was silo-deployed, with the first pair in service by 1981 and possibly twenty missiles in service by the late 1990s. China also deployed the JL-1 medium-range ballistic missile, with a reach of , aboard the ultimately unsuccessful Type 092 submarine.

Post-Cold War
In 1991, the United States and the Soviet Union agreed in the START I treaty to reduce their deployed ICBMs and attributed warheads. All five of the nations with permanent seats on the United Nations Security Council have operational long-range ballistic missile systems; Russia, the United States, and China also have land-based ICBMs (the US missiles are silo-based, while China and Russia have both silo-based and road-mobile missiles, such as the DF-31 and RT-2PM2 Topol-M). Israel is believed to have deployed a road-mobile nuclear ICBM, the Jericho III, which entered service in 2008; an upgraded version is in development. India successfully test-fired Agni V, with a strike range of more than , on 19 April 2012, claiming entry into the ICBM club.
The missile's actual range is speculated by foreign researchers to be up to , with India having downplayed its capabilities to avoid causing concern to other countries. By 2012 there was speculation by some intelligence agencies that North Korea was developing an ICBM. North Korea successfully put a satellite into space on 12 December 2012 using the Unha-3 rocket. The United States claimed that the launch was in fact a way to test an ICBM. (See Timeline of first orbital launches by country.) In early July 2017, North Korea claimed for the first time to have successfully tested an ICBM capable of carrying a large thermonuclear warhead. In July 2014, China announced the development of its newest generation of ICBM, the Dongfeng-41 (DF-41), which has a range of 12,000 kilometres (7,500 miles), long enough to reach the United States, and which analysts believe is capable of being outfitted with MIRV technology. Most countries in the early stages of developing ICBMs have used liquid propellants, the known exceptions being the Indian Agni-V, the planned but cancelled South African RSA-4 ICBM, and the now-in-service Israeli Jericho III. The RS-28 Sarmat (Russian: РС-28 Сармат; NATO reporting name: SATAN 2) is a Russian liquid-fueled, MIRV-equipped, super-heavy thermonuclear-armed intercontinental ballistic missile, in development by the Makeyev Rocket Design Bureau since 2009 and intended to replace the previous R-36 missile. Its large payload would allow for up to 10 heavy warheads, or 15 lighter ones, or up to 24 Yu-74 hypersonic glide vehicles, or a combination of warheads and massive amounts of countermeasures designed to defeat anti-missile systems; it was announced by the Russian military as a response to the US Prompt Global Strike program.
Flight phases The following flight phases can be distinguished: boost phase: 3 to 5 minutes; it is shorter for a solid-fuel rocket than for a liquid-propellant rocket; depending on the trajectory chosen, typical burnout speed is , up to ; altitude at the end of this phase is typically . midcourse phase: approx. 25 minutes – sub-orbital spaceflight with a flightpath being a part of an ellipse with a vertical major axis; the apogee (halfway through the midcourse phase) is at an altitude of approximately ; the semi-major axis is between ; the projection of the flightpath on the Earth's surface is close to a great circle, slightly displaced due to earth rotation during the time of flight; the missile may release several independent warheads and penetration aids, such as metallic-coated balloons, aluminum chaff, and full-scale warhead decoys. reentry/terminal phase (starting at an altitude of ): 2 minutes – impact is at a speed of up to (for early ICBMs less than ); see also maneuverable reentry vehicle. ICBMs usually use the trajectory which optimizes range for a given amount of payload (the minimum-energy trajectory); an alternative is a depressed trajectory, which allows less payload, shorter flight time, and has a much lower apogee. Modern ICBMs Modern ICBMs typically carry multiple independently targetable reentry vehicles (MIRVs), each of which carries a separate nuclear warhead, allowing a single missile to hit multiple targets. MIRV was an outgrowth of the rapidly shrinking size and weight of modern warheads and the Strategic Arms Limitation Treaties (SALT I and SALT II), which imposed limitations on the number of launch vehicles. It has also proved to be an "easy answer" to proposed deployments of anti-ballistic missile (ABM) systems: It is far less expensive to add more warheads to an existing missile system than to build an ABM system capable of shooting down the additional warheads; hence, most ABM system proposals have been judged to be impractical. 
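As the phase list above notes, the midcourse arc is a segment of a Keplerian ellipse, so apogee and flight time follow from basic orbital mechanics. The sketch below computes both for a minimum-energy trajectory, assuming a spherical, non-rotating Earth and burnout near the surface; it is an illustrative model only, not data about any particular missile.

```python
import math

MU = 398_600.4418   # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6371.0    # mean Earth radius, km; burnout assumed near the surface

def min_energy_trajectory(range_km):
    """Apogee altitude (km) and total flight time (s) of the
    minimum-energy ballistic arc spanning the given ground range."""
    lam = range_km / R_EARTH                 # central angle launch -> target, rad
    s, c = math.sin(lam / 2), math.cos(lam / 2)
    a = R_EARTH * (1 + s) / 2                # semi-major axis of the ellipse
    e = c / (1 + s)                          # eccentricity (minimum-energy case)
    apogee_alt = a * (1 + e) - R_EARTH       # apogee sits at the arc's midpoint
    # Kepler's equation gives the time from launch up to apogee and back down.
    E = math.acos((1 - R_EARTH / a) / e)     # eccentric anomaly at launch
    t = 2 * math.sqrt(a**3 / MU) * (math.pi - E + e * math.sin(E))
    return apogee_alt, t

alt_km, t_s = min_energy_trajectory(10_000)
print(f"apogee ~{alt_km:.0f} km, flight time ~{t_s / 60:.0f} min")
```

For a 10,000 km range this yields an apogee near 1,300 km and a flight time of roughly half an hour, consistent in order of magnitude with the boost, midcourse, and reentry durations listed above.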
The first operational ABM systems were deployed in the United States during the 1970s. The Safeguard ABM facility, located in North Dakota, was operational from 1975 to 1976. The Soviets deployed their ABM-1 Galosh system around Moscow in the 1970s, which remains in service. Israel deployed a national ABM system based on the Arrow missile in 1998, but it is mainly designed to intercept shorter-ranged theater ballistic missiles, not ICBMs. The Alaska-based United States national missile defense system attained initial operational capability in 2004. ICBMs can be deployed from multiple platforms:
Amerika, von Braun's team developed the A9/10 ICBM, intended for use in bombing New York and other American cities. Initially intended to be guided by radio, it was changed to be a piloted craft after the failure of Operation Elster. The second stage of the A9/A10 rocket was tested a few times in January and February 1945. After the war, the US executed Operation Paperclip, which took von Braun and hundreds of other leading German scientists to the United States to develop IRBMs, ICBMs, and launchers for the US Army. This technology was predicted by US Army General Hap Arnold, who wrote in 1943: Cold War After World War II, the Americans and the Soviets started rocket research programs based on the V-2 and other German wartime designs. Each branch of the US military started its own programs, leading to considerable duplication of effort. In the Soviet Union, rocket research was centrally organized although several teams worked on different designs. In the Soviet Union, early development was focused on missiles able to attack European targets. That changed in 1953, when Sergei Korolyov was directed to start development of a true ICBM able to deliver newly developed hydrogen bombs. Given steady funding throughout, the R-7 developed with some speed. The first launch took place on 15 May 1957 and led to an unintended crash from the site. The first successful test followed on 21 August 1957; the R-7 flew over and became the world's first ICBM. The first strategic-missile unit became operational on 9 February 1959 at Plesetsk in north-west Russia. It was the same R-7 launch vehicle that placed the first artificial satellite in space, Sputnik, on 4 October 1957. The first human spaceflight in history was accomplished on a derivative of R-7, Vostok, on 12 April 1961, by Soviet cosmonaut Yuri Gagarin. 
A heavily modernized version of the R-7 is still used as the launch vehicle for the Soviet/Russian Soyuz spacecraft, marking more than 60 years of operational history of Sergei Korolyov's original rocket design. The US initiated ICBM research in 1946 with the RTV-A-2 Hiroc project. This was a three-stage effort with the ICBM development not starting until the third stage. However, funding was cut after only three partially successful launches in 1948 of the second stage design, used to test variations on the V-2 design. With overwhelming air superiority and truly intercontinental bombers, the newly forming US Air Force did not take the problem of ICBM development seriously. Things changed in 1953 with the Soviet testing of their first thermonuclear weapon, but it was not until 1954 that the Atlas missile program was given the highest national priority. The Atlas A first flew on 11 June 1957; the flight lasted only about 24 seconds before the rocket blew up. The first successful flight of an Atlas missile to full range occurred 28 November 1958. The first armed version of the Atlas, the Atlas D, was declared operational in January 1959 at Vandenberg, although it had not yet flown. The first test flight was carried out on 9 July 1959, and the missile was accepted for service on 1 September. The R-7 and Atlas each required a large launch facility, making them vulnerable to attack, and could not be kept in a ready state. Failure rates were very high throughout the early years of ICBM technology. Human spaceflight programs (Vostok, Mercury, Voskhod, Gemini, etc.) served as a highly visible means of demonstrating confidence in reliability, with successes translating directly to national defense implications. The US was well behind the Soviets in the Space Race and so US President John F. Kennedy increased the stakes with the Apollo program, which used Saturn rocket technology that had been funded by President Dwight D. Eisenhower. 
These early ICBMs also formed the basis of many space launch systems. Examples include R-7, Atlas, Redstone, Titan, and Proton, which was derived from the earlier ICBMs but never deployed as an ICBM. The Eisenhower administration supported the development of solid-fueled missiles such as the LGM-30 Minuteman, Polaris and Skybolt. Modern ICBMs tend to be smaller than their ancestors, due to increased accuracy and smaller and lighter warheads, and use solid fuels, making them less useful as orbital launch vehicles. The Western view of the deployment of these systems was governed by the strategic theory of mutual assured destruction. In the 1950s and 1960s, development began on anti-ballistic missile systems by both the Americans and Soviets. Such systems were restricted by the 1972 Anti-Ballistic Missile Treaty. The first successful ABM test was conducted by the Soviets in 1961, which later deployed a fully operational system defending Moscow in the 1970s (see Moscow ABM system). The 1972 SALT treaty froze the number of ICBM launchers of both the Americans and the Soviets at existing levels and allowed new submarine-based SLBM launchers only if an equal number of land-based ICBM launchers were dismantled.
Sessions are mostly informal gatherings at which people play Irish traditional music. The Irish-language word for "session" is seisiún. This article discusses tune-playing, although "session" can also refer to a singing session or a mixed session (tunes and songs). Barry Foy's Field Guide to the Irish Music Session defines a session as: "...a gathering of Irish traditional musicians for the purpose of celebrating their common interest in the music by playing it together in a relaxed, informal setting, while in the process generally beefing up the mystical cultural mantra that hums along uninterruptedly beneath all manifestations of Irishness worldwide."

Social and cultural aspects
The general scheme of a session is that someone starts a tune, and those who know it join in. Good session etiquette requires not playing if one does not know the tune (or at least playing an accompaniment part quietly) and waiting until a tune one knows comes along. In an "open" session, anyone who is able to play Irish music is welcome. Most often there are more-or-less recognized session leaders; sometimes there are none. At times a song will be sung, or a slow air played by a single musician, between sets.

Locations and times
Sessions are usually held in public houses or taverns.
A pub owner might pay one or two musicians to come regularly so that the session has a base. These musicians can perform, if they wish, during any gaps in the day or evening when no other performers are there. Sunday afternoons and weekday nights (especially Tuesday and Wednesday) are common times for sessions to be scheduled, on the theory that these are the least likely times for dances and concerts to be held, and therefore the times when professional musicians are most able to show up. Sessions can also be held in homes or at various public places besides pubs; at a festival, sessions often come together in the beer tent or in the vendor's booth of a music-loving craftsperson or dealer. When a particularly large musical event "takes over" an entire village, spontaneous sessions may erupt on the street corners. Sessions may also take place occasionally at wakes. House sessions are not as common now as they were in the past. This can be seen in the book Peig by Peig Sayers: early in the book, when Peig was young, people often went to sessions at one another's houses, a practice called 'bothántiocht'.
Ice which calves (breaks off) from an ice shelf or glacier may become an iceberg. Sea ice can be forced together by currents and winds to form pressure ridges up to tall. Navigation through areas of sea ice occurs in openings called "polynyas" or "leads", or requires the use of a special ship called an "icebreaker".

On land and structures
Ice on land ranges from the largest type, called an "ice sheet", to smaller ice caps and ice fields, to glaciers and ice streams, to the snow line and snow fields. Aufeis is layered ice that forms in Arctic and subarctic stream valleys. Ice, frozen in the stream bed, blocks normal groundwater discharge and causes the local water table to rise, resulting in water discharge on top of the frozen layer. This water then freezes, causing the water table to rise further and repeat the cycle. The result is a stratified ice deposit, often several meters thick. Freezing rain is a type of winter storm, called an ice storm, in which rain falls and then freezes, producing a glaze of ice. Ice can also form icicles, similar to stalactites in appearance, or stalagmite-like forms as water drips and re-freezes. The term "ice dam" has three meanings (the others are discussed below). On structures, an ice dam is the buildup of ice on a sloped roof which stops melt water from draining properly and can cause damage from water leaks in buildings.

On rivers and streams
Ice which forms on moving water tends to be less uniform and stable than ice which forms on calm water. Ice jams (sometimes called "ice dams"), when broken chunks of ice pile up, are the greatest ice hazard on rivers. Ice jams can cause flooding, damage structures in or near the river, and damage vessels on the river. Ice jams can even force some hydropower facilities to shut down completely. An ice dam can also be a blockage formed by the movement of a glacier, which may produce a proglacial lake. Heavy ice flows in rivers can likewise damage vessels and require the use of an icebreaker to keep navigation possible.
Ice discs are circular formations of ice surrounded by water in a river. Pancake ice is a formation of ice generally created in areas with less calm conditions. On lakes Ice forms on calm water from the shores, a thin layer spreading across the surface, and then downward. Ice on lakes is generally of four types: primary, secondary, superimposed and agglomerate. Primary ice forms first. Secondary ice forms below the primary ice in a direction parallel to the direction of the heat flow. Superimposed ice forms on top of the ice surface from rain, or from water which seeps up through cracks in the ice when it settles under a load of snow. Shelf ice occurs when floating pieces of ice are driven by the wind piling up on the windward shore. Candle ice is a form of rotten ice that develops in columns perpendicular to the surface of a lake. An ice shove occurs when ice movement, caused by ice expansion and/or wind action, pushes ice onto the shores of lakes, often displacing the sediment that makes up the shoreline. In the air Rime Rime is a type of ice formed on cold objects when drops of water crystallize on them. This can be observed in foggy weather, when the temperature drops during the night. Soft rime contains a high proportion of trapped air, making it appear white rather than transparent, and giving it a density about one quarter of that of pure ice. Hard rime is comparatively dense. Pellets Ice pellets are a form of precipitation consisting of small, translucent balls of ice. This form of precipitation is also referred to as "sleet" by the United States National Weather Service. (In British English "sleet" refers to a mixture of rain and snow.) Ice pellets are usually smaller than hailstones. They often bounce when they hit the ground, and generally do not freeze into a solid mass unless mixed with freezing rain. The METAR code for ice pellets is PL.
Ice pellets form when a layer of above-freezing air is located above the ground, with sub-freezing air both above and below it. This causes the partial or complete melting of any snowflakes falling through the warm layer. As they fall back into the sub-freezing layer closer to the surface, they re-freeze into ice pellets. However, if the sub-freezing layer beneath the warm layer is too shallow, the precipitation will not have time to re-freeze, and freezing rain will be the result at the surface. A temperature profile showing a warm layer above the ground is most likely to be found in advance of a warm front during the cold season, but can occasionally be found behind a passing cold front. Hail Like other precipitation, hail forms in storm clouds when supercooled water droplets freeze on contact with condensation nuclei, such as dust or dirt. The storm's updraft blows the hailstones to the upper part of the cloud. The updraft dissipates and the hailstones fall down, back into the updraft, and are lifted up again. Hail has a diameter of or more. Within METAR code, GR is used to indicate larger hail, of a diameter of at least and GS for smaller. Stones just larger than golf ball-sized are one of the most frequently reported hail sizes. Hailstones can grow to and weigh more than . In large hailstones, latent heat released by further freezing may melt the outer shell of the hailstone. The hailstone then may undergo 'wet growth', where the liquid outer shell collects other smaller hailstones. The hailstone gains an ice layer and grows increasingly larger with each ascent. Once a hailstone becomes too heavy to be supported by the storm's updraft, it falls from the cloud. Hail forms in strong thunderstorm clouds, particularly those with intense updrafts, high liquid water content, great vertical extent, large water droplets, and where a good portion of the cloud layer is below freezing. Hail-producing clouds are often identifiable by their green coloration.
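The ice-pellet versus freezing-rain distinction described above is essentially a decision rule over the vertical temperature profile. The toy classifier below illustrates it; the function name, the profile format, and the fixed layer-depth threshold are assumptions for illustration only (operational forecasting uses melting and refreezing energy budgets, not a simple depth cutoff):

```python
def classify_precipitation(profile, refreeze_depth_m=800):
    """Toy precipitation-type classifier.

    profile: list of (height_m, temp_c) pairs from the ground upward.
    refreeze_depth_m: arbitrary depth of sub-freezing surface air assumed
    deep enough to re-freeze melted snowflakes into ice pellets.
    """
    surface_temp = profile[0][1]
    # Heights of any above-freezing ("warm") layers aloft.
    warm_aloft = [h for h, t in profile[1:] if t > 0]
    if not warm_aloft:
        return "snow"           # flakes never melt on the way down
    if surface_temp > 0:
        return "rain"           # nothing re-freezes near the ground
    # Depth of the sub-freezing layer beneath the lowest warm layer.
    cold_layer_depth = min(warm_aloft) - profile[0][0]
    if cold_layer_depth >= refreeze_depth_m:
        return "ice pellets"    # melted flakes re-freeze before landing
    return "freezing rain"      # drops stay liquid, freeze on contact
```

For example, a profile with a warm layer at 1,200 m over a deep sub-freezing surface layer classifies as ice pellets, while the same warm layer at only 500 m leaves freezing rain.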
The growth rate is maximized at about , and becomes vanishingly small much below as supercooled water droplets become rare. For this reason, hail is most common within continental interiors of the mid-latitudes, as hail formation is considerably more likely when the freezing level is below the altitude of . Entrainment of dry air into strong thunderstorms over continents can increase the frequency of hail by promoting evaporational cooling, which lowers the freezing level of thunderstorm clouds and gives hail a larger volume to grow in. Accordingly, hail is actually less common in the tropics despite a much higher frequency of thunderstorms than in the mid-latitudes, because the atmosphere over the tropics tends to be warmer over a much greater depth. Hail in the tropics occurs mainly at higher elevations. Snow Snow crystals form when tiny supercooled cloud droplets (about 10 μm in diameter) freeze. These droplets are able to remain liquid at temperatures lower than , because to freeze, a few molecules in the droplet need to come together by chance to form an arrangement similar to that in an ice lattice; the droplet then freezes around this "nucleus". Experiments show that this "homogeneous" nucleation of cloud droplets only occurs at temperatures lower than . In warmer clouds an aerosol particle or "ice nucleus" must be present in (or in contact with) the droplet to act as a nucleus. Our understanding of what particles make efficient ice nuclei is poor; what we do know is that they are very rare compared to the cloud condensation nuclei on which liquid droplets form. Clays, desert dust and biological particles may be effective, although to what extent is unclear. Artificial nuclei are used in cloud seeding. The droplet then grows by condensation of water vapor onto the ice surfaces.
Diamond dust So-called "diamond dust", also known as ice needles or ice crystals, forms at temperatures approaching due to air with slightly higher moisture from aloft mixing with colder, surface-based air. The METAR identifier for diamond dust within international hourly weather reports is IC. Ablation Ablation of ice refers to both its melting and its dissolution. The melting of ice entails the breaking of hydrogen bonds between the water molecules. The ordering of the molecules in the solid breaks down to a less ordered state and the solid melts to become a liquid. This is achieved by increasing the internal energy of the ice beyond the melting point. When ice melts it absorbs as much energy as would be required to heat an equivalent amount of water by 80 °C. While melting, the temperature of the ice surface remains constant at 0 °C. The rate of the melting process depends on the efficiency of the energy exchange process. An ice surface in fresh water melts solely by free convection, with a rate that depends linearly on the water temperature, T∞, when T∞ is less than 3.98 °C, and superlinearly when T∞ is equal to or greater than 3.98 °C, with the rate being proportional to (T∞ − 3.98 °C)α, with α = for T∞ much greater than 8 °C, and α = for intermediate temperatures T∞. In salty ambient conditions, dissolution rather than melting often causes the ablation of ice. For example, the temperature of the Arctic Ocean is generally below the melting point of ablating sea ice. The phase transition from solid to liquid is achieved by mixing salt and water molecules, similar to the dissolution of sugar in water, even though the water temperature is far below the melting point of the sugar. Thus the dissolution rate is limited by salt transport, whereas melting can occur at much higher rates characteristic of heat transport.
Role in human activities Humans have used ice for cooling and food preservation for centuries, relying on harvesting natural ice in various forms and then transitioning to the mechanical production of the material. Ice also presents a challenge to transportation in various forms and a setting for winter sports. Cooling Ice has long been valued as a means of cooling. In 400 BC Iran, Persian engineers had already mastered the technique of storing ice in the middle of summer in the desert. The ice was brought in during the winters from nearby mountains in bulk amounts, and stored in specially designed, naturally cooled refrigerators, called yakhchal (meaning ice storage). This was a large underground space (up to 5000 m3) that had thick walls (at least two meters at the base) made of a special mortar called sarooj, composed of sand, clay, egg whites, lime, goat hair, and ash in specific proportions, which was known to be resistant to heat transfer. This mixture was thought to be completely impenetrable to water. The space often had access to a qanat, and often contained a system of windcatchers which could easily bring temperatures inside the space down to frigid levels on summer days. The ice was used to chill treats for royalty. Harvesting There were thriving industries in 16th–17th century England whereby low-lying areas along the Thames Estuary were flooded during the winter; the ice was harvested in carts and stored inter-seasonally in insulated wooden icehouses, often located in large country houses, and was widely used to keep fish fresh when caught in distant waters. This was allegedly copied by an Englishman who had seen the same activity in China. Ice was imported into England from Norway on a considerable scale as early as 1823. In the United States, the first cargo of ice was sent from New York City to Charleston, South Carolina, in 1799, and by the first half of the 19th century, ice harvesting had become a big business.
Frederic Tudor, who became known as the "Ice King", worked on developing better insulation products for long distance shipments of ice, especially to the tropics; this became known as the ice trade. Trieste sent ice to Egypt, Corfu, and Zante; Switzerland, to France; and Germany sometimes was supplied from Bavarian lakes. The Hungarian Parliament building used ice harvested in the winter from Lake Balaton for air conditioning. Ice houses were used to store ice formed in the winter, to make ice available all year long, and an early type of refrigerator known as an icebox was cooled using a block of ice placed inside it. In many cities, it was not unusual to have a regular ice delivery service during the summer. The advent of artificial refrigeration technology has since made delivery of ice obsolete. Ice is still harvested for ice and snow sculpture events. For example, a swing saw is used to get ice for the Harbin International Ice and Snow Sculpture Festival each year from the frozen surface of the Songhua River. Mechanical production Ice is now produced on an industrial scale, for uses including food storage and processing, chemical manufacturing, concrete mixing and curing, and consumer or packaged ice. Most commercial icemakers produce three basic types of fragmentary ice: flake, tubular and plate, using a variety of techniques. Large batch ice makers can produce up to 75 tons of ice per day. In 2002, there were 426 commercial ice-making companies in the United States, with a combined value of shipments of $595,487,000. Home refrigerators can also make ice with a built-in icemaker, which will typically make ice cubes or crushed ice. Stand-alone icemaker units that make ice cubes are often called ice machines. Transportation Ice can present challenges to safe transportation on land, sea and in the air. Land travel Ice forming on roads is a dangerous winter hazard. Black ice is very difficult to see, because it lacks the expected frosty surface.
Whenever there is freezing rain or snow which occurs at a temperature near the melting point, it is common for ice to build up on the windows of vehicles. Driving safely requires the removal of the ice build-up. Ice scrapers are tools designed to break the ice free and clear the windows, though removing the ice can be a long and laborious process. Far enough below the freezing point, a thin layer of ice crystals can form on the inside surface of windows. This usually happens when a vehicle has been left alone after being driven for a while,
|
depending on its history of pressure and temperature. When cooled slowly, correlated proton tunneling occurs below (, ) giving rise to macroscopic quantum phenomena. Virtually all ice on Earth's surface and in its atmosphere is of a hexagonal crystalline structure denoted as ice Ih (spoken as "ice one h") with minute traces of cubic ice, denoted as ice Ic and, more recently found, Ice VII inclusions in diamonds. The most common phase transition to ice Ih occurs when liquid water is cooled below (, ) at standard atmospheric pressure. It may also be deposited directly by water vapor, as happens in the formation of frost. The transition from ice to water is melting and from ice directly to water vapor is sublimation. Ice is used in a variety of ways, including for cooling, for winter sports, and ice sculpting. Physical properties As a naturally occurring crystalline inorganic solid with an ordered structure, ice is considered to be a mineral. It possesses a regular crystalline structure based on the molecule of water, which consists of a single oxygen atom covalently bonded to two hydrogen atoms, or H–O–H. However, many of the physical properties of water and ice are controlled by the formation of hydrogen bonds between adjacent oxygen and hydrogen atoms; while it is a weak bond, it is nonetheless critical in controlling the structure of both water and ice. An unusual property of water is that its solid form—ice frozen at atmospheric pressure—is approximately 8.3% less dense than its liquid form; this is equivalent to a volumetric expansion of 9%. The density of ice is 0.9167–0.9168 g/cm3 at 0 °C and standard atmospheric pressure (101,325 Pa), whereas water has a density of 0.9998–0.999863 g/cm3 at the same temperature and pressure. Liquid water is densest, essentially 1.00 g/cm3, at 4 °C and begins to lose its density as the water molecules begin to form the hexagonal crystals of ice as the freezing point is reached. 
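The quoted figures are mutually consistent: an 8.3% drop in density corresponds to roughly a 9% increase in volume, since for a fixed mass volume scales as the inverse of density. A quick numerical check, using the densities given in the text:

```python
rho_ice = 0.9167    # g/cm^3 at 0 °C and standard pressure (from the text)
rho_water = 0.9998  # g/cm^3 at 0 °C (from the text)

density_drop = (rho_water - rho_ice) / rho_water  # fractional density decrease
volume_expansion = rho_water / rho_ice - 1        # same mass, V ∝ 1/ρ

print(f"density drop:     {density_drop:.1%}")     # ≈ 8.3%
print(f"volume expansion: {volume_expansion:.1%}")  # ≈ 9.1%
```

This is why a full, sealed bottle of water cracks in the freezer: the contents need about 9% more room as ice.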
This is due to hydrogen bonding dominating the intermolecular forces, which results in a packing of molecules less compact in the solid. Density of ice increases slightly with decreasing temperature and has a value of 0.9340 g/cm3 at −180 °C (93 K). When water freezes, it increases in volume (about 9% for fresh water). The effect of expansion during freezing can be dramatic, and ice expansion is a basic cause of freeze-thaw weathering of rock in nature and damage to building foundations and roadways from frost heaving. It is also a common cause of the flooding of houses when water pipes burst due to the pressure of expanding water when it freezes. The result of this process is that ice (in its most common form) floats on liquid water, which is an important feature in Earth's biosphere. It has been argued that without this property, natural bodies of water would freeze, in some cases permanently, from the bottom up, resulting in a loss of bottom-dependent animal and plant life in fresh and sea water. Sufficiently thin ice sheets allow light to pass through while protecting the underside from short-term weather extremes such as wind chill. This creates a sheltered environment for bacterial and algal colonies. When sea water freezes, the ice is riddled with brine-filled channels which sustain sympagic organisms such as bacteria, algae, copepods and annelids, which in turn provide food for animals such as krill and specialised fish like the bald notothen, fed upon in turn by larger animals such as emperor penguins and minke whales. When ice melts, it absorbs as much energy as it would take to heat an equivalent mass of water by 80 °C. During the melting process, the temperature remains constant at 0 °C. While melting, any energy added breaks the hydrogen bonds between ice (water) molecules. Energy becomes available to increase the thermal energy (temperature) only after enough hydrogen bonds are broken that the ice can be considered liquid water. 
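The "80 °C" equivalence follows directly from two standard constants (the values below are the usual textbook figures, not taken from this text): the latent heat of fusion of ice is about 334 J/g, while the specific heat of liquid water is about 4.18 J/(g·K), and 334 / 4.18 ≈ 80.

```python
latent_heat_fusion = 334.0   # J/g, textbook value for melting ice at 0 °C
specific_heat_water = 4.184  # J/(g·K), textbook value for liquid water

# Temperature rise that the same energy would produce in liquid water.
equivalent_heating = latent_heat_fusion / specific_heat_water
print(f"melting 1 g of ice ≈ heating 1 g of water by {equivalent_heating:.0f} °C")
```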
The amount of energy consumed in breaking hydrogen bonds in the transition from ice to water is known as the heat of fusion. As with water, ice absorbs light at the red end of the spectrum preferentially as the result of an overtone of an oxygen–hydrogen (O–H) bond stretch. Compared with water, this absorption is shifted toward slightly lower energies. Thus, ice appears blue, with a slightly greener tint than liquid water. Since absorption is cumulative, the color effect intensifies with increasing thickness or if internal reflections cause the light to take a longer path through the ice. Other colors can appear in the presence of light-absorbing impurities, where the impurity dictates the color rather than the ice itself. For instance, icebergs containing impurities (e.g., sediments, algae, air bubbles) can appear brown, grey or green. Phases Ice may be any one of the 19 known solid crystalline phases of water, or in an amorphous solid state at various densities. Most liquids under increased pressure freeze at higher temperatures because the pressure helps to hold the molecules together. However, the strong hydrogen bonds in water make it different: for some pressures higher than , water freezes at a temperature below 0 °C, as shown in the phase diagram below. The melting of ice under high pressures is thought to contribute to the movement of glaciers. Ice, water, and water vapour can coexist at the triple point, which is exactly 273.16 K (0.01 °C) at a pressure of 611.657 Pa. The kelvin was in fact defined as 1/273.16 of the difference between this triple point and absolute zero, though this definition changed in May 2019. Unlike most other solids, ice is difficult to superheat. In an experiment, ice at −3 °C was superheated to about 17 °C for about 250 picoseconds. Subjected to higher pressures and varying temperatures, ice can form in 19 separate known crystalline phases.
With care, at least 15 of these phases (one of the known exceptions being ice X) can be recovered at ambient pressure and low temperature in metastable form. The types are differentiated by their crystalline structure, proton ordering, and density. There are also two metastable phases of ice under pressure, both fully hydrogen-disordered; these are IV and XII. Ice XII was discovered in 1996. In 2006, XIII and XIV were discovered. Ices XI, XIII, and XIV are hydrogen-ordered forms of ices Ih, V, and XII respectively. In 2009, ice XV was found at extremely high pressures and −143 °C. At even higher pressures, ice is predicted to become a metal; this has been variously estimated to occur at 1.55 TPa or 5.62 TPa. As well as crystalline forms, solid water can exist in amorphous states as amorphous ice (ASW) of varying densities. Water in the interstellar medium is dominated by amorphous ice, making it likely the most common form of water in the universe. Low-density ASW (LDA), also known as hyperquenched glassy water, may be responsible for noctilucent clouds on Earth and is usually formed by deposition of water vapor in cold or vacuum conditions. High-density ASW (HDA) is formed by compression of ordinary ice Ih or LDA at GPa pressures. Very-high-density ASW (VHDA) is HDA slightly warmed to 160 K under 1–2 GPa pressures. In outer space, hexagonal crystalline ice (the predominant form found on Earth) is extremely rare. Amorphous ice is more common; however, hexagonal crystalline ice can be formed by volcanic action. Ice from a theorized superionic water may possess two crystalline structures. At pressures in excess of such superionic ice would take on a body-centered cubic structure. However, at pressures in excess of the structure may shift to a more stable face-centered cubic lattice. It is speculated that superionic ice could compose the interior of ice giants such as Uranus and Neptune.
Friction properties The low coefficient of friction ("slipperiness") of ice has been attributed to the pressure of an object coming into contact with the ice, melting a thin layer of the ice and allowing the object to glide across the surface. For example, the blade of an ice skate, upon exerting pressure on the ice, would melt a thin layer, providing lubrication between the ice and the blade. This explanation, called "pressure melting", originated in the 19th century. However, it did not account for skating on ice at temperatures lower than , at which skating nonetheless occurs. A second theory describing the coefficient of friction of ice suggested that ice molecules at the interface cannot properly bond with the molecules of the mass of ice beneath (and thus are free to move like molecules of liquid water). These molecules remain in a semi-liquid state, providing lubrication regardless of the pressure exerted against the ice by any object. However, the significance of this hypothesis is disputed by experiments showing a high coefficient of friction for ice using atomic force microscopy. A third theory is "friction heating", which suggests that friction of the material is the cause of the ice layer melting. However, this theory does not sufficiently explain why ice is slippery when standing still even at below-zero temperatures. A comprehensive theory of ice friction takes into account all the above-mentioned friction mechanisms. This model allows quantitative estimation of the friction coefficient of ice against various materials as a function of temperature and sliding speed. In typical conditions related to winter sports and tires of a vehicle on ice, melting of a thin ice layer due to frictional heating is the primary reason for the slipperiness. The mechanism controlling the frictional properties of ice is still an active area of scientific study.
Natural formation The term that collectively describes all of the parts of the Earth's surface where water is in frozen form is the cryosphere. Ice is an important component of the global climate, particularly in regard to the water cycle. Glaciers and snowpacks are an important storage mechanism for fresh water; over time, they may sublimate or melt. Snowmelt is an important source of seasonal fresh water. The World Meteorological Organization defines several kinds of ice depending on origin, size, shape, influence and so on. Clathrate hydrates are forms of ice that contain gas molecules trapped within its crystal lattice. On the oceans Ice that is found at sea may be in the form of drift ice floating in the water, fast ice fixed to a shoreline or anchor ice if attached to the sea bottom.
|
valence shell tend to be very reactive. Atoms that are strongly electronegative (as is the case with halogens) often have only one or two empty orbitals in their valence shell, and frequently bond with other molecules or gain electrons to form anions. Atoms that are weakly electronegative (such as alkali metals) have relatively few valence electrons, which can easily be shared with atoms that are strongly electronegative. As a result, weakly electronegative atoms tend to give up their valence electrons and form cations. Formation Ionic bonding can result from a redox reaction when atoms of an element (usually a metal), whose ionization energy is low, give some of their electrons to achieve a stable electron configuration. In doing so, cations are formed. An atom of another element (usually a nonmetal) with greater electron affinity accepts one or more electrons to attain a stable electron configuration, and after accepting electrons the atom becomes an anion. Typically, the stable electron configuration is one of the noble gases for elements in the s-block and the p-block, and particular stable electron configurations for d-block and f-block elements. The electrostatic attraction between the anions and cations leads to the formation of a solid with a crystallographic lattice in which the ions are stacked in an alternating fashion. In such a lattice, it is usually not possible to distinguish discrete molecular units, so the compounds formed are not molecular in nature. However, the ions themselves can be complex and form molecular ions like the acetate anion or the ammonium cation. For example, common table salt is sodium chloride. When sodium (Na) and chlorine (Cl) are combined, the sodium atoms each lose an electron, forming cations (Na+), and the chlorine atoms each gain an electron to form anions (Cl−). These ions are then attracted to each other in a 1:1 ratio to form sodium chloride (NaCl).
Na + Cl → Na+ + Cl− → NaCl However, to maintain charge neutrality, strict ratios between anions and cations are observed so that ionic compounds, in general, obey the rules of stoichiometry despite not being molecular compounds. For compounds that are transitional to the alloys and possess mixed ionic and metallic bonding, this may not be the case anymore. Many sulfides, e.g., do form non-stoichiometric compounds. Many ionic compounds are referred to as salts as they can also be formed by the neutralization reaction of an Arrhenius base like NaOH with an Arrhenius acid like HCl NaOH + HCl → NaCl + H2O The salt NaCl is then said to consist of the acid rest Cl− and the base rest Na+. The removal of electrons to form the cation is endothermic, raising the system's overall energy. There may also be energy changes associated with breaking of existing bonds or the addition of more than one electron to form anions. However, the action of the anion's accepting the cation's valence electrons and the subsequent attraction of the ions to each other releases (lattice) energy and, thus, lowers the overall energy of the system. Ionic bonding will occur only if the overall energy change for the reaction is favorable. In general, the reaction is exothermic, but, e.g., the formation of mercuric oxide (HgO) is endothermic. The charge of the resulting ions is a major factor in the strength of ionic bonding, e.g. a salt C+A− is held together by electrostatic forces roughly four times weaker than C2+A2− according to Coulomb's law, where C and A represent a generic cation and anion respectively. The sizes of the ions and the particular packing of the lattice are ignored in this rather simplistic argument. Structures Ionic compounds in the solid state form lattice structures. The two principal factors in determining the form of the lattice are the relative charges of the ions and their relative sizes. 
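The "roughly four times" figure for C2+A2− versus C+A− comes straight from Coulomb's law, where the interaction energy scales with the product of the charges at a given separation. A minimal sketch (the unit separation is an arbitrary placeholder; as the text notes, real lattices also differ in ion size and packing):

```python
def coulomb_strength(z1, z2, r=1.0):
    """Magnitude of the Coulomb interaction between two point charges,
    in arbitrary units: E ∝ |z1 * z2| / r for charge numbers z1, z2."""
    return abs(z1 * z2) / r

mono = coulomb_strength(+1, -1)  # C+ A-  (e.g., NaCl-like)
di = coulomb_strength(+2, -2)    # C2+ A2- (e.g., MgO-like)
print(di / mono)  # 4.0 — four times stronger at equal separation
```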
Some structures are adopted by a number of compounds; for example, the structure of the rock salt sodium chloride is also adopted by many alkali halides, and binary oxides such as magnesium oxide. Pauling's rules provide guidelines for predicting and rationalizing the crystal structures of ionic crystals. Strength of the bonding For a solid crystalline ionic compound the enthalpy change in forming the solid from gaseous ions is termed the lattice energy. The experimental value for the lattice energy can be determined using the Born–Haber cycle.
It can also be calculated (predicted) using the Born–Landé equation as the sum of the electrostatic potential energy, calculated by summing interactions between cations and anions, and a short-range repulsive potential energy term. The electrostatic potential can be expressed in terms of the interionic separation and a constant (Madelung constant) that takes account of the geometry of the crystal. The Born–Landé equation gives a reasonable fit to the lattice energy of, e.g., sodium chloride, where the calculated (predicted) value is −756 kJ/mol, which compares to −787 kJ/mol using the Born–Haber cycle. In aqueous solution the binding strength can be described by the Bjerrum or Fuoss equation as a function of the ion charges, largely independent of the nature of the ions, such as their polarizability or size. The strength of salt bridges is most often evaluated by measurements of equilibria between molecules containing cationic and anionic sites, most often in solution. Equilibrium constants in water indicate additive free energy contributions for each salt bridge. Another method for identifying such interactions, even in complicated molecules, is crystallography, and sometimes NMR spectroscopy. The attractive forces defining the strength of ionic bonding can be modeled by Coulomb's law. Ionic bond strengths are typically (cited ranges vary) between 170 and 1500 kJ/mol.
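The Born–Landé estimate quoted for sodium chloride can be reproduced numerically. This is a sketch: the Madelung constant (1.7476), interionic distance (≈281 pm), and Born exponent (n = 8) are standard textbook values, not stated in the text above, and the result shifts by a few kJ/mol with other common parameter choices:

```python
import math

N_A = 6.02214076e23         # Avogadro constant, 1/mol
E_CHARGE = 1.602176634e-19  # elementary charge, C
EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m

def born_lande(z_plus, z_minus, madelung, r0, n):
    """Born–Landé lattice energy in kJ/mol: Madelung-scaled electrostatic
    attraction reduced by the (1 - 1/n) short-range repulsion factor."""
    e_el = -N_A * madelung * abs(z_plus * z_minus) * E_CHARGE**2 \
           / (4 * math.pi * EPS0 * r0)
    return e_el * (1 - 1 / n) / 1000.0  # J/mol -> kJ/mol

# NaCl with typical textbook parameters:
u = born_lande(+1, -1, 1.7476, 281e-12, 8)
print(round(u))  # about -756 kJ/mol, matching the value quoted above
```

Comparing this with the Born–Haber value of −787 kJ/mol shows the few-percent accuracy the text describes.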
However, 2+ ions (e.g., Be2+) or even 1+ ions (e.g., Li+) show some polarizing power because their sizes are so small (e.g., LiI is ionic but has some covalent bonding present). Note that this is not the ionic polarization effect that refers to displacement of ions in the lattice due to the application of an electric field. Comparison with covalent bonding In ionic bonding, the atoms are bound by attraction of oppositely charged ions, whereas, in covalent bonding, atoms are bound by sharing electrons to attain stable electron configurations. In covalent bonding, the molecular geometry around each atom is determined by valence shell electron pair repulsion (VSEPR) rules, whereas, in ionic materials, the geometry follows maximum packing rules. One could say that covalent bonding is more directional in the sense that the energy penalty for not adhering to the optimum bond angles is large, whereas ionic bonding has no such penalty. Since there are no shared electron pairs to repel each other, the ions are simply packed as efficiently as possible. This often leads to much higher coordination numbers. In NaCl, each ion has 6 bonds and all bond angles are 90°. In CsCl the coordination number is 8. By comparison, carbon typically has a maximum of four bonds. Purely ionic bonding cannot exist, as the proximity of the entities involved in the bonding allows some degree of sharing electron density between them. Therefore, all ionic bonding has some covalent character. Thus, bonding is considered ionic where the ionic character is greater than the covalent character. The larger the difference in electronegativity between the two types of atoms involved in the bonding, the more ionic (polar) it is. Bonds with partially ionic and partially covalent character are called polar covalent bonds. For example, Na–Cl and Mg–O interactions have a few percent covalency, while Si–O bonds are usually ~50% ionic and ~50% covalent.
Pauling estimated that an electronegativity difference of 1.7 (on the Pauling scale) corresponds to 50% ionic character, so that a difference greater than 1.7 corresponds to a bond which is predominantly ionic. Ionic character in covalent bonds can be directly measured for atoms having quadrupolar nuclei (2H, 14N, 81,79Br, 35,37Cl or 127I). These
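Pauling's 1.7-difference threshold can be checked against his empirical relation between electronegativity difference and fractional ionic character, f = 1 − exp(−ΔEN²/4). The formula itself is not stated in the text above; this sketch assumes that standard form:

```python
import math

def ionic_character(delta_en):
    """Fractional ionic character of a bond from the Pauling
    electronegativity difference, via f = 1 - exp(-dEN^2 / 4)."""
    return 1.0 - math.exp(-delta_en**2 / 4.0)

# A difference of 1.7 on the Pauling scale lands almost exactly at 50%:
print(round(100 * ionic_character(1.7)))  # 51 (percent)
```

So a difference above 1.7 gives a bond whose character is predominantly ionic, consistent with the estimate in the text.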
one of several boxing organisations. IBF may also refer to: Businesses: International Banking Facility, a legal entity of a US bank; Irish Banking Federation, a banking representative body in Ireland. Sports: International Bandy Federation,
cytokines, while they can also act as scavengers that rid the body of worn-out cells and other debris, and as antigen-presenting cells (APC) that activate the adaptive immune system. Dendritic cells are phagocytes in tissues that are in contact with the external environment; therefore, they are located mainly in the skin, nose, lungs, stomach, and intestines. They are named for their resemblance to neuronal dendrites, as both have many spine-like projections. Dendritic cells serve as a link between the bodily tissues and the innate and adaptive immune systems, as they present antigens to T cells, one of the key cell types of the adaptive immune system. Granulocytes are leukocytes that have granules in their cytoplasm. In this category are neutrophils, mast cells, basophils, and eosinophils. Mast cells reside in connective tissues and mucous membranes, and regulate the inflammatory response. They are most often associated with allergy and anaphylaxis. Basophils and eosinophils are related to neutrophils. They secrete chemical mediators that are involved in defending against parasites and play a role in allergic reactions, such as asthma. Innate lymphoid cells (ILCs) are a group of innate immune cells that are derived from the common lymphoid progenitor and belong to the lymphoid lineage. These cells are defined by the absence of an antigen-specific B or T cell receptor (TCR), owing to their lack of the recombination-activating gene. ILCs do not express myeloid or dendritic cell markers. Natural killer (NK) cells are lymphocytes and a component of the innate immune system that does not directly attack invading microbes. Rather, NK cells destroy compromised host cells, such as tumor cells or virus-infected cells, recognizing such cells by a condition known as "missing self." This term describes cells with low levels of a cell-surface marker called MHC I (major histocompatibility complex)—a situation that can arise in viral infections of host cells.
Normal body cells are not recognized and attacked by NK cells because they express intact self MHC antigens. Those MHC antigens are recognized by killer cell immunoglobulin receptors which essentially put the brakes on NK cells. Inflammation Inflammation is one of the first responses of the immune system to infection. The symptoms of inflammation are redness, swelling, heat, and pain, which are caused by increased blood flow into tissue. Inflammation is produced by eicosanoids and cytokines, which are released by injured or infected cells. Eicosanoids include prostaglandins that produce fever and the dilation of blood vessels associated with inflammation, and leukotrienes that attract certain white blood cells (leukocytes). Common cytokines include interleukins that are responsible for communication between white blood cells; chemokines that promote chemotaxis; and interferons that have anti-viral effects, such as shutting down protein synthesis in the host cell. Growth factors and cytotoxic factors may also be released. These cytokines and other chemicals recruit immune cells to the site of infection and promote healing of any damaged tissue following the removal of pathogens. The pattern-recognition receptors called inflammasomes are multiprotein complexes (consisting of an NLR, the adaptor protein ASC, and the effector molecule pro-caspase-1) that form in response to cytosolic PAMPs and DAMPs, whose function is to generate active forms of the inflammatory cytokines IL-1β and IL-18. Humoral defenses The complement system is a biochemical cascade that attacks the surfaces of foreign cells. It contains over 20 different proteins and is named for its ability to "complement" the killing of pathogens by antibodies. Complement is the major humoral component of the innate immune response. Many species have complement systems, including non-mammals like plants, fish, and some invertebrates. 
In humans, this response is activated by complement binding to antibodies that have attached to these microbes or the binding of complement proteins to carbohydrates on the surfaces of microbes. This recognition signal triggers a rapid killing response. The speed of the response is a result of signal amplification that occurs after sequential proteolytic activation of complement molecules, which are also proteases. After complement proteins initially bind to the microbe, they activate their protease activity, which in turn activates other complement proteases, and so on. This produces a catalytic cascade that amplifies the initial signal by controlled positive feedback. The cascade results in the production of peptides that attract immune cells, increase vascular permeability, and opsonize (coat) the surface of a pathogen, marking it for destruction. This deposition of complement can also kill cells directly by disrupting their plasma membrane. Adaptive immune system The adaptive immune system evolved in early vertebrates and allows for a stronger immune response as well as immunological memory, where each pathogen is "remembered" by a signature antigen. The adaptive immune response is antigen-specific and requires the recognition of specific "non-self" antigens during a process called antigen presentation. Antigen specificity allows for the generation of responses that are tailored to specific pathogens or pathogen-infected cells. The ability to mount these tailored responses is maintained in the body by "memory cells". Should a pathogen infect the body more than once, these specific memory cells are used to quickly eliminate it. Recognition of antigen The cells of the adaptive immune system are special types of leukocytes, called lymphocytes. B cells and T cells are the major types of lymphocytes and are derived from hematopoietic stem cells in the bone marrow. 
B cells are involved in the humoral immune response, whereas T cells are involved in the cell-mediated immune response. Killer T cells only recognize antigens coupled to Class I MHC molecules, while helper T cells and regulatory T cells only recognize antigens coupled to Class II MHC molecules. These two mechanisms of antigen presentation reflect the different roles of the two types of T cell. A third, minor subtype comprises the γδ T cells, which recognize intact antigens that are not bound to MHC receptors. The double-positive T cells are exposed to a wide variety of self-antigens in the thymus, where iodine is necessary for thymus development and activity. In contrast, the B cell antigen-specific receptor is an antibody molecule on the B cell surface and recognizes native (unprocessed) antigen without any need for antigen processing. Such antigens may be large molecules found on the surfaces of pathogens, but can also be small haptens (such as penicillin) attached to a carrier molecule. Each lineage of B cell expresses a different antibody, so the complete set of B cell antigen receptors represents all the antibodies that the body can manufacture. When B or T cells encounter their related antigens, they multiply and many "clones" of the cells are produced that target the same antigen. This is called clonal selection. Antigen presentation to T lymphocytes Both B cells and T cells carry receptor molecules that recognize specific targets. T cells recognize a "non-self" target, such as a pathogen, only after antigens (small fragments of the pathogen) have been processed and presented in combination with a "self" receptor called a major histocompatibility complex (MHC) molecule. Cell mediated immunity There are two major subtypes of T cells: the killer T cell and the helper T cell. In addition, there are regulatory T cells, which have a role in modulating the immune response.
Killer T cells Killer T cells are a sub-group of T cells that kill cells that are infected with viruses (and other pathogens), or are otherwise damaged or dysfunctional. As with B cells, each type of T cell recognizes a different antigen. Killer T cells are activated when their T-cell receptor binds to this specific antigen in a complex with the MHC Class I receptor of another cell. Recognition of this MHC:antigen complex is aided by a co-receptor on the T cell, called CD8. The T cell then travels throughout the body in search of cells where the MHC I receptors bear this antigen. When an activated T cell contacts such cells, it releases cytotoxins, such as perforin, which form pores in the target cell's plasma membrane, allowing ions, water and toxins to enter. The entry of another toxin called granulysin (a protease) induces the target cell to undergo apoptosis. T cell killing of host cells is particularly important in preventing the replication of viruses. T cell activation is tightly controlled and generally requires a very strong MHC/antigen activation signal, or additional activation signals provided by "helper" T cells (see below). Helper T cells Helper T cells regulate both the innate and adaptive immune responses and help determine which immune responses the body makes to a particular pathogen. These cells have no cytotoxic activity and do not kill infected cells or clear pathogens directly. They instead control the immune response by directing other cells to perform these tasks. Helper T cells express T cell receptors that recognize antigen bound to Class II MHC molecules. The MHC:antigen complex is also recognized by the helper cell's CD4 co-receptor, which recruits molecules inside the T cell (such as Lck) that are responsible for the T cell's activation. 
Helper T cells have a weaker association with the MHC:antigen complex than observed for killer T cells, meaning many receptors (around 200–300) on the helper T cell must be bound by an MHC:antigen to activate the helper cell, while killer T cells can be activated by engagement of a single MHC:antigen molecule. Helper T cell activation also requires longer duration of engagement with an antigen-presenting cell. The activation of a resting helper T cell causes it to release cytokines that influence the activity of many cell types. Cytokine signals produced by helper T cells enhance the microbicidal function of macrophages and the activity of killer T cells. In addition, helper T cell activation causes an upregulation of molecules expressed on the T cell's surface, such as CD40 ligand (also called CD154), which provide extra stimulatory signals typically required to activate antibody-producing B cells. Gamma delta T cells Gamma delta T cells (γδ T cells) possess an alternative T-cell receptor (TCR) as opposed to CD4+ and CD8+ (αβ) T cells and share the characteristics of helper T cells, cytotoxic T cells and NK cells. The conditions that produce responses from γδ T cells are not fully understood. Like other 'unconventional' T cell subsets bearing invariant TCRs, such as CD1d-restricted natural killer T cells, γδ T cells straddle the border between innate and adaptive immunity. On one hand, γδ T cells are a component of adaptive immunity as they rearrange TCR genes to produce receptor diversity and can also develop a memory phenotype. On the other hand, the various subsets are also part of the innate immune system, as restricted TCR or NK receptors may be used as pattern recognition receptors. For example, large numbers of human Vγ9/Vδ2 T cells respond within hours to common molecules produced by microbes, and highly restricted Vδ1+ T cells in epithelia respond to stressed epithelial cells. 
Humoral immune response A B cell identifies pathogens when antibodies on its surface bind to a specific foreign antigen. This antigen/antibody complex is taken up by the B cell and processed by proteolysis into peptides. The B cell then displays these antigenic peptides on its surface MHC class II molecules. This combination of MHC and antigen attracts a matching helper T cell, which releases lymphokines and activates the B cell. As the activated B cell then begins to divide, its offspring (plasma cells) secrete millions of copies of the antibody that recognizes this antigen. These antibodies circulate in blood plasma and lymph, bind to pathogens expressing the antigen and mark them for destruction by complement activation or for uptake and destruction by phagocytes. Antibodies can also neutralize challenges directly, by binding to bacterial toxins or by interfering with the receptors that viruses and bacteria use to infect cells. Newborn infants have no prior exposure to microbes and are particularly vulnerable to infection. Several layers of passive protection are provided by the mother. During pregnancy, a particular type of antibody, called IgG, is transported from mother to baby directly through the placenta, so human babies have high levels of antibodies even at birth, with the same range of antigen specificities as their mother. Breast milk or colostrum also contains antibodies that are transferred to the gut of the infant and protect against bacterial infections until the newborn can synthesize its own antibodies. This is passive immunity because the fetus does not actually make any memory cells or antibodies—it only borrows them. This passive immunity is usually short-term, lasting from a few days up to several months. In medicine, protective passive immunity can also be transferred artificially from one individual to another. 
Immunological memory When B cells and T cells are activated and begin to replicate, some of their offspring become long-lived memory cells. Throughout the lifetime of an animal, these memory cells remember each specific pathogen encountered and can mount a strong response if the pathogen is detected again. This is "adaptive" because it occurs during the lifetime of an individual as an adaptation to infection with that pathogen and prepares the immune system for future challenges. Immunological memory can be in the form of either passive short-term memory or active long-term memory. Physiological regulation The immune system is involved in many aspects of physiological regulation in the body. The immune system interacts intimately with other systems, such as the endocrine and the nervous systems. The immune system also plays a crucial role in embryogenesis (development of the embryo), as well as in tissue repair and regeneration. Hormones Hormones can act as immunomodulators, altering the sensitivity of the immune system. For example, female sex hormones are known immunostimulators of both adaptive and innate immune responses. Some autoimmune diseases such as lupus erythematosus strike women preferentially, and their onset often coincides with puberty. By contrast, male sex hormones such as testosterone seem to be immunosuppressive. Other hormones appear to regulate the immune system as well, most notably prolactin, growth hormone and vitamin D. Vitamin D When a T-cell encounters a foreign pathogen, it extends a vitamin D receptor. This is essentially a signaling device that allows the T-cell to bind to the active form of vitamin D, the steroid hormone calcitriol. T-cells have a symbiotic relationship with vitamin D. 
Not only does the T-cell extend a vitamin D receptor, in essence asking to bind to the steroid hormone version of vitamin D, calcitriol, but the T-cell expresses the gene CYP27B1, which is the gene responsible for converting the pre-hormone version of vitamin D, calcidiol, into calcitriol. Only after binding to calcitriol can T-cells perform their intended function. Other immune system cells known to express CYP27B1, and thus activate calcidiol, are dendritic cells, keratinocytes and macrophages. Sleep and rest The immune system is affected by sleep and rest, and sleep deprivation is detrimental to immune function. Complex feedback loops involving cytokines, such as interleukin-1 and tumor necrosis factor-α produced in response to infection, appear to also play a role in the regulation of non-rapid eye movement (NREM) sleep. Thus, the immune response to infection may result in changes to the sleep cycle, including an increase in slow-wave sleep relative to REM sleep. In people suffering from sleep deprivation, active immunizations may have a diminished effect and may result in lower antibody production, and a lower immune response, than would be noted in a well-rested individual. Additionally, proteins such as NFIL3, which have been shown to be closely intertwined with both T-cell differentiation and circadian rhythms, can be affected through the disturbance of natural light and dark cycles through instances of sleep deprivation. These disruptions can lead to an increase in chronic conditions such as heart disease, chronic pain, and asthma. In addition to the negative consequences of sleep deprivation, sleep and the intertwined circadian system have been shown to have strong regulatory effects on immunological functions affecting both innate and adaptive immunity.
First, during the early slow-wave-sleep stage, a sudden drop in blood levels of cortisol, epinephrine, and norepinephrine causes increased blood levels of the hormones leptin, pituitary growth hormone, and prolactin. These signals induce a pro-inflammatory state through the production of the pro-inflammatory cytokines interleukin-1, interleukin-12, TNF-alpha and IFN-gamma. These cytokines then stimulate immune functions such as immune cell activation, proliferation, and differentiation. During this time of a slowly evolving adaptive immune response, there is a peak in undifferentiated or less differentiated cells, like naïve and central memory T cells. In addition to these effects, the milieu of hormones produced at this time (leptin, pituitary growth hormone, and prolactin) supports the interactions between APCs and T-cells, a shift of the Th1/Th2 cytokine balance towards one that supports Th1, an increase in overall Th cell proliferation, and naïve T cell migration to lymph nodes. This is also thought to support the formation of long-lasting immune memory through the initiation of Th1 immune responses. During wake periods, differentiated effector cells, such as cytotoxic natural killer cells and cytotoxic T lymphocytes, peak to elicit an effective response against any intruding pathogens. Anti-inflammatory molecules, such as cortisol and catecholamines, also peak during awake active times. Inflammation would cause serious cognitive and physical impairments if it were to occur during wake times, and inflammation may occur during sleep times due to the presence of melatonin. Inflammation causes a great deal of oxidative stress and the presence of melatonin during sleep times could actively counteract free radical production during this time. Repair and regeneration The immune system, particularly the innate component, plays a decisive role in tissue repair after an insult. 
Key actors include macrophages and neutrophils, but other cellular actors, including γδ T cells, innate lymphoid cells (ILCs), and regulatory T cells (Tregs), are also important. The plasticity of immune cells and the balance between pro-inflammatory and anti-inflammatory signals are crucial aspects of efficient tissue repair. Immune components and pathways are involved in regeneration as well, for example in amphibians such as in axolotl limb regeneration. According to one hypothesis, organisms that can regenerate (e.g., axolotls) could be less immunocompetent than organisms that cannot regenerate. Disorders of human immunity Failures of host defense occur and fall into three broad categories: immunodeficiencies, autoimmunity, and hypersensitivities. Immunodeficiencies Immunodeficiencies occur when one or more of the components of the immune system are inactive. The ability of the immune system to respond to pathogens is diminished in both the young and the elderly, with immune responses beginning to decline at around 50 years of age due to immunosenescence. In developed countries, obesity, alcoholism, and drug use are common causes of poor immune function, while malnutrition is the most common cause of immunodeficiency in developing countries. Diets lacking sufficient protein are associated with impaired cell-mediated immunity, complement activity, phagocyte function, IgA antibody concentrations, and cytokine production. Additionally, the loss of the thymus at an early age through genetic mutation or surgical removal results in severe immunodeficiency and a high susceptibility to infection. Immunodeficiencies can also be inherited or 'acquired'. Severe combined immunodeficiency is a rare genetic disorder characterized by the disturbed development of functional T cells and B cells caused by numerous genetic mutations. 
Chronic granulomatous disease, where phagocytes have a reduced ability to destroy pathogens, is an example of an inherited, or congenital, immunodeficiency. AIDS and some types of cancer cause acquired immunodeficiency. Autoimmunity Overactive immune responses form the other end of immune dysfunction, particularly the autoimmune disorders. Here, the immune system fails to properly distinguish between self and non-self, and attacks part of the body. Under normal circumstances, many T cells and antibodies react with "self" peptides. One of the functions of specialized cells (located in the thymus and bone marrow) is to present young lymphocytes with self antigens produced throughout the body and to eliminate those cells that recognize self-antigens, preventing autoimmunity. Common autoimmune diseases include Hashimoto's thyroiditis, rheumatoid arthritis, diabetes mellitus type 1, and systemic lupus erythematosus. Hypersensitivity Hypersensitivity is an immune response that damages the body's own tissues. It is divided into four classes (Type I – IV) based on the mechanisms involved and the time course of the hypersensitive reaction. Type I hypersensitivity is an immediate or anaphylactic reaction, often associated with allergy. Symptoms can range from mild discomfort to death. Type I hypersensitivity is mediated by IgE, which triggers degranulation of mast cells and basophils when cross-linked by antigen. Type II hypersensitivity occurs when antibodies bind to antigens on the individual's own cells, marking them for destruction. This is also called antibody-dependent (or cytotoxic) hypersensitivity, and is mediated by IgG and IgM antibodies. Immune complexes (aggregations of antigens, complement proteins, and IgG and IgM antibodies) deposited in various tissues trigger Type III hypersensitivity reactions. Type IV hypersensitivity (also known as cell-mediated or delayed type hypersensitivity) usually takes between two and three days to develop. 
Type IV reactions are involved in many autoimmune
In developed countries, obesity, alcoholism, and drug use are common causes of poor immune function, while malnutrition is the most common cause of immunodeficiency in developing countries. Diets lacking sufficient protein are associated with impaired cell-mediated immunity, complement activity, phagocyte function, IgA antibody concentrations, and cytokine production. Additionally, the loss of the thymus at an early age through genetic mutation or surgical removal results in severe immunodeficiency and a high susceptibility to infection. Immunodeficiencies can also be inherited or 'acquired'. Severe combined immunodeficiency is a rare genetic disorder characterized by the disturbed development of functional T cells and B cells caused by numerous genetic mutations. Chronic granulomatous disease, where phagocytes have a reduced ability to destroy pathogens, is an example of an inherited, or congenital, immunodeficiency. AIDS and some types of cancer cause acquired immunodeficiency. Autoimmunity Overactive immune responses form the other end of immune dysfunction, particularly the autoimmune disorders. Here, the immune system fails to properly distinguish between self and non-self, and attacks part of the body. Under normal circumstances, many T cells and antibodies react with "self" peptides. One of the functions of specialized cells (located in the thymus and bone marrow) is to present young lymphocytes with self antigens produced throughout the body and to eliminate those cells that recognize self-antigens, preventing autoimmunity. Common autoimmune diseases include Hashimoto's thyroiditis, rheumatoid arthritis, diabetes mellitus type 1, and systemic lupus erythematosus. Hypersensitivity Hypersensitivity is an immune response that damages the body's own tissues. It is divided into four classes (Type I – IV) based on the mechanisms involved and the time course of the hypersensitive reaction. 
Type I hypersensitivity is an immediate or anaphylactic reaction, often associated with allergy. Symptoms can range from mild discomfort to death. Type I hypersensitivity is mediated by IgE, which triggers degranulation of mast cells and basophils when cross-linked by antigen. Type II hypersensitivity occurs when antibodies bind to antigens on the individual's own cells, marking them for destruction. This is also called antibody-dependent (or cytotoxic) hypersensitivity, and is mediated by IgG and IgM antibodies. Immune complexes (aggregations of antigens, complement proteins, and IgG and IgM antibodies) deposited in various tissues trigger Type III hypersensitivity reactions. Type IV hypersensitivity (also known as cell-mediated or delayed type hypersensitivity) usually takes between two and three days to develop. Type IV reactions are involved in many autoimmune and infectious diseases, but may also involve contact dermatitis. These reactions are mediated by T cells, monocytes, and macrophages. Idiopathic inflammation Inflammation is one of the first responses of the
|
exposed to the antibody for a particular antigen before being exposed to the antigen itself then the child will produce a dampened response. Passively acquired maternal antibodies can suppress the antibody response to active immunization. Similarly, the response of T-cells to vaccination differs in children compared to adults, and vaccines that induce Th1 responses in adults do not readily elicit these same responses in neonates. Between six and nine months after birth, a child’s immune system begins to respond more strongly to glycoproteins, but there is usually no marked improvement in their response to polysaccharides until they are at least one year old. This can be the reason for distinct time frames found in vaccination schedules. During adolescence, the human body undergoes various physical, physiological and immunological changes triggered and mediated by hormones, of which the most significant in females is 17-β-estradiol (an estrogen) and, in males, is testosterone. Estradiol usually begins to act around the age of 10 and testosterone some months later. There is evidence that these steroids not only act directly on the primary and secondary sexual characteristics but also have an effect on the development and regulation of the immune system, including an increased risk in developing pubescent and post-pubescent autoimmunity. There is also some evidence that cell surface receptors on B cells and macrophages may detect sex hormones in the system. The female sex hormone 17-β-estradiol has been shown to regulate the level of immunological response, while some male androgens such as testosterone seem to suppress the stress response to infection. Other androgens, however, such as DHEA, increase immune response. As in females, the male sex hormones seem to have more control of the immune system during puberty and post-puberty than during the rest of a male's adult life. Physical changes during puberty such as thymic involution also affect immunological response. 
Ecoimmunology and behavioural immunity Ecoimmunology, or ecological immunology, explores the relationship between the immune system of an organism and its social, biotic and abiotic environment. More recent ecoimmunological research has focused on host pathogen defences traditionally considered "non-immunological", such as pathogen avoidance, self-medication, symbiont-mediated defenses, and fecundity trade-offs. Behavioural immunity, a phrase coined by Mark Schaller, specifically refers to psychological pathogen avoidance drivers, such as disgust aroused by stimuli encountered around pathogen-infected individuals, for example the smell of vomit. More broadly, "behavioural" ecological immunity has been demonstrated in multiple species. For example, the Monarch butterfly often lays its eggs on certain toxic milkweed species when infected with parasites. These toxins reduce parasite growth in the offspring of the infected Monarch. However, when uninfected Monarch butterflies are forced to feed only on these toxic plants, they suffer a fitness cost in the form of a reduced lifespan relative to other uninfected Monarch butterflies. This indicates that laying eggs on toxic plants is a costly behaviour in Monarchs which has probably evolved to reduce the severity of parasite infection. Symbiont-mediated defenses are also heritable across host generations, despite a non-genetic direct basis for the transmission. Aphids, for example, rely on several different symbionts for defense from key parasites, and can vertically transmit their symbionts from parent to offspring. Therefore, a symbiont that successfully confers protection from a parasite is more likely to be passed to the host offspring, allowing coevolution with parasites attacking the host in a way similar to traditional immunity. Immunotherapy The use of immune system components or antigens to treat a disease or disorder is known as immunotherapy. 
Immunotherapy is most commonly used to treat allergies, autoimmune disorders such as Crohn’s disease and rheumatoid arthritis, and certain cancers. Immunotherapy is also often used in the immunosuppressed (such as HIV patients) and people suffering from other immune deficiencies. This includes the use of regulatory factors such as IL-2, IL-10, GM-CSF B, IFN-α. Diagnostic immunology The specificity of the bond between antibody and antigen has made the antibody an excellent tool for the detection of substances by a variety of diagnostic techniques. Antibodies specific for a desired antigen can be conjugated with an isotopic (radio) or fluorescent label or with a color-forming enzyme in order to detect it. However, the similarity between some antigens can lead to false positives and other errors in such tests by antibodies cross-reacting with antigens that are not exact matches. Cancer immunology The study of the interaction of the immune system with cancer cells can lead to diagnostic tests and therapies with which to find and fight cancer. This branch of immunology is concerned with the physiological reactions characteristic of the immune state. Reproductive immunology This area of immunology is devoted to the study of immunological aspects of the reproductive process, including acceptance of the fetus. The term has also been used by fertility clinics to address fertility problems, recurrent miscarriages, premature deliveries and dangerous complications such as pre-eclampsia. Theoretical immunology Immunology is strongly experimental in everyday practice but is also characterized by an ongoing theoretical attitude. Many theories have been suggested in immunology from the end of the nineteenth century up to the present time. The end of the 19th century and the beginning of the 20th century saw a battle between "cellular" and "humoral" theories of immunity. 
According to the cellular theory of immunity, represented in particular by Elie Metchnikoff, it was cells – more precisely, phagocytes – that were responsible for immune responses. In contrast, the humoral theory of immunity, held by Robert Koch and Emil von Behring, among others, stated that the active immune agents were soluble components (molecules) found in the organism's "humors" rather than its cells. In the mid-1950s, Macfarlane Burnet, inspired by a suggestion made by Niels Jerne, formulated the clonal selection theory (CST) of immunity. On the basis of CST, Burnet developed a theory of how an immune response is triggered according to the self/nonself distinction: "self" constituents (constituents of the body) do not trigger destructive immune responses, while "nonself" entities (e.g., pathogens, an allograft) trigger a destructive immune response. The theory was later modified to reflect new discoveries regarding histocompatibility or the complex "two-signal" activation of T cells. The self/nonself theory of immunity and the self/nonself vocabulary have been criticized, but remain very influential. More recently, several theoretical frameworks have been suggested in immunology, including "autopoietic" views, "cognitive immune" views, the "danger model" (or "danger theory"), and the "discontinuity" theory. The danger model, suggested by Polly Matzinger and colleagues, has been very influential, arousing many comments and discussions. See also History of immunology Immunomics International Reviews of Immunology List of immunologists Osteoimmunology Outline of immunology References External links American Association of Immunologists British Society for Immunology Federation of Clinical
|
IgE and IgA do not cross the placenta, they are almost undetectable at birth. Some IgA is provided by breast milk. These passively-acquired antibodies can protect the newborn for up to 18 months, but their response is usually short-lived and of low affinity. These antibodies can also produce a negative response. If a child is exposed to the antibody for a particular antigen before being exposed to the antigen itself then the child will produce a dampened response. Passively acquired maternal antibodies can suppress the antibody response to active immunization. Similarly, the response of T-cells to vaccination differs in children compared to adults, and vaccines that induce Th1 responses in adults do not readily elicit these same responses in neonates. Between six and nine months after birth, a child’s immune system begins to respond more strongly to glycoproteins, but there is usually no marked improvement in their response to polysaccharides until they are at least one year old. This can be the reason for distinct time frames found in vaccination schedules. During adolescence, the human body undergoes various physical, physiological and immunological changes triggered and mediated by hormones, of which the most significant in females is 17-β-estradiol (an estrogen) and, in males, is testosterone. Estradiol usually begins to act around the age of 10 and testosterone some months later. There is evidence that these steroids not only act directly on the primary and secondary sexual characteristics but also have an effect on the development and regulation of the immune system, including an increased risk in developing pubescent and post-pubescent autoimmunity. There is also some evidence that cell surface receptors on B cells and macrophages may detect sex hormones in the system. 
The female sex hormone 17-β-estradiol has been shown to regulate the level of immunological response, while some male androgens such as testosterone seem to suppress the stress response to infection. Other androgens, however, such as DHEA, increase immune response. As in females, the male sex hormones seem to have more control of the immune system during puberty and post-puberty than during the rest of a male's adult life. Physical changes during puberty such as thymic involution also affect immunological response. Ecoimmunology and behavioural immunity Ecoimmunology, or ecological immunology, explores the relationship between the immune system of an organism and its social, biotic and abiotic environment. More recent ecoimmunological research has focused on host pathogen defences traditionally considered "non-immunological", such as pathogen avoidance, self-medication, symbiont-mediated defenses, and fecundity trade-offs. Behavioural immunity, a phrase coined by Mark Schaller, specifically refers to psychological pathogen avoidance drivers, such as disgust aroused by stimuli encountered around pathogen-infected individuals, for example the smell of vomit. More broadly, "behavioural" ecological immunity has been demonstrated in multiple species. For example, the Monarch butterfly often lays its eggs on certain toxic milkweed species when infected with parasites. These toxins reduce parasite growth in the offspring of the infected Monarch. However, when uninfected Monarch butterflies are forced to feed only on these toxic plants, they suffer a fitness cost in the form of a reduced lifespan relative to other uninfected Monarch butterflies. This indicates that laying eggs on toxic plants is a costly behaviour in Monarchs which has probably evolved to reduce the severity of parasite infection. Symbiont-mediated defenses are also heritable across host generations, despite a non-genetic direct basis for the transmission. 
Aphids, for example, rely on several different symbionts for defense from key parasites, and can vertically transmit their symbionts from parent to offspring. Therefore, a symbiont that successfully confers protection from a parasite is more likely to be passed to the host offspring, allowing coevolution with parasites attacking the host in a way similar to traditional immunity. Immunotherapy The use of immune system components or antigens to treat a disease or disorder is known as immunotherapy. Immunotherapy is most commonly used to treat allergies, autoimmune disorders such as Crohn’s disease and rheumatoid arthritis, and certain cancers. Immunotherapy is also often used in the immunosuppressed (such as HIV patients) and people suffering from other immune deficiencies. This includes the use of regulatory factors such as IL-2, IL-10, GM-CSF B, IFN-α. Diagnostic immunology The specificity of the bond between antibody and antigen has made the antibody an excellent tool for the detection of substances by a variety of diagnostic techniques. Antibodies specific for a desired antigen can be conjugated with an isotopic (radio) or fluorescent label or with a color-forming enzyme in order to detect it. However, the similarity between some antigens can lead to false positives and other errors in such tests by antibodies cross-reacting with antigens that are not exact matches. Cancer immunology The study of the interaction of the immune system with cancer cells can lead to diagnostic tests and therapies with which to find and fight cancer. This branch of immunology is concerned with the physiological reactions characteristic of the immune state. Reproductive immunology This area of immunology is devoted to the study of immunological aspects of the reproductive process, including acceptance of the fetus. The term has also been used by fertility clinics to address fertility problems, recurrent miscarriages, premature deliveries and dangerous complications such as pre-eclampsia. 
Theoretical immunology Immunology is strongly experimental in everyday practice but is also characterized by an ongoing theoretical attitude. Many theories have been suggested in immunology from the end of the nineteenth century up to the present time. The end of the 19th century and the beginning of the 20th century saw a battle between "cellular" and "humoral" theories of immunity. According to the cellular theory of immunity, represented in particular by Elie Metchnikoff, it was cells – more precisely, phagocytes – that were responsible for immune responses. In contrast, the humoral theory of immunity, held by Robert Koch and Emil von Behring, among others, stated that the active immune agents were soluble components (molecules) found in the organism's "humors" rather than its cells. In the mid-1950s, Macfarlane Burnet, inspired by a suggestion made by Niels Jerne, formulated the clonal selection theory (CST) of immunity. On the basis of CST, Burnet developed a theory of how an immune response is triggered according to the self/nonself distinction: "self" constituents (constituents of the body) do not trigger destructive immune responses, while "nonself" entities (e.g., pathogens, an allograft) trigger a destructive immune response. The theory was later modified to reflect new discoveries regarding histocompatibility or the complex "two-signal" activation of T cells. The self/nonself theory of immunity and the self/nonself vocabulary have been criticized, but remain very influential. More recently, several theoretical frameworks have been suggested in immunology, including "autopoietic" views, "cognitive immune" views, the "danger model" (or "danger theory"), and the "discontinuity" theory. The danger model, suggested by Polly Matzinger and colleagues, has been very influential, arousing many comments and discussions.
|
Projects Authority
Institute of Practitioners in Advertising
Involvement and Participation Association, for employee involvement in the workplace

United States
Independence Party of America
Independent Pilots Association
Independent practice association, of physicians
Innovations for Poverty Action
Innovative Products of America, a tool manufacturer
Institute for Propaganda Analysis
Island Pacific Academy, Hawaii

Other
Independent Psychiatric Association of Russia
Institute of Public Affairs, Poland
Instituto Superior Autónomo de Estudos Politécnicos, Portugal

Science and technology
Intermediate power amplifier of a radio transmitter
Ipa (spider), a genus of spiders

Chemistry
3-Indolepropionic acid, a biological substance
Isopropyl alcohol, a chemical compound

Computing
"Identity, Policy, and Audit", as in FreeIPA
Software for biologists from Ingenuity
|
refers to:
India pale ale, a style of beer
International Phonetic Alphabet, a system of phonetic notation

IPA may also refer to:

Organizations

International
Insolvency Practitioners Association, of the UK and Ireland
Institute of Public Administration (disambiguation)
International Permafrost Association
International Phonetic Association, behind the International Phonetic Alphabet
International Play Association
International Police Association
International Polka Association
International Presentation Association, network of Presentation Sisters
International Psychoanalytical Association
International Publishers Association, representing book and journal publishing

Australia
Institute of Public Accountants
Institute of Public Affairs

India
Indian
|
subjecting the beer to a cold stage comprising rapidly cooling the beer to a temperature of about its freezing point in such a manner that ice crystals are formed therein in only minimal amounts. The resulting cooled beer is then mixed for a short period of time with a beer slurry containing ice crystals, without any appreciable collateral increase in the amount of ice crystals in the resulting mixture. Finally, the so-treated beer is extracted from the mixture." The company provides the following explanation for the layman: "During this unique process, the temperature is reduced until fine ice crystals form in the beer. Then using an exclusive process, the crystals are removed. The result is a full-flavoured balanced beer." Miller acquired the U.S. marketing and distribution rights to Molson's products, and first introduced the Molson product in the United States in August 1993 as Molson Ice. Miller also introduced the Icehouse brand under the Plank Road Brewery brand name shortly thereafter, and it is still sold nationwide. Anheuser-Busch introduced Bud Ice (5.5% ABV) in 1994, and it remains one of the country's top selling ice beers. Bud Ice has a somewhat lower alcohol content than most other ice beer brands. In 1995, Anheuser-Busch also introduced two other major brands: Busch Ice (5.9% ABV, introduced 1995) and Natural Ice (also 5.9% ABV, also introduced in 1995). Natural Ice is the No. 1 selling ice beer brand in the United States; its low price makes it very popular on college campuses all over the country. Keystone Ice, a value-based subdivision of Coors, also produces a 5.9% ABV brew labeled Keystone Ice. Common ice beer brands in Canada in 2017, with approximately 5.5 to 6 per cent alcohol content, include Carling Ice, Molson Keystone Ice, Molson's Black Ice, Busch Ice, Old Milwaukee Ice, Brick's Laker Ice and Labatt Ice. There is a Labatt Maximum Ice too, with 7.1 per cent alcohol. 
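The fractional-freezing step described above lends itself to a simple mass-balance illustration. The sketch below is my own simplification, not the patented Labatt process or its actual parameters: it assumes the removed ice is pure water carrying no alcohol, so removing only a small ice fraction raises the alcohol content only slightly.

```python
def abv_after_ice_removal(abv_initial, ice_fraction):
    """Approximate ABV (% by volume) of the liquid remaining after
    removing a fraction of the beer's volume as pure-water ice.
    Assumes the ice carries no alcohol and ignores mixing effects;
    a rough illustration only."""
    alcohol_volume = abv_initial / 100.0      # alcohol per unit volume of beer
    remaining_volume = 1.0 - ice_fraction     # liquid left after ice removal
    return 100.0 * alcohol_volume / remaining_volume

# Removing 9% of the volume as ice from a 5.0% ABV beer:
print(round(abv_after_ice_removal(5.0, 0.09), 2))  # 5.49
```

With small ice fractions, as in the "minimal amounts" the patent language describes, the concentration effect stays correspondingly modest.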
Characteristics and regulation Ice beers are typically known for their high alcohol content relative to price. In some areas, many ice beer products are associated with purchase by "street drunks," and are prohibited from sale. For example, most of the products that are explicitly listed as prohibited in the beer and malt liquor category in the Seattle area are ice beers. See also Applejack (drink) References External links Beer styles Canadian
|
a manner that ice crystals are formed therein in only minimal amounts. The resulting cooled beer is then mixed for a short period of time with a beer slurry containing ice crystals, without any appreciable collateral increase in the amount of ice crystals in the resulting mixture. Finally, the so-treated beer is extracted from the mixture." The company provides the following explanation for the layman: "During this unique process, the temperature is reduced until fine ice crystals form in the beer. Then using an exclusive process, the crystals are removed. The result is a full-flavoured balanced beer." Miller acquired the U.S. marketing and distribution rights to Molson's products, and first introduced the Molson product in the United States in August 1993 as Molson Ice. Miller also introduced the Icehouse brand under the Plank Road Brewery brand name shortly thereafter, and it is still sold nationwide. Anheuser-Busch introduced Bud Ice (5.5% ABV) in 1994, and it remains one of the country's top selling ice beers. Bud Ice has a somewhat lower alcohol content than most other ice beer brands. In 1995, Anheuser-Busch also introduced two other major brands: Busch Ice (5.9% ABV, introduced 1995) and Natural Ice (also 5.9% ABV, also introduced in 1995). Natural Ice is the No. 1 selling ice beer brand in the United States; its low price makes it very popular on college campuses all over the country. Keystone Ice, a value-based subdivision of Coors, also produces a 5.9% ABV brew labeled Keystone Ice. Common ice beer brands in Canada in 2017, with approximately 5.5 to 6 per cent alcohol content, include Carling Ice, Molson Keystone Ice, Molson's Black Ice, Busch Ice, Old Milwaukee Ice, Brick's Laker Ice and Labatt Ice. There is a Labatt Maximum Ice too, with 7.1 per cent alcohol. Characteristics and regulation Ice beers are typically known for their high alcohol content relative to price. 
In some areas, many ice beer products are associated with purchase by "street drunks," and are prohibited from sale. For example, most of the products that are explicitly listed as prohibited in the beer and malt liquor category in the Seattle area are ice beers. See also Applejack (drink) References External links Beer
|
that support both binary operations, such as rings, integral domains, and fields. The multiplicative identity is often called unity in the latter context (a ring with unity). This should not be confused with a unit in ring theory, which is any element having a multiplicative inverse. By its own definition, unity itself is necessarily a unit. Examples Properties In the example S = {e, f} with the equalities e ∗ e = f ∗ e = e and e ∗ f = f ∗ f = f, S is a semigroup. It demonstrates the possibility for (S, ∗) to have several left identities. In fact, every element can be a left identity. In a similar manner, there can be several right identities. But if there is both a right identity and a left identity, then they must be equal, resulting in a single two-sided identity. To see this, note that if l is a left identity and r is a right identity, then l = l ∗ r = r. In particular, there can never be more than one two-sided identity: if there were two, say e and f, then e ∗ f would have to be equal to both e and f. It is also quite possible for (S, ∗) to have no identity element, such as the case of even integers under the multiplication operation. Another common example is the cross product of vectors, where the absence of an identity element is related to the fact that the direction of any nonzero cross product is always orthogonal to any element multiplied. That is, it is not possible to obtain a non-zero vector in the same direction as the original.
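The S = {e, f} example above can be checked by brute force. The sketch below (names and representation are my own) encodes the operation x ∗ y = y implied by the stated equalities:

```python
# Cayley table for S = {e, f}: e*e = f*e = e and e*f = f*f = f,
# i.e. x * y always returns the right-hand operand.
S = ["e", "f"]

def op(x, y):
    return y

# An element x is a left identity if op(x, a) == a for all a in S,
# and a right identity if op(a, x) == a for all a in S.
left_ids = [x for x in S if all(op(x, a) == a for a in S)]
right_ids = [x for x in S if all(op(a, x) == a for a in S)]

print(left_ids)   # ['e', 'f'] -- every element is a left identity
print(right_ids)  # []         -- no right identity, so no two-sided identity
```

Every element passes the left-identity test and none passes the right-identity test, matching the claim that a semigroup can have several left identities yet no two-sided identity.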
|
used in algebraic structures such as groups and rings. The term identity element is often shortened to identity (as in the case of additive identity and multiplicative identity) when there is no possibility of confusion, but the identity implicitly depends on the binary operation it is associated with. Definitions Let S be a set equipped with a binary operation ∗. Then an element e of S is called a left identity if e ∗ a = a for all a in S, and a right identity if a ∗ e = a for all a in S. If e is both a left identity and a right identity, then it is called a two-sided identity, or simply an identity. An identity with respect to addition is called an additive identity (often denoted as 0) and an identity with respect to multiplication is called a multiplicative identity (often denoted as 1). These need not be ordinary addition and multiplication—as the underlying operation could be rather arbitrary. In the case of a group for example, the identity element is sometimes simply denoted by the symbol e. The distinction between additive and multiplicative identity is used most often for sets that support both binary operations, such as rings, integral domains, and fields. The multiplicative identity is often called unity in the latter context (a ring with unity). This should not be confused with a unit in ring theory, which is any element having a multiplicative inverse.
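The definitions above can be verified mechanically on a finite sample. This is a minimal sketch with helper names of my own choosing, using the integers with ordinary addition and multiplication:

```python
def is_left_identity(e, op, S):
    # e is a left identity if e * a == a for every a in S
    return all(op(e, a) == a for a in S)

def is_right_identity(e, op, S):
    # e is a right identity if a * e == a for every a in S
    return all(op(a, e) == a for a in S)

def is_identity(e, op, S):
    # a two-sided identity is both a left and a right identity
    return is_left_identity(e, op, S) and is_right_identity(e, op, S)

S = range(-5, 6)
print(is_identity(0, lambda a, b: a + b, S))  # True  (additive identity)
print(is_identity(1, lambda a, b: a * b, S))  # True  (multiplicative identity)
print(is_identity(0, lambda a, b: a * b, S))  # False (0 is not a multiplicative identity)
```

Checking over a finite range cannot prove the property over all integers, but it illustrates how the left and right conditions combine into a two-sided identity.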
|
instruments. An instrumental can exist in music notation, after it is written by a composer; in the mind of the composer (especially in cases where the composer themselves will perform the piece, as in the case of a blues solo guitarist or a folk music fiddle player); as a piece that is performed live by a single instrumentalist or a musical ensemble, which could range from a duo or trio to a large big band, concert band or orchestra. In a song that is otherwise sung, a section that is not sung but which is played by instruments can be called an instrumental interlude, or, if it occurs at the beginning of the song, before the singer starts to sing, an instrumental introduction. If the instrumental section highlights the skill, musicality, and often the virtuosity of a particular performer (or group of performers), the section may be called a "solo" (e.g., the guitar solo that is a key section of heavy metal music and hard rock songs). If the instruments are percussion instruments, the interlude can be called a percussion interlude or "percussion break". These interludes are a form of break in the song. In popular music In commercial popular music, instrumental tracks are sometimes renderings or remixes of a corresponding release that features vocals, but they may also be compositions originally conceived without vocals. One example of a genre in which both vocal/instrumental and solely instrumental songs are produced is blues. A blues band often performs mostly songs with sung lyrics, but during the band's show, they may also perform instrumental songs which only include electric guitar, harmonica, upright bass/electric bass and drum kit. Number-one instrumentals Borderline cases Some recordings which include brief or non-musical use of the human voice are typically considered instrumentals. Examples include songs
|
can be called a percussion interlude or "percussion break". These interludes are a form of break in the song.

In popular music

In commercial popular music, instrumental tracks are sometimes renderings, remixes of a corresponding release that features vocals, but they may also be compositions originally conceived without vocals. One example of a genre in which both vocal/instrumental and solely instrumental songs are produced is blues. A blues band often uses mostly songs that have lyrics that are sung, but during the band's show, they may also perform instrumental songs which only include electric guitar, harmonica, upright bass/electric bass and drum kit.

Number-one instrumentals

Borderline cases

Some recordings which include brief or non-musical use of the human voice are typically considered instrumentals. Examples include songs with the following:
- Short verbal interjections (as in "Tequila", "Topsy", "Wipe Out", "The Hustle" or "Bentley's Gonna Sort You Out")
- Repetitive nonsense words (e.g., "la la..." as in "Calcutta", or "Woo Hoo")
- Non-musical spoken passages in the background of the track (e.g., "To Live Is to Die" by Metallica; "Wasteland" by Chelsea Grin)
- Wordless vocal effects, such as drones (e.g., "Rockit" or "Flying")
- Vocal percussion, such as beatbox B-sides on rap singles
- Yelling (e.g., "Cry for a Shadow")
- Yodeling (e.g., "Hocus Pocus")
- Whistling (e.g., "I Was Kaiser Bill's Batman" or "Colonel Bogey March")
- Spoken statements at the end of the track (e.g., "God Bless the Children of the Beast" by Mötley Crüe, "For the Love of God" by Steve Vai)
- Non-musical vocal recordings taken from other media (e.g., "Vampires" by Godsmack)
- Field recordings which may or may not contain non-lyrical words (e.g., many songs by Godspeed You! Black Emperor and other post-rock bands)
Songs including actual musical—rhythmic, melodic, and lyrical—vocals might still be categorized as instrumentals if the vocals appear only as a short part of an extended piece (e.g., "Unchained Melody" (Les Baxter), "Batman Theme", "TSOP (The Sound of Philadelphia)", "Pick Up the
|
This is done by first placing vectors along the octahedron's edges such that each face is bounded by a cycle, then similarly subdividing each edge into the golden mean along the direction of its vector. The five octahedra defining any given icosahedron form a regular polyhedral compound, while the two icosahedra that can be defined in this way from any given octahedron form a uniform polyhedron compound.

Spherical coordinates

The locations of the vertices of a regular icosahedron can be described using spherical coordinates, for instance as latitude and longitude. If two vertices are taken to be at the north and south poles (latitude ±90°), then the other ten vertices are at latitude ±arctan(1/2) ≈ ±26.57°. These ten vertices are at evenly spaced longitudes (36° apart), alternating between north and south latitudes. This scheme takes advantage of the fact that the regular icosahedron is a pentagonal gyroelongated bipyramid, with D5d dihedral symmetry—that is, it is formed of two congruent pentagonal pyramids joined by a pentagonal antiprism.

Orthogonal projections

The icosahedron has three special orthogonal projections, centered on a face, an edge and a vertex.

As a configuration

This configuration matrix represents the icosahedron. The rows and columns correspond to vertices, edges, and faces. The diagonal numbers say how many of each element occur in the whole icosahedron. The nondiagonal numbers say how many of the column's element occur in or at the row's element. Here is the configuration expanded with k-face elements and k-figures. The diagonal element counts are the ratio of the full Coxeter group H3, order 120, divided by the order of the subgroup with mirror removal.

Spherical tiling

The icosahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane.
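The latitude-and-longitude scheme above translates directly into coordinates. The sketch below (written for this article, not taken from any source) builds the 12 vertices on the unit sphere and recovers the 30 edges as the minimum-distance vertex pairs:

```python
import math
from itertools import combinations

def icosahedron_vertices():
    """12 unit-sphere vertices: two poles plus two rings at latitude ±arctan(1/2)."""
    lat = math.atan(0.5)                    # ≈ 26.57°
    z, r = math.sin(lat), math.cos(lat)
    verts = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]
    for k in range(5):
        a = math.radians(72 * k)            # upper ring
        b = math.radians(72 * k + 36)       # lower ring, offset by 36°
        verts.append((r * math.cos(a), r * math.sin(a), z))
        verts.append((r * math.cos(b), r * math.sin(b), -z))
    return verts

def edges(verts, tol=0.05):
    """Vertex pairs at the minimum pairwise distance: exactly the 30 edges."""
    dmin = min(math.dist(p, q) for p, q in combinations(verts, 2))
    return [(i, j) for (i, p), (j, q) in combinations(enumerate(verts), 2)
            if abs(math.dist(p, q) - dmin) < tol]

V = icosahedron_vertices()
E = edges(V)
print(len(V), len(E))   # 12 30, consistent with Euler's formula 12 - 30 + 20 = 2
```

Each vertex acquires exactly five nearest neighbours, which is the degree-5 adjacency the antiprism-plus-pyramids description predicts.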
Other facts

An icosahedron has 43,380 distinct nets. To color the icosahedron, such that no two adjacent faces have the same color, requires at least 3 colors. A problem dating back to the ancient Greeks is to determine which of two shapes has larger volume, an icosahedron inscribed in a sphere, or a dodecahedron inscribed in the same sphere. The problem was solved by Hero, Pappus, and Fibonacci, among others. Apollonius of Perga discovered the curious result that the ratio of volumes of these two shapes is the same as the ratio of their surface areas. Both volumes have formulas involving the golden ratio, but taken to different powers. As it turns out, the icosahedron occupies less of the sphere's volume (60.54%) than the dodecahedron (66.49%).

Construction by a system of equiangular lines

The following construction of the icosahedron avoids the tedious computations in the number field ℚ[√5] that are necessary in more elementary approaches. The existence of the icosahedron amounts to the existence of six equiangular lines in ℝ³. Indeed, intersecting such a system of equiangular lines with a Euclidean sphere centered at their common intersection yields the twelve vertices of a regular icosahedron, as can easily be checked. Conversely, supposing the existence of a regular icosahedron, lines defined by its six pairs of opposite vertices form an equiangular system. In order to construct such an equiangular system, we start with a suitable 6 × 6 square matrix A. A straightforward computation yields A² = 5I (where I is the 6 × 6 identity matrix). This implies that A has eigenvalues √5 and −√5, both with multiplicity 3, since A is symmetric and of trace zero. The matrix A + √5·I thus induces a Euclidean structure on the quotient space ℝ⁶/ker(A + √5·I), which is isomorphic to ℝ³ since the kernel of A + √5·I has dimension 3. The image under the projection of the six coordinate axes in ℝ⁶ forms a system of six equiangular lines in ℝ³ intersecting pairwise at a common acute angle of arccos(1/√5).
Orthogonal projection of the positive and negative basis vectors of ℝ⁶ onto the √5-eigenspace of A thus yields the twelve vertices of the icosahedron. A second straightforward construction of the icosahedron uses representation theory of the alternating group A₅ acting by direct isometries on the icosahedron.

Symmetry

The rotational symmetry group of the regular icosahedron is isomorphic to the alternating group on five letters. This non-abelian simple group is the only non-trivial normal subgroup of the symmetric group on five letters. Since the Galois group of the general quintic equation is isomorphic to the symmetric group on five letters, and this normal subgroup is simple and non-abelian, the general quintic equation does not have a solution in radicals. The proof of the Abel–Ruffini theorem uses this simple fact, and Felix Klein wrote a book that made use of the theory of icosahedral symmetries to derive an analytical solution to the general quintic equation. See icosahedral symmetry: related geometries for further history, and related symmetries on seven and eleven letters.

The full symmetry group of the icosahedron (including reflections) is known as the full icosahedral group, and is isomorphic to the product of the rotational symmetry group and the group Z₂ of size two, which is generated by the reflection through the center of the icosahedron.

Stellations

The icosahedron has a large number of stellations. According to specific rules defined in the book The Fifty-Nine Icosahedra, 59 stellations were identified for the regular icosahedron. The first form is the icosahedron itself. One is a regular Kepler–Poinsot polyhedron. Three are regular compound polyhedra.

Facetings

The small stellated dodecahedron, great dodecahedron, and great icosahedron are three facetings of the regular icosahedron. They share the same vertex arrangement. They all have 30 edges.
The regular icosahedron and great dodecahedron share the same edge arrangement but differ in faces (triangles vs pentagons), as do the small stellated dodecahedron and great icosahedron (pentagrams vs triangles).

Geometric relations

Inscribed in other Platonic solids

The regular icosahedron is the dual polyhedron of the regular dodecahedron. An icosahedron can be inscribed in a dodecahedron by placing its vertices at the face centers of the dodecahedron, and vice versa. An icosahedron can be inscribed in an octahedron by placing its 12 vertices on the 12 edges of the octahedron such that they divide each edge into its two golden sections. Because the golden sections are unequal, there are five different ways to do this consistently, so five disjoint icosahedra can be inscribed in each octahedron. An icosahedron of edge length 1/φ ≈ 0.618 can be inscribed in a unit-edge-length cube by placing six of its edges (3 orthogonal opposite pairs) on the square faces of the cube, centered on the face centers and parallel or perpendicular to the square's edges. Because there are five times as many icosahedron edges as cube faces, there are five ways to do this consistently, so five disjoint icosahedra can be inscribed in each cube. The edge lengths of the cube and the inscribed icosahedron are in the golden ratio.

Relations to the 600-cell and other 4-polytopes

The icosahedron is the dimensional analogue of the 600-cell, a regular 4-dimensional polytope. The 600-cell has icosahedral cross sections of two sizes, and each
of its 120 vertices is an icosahedral pyramid; the icosahedron is the vertex figure of the 600-cell. The unit-radius 600-cell has tetrahedral cells of edge length ≈ 0.618, 20 of which meet at each vertex to form an icosahedral pyramid (a 4-pyramid with an icosahedron as its base). Thus the 600-cell contains 120 icosahedra of edge length ≈ 0.618. The 600-cell also contains unit-edge-length cubes and unit-edge-length octahedra as interior features formed by its unit-length chords.
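The sphere-volume comparison quoted under "Other facts" (icosahedron ≈ 60.5%, dodecahedron ≈ 66.5%, with Apollonius' equal-ratio result) is easy to verify numerically. The sketch below uses the standard volume, surface-area, and circumradius formulas for the two solids; those formulas are our assumption, not quoted from this article:

```python
import math

sphere = 4 * math.pi / 3                            # unit-radius sphere volume

# Edge lengths giving circumradius 1 (standard circumradius formulas).
a_icosa = 4 / math.sqrt(10 + 2 * math.sqrt(5))
a_dodec = 4 / (math.sqrt(3) * (1 + math.sqrt(5)))

# Standard volume and surface-area formulas in the edge length.
V_icosa = 5 * (3 + math.sqrt(5)) / 12 * a_icosa**3
V_dodec = (15 + 7 * math.sqrt(5)) / 4 * a_dodec**3
A_icosa = 5 * math.sqrt(3) * a_icosa**2
A_dodec = 3 * math.sqrt(25 + 10 * math.sqrt(5)) * a_dodec**2

print(V_icosa / sphere)                  # ≈ 0.605: icosahedron fills ~60.5%
print(V_dodec / sphere)                  # ≈ 0.665: dodecahedron fills ~66.5%
print(V_icosa / V_dodec, A_icosa / A_dodec)   # equal, per Apollonius
```

The equality of the last two numbers reflects the fact that the two solids, inscribed in the same sphere, share the same insphere, so volume/area = inradius/3 is the same for both.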
In the unit-radius 120-cell (another regular 4-polytope which is both the dual of the 600-cell and a compound of 5 600-cells) we find all three kinds of inscribed icosahedra (in a dodecahedron, in an octahedron, and in a cube). A semiregular 4-polytope, the snub 24-cell, has icosahedral cells.

Relations to other uniform polytopes

The icosahedron is unique among the Platonic solids in possessing a dihedral angle not less than 120°. Its dihedral angle is approximately 138.19°. Thus, just as hexagons have angles not less than 120° and cannot be used as the faces of a convex regular polyhedron because such a construction would not meet the requirement that at least three faces meet at a vertex and leave a positive defect for folding in three dimensions, icosahedra cannot be used as the cells of a convex regular polychoron because, similarly, at least three cells must meet at an edge and leave a positive defect for folding in four dimensions (in general for a convex polytope in n dimensions, at least three facets must meet at a peak and leave a positive defect for folding in n-space). However, when combined with suitable cells having smaller dihedral angles, icosahedra can
|
early 1840s, and New Scotland Yard was faced with granite from the quarry at Merrivale. Merrivale Quarry continued excavating and working its own granite until the 1970s, producing gravestones and agricultural rollers. Work at Merrivale continued until the 1990s; for the last 20 years imported stone such as gabbro from Norway and Italian marble was dressed and polished. The unusual pink granite at Great Trowlesworthy Tor was also quarried, and there were many other small granite quarries dotted around the moor. Various metamorphic rocks were also quarried in the metamorphic aureole around the edge of the moor, most notably at Meldon.

Gunpowder factory

In 1844 a factory for making gunpowder was built on the open moor, not far from Postbridge. Gunpowder was needed for the tin mines and granite quarries then in operation on the moor. The buildings were widely spaced from one another for safety, and the mechanical power for grinding ("incorporating") the powder was derived from waterwheels driven by a leat. Now known as "Powdermills" or "Powder Mills", there are extensive remains of this factory still visible. Two chimneys still stand, and the walls of the two sturdily-built incorporating mills with central waterwheels survive well: they were built with substantial walls but flimsy roofs so that in the event of an explosion, the force of the blast would be directed safely upwards. The ruins of a number of ancillary buildings also survive. A proving mortar—a type of small cannon used to gauge the strength of the gunpowder—used by the factory still lies by the side of the road to the nearby pottery.

Peat-cutting

Peat-cutting for fuel occurred at some locations on Dartmoor until at least the 1970s, usually for personal use. The right of Dartmoor commoners to cut peat for fuel is known as turbary. These rights were conferred a long time ago, pre-dating most written records.
The area once known as the Turbary of Alberysheved, between the River Teign and the headwaters of the River Bovey, is mentioned in the Perambulation of the Forest of Dartmoor of 1240 (by 1609 the name of the area had changed to Turf Hill). An attempt was made to commercialise the cutting of peat in 1901 at Rattle Brook Head; however, this quickly failed.

Warrens

From at least the 13th century until early in the 20th, rabbits were kept on a commercial scale, both for their flesh and their fur. Documentary evidence for this exists in place names such as Trowlesworthy Warren (mentioned in a document dated 1272)
|
at the end of the 19th century is William Crossing's The Dartmoor Worker.

Mining

In former times, lead, silver, tin and copper were mined extensively on Dartmoor. The most obvious evidence of mining to the casual visitor to Dartmoor is the remains of the old engine-house at Wheal Betsy, alongside the A386 road between Tavistock and Okehampton. The word Wheal has a particular meaning in Devon and Cornwall, being either a tin or a copper mine; in the case of Wheal Betsy, however, it was principally lead and silver that were mined. Once widely practised by many miners across the moor, by the early 1900s only a few tinners remained, and mining had almost completely ceased twenty years later. Some of the more significant mines were Eylesbarrow, Knock Mine, Vitifer Mine and Hexworthy Mine. The last active mine in the Dartmoor area was Great Rock Mine, which shut down in 1969.

Quarrying

Dartmoor granite has been used in many Devon and Cornish buildings. The prison at Princetown was built from granite taken from Walkhampton Common. When the horse tramroad from Plymouth to Princetown was completed in 1823, large quantities of granite were more easily transported. There were three major granite quarries on the moor: Haytor, Foggintor and Merrivale. The granite quarries around Haytor were the source of the stone used in several famous structures, including the New London Bridge, completed in 1831. This granite was transported from the moor via the Haytor Granite Tramway, stretches of which are still visible. The extensive quarries at Foggintor provided granite for the construction of London's Nelson's Column in the early 1840s.
|
and functional programming (in which it is connected to the property of referential transparency). The term was introduced by Benjamin Peirce in the context of elements of algebras that remain invariant when raised to a positive integer power, and literally means "(the quality of having) the same power", from idem + potence (same + power).

Definition

An element x of a set S equipped with a binary operator ∗ is said to be idempotent under ∗ if x ∗ x = x. The binary operation ∗ is said to be idempotent if x ∗ x = x for all x in S.

Examples

- In the monoid (ℕ, ×) of the natural numbers with multiplication, only 0 and 1 are idempotent. Indeed, 0 × 0 = 0 and 1 × 1 = 1, which does not hold for other natural numbers.
- In a magma (M, ∗), an identity element e or an absorbing element a, if it exists, is idempotent. Indeed, e ∗ e = e and a ∗ a = a.
- In a group (G, ∗), the identity element e is the only idempotent element. Indeed, if x is an element of G such that x ∗ x = x, then x ∗ x = x ∗ e and finally x = e by multiplying on the left by the inverse element of x.
- In the monoids (P(E), ∪) and (P(E), ∩) of the power set of the set E with set union and set intersection respectively, ∪ and ∩ are idempotent. Indeed, A ∪ A = A and A ∩ A = A for all A in P(E).
- In the monoids ({0, 1}, ∨) and ({0, 1}, ∧) of the Boolean domain with logical disjunction and logical conjunction respectively, ∨ and ∧ are idempotent. Indeed, x ∨ x = x and x ∧ x = x for all x in {0, 1}.
- In a Boolean ring, multiplication is idempotent.
- In a tropical semiring, addition is idempotent.

Idempotent functions

In the monoid (E^E, ∘) of the functions from a set E to itself with function composition ∘, idempotent elements are the functions f such that f ∘ f = f, that is, such that f(f(x)) = f(x) for all x (in other words, the image f(x) of each element x is a fixed point of f). For example: the absolute value is idempotent.
Indeed, abs ∘ abs = abs, that is, abs(abs(x)) = abs(x) for all x; constant functions are idempotent; the identity function is idempotent; the floor, ceiling and fractional part functions are idempotent; the subgroup generated function from the power set of a group to itself is idempotent; the convex hull function from the power set of an affine space over the reals to itself is idempotent; the closure and interior functions of the power set of a topological space to itself are idempotent; the Kleene star and Kleene plus functions of the power set of a monoid to itself are idempotent; the idempotent endomorphisms of a vector space are its projections.

If the set E has n elements, we can partition it into k chosen fixed points and n − k non-fixed points under f, and then k^(n−k) is the number of different idempotent functions with exactly that set of fixed points. Hence, taking into account all possible partitions, ∑_{k=0}^{n} C(n, k) · k^(n−k) is the total number of possible idempotent functions on the set. The integer sequence of the number of idempotent functions as given by the sum above for n = 0, 1, 2, 3, 4, 5, 6, 7, 8, ... starts with 1, 1, 3, 10, 41, 196, 1057, 6322, 41393, ... .

Neither the property of being idempotent nor that of being not is preserved under function composition. As an example for the former, f(x) = x mod 3 and g(x) = max(x, 5) are both idempotent, but f ∘ g is not, although g ∘ f happens to be. As an example for the latter, the negation function ¬ on the Boolean domain is not idempotent, but ¬ ∘ ¬ is. Similarly, unary negation of real numbers is not idempotent, but its composition with itself is. In both cases, the composition is simply the identity function, which is idempotent.

Computer science meaning

In computer science, the term
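The fixed-point characterization and the counting formula can both be checked by brute force over small sets. A quick sketch (function names are ours; functions on {0, …, n−1} are represented as tuples of values):

```python
from itertools import product
from math import comb

def is_idempotent(f, domain):
    """f is idempotent iff f(f(x)) == f(x) for every x in the domain."""
    return all(f[f[x]] == f[x] for x in domain)

def count_idempotent(n):
    """Brute force: count idempotent functions {0..n-1} -> {0..n-1}."""
    dom = range(n)
    return sum(is_idempotent(f, dom) for f in product(dom, repeat=n))

def count_by_formula(n):
    """Sum over k chosen fixed points: C(n, k) * k^(n - k)."""
    return sum(comb(n, k) * k ** (n - k) for k in range(n + 1))

for n in range(6):
    print(n, count_idempotent(n), count_by_formula(n))
# both counts agree: 1, 1, 3, 10, 41, 196
```

The agreement of the two columns reproduces the opening terms 1, 1, 3, 10, 41, 196 of the sequence quoted above.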
|
words if the function from the system state space to itself associated with the subroutine is idempotent in the mathematical sense given in the definition; in functional programming, a pure function is idempotent if it is idempotent in the mathematical sense given in the definition. This is a very useful property in many situations, as it means that an operation can be repeated or retried as often as necessary without causing unintended effects. With non-idempotent operations, the algorithm may have to keep track of whether the operation was already performed or not.

Computer science examples

A function looking up a customer's name and address in a database is typically idempotent, since this will not cause the database to change. Similarly, a request for changing a customer's address to XYZ is typically idempotent, because the final address will be the same no matter how many times the request is submitted. However, a customer request for placing an order is typically not idempotent, since multiple requests will lead to multiple orders being placed. A request for canceling a particular order is idempotent, because no matter how many requests are made the order remains canceled.

A sequence of idempotent subroutines where at least one subroutine is different from the others, however, is not necessarily idempotent if a later subroutine in the sequence changes a value that an earlier subroutine depends on—idempotence is not closed under sequential composition. For example, suppose the initial value of a variable is 3 and there is a subroutine sequence that reads the variable, then changes it to 5, and then reads it again. Each step in the sequence is idempotent: both steps reading the variable have no side effects and the step changing the variable to 5 will always have the same effect no matter how many times it is executed.
Nonetheless, executing the entire sequence once produces the output (3, 5), but executing it a second time produces the output (5, 5), so the sequence is not idempotent.

#include <stdio.h>

int x = 3;

void read() { printf("%d\n", x); }
void change() { x = 5; }
void sequence() { read(); change(); read(); }

int main() {
    sequence(); // prints "3\n5\n"
    sequence(); // prints "5\n5\n"
    return 0;
}

In the Hypertext Transfer Protocol (HTTP), idempotence and safety are the major attributes that separate HTTP methods. Of the major HTTP methods, GET, PUT, and DELETE should be implemented in an idempotent manner according to the standard, but POST doesn't need to be. GET retrieves the state of a resource; PUT updates the state of a resource; and DELETE deletes a resource. As in the example above, reading data usually has no side effects, so it is idempotent (in fact nullipotent). Updating and deleting given data are each usually idempotent as long as the request uniquely identifies the resource and only that resource again in the future. PUT and DELETE with unique identifiers reduce to the simple case of assignment to a variable of either a value or the null-value, respectively, and are idempotent for the same reason; the end result is always the same as the result of the initial execution, even if the response differs.

Violation of the unique identification requirement in storage or deletion typically causes violation of idempotence. For example, storing or deleting a given set of content without specifying a unique identifier: POST requests, which do not need to be idempotent, often do not contain unique identifiers, so the creation of the identifier is delegated to the receiving system, which then creates a corresponding new record. Similarly, PUT and DELETE requests with
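The PUT-vs-POST contrast can be sketched with a dictionary standing in for the server's resource store. This is a toy illustration written for this article, not a real HTTP implementation; all names are ours:

```python
import itertools

store = {}                      # resource id -> content
_ids = itertools.count(1)       # server-side id generator

def put(resource_id, content):
    """PUT: the client names the resource, so repeating the request is a no-op."""
    store[resource_id] = content
    return resource_id

def post(content):
    """POST: the server mints a fresh id, so repeating the request adds a resource."""
    new_id = next(_ids)
    store[new_id] = content
    return new_id

put("order/42", "cancelled")
put("order/42", "cancelled")    # same request repeated: store unchanged
print(len(store))               # 1

post("new order")
post("new order")               # same request repeated: two distinct resources
print(len(store))               # 3
```

Repeating the PUT leaves the store in the same state as a single execution, which is exactly the idempotence property; each repeated POST delegates identifier creation to the server and therefore accumulates new records.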
|
of the city of Ithaca.

Geography and climate

Geography

The valley in which Cayuga Lake is located is long and narrow with a north–south orientation. Ithaca is located at the southern end (the "head") of the lake, but the valley continues to the southwest behind the city. Originally a river valley, it was deepened and widened by the action of Pleistocene ice sheets over the last several hundred thousand years. These ice sheets gouged the land crosswise to preexisting streams, producing hanging valleys. Once the last ice sheets receded — around twenty or thirty thousand years ago — these streams cut deep into the steep hillsides, forming the many distinctive gorges, rapids, and waterfalls seen in the region; examples include Fall and Cascadilla Creeks in Ithaca, and nearby Buttermilk Falls, Enfield Gorge, and Taughannock Falls.

Cayuga Lake is the most recent lake in a long series of lakes which developed as the ice retreated northward. The lake drains to the north, and was formed behind a dam of glacial debris called a moraine. Rock in the region is predominantly Devonian shale and sandstone. North of Ithaca, it is relatively fossil rich. The world-renowned fossils found in this area can be examined at the Museum of the Earth. Glacial erratics can also be found in the area.

Ithaca was founded on flat land just south of the lake — land that formed in fairly recent geological times when silt filled the southern end of the lake. The city ultimately spread to the adjacent hillsides, which rise several hundred feet above the central flats: East Hill, West Hill, and South Hill. The Cornell campus is loosely bounded to the north and south by Fall and Cascadilla Creeks, respectively.

The natural vegetation of the Ithaca area is northern temperate broadleaf forest. It is dominated by deciduous trees, including maple, sycamore, black walnut, birch, and oak; coniferous trees include white pine, Norway spruce, and eastern hemlock.
The city of Ithaca has a rich diversity of tree plantings, with over 190 species, including cherry, southern magnolia, and ginkgo. In addition to visual beauty, this species diversification helps reduce the impact of arboreal epidemics, such as that caused by the emerald ash borer.

Climate

According to the Köppen climate classification method, Ithaca experiences a warm-summer humid continental climate, also known as a hemiboreal climate (Dfb). Summers are warm but brief, and it is cool-to-cold the rest of the year, with long, snowy winters; an average of of snow falls per year. In addition, frost may occur any time of year except mid-summer.

Winter is typically characterized by freezing temperatures, cloudy skies and light-to-moderate snows, with some heavier falls; the largest snowfall in one day was on February 14, 1914. But the season is also variable; there can be short mild periods with some rain, but also outbreaks of frigid air with night temperatures down to or lower. Summers usually bring sunshine, along with moderate heat and humidity, but also frequent afternoon thunderstorms. Nights are pleasant and sometimes cool. Occasionally, there can be heatwaves, with temperatures rising into the to range, but they tend to be brief.

The average date of the first freeze is October 5, and the average date of the last freeze is May 15, giving Ithaca a growing season of 141 days. The average dates of the first and last snowfalls are November 12 and April 7, respectively. Extreme temperatures range from as recently as February 2, 1961, up to on July 9, 1936. The valley flatland has slightly cooler weather in winter, and occasionally Ithaca residents experience simultaneous snow on the hills and rain in the valley. The phenomenon of mixed precipitation (rain, wind, and snow), common in the late fall and early spring, is known tongue-in-cheek as ithacation to many of the local residents.
Due to the microclimates created by the impact of the lakes, the region surrounding Ithaca (Finger Lakes American Viticultural Area) experiences a short but adequate growing season for winemaking similar to the Rhine Valley wine district of Germany. As such, the region is home to many wineries. Demographics Ithaca is the principal city of the Ithaca-Cortland Combined Statistical Area, which includes the Ithaca Metropolitan Statistical Area (Tompkins County) and the Cortland Micropolitan Statistical Area (Cortland County), which had a combined population of 145,100 at the 2000 census. As of the census of 2000, there were 29,287 people, 10,287 households, and 2,962 families residing in the city. The population density was 5,360.9 people per square mile (2,071.0/km2). There were 10,736 housing units at an average density of 1,965.2 per square mile (759.2/km2). The racial makeup of the city was 73.97% White, 13.65% Asian, 6.71% Black or African American, 0.39% Native American, 0.05% Pacific Islander, 1.86% from other races, and 3.36% from two or more races. Hispanic or Latino of any race were 5.31% of the population. There were 10,287 households, out of which 14.2% had children under the age of 18 living with them, 19.0% were married couples living together, 7.8% had a female householder with no husband present, and 71.2% were non-families. 43.3% of all households were made up of individuals, and 7.4% had someone living alone who was 65 years of age or older. The average household size was 2.13 and the average family size was 2.81. In the city, the population was spread out, with 9.2% under the age of 18, 53.8% from 18 to 24, 20.1% from 25 to 44, 10.6% from 45 to 64, and 6.3% who were 65 years of age or older. The median age was 22 years. For every 100 females, there were 102.6 males. For every 100 females age 18 and over, there were 102.2 males. The median income for a household in the city was $21,441, and the median income for a family was $42,304. 
Males had a median income of $29,562 versus $27,828 for females. The per capita income for the city was $13,408. About 13.2% of individuals and 4.2% of families were below the poverty line. Greater Ithaca The term "Greater Ithaca" encompasses both the City and Town of Ithaca, as well as several smaller settled places within or adjacent to the Town: Municipalities Village of Groton Village of Lansing the southern part of the Town of Lansing Village of Cayuga Heights Hamlet of Forest Home Hamlet of South Hill Census-designated places East Ithaca Northeast Ithaca Northwest Ithaca Local government There are two governmental entities in the area: the Town of Ithaca and the City of Ithaca. The Town of Ithaca is one of the nine towns comprising Tompkins County. The City of Ithaca is surrounded by, but legally independent of, the Town. The City of Ithaca has a mayor–council government. The charter of the City of Ithaca provides for a full-time mayor and city judge, each independent and elected at-large. Since 1995, the mayor has been elected to a four-year term, and since 1989, the city judge has been elected to a six-year term. Since 1983, the city has been divided into five wards. Each elects two representatives to the city council, known as the Common Council, for staggered four-year terms. In March 2015, the Common Council unanimously adopted a resolution recognizing freedom from domestic violence as a fundamental human right. Since students won the right to vote where they attend colleges, some have become more active in local politics. In 2004, Gayraud Townsend, a 20-year-old senior in Cornell's School of Industrial and Labor Relations, was sworn in as alderman of the city council, representing the fourth Ward. He is the first black male to be elected to the council and was then the youngest African American to be elected to office in the United States. He served his full term and has mentored other young student politicians. 
In 2011, Cornell graduate Svante Myrick was elected Mayor of the City of Ithaca, becoming the youngest mayor in the city's history. In December, 2005, the City and Town governments began discussing opportunities for increased government consolidation, including the possibility of joining the two into a single entity. This topic had been previously discussed in 1963 and 1969. Cayuga Heights, a village adjacent to the city on its northeast, voted against annexation into the city of Ithaca in 1954. Politics Politically, the majority of the city's voters (many of them students) have supported liberalism and the Democratic Party. A November, 2004 study by ePodunk lists it as New York's most liberal city. This contrasts with the more conservative leanings of the generally rural Upstate New York region; the city's voters are also more liberal than those in the rest of Tompkins County. In 2008, Barack Obama, running against New York State's US Senator Hillary Clinton, won Tompkins County in the Democratic Presidential Primary, the only county that he won in New York State. Obama won Tompkins County (including Ithaca) by a wide margin of 41% over his opponent John McCain in the November, 2008 election. Sister city Ithaca is a sister city of: Eldoret, Kenya Education Colleges Ithaca is a major educational center in Central New York. The two major post-secondary educational institutions located in Ithaca were each founded in the late nineteenth century. In 1865, Ezra Cornell founded Cornell University, which overlooks the town from East Hill. It was opened as a coeducational institution. Women first enrolled in 1870. Ezra Cornell also established a public library for the city. Ithaca College was founded as the Ithaca Conservatory of Music in 1892. Ithaca College was originally located in the downtown area but relocated to South Hill in the 1960s. In 2018, there were 23,600 students enrolled at Cornell and 6,700 at Ithaca College. 
Tompkins Cortland Community College is located in the neighboring town of Dryden, and has an extension center in downtown Ithaca. Empire State College offers non-traditional college courses to adults in downtown Ithaca. Public schools The Ithaca City School District, based in Ithaca, encompasses the city and its surrounding area and enrolls about 5,500 K-12 students in eight elementary schools (roughly one for every neighborhood), two middle schools (Boynton and Dewitt), Ithaca High School and the Lehman Alternative Community School, a combined middle and high school. Several private elementary and secondary schools are located in the Ithaca area, including the Roman Catholic Immaculate Conception School, the Cascadilla School, the New Roots Charter School, the Elizabeth Ann Clune Montessori School, the Namaste Montessori School (in the Trumansburg area) and the Ithaca Waldorf School. Ithaca has two networks for supporting its home-schooling families: Loving Education At Home (LEAH) and the Northern Light Learning Center (NLLC). TST BOCES is located in Tompkins County. Library The Tompkins County Public Library, located at 101 East Green Street, serves as the public library for Tompkins County and is the Central Library for the Finger Lakes Library System. The library serves over 37,000 registered borrowers and contains 250,000 items in its circulating collection. Economy The economy of Ithaca is based on education and further supported by agriculture, technology and tourism. As of 2006, Ithaca has continued to have one of the few expanding economies in New York State outside New York City.
Ithaca is home to Ithaca Hours, one of the first local currency systems in the United States. It was developed by Paul Glover. Music Ithaca is the home of the Cayuga Chamber Orchestra. The Cornell Concert Series has been hosting musicians and ensembles of international stature since 1903. For its initial 84 years, the series featured Western classical artists exclusively. In 1987, however, the series broke with tradition to present Ravi Shankar and has since grown to encompass a broader spectrum of the world's great music. Now, it balances a mix of Western classical music, traditions from around the world, jazz, and new music in these genres. In a single season, the Cornell Concert Series presents performers ranging from the Leipzig Thomanerchor and Danish Quartet to Simon Shaheen, Vida Guitar Quartet, and Eighth Blackbird. The School of Music at Ithaca College was founded in 1892 by William Egbert as a music conservatory on Buffalo Street. Among the degree programs offered are those in Performance, Theory, Music Education and Composition. Since 1941, the School of Music has been accredited by the National Association of Schools of Music. Ithaca's Suzuki school, Ithaca Talent Education, provides musical training for children of all ages and also teacher training for undergraduate and graduate-level students. The Community School of Music and Art uses an extensive scholarship system to offer classes and lessons to any student, regardless of age, background, economic status or artistic ability. A number of musicians call Ithaca home, most notably Samite of Uganda, The Burns Sisters, The Horse Flies, Johnny Dowd, Mary Lorson, cellist Hank Roberts, reggae band John Brown's Body, Kurt Riley, X Ambassadors, and Alex Kresovich. Old-time music is a staple, and folk music is featured weekly on WVBR-FM's Bound for Glory, North America's longest-running live folk concert broadcast. 
The Finger Lakes GrassRoots Festival of Music and Dance, hosted by local band Donna the Buffalo, is held annually during the third week in July in the nearby village of Trumansburg, with more than 60 local, national and international acts. Ithaca is the center of a thriving live music scene, featuring over 200 groups playing most genres of American popular music, the predominant genres being folk, rock, blues, jazz and country. There are over 80 live music venues within a 40-mile radius of the city, including cafes, pubs, clubs and concert halls. Transportation In 2009, the Ithaca metropolitan statistical area (MSA) ranked as the highest in the United States for the percentage of commuters who walked to work (15.1 percent). In 2013, the Ithaca MSA ranked as the second-lowest in the United States for percentage of commuters who traveled by private vehicle (68.7 percent). During the same year, 17.5 percent of commuters in the Ithaca MSA walked to work. Roads Ithaca is in the rural Finger Lakes region about northwest of New York City; the nearest larger cities, Binghamton and Syracuse, are an hour's drive away by car; Rochester and Scranton are two hours; Buffalo and Albany are three. New York City, Philadelphia, Toronto, and Ottawa are about four hours away. Ithaca lies more than a half-hour's drive from any interstate highway, and all car trips to Ithaca involve some driving on two-lane state rural highways. The city is at the convergence of many regional two-lane state highways: Routes 13, 13A, 34, 79, 89, 96, 96B and 366. These are usually not congested except in Ithaca proper. However, Route 79 between the I-81 access at Whitney Point and Ithaca receives a significant amount of Ithaca-bound congestion right before Ithaca's colleges reopen after breaks. In July, 2008, a non-profit called Ithaca Carshare began a carsharing service in Ithaca. 
Ithaca Carshare has a fleet of vehicles shared by over 1500 members as of July, 2015 and has become a popular service among both city residents and the college communities. Vehicles are located throughout downtown Ithaca and at the two major institutions. Ithaca Carshare was the first locally run carsharing organization in New York State; others have since launched in Buffalo, Albany, and Syracuse. Rideshare services to promote carpooling and vanpooling are operated by Zimride and vRide. A community mobility education program, Way2Go, is operated by Cornell Cooperative Extension of Tompkins County. Way2Go's website provides consumer information and videos. Way2Go works collaboratively to help people save money, stress less, go green and improve mobility options. The 2-1-1 Tompkins/Cortland Help Line connects people with services, including transportation, in the community, by telephone and web on a 24/7 basis. The information and referral service is operated by the Human Services Coalition of Tompkins County, Inc. Together, 2-1-1 Information and Referral and Way2Go are a one-call, one-click resource designed to provide mobility services information for Ithaca and throughout Tompkins County. As a growing urban area, Ithaca is facing steady increases in levels of vehicular traffic on the city grid and on the state highways. Outlying areas have limited bus service, and many people consider a car essential. However, many consider Ithaca a walkable and bikeable community. One positive trend for the health of downtown Ithaca is the new wave of increasing urban density in and around the Ithaca Commons. Because the downtown area is the region's central business district, dense mixed-use development that includes housing may increase the proportion of people who can walk to work and recreation and mitigate pressure on already-busy roads as Ithaca grows. The downtown area is also the area best served by frequent public transportation. 
Still, traffic congestion around the Commons is likely to progressively increase. Bus There is frequent intercity bus service by Greyhound Lines, New York Trailways, OurBus, FlixBus, and Shortline (Coach USA), particularly to Binghamton and New York City, with limited service to Rochester, Buffalo and Syracuse, and (via connections in Binghamton) to Utica and Albany. OurBus also provides limited holiday services to Allentown, Pennsylvania, Philadelphia, and Washington, DC. Cornell University runs a premium campus-to-campus bus between its Ithaca campus and its medical school in Manhattan, New York City which is open to the public. Starting in September, 2019, intercity buses serving Ithaca operate from the downtown bus stop at 131 East Green Street, as the former Greyhound bus station on West State Street closed due to staff retirement and building maintenance issues. However, OurBus now picks up and drops off on Seneca Street, near the downtown Starbucks and Hilton Garden Inn. Ithaca is the center of an extensive bus public transportation network. Tompkins Consolidated Area Transit, Inc. (TCAT, Inc.) is a not-for-profit corporation that provides public transportation for Tompkins County, New York. TCAT was reorganized as a non-profit corporation in 2004 and is primarily supported locally by Cornell University, the City of Ithaca and Tompkins County. TCAT's ridership increased from 2.7 million in 2004 to 4.4 million in 2013. TCAT operates 34 routes, many running seven days a week. It has frequent service to downtown, Cornell University, Ithaca College, and the Shops at Ithaca Mall in the Town of Lansing, but less-frequent service to many residential and rural areas, including Trumansburg and Newfield. Chemung County Transit (C-TRAN) runs weekday commuter service from Chemung County to Ithaca. Cortland Transit runs commuter service to Cornell University. 
Tioga County Public Transit operated three routes to Ithaca and Cornell, but ceased operations on November 30, 2014. GADABOUT Transportation Services, Inc. provides demand-response paratransit service for seniors over 60 and people with disabilities. Ithaca Dispatch provides local and regional taxi service. In addition, Ithaca Airline Limousine and IthaCar Service connect to the local airports. Airports Ithaca is served by Ithaca Tompkins International Airport, located about three miles to the northeast of the city center. In late 2019, the airport completed a major $34.8 million renovation which included a larger terminal with additional passenger gates and jet bridges, expanded passenger amenities, and a customs facility that enables it to receive international charter and private flights. American Eagle (American Airlines commuter subsidiary) offers daily flights to its Charlotte hub, operated by PSA Airlines using Bombardier CRJ700 commuter-jet aircraft, offering both First and Economy Class service. Delta Connection provides service to its hub at Detroit Metro airport, operated by its commuter partner Endeavor Air, using the Bombardier CRJ200 commuter-jet. United Express offers daily flights to Washington Dulles International Airport, operated by its commuter partner Air Wisconsin, using the Bombardier CRJ200 commuter-jet. However, service to Washington (IAD) will end on March 2, 2022, and will be replaced by service to New York City/Newark (EWR) on March 4, 2022, operated by its commuter partner Gojet Airlines, using the two-class Bombardier CRJ550 commuter-jet. Railways Into the mid-twentieth century, it was possible to reach Ithaca by passenger rail. At least two trains per day serviced Ithaca along either the Delaware, Lackawanna and Western Railroad (until March 31, 1942) or the Lehigh Valley Railroad. The trip took "about seven hours" from New York City, "about eight hours" from Philadelphia, and "about three hours" from Buffalo. 
There has been no passenger rail service since February 4, 1961. From the 1870s on, there were trains to Buffalo via Geneva, New York; to New York City via Wilkes-Barre, Pennsylvania (both Lehigh Valley Railroad); to Hoboken, New Jersey, with a train-change in Owego and a routing via Binghamton and Scranton, Pennsylvania (until March 31, 1942) (DL&W); and to the US northeast via Cortland, New York (Lehigh Valley Railroad). The Lehigh Valley's top New York City-Ithaca-Buffalo passenger train, the daylight Black Diamond, was optimistically publicized as 'The Handsomest Train in the World', perhaps to compensate for its roundabout route to New York City (south to Waverly, New York; southeast to Wilkes-Barre and Allentown, Pennsylvania; then east across New Jersey). It was named after the railroad's largest commodity, anthracite coal, and made its last run on May 11, 1959. Until March 31, 1942, the Lackawanna Railroad operated two shuttle trains a day between Ithaca and Owego, where passengers could transfer westbound to trains for Buffalo and Chicago, and eastbound to trains for Binghamton, Scranton, Pennsylvania, and Hoboken, New Jersey, across the Hudson River from New York City. Until September 15, 1958, the Lackawanna maintained Syracuse-Binghamton service through nearby Cortland, to the east. Until May 11, 1959, two Lehigh Valley trains a day made both westbound and eastbound stops in Ithaca. The last passenger train making stops in Ithaca was the Lehigh Valley's overnight Maple Leaf, discontinued on February 4, 1961. Within Ithaca, electric railways ran along Stewart Avenue and Eddy Street. In fact, Ithaca was the fourth community in New York state with a street railway; streetcars ran from 1887 until the summer of 1935. On December 8, 2018, the Ithaca Central Railroad, a Watco subsidiary, took over operation via lease of the 48.8-mile Norfolk Southern Ithaca Secondary line from Sayre, Pennsylvania to the Cargill Salt mine site on the eastern shore of Cayuga Lake, near Myers Point. 
Unit coal trains carrying bituminous coal were delivered to the Ithaca Central at Sayre by Norfolk Southern for less than eight months afterward, traveling to the Ridge site of the Cayuga Operating Company: a coal-burning power plant (known as Milliken Station during NYSEG ownership). Unit trains of coal are now gone, as the power plant closed on August 29, 2019, when it ran out of coal, and was officially retired in October, 2019. (As of 2022, there are ambitious, proposed plans to convert its brownfield site into a major data center.) The main rail freight traffic is now salt from the Cargill salt mine farther north. The Norfolk Southern tracks, headed north on the former Lehigh Valley Auburn and Ithaca Branch, include a distinctive section in Ithaca that runs along the side of Fulton St. (NY13 southbound), although
university's charter stipulated that students should enjoy "full liberty of conscience." Columbia was founded by Anglicans, who composed 10 of the college's first 15 presidents. Penn and Cornell were officially nonsectarian, though Protestants were well represented in their respective foundings. In the early nineteenth century, the specific purpose of training Calvinist ministers was handed off to theological seminaries, but a denominational tone and religious traditions, including compulsory chapel, often lasted well into the twentieth century. "Ivy League" is sometimes used as a way of referring to an elite class, even though institutions such as Cornell University were among the first in the United States to reject racial and gender discrimination in their admissions policies. This usage dates back to at least 1935. Novels and memoirs attest to this sense of a social elite, to some degree independent of the actual schools. History of the athletic league 19th and early 20th centuries The first formal athletic league involving eventual Ivy League schools (or any US colleges, for that matter) was created in 1870 with the formation of the Rowing Association of American Colleges. The RAAC hosted a de facto national championship in rowing during the period 1870–1894. In 1895, Cornell, Columbia, and Penn founded the Intercollegiate Rowing Association, which remains the oldest collegiate athletic organizing body in the US. To this day, the IRA Championship Regatta determines the national champion in rowing, and all of the Ivies are regularly invited to compete. A basketball league was later created in 1902, when Columbia, Cornell, Harvard, Yale, and Princeton formed the Eastern Intercollegiate Basketball League; they were later joined by Penn and Dartmouth. In 1906, the organization that eventually became the National Collegiate Athletic Association was formed, primarily to formalize rules for the emerging sport of football. 
But of the 39 original member colleges in the NCAA, only two of them (Dartmouth and Penn) later became Ivies. In February 1903, intercollegiate wrestling began when Yale accepted a challenge from Columbia, published in the Yale News. The dual meet took place prior to a basketball game hosted by Columbia and resulted in a tie. Two years later, Penn and Princeton also added wrestling teams, leading to the formation of the student-run Intercollegiate Wrestling Association, now the Eastern Intercollegiate Wrestling Association (EIWA), the first and oldest collegiate wrestling league in the US. In 1930, Columbia, Cornell, Dartmouth, Penn, Princeton and Yale formed the Eastern Intercollegiate Baseball League; they were later joined by Harvard, Brown, Army and Navy. Before the formal establishment of the Ivy League, there was an "unwritten and unspoken agreement among certain Eastern colleges on athletic relations". The earliest reference to the "Ivy colleges" came in 1933, when Stanley Woodward of the New York Herald Tribune used it to refer to the eight current members plus Army. In 1935, the Associated Press reported on an example of collaboration between the schools: Despite such collaboration, the universities did not seem to consider the formation of the league as imminent. Romeyn Berry, Cornell's manager of athletics, reported the situation in January 1936 as follows: Within a year of this statement and having held month-long discussions about the proposal, on December 3, 1936, the idea of "the formation of an Ivy League" gained enough traction among the undergraduate bodies of the universities that the Columbia Daily Spectator, The Cornell Daily Sun, The Dartmouth, The Harvard Crimson, The Daily Pennsylvanian, The Daily Princetonian and the Yale Daily News would simultaneously run an editorial entitled "Now Is the Time", encouraging the seven universities to form the league in an effort to preserve the ideals of athletics. 
Part of the editorial read as follows: The Ivies have been competing in sports as long as intercollegiate sports have existed in the United States. Rowing teams from Harvard and Yale met in the first sporting event held between students of two U.S. colleges on Lake Winnipesaukee, New Hampshire, on August 3, 1852. Harvard's team, "The Oneida", won the race and was presented with trophy black walnut oars from then-presidential nominee General Franklin Pierce. The proposal did not succeed—on January 11, 1937, the athletic authorities at the schools rejected the "possibility of a heptagonal league in football such as these institutions maintain in basketball, baseball and track." However, they noted that the league "has such promising possibilities that it may not be dismissed and must be the subject of further consideration." Post-World War II In 1945 the presidents of the eight schools signed the first Ivy Group Agreement, which set academic, financial, and athletic standards for the football teams. The principles established reiterated those put forward in the Harvard-Yale-Princeton presidents' Agreement of 1916. The Ivy Group Agreement established the core tenet that an applicant's ability to play on a team would not influence admissions decisions: In 1954, the presidents extended the Ivy Group Agreement to all intercollegiate sports, effective with the 1955–56 basketball season. This is generally reckoned as the formal formation of the Ivy League. As part of the transition, Brown, the only Ivy that had not joined the EIBL, did so for the 1954–55 season. A year later, the Ivy League absorbed the EIBL. The Ivy League claims the EIBL's history as its own. Through the EIBL, it is the oldest basketball conference in Division I. As late as the 1960s many of the Ivy League universities' undergraduate programs remained open only to men, with Cornell the only one to have been coeducational from its founding (1865) and Columbia being the last (1983) to become coeducational. 
Before they became coeducational, many of the Ivy schools maintained extensive social ties with nearby Seven Sisters women's colleges, including weekend visits, dances and parties inviting Ivy and Seven Sisters students to mingle. This was the case not only at Barnard College and Radcliffe College, which are adjacent to Columbia and Harvard, but at more distant institutions as well. The movie Animal House includes a satiric version of the formerly common visits by Dartmouth men to Massachusetts to meet Smith and Mount Holyoke women, a drive of more than two hours. As noted by Irene Harwarth, Mindi Maline, and Elizabeth DeBra, "The 'Seven Sisters' was the name given to Barnard, Smith, Mount Holyoke, Vassar, Bryn Mawr, Wellesley, and Radcliffe, because of their parallel to the Ivy League men's colleges." In 1982 the Ivy League considered adding two members, with Army, Navy, and Northwestern as the most likely candidates; if it had done so, the league could probably have avoided being moved into the recently created Division I-AA (now Division I FCS) for football. In 1983, following the admission of women to Columbia College, Columbia University and Barnard College entered into an athletic consortium agreement by which students from both schools compete together on Columbia University women's athletic teams, which replaced the women's teams previously sponsored by Barnard. When Army and Navy departed the Eastern Intercollegiate Baseball League in 1992, nearly all intercollegiate competition involving the eight schools became united under the Ivy League banner. The two major exceptions are wrestling and hockey: the Ivies that sponsor wrestling (all except Dartmouth and Yale) are members of the EIWA, and the Ivies that sponsor hockey (all except Penn and Columbia) are members of ECAC Hockey. 
COVID-19 pandemic The Ivy League was the first athletic conference to respond to the COVID-19 pandemic by shutting down all athletic competition in March 2020, leaving many spring schedules unfinished. The Fall 2020 schedule was canceled in July, and winter sports were canceled before Thanksgiving. Of the 357 men's basketball teams in Division I, only ten did not play; the Ivy League made up eight of those ten. By giving up its automatic qualifying bid to March Madness, the Ivy League forfeited at least $280,000 in NCAA basketball funds. As a consequence of the pandemic, an unprecedented number of student athletes in the Ivy League either transferred to other schools or temporarily unenrolled in hopes of maintaining their eligibility to play post-pandemic. Some Ivy alumni expressed displeasure with the League's position. In February 2021 it was reported that Yale declined a multi-million dollar offer from alum Joseph Tsai to create a sequestered "bubble" for the lacrosse team. The league announced in a May 2021 joint statement that "regular athletic competition" would resume "across all sports" in fall 2021. Academics Admissions The Ivy League schools are highly selective, with all schools reporting acceptance rates at or below approximately 10%. For the class of 2025, six of the eight schools reported acceptance rates below 6%. Admitted students come from around the world, although those from the Northeastern United States make up a significant proportion of students. In 2021, all eight Ivy League schools recorded record high numbers of applications and record low acceptance rates. Year-over-year increases in the number of applicants ranged from a 14.5% increase at Princeton to a 51% increase at Columbia. There have been arguments that Ivy League schools discriminate against Asian-American candidates. 
For example, in August 2020, the US Justice Department argued that Yale University discriminated against Asian-American candidates on the basis of their race, a charge the university denied. Harvard was subject to a similar challenge in 2019 from an Asian American student group, with regard to which a federal judge found Harvard to be in compliance with constitutional requirements. The student group has since appealed that decision, and the appeal is still pending as of August 2020. Prestige Members of the League have been highly ranked by various university rankings. All of the Ivy League schools are consistently ranked within the top 20 national universities by the U.S. News & World Report Best Colleges Ranking. The Wall Street Journal rankings place all eight of the universities within the top 15 in the country. Further, Ivy League members have produced many Nobel laureates and winners of the Nobel Memorial Prize in Economic Sciences. Collaboration Collaboration between the member schools is illustrated by the student-led Ivy Council that meets in the fall and spring of each year, with representatives from every Ivy League school. The governing body of the Ivy League is the Council of Ivy Group presidents, composed of each university president. During meetings, the presidents discuss common procedures and initiatives for their universities. The universities collaborate academically through the IvyPlus Exchange Scholar Program, which allows students to cross-register at one of the Ivies or another eligible school such as the University of California at Berkeley, the University of Chicago, the Massachusetts Institute of Technology, and Stanford University. Culture Fashion and lifestyle Different fashion trends and styles have emerged from Ivy League campuses over time, and fashion trends such as Ivy League and preppy are styles often associated with the Ivy League and its culture. Ivy League style is a
|
to March Madness, the Ivy League forfeited at least $280,000 in NCAA basketball funds. As a consequence of the pandemic, an unprecedented number of student athletes in the Ivy League either transferred to other schools, or temporarily unenrolled in hopes of maintaining their eligibility to play post-pandemic. Some Ivy alumni expressed displeasure with the League's position. In February 2021 it was reported that Yale declined a multi-million dollar offer from alum Joseph Tsai to create a sequestered "bubble" for the lacrosse team. The league announced in a May 2021 joint statement that "regular athletic competition" would resume "across all sports" in fall 2021. Academics Admissions The Ivy League schools are highly selective, with all schools reporting acceptance rates at or below approximately 10% at all of the universities. For the class of 2025, six of the eight schools reported acceptance rates below 6%. Admitted students come from around the world, although those from the Northeastern United States make up a significant proportion of students. In 2021, all eight Ivy League schools recorded record high numbers of applications and record low acceptance rates. Year over year increases in the number of applicants ranged from a 14.5% increase at Princeton to a 51% increase at Columbia. There have been arguments that Ivy League schools discriminate against Asian-American candidates. For example, in August 2020, the US Justice Department argued that Yale University discriminated against Asian-American candidates on the basis of their race, a charge the university denied. Harvard was subject to a similar challenge in 2019 from an Asian American student group, with regard to which a federal judge found Harvard to be in compliance with constitutional requirements. The student group has since appealed that decision, and the appeal is still pending as of August 2020. Prestige Members of the League have been highly ranked by various university rankings. 
All of the Ivy League schools are consistently ranked within the top 20 national universities by the U.S. News & World Report Best Colleges Ranking. The Wall Street Journal rankings place all eight of the universities within the top 15 in the country. Further, Ivy League members have produced many Nobel laureates and winners of the Nobel Memorial Prize in Economic Sciences. Collaboration Collaboration between the member schools is illustrated by the student-led Ivy Council that meets in the fall and spring of each year, with representatives from every Ivy League school. The governing body of the Ivy League is the Council of Ivy Group Presidents, composed of each university president. During meetings, the presidents discuss common procedures and initiatives for their universities. The universities collaborate academically through the IvyPlus Exchange Scholar Program, which allows students to cross-register at one of the Ivies or another eligible school such as the University of California, Berkeley, the University of Chicago, the Massachusetts Institute of Technology, and Stanford University. Culture Fashion and lifestyle Different fashion trends and styles have emerged from Ivy League campuses over time, and styles such as "Ivy League" and "preppy" are often associated with the Ivy League and its culture. Ivy League style is a style of men's dress, popular during the late 1950s, believed to have originated on Ivy League campuses. The clothing stores J. Press and Brooks Brothers represent perhaps the quintessential Ivy League dress manner. The Ivy League style is said to be the predecessor to the preppy style of dress, which developed from around 1912 through the late 1940s and 1950s out of the Ivy League style. J. Press represents the quintessential preppy clothing brand, stemming from the collegiate traditions that shaped the preppy subculture. In the mid-twentieth century J. 
Press and Brooks Brothers, both being pioneers in preppy fashion, had stores on Ivy League school campuses, including Harvard, Princeton, and Yale. Some typical preppy styles also reflect traditional upper-class New England leisure activities, such as equestrian sports, sailing or yachting, hunting, fencing, rowing, lacrosse, tennis, golf, and rugby. Longtime New England outdoor outfitters, such as L.L. Bean, became part of conventional preppy style. This can be seen in sport stripes and colors, equestrian clothing, plaid shirts, field jackets and nautical-themed accessories. Vacationing in Palm Beach, Florida, long popular with the East Coast upper class, led to the emergence of bright color combinations in leisure wear seen in some brands such as Lilly Pulitzer. By the 1980s, other brands such as Lacoste, Izod and Dooney & Bourke became associated with preppy style. Today, these styles continue to be popular on Ivy League campuses, throughout the U.S., and abroad, and are oftentimes labeled as "Classic American style" or "Traditional American style". Social elitism The Ivy League is often associated with the upper class White Anglo-Saxon Protestant community of the Northeast, Old money, or more generally, the American upper middle and upper classes. Although most Ivy League students come from upper-middle and upper-class families, the student body has become increasingly more economically and ethnically diverse. The universities provide significant financial aid to help increase the enrollment of lower income and middle class students. Several reports suggest, however, that the proportion of students from less-affluent families remains low. Phrases such as "Ivy League snobbery" are ubiquitous in nonfiction and fiction writing of the early and mid-twentieth century. A Louis Auchincloss character dreads "the aridity of snobbery which he knew infected the Ivy League colleges". 
A business writer, warning in 2001 against discriminatory hiring, presented a cautionary example of an attitude to avoid (the bracketed phrase is his): The phrase Ivy League historically has been perceived as connected not only with academic excellence but also with social elitism. In 1936, sportswriter John Kieran noted that student editors at Harvard, Yale, Columbia, Princeton, Cornell, Dartmouth, and Penn were advocating the formation of an athletic association. In urging them to consider "Army and Navy and Georgetown and Fordham and Syracuse and Brown and Pitt" as candidates for membership, he exhorted: Aspects of Ivy stereotyping were illustrated during the 1988 presidential election, when George H. W. Bush (Yale '48) derided Michael Dukakis (graduate of Harvard Law School) for having "foreign-policy views born in Harvard Yard's boutique." New York Times columnist Maureen Dowd asked "Wasn't this a case of the pot calling the kettle elite?" Bush explained, however, that, unlike Harvard, Yale's reputation was "so diffuse, there isn't a symbol, I don't think, in the Yale situation, any symbolism in it. ... Harvard boutique to me has the connotation of liberalism and elitism" and said Harvard in his remark was intended to represent "a philosophical enclave" and not a statement about class. Columnist Russell Baker opined that "Voters inclined to loathe and fear elite Ivy League schools rarely make fine distinctions between Yale and Harvard. All they know is that both are full of rich, fancy, stuck-up and possibly dangerous intellectuals who never sit down to supper in their undershirt no matter how hot the weather gets." Still, the next five consecutive presidents all attended Ivy League schools for at least part of their education—George H. W. Bush (Yale undergrad), Bill Clinton (Yale Law School), George W. Bush (Yale undergrad, Harvard Business School), Barack Obama (Columbia undergrad, Harvard Law School), and Donald Trump (Penn undergrad). U.S. 
presidents in the Ivy League Of the 45 persons who have served as President of the United States, 16 have graduated from an Ivy League university. Of them, eight have degrees from Harvard, five from Yale, three from Columbia, two from Princeton and one from Penn. Twelve presidents have earned Ivy undergraduate degrees. Four of these were transfer students: Woodrow Wilson transferred from Davidson College, Barack Obama transferred from Occidental College, Donald Trump transferred from Fordham University, and John F. Kennedy transferred from Princeton to Harvard. John Adams was the first president to graduate from college, graduating from Harvard in 1755. Student demographics Race and ethnicity Geographic distribution Students of the Ivy League largely hail from the Northeast, particularly from the New York City, Boston, and Philadelphia areas. As all eight Ivy League universities are within the Northeast, most graduates end up working and residing in the Northeast after graduation. An unscientific survey of Harvard seniors from the Class of 2013 found that 42% hailed from the Northeast and 55% overall were planning on working and residing in the Northeast. Boston and New York City are traditionally where many Ivy League graduates end up living. Socioeconomics and social class Students of the Ivy League, both graduate and undergraduate, come primarily from upper middle and upper class families. In recent years, however, the universities have looked towards increasing socioeconomic and class diversity by providing greater financial aid packages to applicants from lower, working, and lower middle class American families. In 2013, 46% of Harvard undergraduate students came from families in the top 3.8% of all American households (i.e., over $200,000 annual income). In 2012, the bottom 25% of the American income distribution accounted for only 3–4% of students at Brown, a figure that had remained unchanged since 1992. 
In 2014, 69% of incoming freshmen students at Yale College came from families with annual incomes of over $120,000, putting most Yale College students in the upper middle and/or upper class. (The median household income in the U.S. in 2013 was $52,700.) In the 2011–2012 academic year, students qualifying for Pell Grants (federally funded scholarships on the basis of need) comprised 20% at Harvard, 18% at Cornell, 17% at Penn, 16% at Columbia, 15% at Dartmouth and Brown, 14% at Yale, and 12% at Princeton. Nationally, 35% of American university students qualify for a Pell Grant. Competition and athletics Ivy champions are recognized in sixteen men's and sixteen women's sports. In some sports, Ivy teams actually compete as members of another league, the Ivy championship being decided by isolating the members' records in play against each other; for example, the six league members who participate in ice hockey do so as members of ECAC Hockey, but an Ivy champion is extrapolated each year. In one sport, rowing, the Ivies recognize team champions for each sex in both heavyweight and lightweight divisions. While the Intercollegiate Rowing Association governs all four sex- and bodyweight-based divisions of rowing, the only one that is sanctioned by the NCAA is women's heavyweight. The Ivy League was the last Division I basketball conference to institute a conference postseason tournament; the first tournaments for men and women were held at the end of the 2016–17 season. The tournaments only award the Ivy League automatic bids for the NCAA Division I Men's and Women's Basketball Tournaments; the official conference championships continue to be awarded based solely on regular-season results. Before the 2016–17 season, the automatic bids were based solely on regular-season record, with a one-game playoff (or series of one-game playoffs if more than two teams were tied) held to determine the automatic bid. 
The Ivy League is one of only two Division I conferences that award their official basketball championships solely on regular-season results; the other is the Southeastern Conference. Since the league's founding, no Ivy League school has won either the men's or women's Division I NCAA basketball tournament. On average, each Ivy school has more than 35 varsity teams. All eight are in the top 20 for number of sports offered for both men and women among Division I schools. Unlike most Division I athletic conferences, the Ivy League prohibits the granting of athletic scholarships; all scholarships awarded are need-based (financial aid). In addition, the Ivies have a rigid policy against redshirting, even for medical reasons; an athlete loses a year of eligibility for every year enrolled at an Ivy institution. Additionally, the Ivies prohibit graduate students from participating in intercollegiate athletics, even if they have remaining athletic eligibility. The only exception to the ban on graduate students is that seniors graduating in 2021 are being allowed to play at their current institutions as graduate students in 2021–22. This was a one-time-only response to the Ivies shutting down most intercollegiate athletics in 2020–21 due to COVID-19. Ivy League teams' non-league games are often against the members of the Patriot League, which have similar academic standards and athletic scholarship policies (although unlike the Ivies, the Patriot League allows both redshirting and play by eligible graduate students). In the time before recruiting for college sports became dominated by those offering athletic scholarships and lowered academic standards for athletes, the Ivy League was successful in many sports relative to other universities in the country. In particular, Princeton won 26 recognized national championships in college football (last in 1935), and Yale won 18 (last in 1927). 
Both of these totals are considerably higher than those of other historically strong programs such as Alabama, which has won 15, Notre Dame, which claims 11 but is credited by many sources with 13, and USC, which has won 11. Yale, whose coach Walter Camp was the "Father of American Football," held on to its place as the all-time wins leader in college football throughout the entire 20th century, but was finally passed by Michigan on November 10, 2001. Harvard, Yale, Princeton and Penn each have over a dozen former scholar-athletes enshrined in the College Football Hall of Fame. Currently Dartmouth holds the record for most Ivy League football titles, with 18, followed closely by Harvard and Penn, each with 17 titles. In addition, the Ivy League has produced Super Bowl winners Kevin Boothe (Cornell), two-time Pro Bowler Zak DeOssie (Brown), Sean Morey (Brown), All-Pro selection Matt Birk (Harvard), Calvin Hill (Yale), Derrick Harmon (Cornell) and 1999 "Mr. Irrelevant" Jim Finn (Penn). Beginning with the 1982 football season, the Ivy League has competed in Division I-AA (since renamed FCS). Ivy League teams are eligible for the FCS tournament held to determine the national champion, and the league champion is eligible for an automatic bid (and any other team may qualify for an at-large selection) from the NCAA. However, since its inception in 1956, the Ivy League has not played any postseason games due to concerns about the extended December schedule's effects on academics. (The last postseason game for a member was the 1934 Rose Bowl.) For this reason, any Ivy League team invited to the FCS playoffs turns down the bid. The Ivy League plays a strict 10-game schedule, compared to other FCS members' schedules of 11 (or, in some seasons, 12) regular season games, plus post-season, which expanded in 2013 to five rounds with 24 teams, with a bye week for the top eight teams. Football is the only sport in which the Ivy League declines to compete for a national title. 
In addition to varsity football, Penn, Princeton and Cornell have also fielded teams in the 10-team Collegiate Sprint Football League, in which all players must weigh 178 pounds or less. With Princeton canceling the program in 2016, Penn is the last remaining founding member of the league from its 1934 debut, and Cornell is the next-oldest, joining in 1937. Yale and Columbia previously fielded teams in the league but no longer do so. Teams The Ivy League is home to some of the oldest college rugby teams in the United States. Although these teams are not "varsity" sports, they compete annually in the Ivy Rugby Conference. Historical results The table above includes the number of team championships won from the beginning of official Ivy League competition (1956–57 academic year) through 2016–17. Princeton and Harvard have on occasion won ten or more Ivy League titles in a year, an achievement accomplished 10 times by Harvard and 24 times by Princeton, including a conference-record 15 championships in 2010–11. Only once has one of the other six schools earned more than eight titles in a single academic year (Cornell with nine in 2005–06). In the 38 academic years beginning 1979–80, Princeton has averaged 10 championships per year, one-third of the conference total of 33 sponsored sports. In the 12 academic years beginning 2005–06 Princeton has won championships in 31 different sports, all except wrestling and men's tennis. Rivalries Rivalries run deep in the Ivy League. For instance, Princeton and Penn are longstanding men's basketball rivals; "Puck Frinceton" T-shirts are worn by Quaker fans at games. In only 11 instances in the history of Ivy League basketball, and in only seven seasons since Yale's 1962 title, has neither Penn nor Princeton won at least a share of the Ivy League title in basketball, with Princeton champion or co-champion 26 times and Penn 25 times. Penn has won 21 outright, Princeton 19 outright. 
Princeton has been a co-champion 7 times, sharing 4 of those titles with Penn (these 4 seasons represent the only times Penn has been co-champion). Harvard won its first title of either variety in 2011, losing a dramatic play-off game to Princeton for the NCAA tournament bid, then rebounded to win outright championships in 2012, 2013, and 2014. Harvard also won the 2013 Great Alaska Shootout, defeating TCU to become the only Ivy League school to win the now-defunct tournament. Rivalries exist between other Ivy League teams in other sports, including Cornell and Harvard in hockey, Harvard and Princeton in swimming, and Harvard and Penn in football (Penn and Harvard have won 28 Ivy League football championships since 1982: Penn 16, Harvard 12). During that time Penn has had 8 undefeated Ivy League championship seasons and Harvard has had 6. In men's lacrosse, Cornell and Princeton are perennial rivals, and they are two of three Ivy League teams to have won the NCAA tournament. In 2009, the Big Red and Tigers met for their 70th game in the NCAA tournament. No team other than Harvard or Princeton has won the men's swimming conference title outright since 1972, although Yale, Columbia, and Cornell have shared the title with Harvard and Princeton during this time. Similarly, no program other than Princeton and Harvard has won the women's swimming championship since Brown's 1999 title. Princeton or Cornell has won every indoor and outdoor track and field championship, both men's and women's, every year since 2002–03, with one exception (Columbia women won the indoor championship in 2012). Harvard and Yale are football and crew rivals, although the competition has become unbalanced; Harvard has won all but one of the last 15 football games and all but one of the last 13 crew races. 
Intra-conference football rivalries The Yale–Princeton series is the nation's second-longest by games played, exceeded only by "The Rivalry" between Lehigh and Lafayette, which began later in 1884
|
that would be difficult to reproduce, and each bill is stamped with a serial number, to discourage counterfeiting. In 2002, a one-tenth hour bill was introduced, partly due to the encouragement and funding from Alternatives Federal Credit Union and feedback from retailers who complained about the awkwardness of only having larger denominations with which to work; the bills bear the signatures of both HOURS president Steve Burke and the president of AFCU. Although Ithaca HOUR notes can be found, in recent years they have fallen into disuse for several reasons. First, the founder of the system, Paul Glover, moved out of the area. While in Ithaca, Glover had acted as an evangelist and networker for HOURS, helping spread their use and helping businesses find ways to spend HOURS they had received. Second, the use of HOURS declined as a result of the general shift away from cash transactions towards electronic transfers with debit or credit cards. Glover has emphasized that every local currency needs at least one full-time networker to "promote, facilitate and troubleshoot" currency circulation. Origin Ithaca HOURS were started by Paul Glover in November 1991. The system has historical roots in the scrip and alternative and local currencies that proliferated in America during the Great Depression. While doing research into local economics during 1989, Glover had seen an "Hour" note that 19th-century British industrialist Robert Owen issued to his workers for spending at his company store. After Ithaca HOURS began, he discovered that Owen's Hours were based on Josiah Warren's "Time Store" notes of 1827. In May 1991, local student Patrice Jennings interviewed Glover about the Ithaca LETS enterprise. This conversation strongly reinforced his interest in trade systems. Jennings's research on the Ithaca LETS and its failure was integral to the development of the HOUR currency; conversations between Jennings and Glover helped ensure that HOURS drew on knowledge of what had not worked with the LETS system. Within a few days, he had designs for the HOUR and Half HOUR notes. He established that each HOUR would be worth the equivalent of $10, which was about the average hourly amount that workers earned in surrounding Tompkins County, although the exact rate of exchange for any given transaction was to be decided by the parties themselves. At GreenStar Cooperative Market, a local food co-op, Glover approached Gary Fine, a local massage therapist, with photocopied samples. Fine became the first person to sign a list formally agreeing to accept HOURS in exchange for services. Soon after, Jim Rohrrsen, the proprietor of a local toy store, became the first retailer to sign up to accept Ithaca HOURS in exchange for merchandise. When the system was first started, 90 people agreed to accept HOURS as pay for their services. They all agreed to accept HOURS despite the lack of a business plan or guarantee. Glover then began to ask for small donations to help pay for printing HOURS. Fine Line Printing completed the first run of 1,500 HOURS and 1,500 Half HOURS in October 1991. These notes, the first modern local currency, were nearly twice as large as the current Ithaca HOURS. Because they did not fit well in people's wallets, almost all of the original notes have been removed from circulation. The first issue of Ithaca Money was printed at Our Press, a printing shop in Chenango Bridge, New York, on October 16, 1991. The next day Glover issued 10 HOURS to Ithaca Hours, the organization he founded to run the system, as the first of four reimbursements for the cost of printing HOURS. The day after that, October 18, 1991, 382 HOURS were disbursed and prepared for mailing to the first 93 pioneers. On October 19, 1991, Glover bought a samosa from Catherine Martinez at the Farmers' Market with Half HOUR #751—the first use of an HOUR. Several other Market vendors enrolled that day. During the next years more than a thousand individuals enrolled to accept HOURS, plus 500 businesses. Stacks of the Ithaca Money newspaper were distributed all over town with an invitation to "join the fun." A Barter Potluck was held at GIAC on November 12, 1991, the first of many monthly gatherings where food and
|
clouds are sometimes also called diffuse clouds. An interstellar cloud is formed by the gas and dust particles from a red giant in its later life. Chemical compositions The chemical composition of interstellar clouds is determined by studying the electromagnetic (EM) radiation that they emit – from radio waves through visible light to gamma rays on the electromagnetic spectrum. Large radio telescopes scan the sky for the intensity of particular frequencies of electromagnetic radiation that are characteristic of certain molecules' spectra. Some interstellar clouds are cold and tend to emit EM radiation at long wavelengths. A map of the abundance of these molecules can be made, enabling an understanding of the varying composition of the clouds. In hot clouds, there are often ions of many elements, whose spectra can be seen in visible and ultraviolet light. Radio telescopes can also scan over the frequencies from one point in the map, recording the intensities of each type of molecule. A peak at a given frequency means that an abundance of the corresponding molecule or atom is present in the cloud, and the height of the peak is proportional to its relative abundance. Unexpected chemicals detected in interstellar clouds Until recently the rates of reactions in interstellar clouds were expected to be very slow,
|
conditions, such as formaldehyde, methanol, and vinyl alcohol. The reactions needed to create such substances are familiar to scientists only at the much higher temperatures and pressures of Earth and Earth-based laboratories. The fact that they were found indicates that these chemical reactions in interstellar clouds take place faster than suspected, likely in gas-phase reactions unfamiliar to organic chemistry as observed on Earth. These reactions are studied in the CRESU experiment. Interstellar clouds also provide a medium to study the presence and proportions of metals in space. The presence and ratios of these elements may help develop theories on the means of their production, especially when their proportions are inconsistent with those expected to arise from stars as a result of fusion and thereby suggest alternate means, such as cosmic ray spallation. High-velocity cloud These interstellar clouds possess a velocity higher than can be explained by the rotation of the Milky Way. By definition, these clouds must have a vlsr greater than 90 km s−1, where vlsr is the velocity relative to the local standard of rest. They are detected primarily in the 21 cm line of neutral hydrogen, and typically have a lower proportion of heavy elements than is normal for interstellar clouds in the Milky Way. Theories intended to explain these unusual clouds include material left over from the formation of the galaxy, or tidally displaced matter drawn away from other galaxies or members of the Local Group. An example of the latter is the Magellanic Stream. To narrow down the origin of these clouds, a better understanding of their distances and metallicity is needed. High-velocity clouds are identified with an HVC prefix, as with HVC 127-41-330. 
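The velocity criterion above amounts to a simple threshold test. The sketch below is illustrative only: the 90 km s−1 cutoff follows the definition given here, the catalog entries are hypothetical rather than real measurements, and the absolute value is taken because a cloud may be approaching (negative vlsr) or receding (positive vlsr).

```python
# Toy classifier for high-velocity clouds (HVCs), based on the definition
# above: a cloud qualifies when |v_LSR| exceeds 90 km/s, where v_LSR is
# its velocity relative to the local standard of rest.
# The catalog values below are hypothetical, not real measurements.

def is_high_velocity(v_lsr_km_s: float, threshold_km_s: float = 90.0) -> bool:
    """Return True if |v_LSR| exceeds the HVC threshold (both in km/s)."""
    return abs(v_lsr_km_s) > threshold_km_s

clouds = {
    "cloud A": -330.0,  # hypothetical: fast-approaching cloud
    "cloud B": 45.0,    # hypothetical: explainable by Galactic rotation
    "cloud C": 120.0,   # hypothetical: fast-receding cloud
}

hvcs = [name for name, v in clouds.items() if is_high_velocity(v)]
print(hvcs)  # ['cloud A', 'cloud C']
```

In practice, identifying an HVC also involves position, distance, and metallicity, as the text notes; velocity relative to the local standard of rest is only the defining criterion.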
See also List of molecules in interstellar space Nebula Interplanetary medium – interplanetary dust Interstellar medium – interstellar dust Intergalactic medium – intergalactic dust Local Interstellar Cloud G-Cloud External links High Velocity Cloud — The Swinburne Astronomy Online (SAO) encyclopedia.
|
finds the cult of Imhotep during the New Kingdom sufficiently distinct from the usual offerings made to other commoners that the epithet "demi-god" is likely justified to describe his veneration. The first references to the healing abilities of Imhotep occur from the Thirtieth Dynasty onward, some 2,200 years after his death. Imhotep is among the few non-royal Egyptians who were deified after their deaths, and until the 21st century, he was one of nearly a dozen non-royals to achieve this status. The center of his cult was in Memphis. The location of his tomb remains unknown, despite efforts to find it. The consensus is that it is hidden somewhere at Saqqara. Historicity Imhotep's historicity is confirmed by two contemporary inscriptions made during his lifetime on the base or pedestal of one of Djoser's statues and also by a graffito on the enclosure wall surrounding Sekhemkhet's unfinished step pyramid. The latter inscription suggests that Imhotep outlived Djoser by a few years and went on to serve in the construction of Pharaoh Sekhemkhet's pyramid, which was abandoned due to this ruler's brief reign. Architecture and engineering Imhotep was one of the chief officials of the Pharaoh Djoser. In agreement with much later legends, Egyptologists credit him with the design and construction of the Pyramid of Djoser, a step pyramid at Saqqara built during the 3rd Dynasty. He may also have been responsible for the first known use of stone columns to support a building. Despite these later attestations, the pharaonic Egyptians themselves never credited Imhotep as the designer of the stepped pyramid, nor with the invention of stone architecture. Deification God of medicine Two thousand years after his death, Imhotep's status had risen to that of a god of medicine and healing. Eventually, Imhotep was equated with Thoth, the god of architecture, mathematics, and medicine, and patron of scribes: Imhotep's cult was merged with that of his own former tutelary god. 
He was revered in the region of Thebes as the "brother" of Amenhotep, son of Hapu – another deified architect – in the temples dedicated to Thoth. Because of his association with health, the Greeks equated Imhotep with Asklepios, their own god of health who also was a deified mortal. According to myth, Imhotep's mother was a mortal named Kheredu-ankh, she too being eventually revered as a demi-goddess as the daughter of Banebdjedet. Alternatively, since Imhotep was known as the "Son of Ptah", his mother was sometimes claimed to be Sekhmet, the patron of Upper Egypt whose consort was Ptah. Post-Alexander period The Upper Egyptian Famine Stela, which dates from the Ptolemaic period (305–30 BCE), bears an inscription containing a legend about a famine lasting seven years during the reign of Djoser. Imhotep is credited with having been instrumental in ending it. One of his priests explained the connection between the god Khnum and the rise of the Nile to the Pharaoh, who then had a dream in which the Nile god spoke to him, promising to end the drought. A demotic papyrus from the temple of Tebtunis, dating to the 2nd century CE, preserves a long story about Imhotep. The Pharaoh Djoser plays a prominent role in the story, which also mentions Imhotep's family; his father the god Ptah, his mother Khereduankh, and his younger sister Renpetneferet. At one point Djoser desires Renpetneferet, and Imhotep disguises himself and tries to rescue her. The text also refers to the royal tomb of Djoser. Part of the legend includes an anachronistic battle between the Old Kingdom and the Assyrian armies where Imhotep fights an Assyrian sorceress in a duel of magic. As an instigator of Egyptian culture, Imhotep's idealized image lasted well into the Roman period. 
In the Ptolemaic period, the Egyptian priest and historian Manetho credited him with inventing the technique of building with dressed stone during Djoser's reign, though he was not the first to actually build with stone. Stone walling, flooring, lintels, and jambs had appeared sporadically during the Archaic Period, though it is true that a building of the size of the step pyramid made entirely out of stone had never before been constructed. Before Djoser, Pharaohs were buried in mastaba tombs. Medicine Egyptologist James Peter Allen states that "The Greeks equated him with their own god of medicine, Asklepios." In popular culture Imhotep is the antagonistic title character of Universal's 1932 film The Mummy and its 1999 remake, along with a sequel to the remake. Imhotep was also portrayed in the television show Stargate SG-1 as a false god and an alien known as a Goa’uld. See also Imhotep Museum History of ancient Egypt Ancient Egyptian architecture Ancient Egyptian medicine
Ionic on the interior, and incorporated a Corinthian column, the earliest known, at the center rear of the cella. Sources also identify Ictinus as architect of the Telesterion (Hall of Mystery) at Eleusis, a gigantic hall used in the Eleusinian Mysteries, a commission from Pericles; his involvement was terminated when Pericles fell from power, and three other architects took over instead. It seems likely that Ictinus's reputation was harmed by his links with the fallen ruler, as he is singled out for condemnation by Aristophanes in
his play The Birds, dated to around 414 BC. It depicts the royal kite or ictinus – a play on the architect's name – not as a noble bird of prey but as a scavenger stealing sacrifices from the
the church mainly out of stone, rather than wood: “He compacted it of baked brick and mortar, and in many places bound it together with iron, but made no use of wood, so that the church should no longer prove combustible.” Isidore of Miletus and Anthemius of Tralles originally planned a main hall of the Hagia Sophia measuring 70 by 75 metres (230 by 250 ft), making it the largest church in Constantinople: “Justinian suppressed these riots and took the opportunity of marking his victory by erecting in 532–7 the new Hagia Sophia, one of the largest, most lavish, and most expensive buildings of all time.” The original dome, however, was nearly 6 metres (20 ft) lower than the dome as later rebuilt. Although Isidore of Miletus and Anthemius of Tralles were not formally educated in architecture, they were scientists who could organize the logistics of drawing thousands of labourers and unprecedented loads of rare raw materials from around the Roman Empire to construct the Hagia Sophia for Emperor Justinian I. The finished product was built in admirable form for the Roman Emperor: “All of these elements marvellously fitted together in mid-air, suspended from one another and reposing only on the parts adjacent to them, produce a unified and most remarkable harmony in the work, and yet do not allow the spectators to rest their gaze upon any one of them for a length of time.” The Hagia Sophia architects innovatively combined the longitudinal structure of a Roman basilica and the central plan of a drum-supported dome in order to withstand the high-magnitude earthquakes of the Marmara Region: “However, in May 558, little more than 20 years after the Church’s dedication, following the earthquakes of August 553 and December 557, parts of the central dome and its supporting structure system collapsed.” The Hagia Sophia was repeatedly cracked by earthquakes
who previously served as the IAEA's chief of cabinet, whose appointment was approved at the special session of the IAEA's General Conference on 2 December 2019, as the successor of Yukiya Amano, who died in July 2019. History In 1953, U.S. President Dwight D. Eisenhower proposed the creation of an international body to both regulate and promote the peaceful use of atomic power (nuclear power), in his Atoms for Peace address to the UN General Assembly. In September 1954, the United States proposed to the General Assembly the creation of an international agency to take control of fissile material, which could be used either for nuclear power or for nuclear weapons. This agency would establish a kind of "nuclear bank." The United States also called for an international scientific conference on all of the peaceful aspects of nuclear power. By November 1954, it had become clear that the Soviet Union would reject any international custody of fissile material if the United States did not agree to disarmament first, but that a clearing house for nuclear transactions might be possible. From 8 to 20 August 1955, the United Nations held the International Conference on the Peaceful Uses of Atomic Energy in Geneva, Switzerland. In October 1956, a Conference on the IAEA Statute was held at the Headquarters of the United Nations to approve the founding document for the IAEA, which was negotiated in 1955–1957 by a group of twelve countries. The Statute of the IAEA was approved on 23 October 1956 and came into force on 29 July 1957. Former US Congressman W. Sterling Cole served as the IAEA's first Director General from 1957 to 1961. Cole served only one term, after which the IAEA was headed by two Swedes for nearly four decades: the scientist Sigvard Eklund held the job from 1961 to 1981, followed by former Swedish Foreign Minister Hans Blix, who served from 1981 to 1997. Blix was succeeded as Director General by Mohamed ElBaradei of Egypt, who served until November 2009. 
Beginning in 1986, in response to the nuclear reactor explosion and disaster at Chernobyl, Ukraine, the IAEA increased its efforts in the field of nuclear safety. The same happened after the 2011 disaster at Fukushima, Japan. Both the IAEA and its then Director General, ElBaradei, were awarded the Nobel Peace Prize in 2005. In his acceptance speech in Oslo, ElBaradei stated that only one percent of the money spent on developing new weapons would be enough to feed the entire world, and that, if we hope to escape self-destruction, then nuclear weapons should have no place in our collective conscience, and no role in our security. On 2 July 2009, Yukiya Amano of Japan was elected as the Director General for the IAEA, defeating Abdul Samad Minty of South Africa and Luis E. Echávarri of Spain. On 3 July 2009, the Board of Governors voted to appoint Yukiya Amano "by acclamation," and the IAEA General Conference approved the appointment in September 2009. He took office on 1 December 2009. After Amano's death, his Chief of Coordination Cornel Feruta of Romania was named Acting Director General. On 2 August 2019, Rafael Grossi was presented as the Argentine candidate to become the Director General of the IAEA. On 28 October 2019, the IAEA Board of Governors held its first vote to elect the new Director General, but none of the candidates secured the two-thirds majority of the 35-member Board needed to be elected. The next day, 29 October, a second voting round was held, and Grossi won 24 votes, securing the two-thirds majority required for appointment as Director General. Following a special meeting of the IAEA General Conference to approve his appointment, Grossi assumed office on 3 December 2019, becoming the first Latin American to head the Agency. Structure and function General The IAEA's mission is guided by the interests and needs of Member States, strategic plans and the vision embodied in the IAEA Statute (see below). 
Three main pillars – or areas of work – underpin the IAEA's mission: Safety and Security; Science and Technology; and Safeguards and Verification. The IAEA as an autonomous organisation is not under direct control of the UN, but the IAEA does report to both the UN General Assembly and Security Council. Unlike most other specialised international agencies, the IAEA does much of its work with the Security Council, and not with the United Nations Economic and Social Council. The structure and functions of the IAEA are defined by its founding document, the IAEA Statute (see below). The IAEA has three main bodies: the Board of Governors, the General Conference, and the Secretariat. The IAEA exists to pursue the "safe, secure and peaceful uses of nuclear sciences and technology" (Pillars 2005). The IAEA executes this mission with three main functions: the inspection of existing nuclear facilities to ensure their peaceful use, providing information and developing standards to ensure the safety and security of nuclear facilities, and serving as a hub for the various fields of science involved in the peaceful applications of nuclear technology. The IAEA recognises knowledge as the nuclear energy industry's most valuable asset and resource, without which the industry cannot operate safely and economically. Following IAEA General Conference resolutions since 2002, a formal Nuclear Knowledge Management programme was established to address Member States' priorities in the 21st century. In 2004, the IAEA developed a Programme of Action for Cancer Therapy (PACT). PACT responds to the needs of developing countries to establish, to improve, or to expand radiotherapy treatment programs. The IAEA is raising money to help efforts by its Member States to save lives and to reduce the suffering of cancer victims. 
The IAEA has established programs to help developing countries systematically build the capability to manage a nuclear power program, including the Integrated Nuclear Infrastructure Group, which has carried out Integrated Nuclear Infrastructure Review missions in Indonesia, Jordan, Thailand and Vietnam. The IAEA reports that roughly 60 countries are considering how to include nuclear power in their energy plans. To enhance the sharing of information and experience among IAEA Member States concerning the seismic safety of nuclear facilities, in 2008 the IAEA established the International Seismic Safety Centre. This centre is establishing safety standards and providing for their application in relation to site selection, site evaluation and seismic design. Board of Governors The Board of Governors is one of two policy-making bodies of the IAEA. The Board consists of 22 member states elected by the General Conference, and at least 10 member states nominated by the outgoing Board. The outgoing Board designates the ten members who are the most advanced in atomic energy technology, plus the most advanced members from any of the following areas that are not represented by the first ten: North America, Latin America, Western Europe, Eastern Europe, Africa, Middle East and South Asia, South East Asia, the Pacific, and the Far East. These members are designated for one-year terms. The General Conference elects 22 members from the remaining nations to two-year terms. Eleven are elected each year. The 22 elected members must also represent a stipulated geographic diversity. 
The 35 Board members for the 2018–2019 period are: Argentina, Armenia, Australia, Azerbaijan, Belgium, Brazil, Canada, Chile, China, Ecuador, Egypt, France, Germany, India, Indonesia, Italy, Japan, Jordan, Kenya, the Republic of Korea, Morocco, the Netherlands, Niger, Pakistan, Portugal, the Russian Federation, Serbia, South Africa, the Sudan, Sweden, Thailand, the United Kingdom of Great Britain and Northern Ireland, the United States of America, Uruguay and the Bolivarian Republic of Venezuela. The Board, in its five yearly meetings, is responsible for making most of the policy of the IAEA. The Board makes recommendations to the General Conference on IAEA activities and budget, is responsible for publishing IAEA standards and appoints the Director General subject to General Conference approval. Board members each receive one vote. Budget matters require a two-thirds majority. All other matters require only a simple majority. The simple majority also has the power to stipulate issues that will thereafter require a two-thirds majority. Two-thirds of all Board members must be present to call a vote. The Board elects its own chairman. General Conference The General Conference is made up of all 173 member states. It meets once a year, typically in September, to approve the actions and budgets passed on from the Board of Governors. The General Conference also approves the nominee for Director General and requests reports from the Board on issues in question (Statute). Each member receives one vote. Issues of budget, Statute amendment and suspension of a member's privileges require a two-thirds majority and all other issues require a simple majority. Similar to the Board, the General Conference can, by simple majority, designate issues to require a two-thirds majority. The General Conference elects a President at each annual meeting to facilitate an effective meeting. The President only serves for the duration of the session (Statute). 
The main function of the General Conference is to serve
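The voting arithmetic above is easy to make concrete. A minimal sketch (illustrative only: the function name and interface are invented here, not an IAEA tool) of how the two-thirds and simple-majority thresholds play out on a 35-member Board:

```python
def votes_needed(members_present: int, two_thirds: bool = False) -> int:
    """Smallest number of votes that satisfies the stated threshold."""
    if two_thirds:
        # Ceiling of two-thirds of the members, using integer arithmetic.
        return (2 * members_present + 2) // 3
    return members_present // 2 + 1  # simple majority

budget_threshold = votes_needed(35, two_thirds=True)   # 24 votes
ordinary_threshold = votes_needed(35)                  # 18 votes
```

On 35 members, a two-thirds matter needs 24 votes (which is why Grossi's 24 votes in the October 2019 second round sufficed), while a simple majority needs only 18.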
pattern, where a prefix of C is usually added to an IATA code to create the ICAO code. For example, Calgary International Airport is YYC or CYYC. (In contrast, airports in Hawaii are in the Pacific region and so have ICAO codes that start with PH; Kona International Airport's code is PHKO. Similarly, airports in Alaska have ICAO codes that start with PA. Merrill Field, for instance, is PAMR.) Note that not all airports are assigned codes in both systems; for example, airports that do not have airline service do not need an IATA code. Airline codes ICAO also assigns 3-letter airline codes (versus the more-familiar 2-letter IATA codes—for example, UAL vs. UA for United Airlines). ICAO also provides telephony designators to aircraft operators worldwide, a one- or two-word designator used on the radio, usually, but not always, similar to the aircraft operator name. For example, the identifier for Japan Airlines International is JAL and the designator is Japan Air, but Aer Lingus is EIN and Shamrock. Thus, a Japan Airlines flight numbered 111 would be written as "JAL111" and pronounced "Japan Air One One One" on the radio, while a similarly numbered Aer Lingus flight would be written as "EIN111" and pronounced "Shamrock One One One". In the US, FAA practices require the digits of the flight number to be spoken in group format ("Japan Air One Eleven" in the above example), while individual digits are used for the aircraft tail number used for unscheduled civil flights. Aircraft registrations ICAO maintains the standards for aircraft registration ("tail numbers"), including the alphanumeric codes that identify the country of registration. For example, airplanes registered in the United States have tail numbers starting with N, and airplanes registered in Bahrain have tail numbers starting with A9C. 
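The callsign convention above amounts to a designator lookup plus digit-by-digit reading. A minimal sketch, assuming a toy telephony table built only from the two examples in the text (real designators come from ICAO's registry, and official radiotelephony prescribes specific pronunciations such as "niner" for 9):

```python
# Illustrative telephony table: only the two designators mentioned above.
TELEPHONY = {"JAL": "Japan Air", "EIN": "Shamrock"}
DIGITS = {"0": "Zero", "1": "One", "2": "Two", "3": "Three", "4": "Four",
          "5": "Five", "6": "Six", "7": "Seven", "8": "Eight", "9": "Nine"}

def spoken_callsign(flight_id: str) -> str:
    """'JAL111' -> 'Japan Air One One One' (assumes a 3-letter designator,
    with the digits read individually, ICAO style)."""
    designator, number = flight_id[:3], flight_id[3:]
    words = [TELEPHONY.get(designator, designator)]
    words += [DIGITS[d] for d in number]
    return " ".join(words)

print(spoken_callsign("JAL111"))  # Japan Air One One One
print(spoken_callsign("EIN111"))  # Shamrock One One One
```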
Aircraft type designators ICAO is also responsible for issuing 2–4 character alphanumeric aircraft type designators for those aircraft types which are most commonly provided with air traffic service. These codes provide an abbreviated aircraft type identification, typically used in flight plans. For example, the Boeing 747-100, -200 and -300 are given the type designators B741, B742 and B743 respectively. Use of the International System of Units ICAO recommends a unification of units of measurement within aviation based on the International System of Units (SI). Technically this makes SI units preferred, but in practice the following non-SI units are still in widespread use within commercial aviation: Knots (kn) for speed. Nautical mile (NM) for distance. Foot (ft) for elevation. Knots, nautical miles and feet have been permitted for temporary use since 1979, but a termination date has not yet been established, which would complete the metrication of worldwide aviation. Since 2010, ICAO recommends using: Kilometres per hour (km/h) for speed during travel. Metres per second (m/s) for wind speed during landing. Kilometres (km) for distance. Metres (m) for elevation. Notably, aviation in Russia and China currently uses km/h for reporting airspeed, and many present-day European glider planes also indicate airspeed in kilometres per hour. China and North Korea use metres for reporting altitude when communicating with pilots. Russia also formerly used metres exclusively for reporting altitude, but in 2011 changed to feet for high-altitude flight. From February 2017, Russian airspace started transitioning to reporting altitude in feet only. Runway lengths are now commonly given in metres worldwide, except in North America, where feet are commonly used. The table below summarizes some of the units commonly used in flight and ground operations, as well as their recommended replacement. 
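The non-SI units above have exact SI definitions (1 knot = 1.852 km/h, since a nautical mile is exactly 1852 m; an international foot is exactly 0.3048 m), so conversion to the recommended units is mechanical. A quick sketch with illustrative helper names:

```python
# Exact conversion factors, by definition.
KNOT_TO_KMH = 1.852   # 1 knot = 1 NM/h, and 1 NM = 1852 m exactly
NM_TO_KM = 1.852
FOOT_TO_M = 0.3048    # international foot

def knots_to_kmh(kn: float) -> float:
    return kn * KNOT_TO_KMH

def nm_to_km(nm: float) -> float:
    return nm * NM_TO_KM

def feet_to_m(ft: float) -> float:
    return ft * FOOT_TO_M

# A typical cruise profile expressed in the recommended units:
cruise_speed_kmh = knots_to_kmh(480)   # about 888.96 km/h
cruise_alt_m = feet_to_m(36000)        # about 10972.8 m
```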
A full list of recommended units can be found in annex 5 to the Convention on International Civil Aviation. Table of units: altitude, elevation, height. Regions and regional offices ICAO has a headquarters, seven regional offices, and one regional sub-office: Headquarters – Montreal, Quebec, Canada Asia and Pacific (APAC) – Bangkok, Thailand; Sub-office – Beijing, China Eastern and Southern African (ESAF) – Nairobi, Kenya Europe and North Atlantic (EUR/NAT) – Paris, France Middle East (MID) – Cairo, Egypt North American, Central American and Caribbean (NACC) – Mexico City, Mexico South American (SAM) – Lima, Peru Western and Central African (WACAF) – Dakar, Senegal Environment Emissions from international aviation are specifically excluded from the targets agreed under the Kyoto Protocol. Instead, the Protocol invites developed countries to pursue the limitation or reduction of emissions through the International Civil Aviation Organization. ICAO's environmental committee continues to consider the potential for using market-based measures such as trading and charging, but this work is unlikely to lead to global action. It is currently developing guidance for states that wish to include aviation in an emissions trading scheme (ETS) to meet their Kyoto commitments, and for airlines that wish to participate voluntarily in a trading scheme. Emissions from domestic aviation are included within the Kyoto targets agreed by countries. This has led to some national policies such as fuel and emission taxes for domestic air travel in the Netherlands and Norway, respectively. Although some countries tax the fuel used by domestic aviation, there is no duty on kerosene used on international flights. ICAO is currently opposed to the inclusion of aviation in the European Union Emission Trading Scheme (EU ETS). The EU, however, is pressing ahead with its plans to include aviation.
1903 in Berlin, Germany, but no agreements were reached among the eight countries that attended. At the second convention in 1906, also held in Berlin, twenty-seven countries attended. The third convention, held in London in 1912, allocated the first radio callsigns for use by aircraft. ICAN continued to operate until 1945. Fifty-two countries signed the Convention on International Civil Aviation, also known as the Chicago Convention, in Chicago, Illinois, on 7 December 1944. Under its terms, a Provisional International Civil Aviation Organization was to be established, to be replaced in turn by a permanent organization when twenty-six countries ratified the convention. Accordingly, PICAO began operating on 6 June 1945, replacing ICAN. The twenty-sixth country ratified the convention on 5 March 1947 and, consequently, PICAO was disestablished on 4 April 1947 and replaced by ICAO, which began operations the same day. In October 1947, ICAO became an agency of the United Nations under its Economic and Social Council (ECOSOC). In April 2013, Qatar offered to serve as the new permanent seat of the Organization. Qatar promised to construct a massive new headquarters for ICAO and to cover all moving expenses, stating that Montreal "was too far from Europe and Asia", "had cold winters", was hard to attend due to the Canadian government's slow issuance of visas, and that the taxes imposed on ICAO by Canada were too high. According to The Globe and Mail, Qatar's invitation was at least partly motivated by the pro-Israel foreign policy of Canadian Prime Minister Stephen Harper. Approximately one month later, Qatar withdrew its bid after a separate proposal to the ICAO's governing council to move the ICAO triennial conference to Doha was defeated by a vote of 22–14. 
Taiwan controversy In January 2020, ICAO blocked a number of Twitter users—among them think-tank analysts, employees of the United States Congress, and journalists—who mentioned Taiwan in tweets related to ICAO. Many of the tweets concerned the COVID-19 pandemic and Taiwan's exclusion from ICAO safety and health bulletins due to Chinese pressure. In response to questions from reporters, ICAO issued a tweet stating that publishers of "irrelevant, compromising and offensive material" would be "precluded". Since that action, the organization has followed a policy of blocking anyone asking about it. The United States House Committee on Foreign Affairs harshly criticized ICAO's perceived failure to uphold principles of fairness, inclusion, and transparency by silencing non-disruptive opposing voices. Senator Marco Rubio also criticized the move. The Ministry of Foreign Affairs (Taiwan) (MOFA) and Taiwanese legislators criticized the move with MOFA head Jaushieh Joseph Wu tweeting in support of those blocked. Anthony Philbin, chief of communications of the ICAO Secretary General, rejected criticism of ICAO's handling of the situation: "We felt we were completely warranted in taking the steps we did to defend the integrity of the information and discussions our followers should reasonably expect from our feeds." In exchanges with International Flight Network, Philbin refused to acknowledge the existence of Taiwan. On 1 February 2020, the US State Department issued a press release which heavily criticized ICAO's actions, characterizing them as "outrageous, unacceptable, and not befitting of a UN organization." Statute The 9th edition of the Convention on International Civil Aviation includes modifications from years 1948 up to 2006. ICAO refers to its current edition of the convention as the Statute and designates it as ICAO Document 7300/9. The convention has 19 Annexes that are listed by title in the article Convention on International Civil Aviation. 
Membership There are 193 ICAO members, consisting of 192 of the 193 UN members (all but Liechtenstein, which lacks an international airport), plus the Cook Islands. Despite Liechtenstein not being a direct party to ICAO, its government has delegated Switzerland to enter into the treaty on its behalf, and the treaty applies in the territory of Liechtenstein. The Republic of China (Taiwan) was a founding member of ICAO but was replaced by the People's Republic of China as the legal representative of China in 1971 and, as such, did not take part in the organization. In 2013, Taiwan was for the first time invited to attend the ICAO Assembly, at its 38th session, as a guest under the name of Chinese Taipei. It has not been invited to participate again, owing to renewed PRC pressure. The host government, Canada, supports Taiwan's inclusion in ICAO. Support also comes from Canada's commercial sector, with the president of the Air Transport Association of Canada saying in 2019 that "It's about safety in aviation so from a strictly operational and non-political point of view, I believe Taiwan should be there." Council The Council of ICAO is elected by the Assembly every 3 years and consists of 36 members elected in 3 groups. The present Council was elected in October 2019. The structure of the present Council is as follows: Standards ICAO also standardizes certain functions for use in the airline industry, such as the Aeronautical Message Handling System (AMHS). This makes it a standards organization. Each country should have an accessible Aeronautical Information Publication (AIP), based on standards defined by ICAO, containing information essential to air navigation. Countries are required to update their AIP manuals every 28 days and so provide definitive regulations, procedures and information for each country about airspace and airports. ICAO's standards also dictate that temporary hazards to aircraft must be regularly published using NOTAMs. 
ICAO defines an International Standard Atmosphere (also known as the ICAO Standard Atmosphere), a model of the standard variation of pressure, temperature, density, and viscosity with altitude in the Earth's atmosphere. This is useful in calibrating instruments and designing aircraft. The standardized pressure is also used in calibrating instruments in flight, particularly above the transition altitude. ICAO is active in infrastructure management, including communication, navigation and surveillance / air traffic management (CNS/ATM) systems, which employ digital technologies (like satellite systems with various levels of automation) in order to maintain a seamless global air traffic management system. Passport standards ICAO has published standards for machine-readable passports. Machine-readable passports have an area where some of the information otherwise written in textual form is also written as strings of alphanumeric characters, printed in a manner suitable for optical character recognition. This enables border controllers and other law enforcement agents to process such passports more quickly, without having to enter the information manually into a computer. ICAO's technical standard for machine-readable passports is contained in Document 9303 Machine Readable Travel Documents. A more recent standard covers biometric passports. These contain biometrics to authenticate the identity of travellers. The passport's critical information is stored on a tiny RFID computer chip, much like information stored on smart cards. Like some smart cards, the passport book design calls for an embedded contactless chip that is able to hold digital signature data to ensure the integrity of the passport and the biometric data. Registered codes Both ICAO and IATA have their own airport and airline code systems. Airport codes ICAO uses 4-letter airport codes (vs. IATA's 3-letter codes). 
The ICAO code is based on the region and country of the airport—for example, Charles de Gaulle Airport has an ICAO code of LFPG, where L indicates Southern Europe; F, France; and PG, Paris de Gaulle, while Orly Airport has the code LFPO (the third letter sometimes refers to the particular flight information region (FIR), or the last two letters may be arbitrary). In most parts of the world, ICAO and IATA codes are unrelated; for example, Charles de Gaulle Airport has an IATA code of CDG. However, the location prefix for the continental United States is K, and ICAO codes there are usually the IATA code with this prefix. For example, the ICAO code for Los Angeles International Airport is KLAX. Canada follows a similar pattern, where a prefix of C is usually added to an IATA code to create the ICAO code. For example, Calgary International Airport is YYC or CYYC. (In contrast,
IMCO's first task was to update that convention; the resulting 1960 convention was subsequently recast and updated in 1974, and it is that convention that has since been modified and updated to adapt to changes in safety requirements and technology. When IMCO began its operations in 1959, certain other pre-existing conventions were brought under its aegis, most notably the International Convention for the Prevention of Pollution of the Sea by Oil (OILPOL) 1954. The first meetings of the newly formed IMCO were held in London in 1959. Throughout its existence IMCO, renamed the IMO in 1982, has continued to produce new and updated conventions across a wide range of maritime issues, covering not only safety of life and marine pollution but also safe navigation, search and rescue, wreck removal, tonnage measurement, liability and compensation, ship recycling, the training and certification of seafarers, and piracy. More recently, SOLAS has been amended to bring an increased focus on maritime security through the International Ship and Port Facility Security (ISPS) Code. The IMO has also increased its focus on smoke emissions from ships. In January 1959, IMO began to maintain and promote the 1954 OILPOL Convention. Under the guidance of IMO, the convention was amended in 1962, 1969, and 1971. Torrey Canyon As the oil trade and industry developed, many people in the industry began to recognise a need for further improvements regarding oil pollution prevention at sea. This became increasingly apparent in 1967, when the tanker Torrey Canyon spilled 120,000 tons of crude oil after running aground while entering the English Channel. The Torrey Canyon grounding was the largest oil pollution incident recorded up to that time. This incident prompted a series of new conventions. Maritime pollution convention IMO held an emergency session of its Council to deal with the need to readdress regulations pertaining to maritime pollution. 
In 1969, the IMO Assembly decided to host an international gathering in 1973 dedicated to this issue. The goal was to develop an international agreement for controlling general environmental contamination by ships at sea. During the next few years IMO brought to the forefront a series of measures designed to prevent large ship accidents and to minimise their effects. It also detailed how to deal with the environmental threat caused by routine ship duties such as the cleaning of oil cargo tanks or the disposal of engine-room wastes; by tonnage, such operational pollution was a bigger problem than accidental pollution. The most significant thing to come out of this conference was the International Convention for the Prevention of Pollution from Ships, 1973 (MARPOL). It covers not only accidental and operational oil pollution but also different types of pollution by chemicals, goods in packaged form, sewage, garbage and air pollution. The original MARPOL was signed on 17 February 1973, but did not come into force due to lack of ratifications. The current convention is a combination of the 1973 Convention and the 1978 Protocol. It entered into force on 2 October 1983. As of May 2013, 152 states, representing 99.2 per cent of the world's shipping tonnage, are involved in the convention. In 1983 the IMO established the World Maritime University in Malmö, Sweden. Headquarters The IMO headquarters are located in a large purpose-built building facing the River Thames on the Albert Embankment, in Lambeth, London. The organisation moved into its new headquarters in late 1982, with the building being officially opened by Queen Elizabeth II on 17 May 1983. The architects of the building were Douglass Marriott, Worby & Robinson. The front of the building is dominated by a seven-metre high, ten-tonne bronze sculpture of the bow of a ship, with a lone seafarer maintaining a look-out. 
The previous headquarters of IMO were at 101 Piccadilly (now the home of the Embassy of Japan), prior to that at 22 Berners Street in Fitzrovia and originally in Chancery Lane. Membership To become a member of the IMO, a state ratifies a multilateral treaty known as the Convention on the International Maritime Organization. As of 2020, there are 174 member states of the IMO, which includes 173 of the UN member states plus the Cook Islands. The first state to ratify the convention was Canada in 1948. The three most recent members to join were Armenia and Nauru (which became IMO members in January and May 2018, respectively) and Botswana (who joined the IMO in October 2021). These are the current members with the year they joined: Albania (1993) Algeria (1963) Angola (1977) Antigua and Barbuda (1986) Argentina (1953) Armenia (2018) Australia (1952) Austria (1975) Azerbaijan (1995) Bahamas (1976) Bahrain (1976) Bangladesh (1976) Barbados (1970) Belarus (2016) Belgium (1951) Belize (1990) Benin (1980) Bolivia (1987) Bosnia and Herzegovina (1993) Botswana (2021) Brazil (1963) Brunei Darussalam (1984) Bulgaria (1960) Cabo Verde (1976) Cambodia (1961) Cameroon (1961) Canada (1948) Chile (1972) China (1973) Colombia (1974) Comoros (2001) Congo (1975) Cook Islands (2008) Costa Rica (1981) Côte d'Ivoire (1960) Croatia (1992) Cuba (1966) Cyprus (1973) Czechia (1993) Democratic People's Republic of Korea (1986) Democratic Republic of the Congo (1973) Denmark (1959) Djibouti (1979) Dominica (1979) Dominican Republic (1953) Ecuador (1956) Egypt (1958) El Salvador (1981) Equatorial Guinea (1972) Eritrea (1993) Estonia (1992) Ethiopia (1975) Fiji (1983) Finland (1959) France (1952) Gabon (1976) Gambia (1979) Georgia (1993) Germany (1959) Ghana (1959) Greece (1958) Grenada (1998) Guatemala (1983) Guinea (1975) Guinea-Bissau (1977) Guyana (1980) Haiti (1953) Honduras (1954) Hungary (1970) Iceland (1960) India (1959) Indonesia (1961) Iran (1958) Iraq (1973) Ireland (1951) Israel 
(1952) Italy (1957) Jamaica (1976) Japan (1958) Jordan (1973) Kazakhstan (1994) Kenya (1973) Kiribati (2003) Kuwait (1960) Latvia (1993) Lebanon (1966) Liberia (1959) Libya (1970) Lithuania (1995) Luxembourg (1991) Madagascar (1961) Malawi (1989) Malaysia (1971) Maldives (1967) Malta (1966) Marshall Islands (1998) Mauritania (1961) Mauritius (1978) Mexico (1954) Monaco (1989) Mongolia (1996) Montenegro (2006) Morocco (1962) Mozambique (1979) Myanmar (1951) Namibia (1994) Nauru (2018) Nepal (1979) Netherlands (1949) New Zealand (1960) Nicaragua (1982) Nigeria (1962) North Macedonia (1993) Norway (1958) Oman (1974) Pakistan (1958) Palau (2011) Panama (1958) Papua New Guinea (1976) Paraguay (1993) Peru (1968) Philippines (1964) Poland (1960) Portugal (1976) Qatar (1977) Republic of Korea (1962) Republic of Moldova (2001) Romania (1965) Russian Federation (1958) Saint Kitts and Nevis (2001) Saint Lucia (1980) Saint Vincent and the Grenadines (1981) Samoa (1996) San Marino (2002) São Tomé and Príncipe (1990) Saudi Arabia (1969) Senegal (1960) Serbia (2000) Seychelles (1978) Sierra Leone (1973) Singapore (1966) Slovakia (1993) Slovenia (1993) Solomon Islands (1988) Somalia (1978) South Africa (1995) Spain (1962) Sri Lanka (1972) Sudan (1974) Suriname (1976) Sweden (1959) Switzerland (1955) Syria (1963) Tanzania (1974) Thailand (1973) Timor-Leste (2005) Togo (1983) Tonga (2000) Trinidad and Tobago (1965) Tunisia (1963) Turkey (1958) Turkmenistan (1993) Tuvalu (2004) Uganda (2009) Ukraine (1994) United Arab Emirates (1980) United Kingdom (1949) United States of America (1950) Uruguay (1968) Vanuatu (1986) Venezuela (1975) Viet Nam (1984) Yemen (1979) Zambia (2014) Zimbabwe (2005) The three associate members of the IMO are the Faroe Islands, Hong Kong and Macao. In 1961, the territories of Sabah and Sarawak, which had been included through the participation of United Kingdom, became joint associate members. In 1963 they became part of Malaysia. 
Most UN member states that are not members of IMO are landlocked countries. These include Afghanistan, Andorra, Bhutan, Burkina Faso, Burundi, Central African Republic, Chad, Kyrgyzstan, Laos, Lesotho, Liechtenstein, Mali, Niger, Rwanda, South Sudan, Swaziland, Tajikistan and Uzbekistan. However, the Federated States of Micronesia, an island nation in the Pacific Ocean, is also a non-member, as is Taiwan, which is likewise not a member of the UN. Structure The IMO consists of an Assembly, a Council and five main Committees: the Maritime Safety Committee; the Marine Environment Protection Committee; the Legal Committee; the Technical Co-operation Committee and the Facilitation Committee. A number of Sub-Committees support the work of the main technical committees. Legal instruments IMO is the source of approximately 60 legal instruments that guide the regulatory development of its member states to improve safety at sea, facilitate trade among seafaring states and protect the maritime environment. The best known is the International Convention for the Safety of Life at Sea (SOLAS), along with the International Convention on Oil Pollution Preparedness, Response and Co-operation (OPRC). Others include the International Oil Pollution Compensation Funds (IOPC). It also functions as a depositary of yet-to-be-ratified treaties, such as the International Convention on Liability and Compensation for Damage in Connection with the Carriage of Hazardous and Noxious Substances by Sea, 1996 (HNS Convention) and the Nairobi International Convention on the Removal of Wrecks (2007). IMO regularly enacts regulations, which are broadly enforced by national and local maritime authorities in member countries.
International Labour Conference (ILC), began on 29 October 1919 at the Pan American Union Building in Washington, D.C. and adopted the first six International Labour Conventions, which dealt with hours of work in industry, unemployment, maternity protection, night work for women, minimum age, and night work for young persons in industry. The prominent French socialist Albert Thomas became its first director-general. Despite open disappointment and sharp critique, the revived International Federation of Trade Unions (IFTU) quickly adapted itself to this mechanism. The IFTU increasingly oriented its international activities around the lobby work of the ILO. At the time of establishment, the U.S. government was not a member of ILO, as the US Senate rejected the covenant of the League of Nations, and the United States could not join any of its agencies. Following the election of Franklin Delano Roosevelt to the U.S. presidency, the new administration made renewed efforts to join the ILO without league membership. On 19 June 1934, the U.S. Congress passed a joint resolution authorizing the president to join ILO without joining the League of Nations as a whole. On 22 June 1934, the ILO adopted a resolution inviting the U.S. government to join the organization. On 20 August 1934, the U.S. government responded positively and took its seat at the ILO. Wartime and the United Nations During the Second World War, when Switzerland was surrounded by German troops, ILO director John G. Winant made the decision to leave Geneva. In August 1940, the government of Canada officially invited the ILO to be housed at McGill University in Montreal. Forty staff members were transferred to the temporary offices and continued to work from McGill until 1948. The ILO became the first specialized agency of the United Nations system after the demise of the league in 1946. Its constitution, as amended, includes the Declaration of Philadelphia (1944) on the aims and purposes of the organization. 
Cold War era Beginning in the late 1950s the organization was under pressure to make provisions for the potential membership of ex-colonies which had become independent; in the Director-General's report of 1963 the needs of the potential new members were first recognized. The tensions produced by these changes in the world environment negatively affected the established politics within the organization, and they were the precursor to the organization's eventual problems with the USA. In July 1970, the United States withdrew 50% of its financial support to the ILO following the appointment of an assistant director-general from the Soviet Union. This appointment (by the ILO's British director-general, C. Wilfred Jenks) drew particular criticism from AFL–CIO president George Meany and from Congressman John E. Rooney. However, the funds were eventually paid. On 12 June 1975, the ILO voted to grant the Palestine Liberation Organization observer status at its meetings. Representatives of the United States and Israel walked out of the meeting. The U.S. House of Representatives subsequently decided to withhold funds. The United States gave notice of full withdrawal on 6 November 1975, stating that the organization had become politicized. The United States also suggested that representation from communist countries was not truly "tripartite"—including government, workers, and employers—because of the structure of these economies. The withdrawal became effective on 1 November 1977. The United States returned to the organization in 1980 after extracting some concessions from the organization. It was partly responsible for the ILO's shift away from a human rights approach and towards support for the Washington Consensus. Economist Guy Standing wrote "the ILO quietly ceased to be an international body attempting to redress structural inequality and became one promoting employment equity". In 1981, the government of Poland declared martial law. 
Martial law interrupted the activities of Solidarność, and many of its leaders and members were detained. The ILO Committee on Freedom of Association filed a complaint against Poland at the 1982 International Labour Conference. A Commission of Inquiry established to investigate found Poland had violated ILO Conventions No. 87 on freedom of association and No. 98 on trade union rights, which the country had ratified in 1957. The ILO and many other countries and organizations put pressure on the Polish government, which finally gave legal status to Solidarność in 1989. During that same year, a roundtable discussion between the government and Solidarność agreed on terms for the relegalization of the organization under ILO principles. The government also agreed to hold the first free elections in Poland since the Second World War. Offices ILO headquarters The ILO is headquartered in Geneva, Switzerland. In its first months of existence in 1919, its offices were located in London, only to move to Geneva in the summer of 1920. The first seat in Geneva was on the Pregny hill in the Ariana estate, in the building that used to host the Thudicum boarding school and is currently the headquarters of the International Committee of the Red Cross. As the organization grew, the Office relocated to a purpose-built headquarters by the shores of Lake Léman, designed by Georges Epitaux and inaugurated in 1926 (currently the seat of the World Trade Organization). During the Second World War the Office was temporarily relocated to McGill University in Montreal, Canada. The current seat of the ILO's headquarters is located on the Pregny hill, not far from its initial seat. The building, a biconcave rectangular block designed by Eugène Beaudoin, Pier Luigi Nervi and Alberto Camenzind, was purpose-built between 1969 and 1974 in a severe rationalist style and, at the time of construction, constituted the largest administrative building in Switzerland. 
Regional offices Regional Office for Africa, in Abidjan, Côte d'Ivoire Regional Office for Asia and the Pacific, in Bangkok, Thailand Regional Office for Europe and Central Asia, in Geneva, Switzerland Regional Office for Latin America and the Caribbean, in Lima, Peru Regional Office for the Arab States, in Beirut, Lebanon Sub-regional offices Called "Decent Work Technical Support Teams (DWT)", they provide technical support to the work of a number of countries under their area of competence. DWT for North Africa, in Cairo, Egypt DWT for West Africa, in Dakar, Senegal DWT for Eastern and Southern Africa, in Pretoria, South Africa DWT for Central Africa, in Yaoundé, Cameroon DWT for the Arab States, in Beirut, Lebanon DWT for South Asia, in New Delhi, India DWT for East and South-East Asia and the Pacific, in Bangkok, Thailand DWT for Central and Eastern Europe, in Budapest, Hungary DWT for Eastern Europe and Central Asia, in Moscow, Russia DWT for the Andean Countries, in Lima, Peru DWT for the Caribbean Countries, in Port of Spain, Trinidad and Tobago DWT for Central American Countries, in San José, Costa Rica DWT for Countries of the South Cone of Latin America, in Santiago, Chile Country and liaison offices In Africa: Abidjan, Abuja, Addis Ababa, Algiers, Antananarivo, Cairo, Dakar, Dar es Salaam, Harare, Kinshasa, Lusaka, Pretoria, Yaoundé In the Arab States: Beirut, Doha, Jerusalem In Asia and the Pacific: Bangkok, Beijing, Colombo, Dhaka, Hanoi, Islamabad, Jakarta, Kabul, Kathmandu, Manila, New Delhi, Suva, Tokyo, Yangon In Europe and Central Asia: Ankara, Berlin, Brussels, Budapest, Lisbon, Madrid, Moscow, Paris, Rome In the Americas: Brasilia, Buenos Aires, Mexico City, New York, Lima, Port-of-Spain, San José, Santiago, Washington Programmes Labour statistics The ILO is a major provider of labour statistics. Labour statistics are an important tool for its member states to monitor their progress toward improving labour standards. 
As part of its statistical work, the ILO maintains several databases, covering 11 major data series for over 200 countries. In addition, the ILO publishes a number of compilations of labour statistics, such as the Key Indicators of Labour Markets (KILM). KILM covers 20 main indicators on labour participation rates, employment, unemployment, educational attainment, labour cost, and economic performance. Many of these indicators have been prepared by other organizations. For example, the Division of International Labour Comparisons of the U.S. Bureau of Labor Statistics prepares the hourly compensation in manufacturing indicator. The U.S. Department of Labor also publishes a yearly report containing a List of Goods Produced by Child Labor or Forced Labor, issued by the Bureau of International Labor Affairs. The December 2014 updated edition of the report listed a total of 74 countries and 136 goods. Training and teaching units The International Training Centre of the International Labour Organization (ITCILO) is based in Turin, Italy. Together with the University of Turin Department of Law, the ITC offers training for ILO officers and secretariat members, as well as educational programmes. The ITC offers more than 450 training and educational programmes and projects every year for some 11,000 people around the world. For instance, the ITCILO offers a Master of Laws programme in management of development, which aims to train specialists in the field of cooperation and development. Child labour The term child labour is often defined as work that deprives children of their childhood, potential, and dignity, and that is harmful to their physical and mental development. Child labour refers to work that is mentally, physically, socially or morally dangerous and harmful to children. 
Further, it can involve interfering with their schooling by depriving them of the opportunity to attend school, obliging them to leave school prematurely, or requiring them to attempt to combine school attendance with excessively long and heavy work. In its most extreme forms, child labour involves children being enslaved, separated from their families, exposed to serious hazards and illnesses and left to fend for themselves on the streets of large cities – often at a very early age. Whether or not particular forms of "work" can be called child labour depends on the child's age, the type and hours of work performed, the conditions under which it is performed and the objectives pursued by individual countries. The answer varies from country to country, as well as among sectors within countries. ILO's response to child labour The ILO's International Programme on the Elimination of Child Labour (IPEC) was created in 1992 with the overall goal of the progressive elimination of child labour, which was to be achieved through strengthening the capacity of countries to deal with the problem and promoting a worldwide movement to combat child labour. The IPEC currently has operations in 88 countries, with an annual expenditure on technical cooperation projects that reached over US$61 million in 2008. It is the largest programme of its kind globally and the biggest single operational programme of the ILO. The number and range of the IPEC's partners have expanded over the years and now include employers' and workers' organizations, other international and government agencies, private businesses, community-based organizations, NGOs, the media, parliamentarians, the judiciary, universities, religious groups and children and their families. The IPEC's work to eliminate child labour is an important facet of the ILO's Decent Work Agenda. 
Child labour prevents children from acquiring the skills and education they need for a better future. Exceptions in indigenous communities Because of different cultural views involving labour, the ILO developed a series of culturally sensitive mandates, including Convention Nos. 169, 107, 138, and 182, to protect indigenous culture, traditions, and identities. Convention Nos. 138 and 182 lead in the fight against child labour, while Nos. 107 and 169 promote the rights of indigenous and tribal peoples and protect their right to define their own developmental priorities. In many indigenous communities, parents believe children learn important life lessons through the act of work and through participation in daily life. Working is seen as a learning process preparing children for the future tasks they will eventually have to perform as adults. It is a belief that the family's and the child's well-being and survival are a shared responsibility among members of the whole family. They also see work as an intrinsic part of their child's developmental process. While these attitudes toward child work remain, many children and parents from indigenous communities still highly value education. Issues Forced labour The ILO has considered the fight against forced labour to be one of its main priorities. During the interwar years, the issue was mainly considered a colonial phenomenon, and the ILO's concern was to establish minimum standards protecting the inhabitants of colonies from the worst abuses committed by economic interests. After 1945, the goal became to set a uniform and universal standard, determined by the higher awareness gained during World War II of politically and economically motivated systems of forced labour, but debates were hampered by the Cold War and by exemptions claimed by colonial powers. 
Since the 1960s, declarations of labour standards as a component of human rights have been weakened by governments of postcolonial countries claiming a need to exercise extraordinary powers over labour in their role as emergency regimes promoting rapid economic development. In June 1998 the International Labour Conference adopted a Declaration on Fundamental Principles and Rights at Work and its Follow-up, which obligates member states to respect, promote and realize freedom of association and the right to collective bargaining, the elimination of all forms of forced or compulsory labour, the effective abolition of child labour, and the elimination of discrimination in respect of employment and occupation. With the adoption of the declaration, the ILO created the InFocus Programme on Promoting the Declaration, which is responsible for the reporting processes and technical cooperation activities associated with the declaration; it also carries out awareness-raising, advocacy and knowledge functions. In November 2001, following the publication of the InFocus Programme's first global report on forced labour, the ILO's governing body created a special action programme to combat forced labour (SAP-FL), as part of broader efforts to promote the 1998 Declaration on Fundamental Principles and Rights at Work and its Follow-up. Since its inception, the SAP-FL has focused on raising global awareness of forced labour in its different forms and mobilizing action against its manifestations. Several thematic and country-specific studies and surveys have since been undertaken on such diverse aspects of forced labour as bonded labour, human trafficking, forced domestic work, rural servitude, and forced prison labour. In 2013, the SAP-FL was integrated into the ILO's Fundamental Principles and Rights at Work Branch (FUNDAMENTALS), bringing together the fight against forced and child labour and working in the context of Alliance 8.7. 
One major tool to fight forced labour was the adoption of the ILO Forced Labour Protocol by the International Labour Conference in 2014. It received its second ratification in 2015 and entered into force on 9 November 2016. The new protocol brings the existing ILO Convention 29 on Forced Labour, adopted in 1930, into the modern era to address practices such as human trafficking. The accompanying Recommendation 203 provides technical guidance on its implementation. In 2015, the ILO launched a global campaign to end modern slavery, in partnership with the International Organization of Employers (IOE) and the International Trade Union Confederation (ITUC). The 50 for Freedom campaign aims to mobilize public support and encourage countries to ratify the ILO's Forced Labour Protocol. Minimum wage law To protect workers' right to a fixed minimum wage, the ILO created the Minimum Wage-Fixing Machinery Convention, 1928; the Minimum Wage Fixing Machinery (Agriculture) Convention, 1951; and the Minimum Wage Fixing Convention,
|
is headquartered in Geneva, Switzerland, with around 40 field offices around the world, and employs some 3,381 staff across 107 nations, of whom 1,698 work in technical cooperation programmes and projects. The ILO's labour standards are aimed at ensuring accessible, productive, and sustainable work worldwide in conditions of freedom, equity, security and dignity. They are set forth in 189 conventions and treaties, of which eight are classified as fundamental according to the 1998 Declaration on Fundamental Principles and Rights at Work; together they protect freedom of association and the effective recognition of the right to collective bargaining, the elimination of forced or compulsory labour, the abolition of child labour, and the elimination of discrimination in respect of employment and occupation. The ILO is a major contributor to international labour law. Within the UN system the organization has a unique tripartite structure: all standards, policies, and programmes require discussion and approval from the representatives of governments, employers, and workers. This framework is maintained in the ILO's three main bodies: the International Labour Conference, which meets annually to formulate international labour standards; the Governing Body, which serves as the executive council and decides the agency's policy and budget; and the International Labour Office, the permanent secretariat that administers the organization and implements activities. The secretariat is led by the Director-General, Guy Ryder of the United Kingdom, who was elected by the Governing Body in 2012. In 1969, the ILO received the Nobel Peace Prize for improving fraternity and peace among nations, pursuing decent work and justice for workers, and providing technical assistance to developing nations.
In 2019, the organization convened the Global Commission on the Future of Work, whose report made ten recommendations for governments to meet the challenges of the 21st-century labour environment; these include a universal labour guarantee, social protection from birth to old age and an entitlement to lifelong learning. With its focus on international development, the ILO is a member of the United Nations Development Group, a coalition of UN organizations aimed at helping meet the Sustainable Development Goals. Governance, organization, and membership Unlike other United Nations specialized agencies, the International Labour Organization (ILO) has a tripartite governing structure that brings together governments, employers, and workers of 187 member States, to set labour standards, develop policies and devise programmes promoting decent work for all women and men. The structure is intended to ensure the views of all three groups are reflected in ILO labour standards, policies, and programmes, though governments have twice as many representatives as the other two groups. Governing body The Governing Body is the executive body of the International Labour Organization. It meets three times a year, in March, June and November. It takes decisions on ILO policy, decides the agenda of the International Labour Conference, adopts the draft Programme and Budget of the Organization for submission to the Conference, elects the Director-General, requests information from the member states concerning labour matters, appoints commissions of inquiry and supervises the work of the International Labour Office. The Governing Body is composed of 56 titular members (28 governments, 14 employers and 14 workers) and 66 deputy members (28 governments, 19 employers and 19 workers). Ten of the titular government seats are permanently held by States of chief industrial importance: Brazil, China, France, Germany, India, Italy, Japan, the Russian Federation, the United Kingdom and the United States.
The other Government members are elected by the Conference every three years (the last elections were held in June 2017). The Employer and Worker members are elected in their individual capacity. India assumed the chairmanship of the Governing Body of the International Labour Organization in 2020; Apurva Chandra, Secretary (Labour and Employment), was elected Chairperson of the Governing Body of the ILO for the period October 2020 – June 2021. Director-General The current Director-General, Guy Ryder, was elected by the ILO Governing Body in October 2012 and re-elected for a second five-year term in November 2016. The list of the Directors-General of the ILO since its establishment in 1919 is as follows: International Labour Conference Once a year, the ILO organises the International Labour Conference in Geneva to set the broad policies of the ILO, including conventions and recommendations. Also known as the "international parliament of labour", the conference makes decisions about the ILO's general policy, work programme and budget and also elects the Governing Body. Each member state is represented by a delegation: two government delegates, an employer delegate, a worker delegate and their respective advisers. All of them have individual voting rights, and all votes are equal, regardless of the population of the delegate's member State. The employer and worker delegates are normally chosen in agreement with the most representative national organizations of employers and workers. Usually, the workers' and employers' delegates coordinate their voting, but delegates are not required to vote in blocs; all have the same rights and can express themselves freely and vote as they wish. This diversity of viewpoints does not prevent decisions being adopted by very large majorities or unanimously. Heads of State and prime ministers also participate in the Conference.
International organizations, both governmental and others, also attend, but as observers. The 109th session of the International Labour Conference was delayed from 2020 to May 2021 and was held online because of the COVID-19 pandemic. The first meeting was on 20 May 2021 in Geneva for the election of its officers. The next sittings are in June, November and December. Membership The ILO has 187 state members. 186 of the 193 member states of the United Nations plus the Cook Islands are members of the ILO. The UN member states which are not members of the ILO are Andorra, Bhutan, Liechtenstein, Micronesia, Monaco, Nauru, and North Korea. The ILO constitution permits any member of the UN to become a member of the ILO. To gain membership, a nation must inform the director-general that it accepts all the obligations of the ILO constitution. Other states can be admitted by a two-thirds vote of all delegates, including a two-thirds vote of government delegates, at any ILO General Conference. The Cook Islands, a non-UN state, joined in June 2015. Members of the ILO under the League of Nations automatically became members when the organization's new constitution came into effect after World War II. Position within the UN The ILO is a specialized agency of the United Nations (UN). As with other UN specialized agencies (or programmes) working on international development, the ILO is also a member of the United Nations Development Group. Normative function Conventions Through July 2018, the ILO had adopted 189 conventions. If these conventions are ratified by enough governments, they come into force. However, ILO conventions are considered international labour standards regardless of ratification. When a convention comes into force, it creates a legal obligation for ratifying nations to apply its provisions. Every year the International Labour Conference's Committee on the Application of Standards examines a number of alleged breaches of international labour standards.
Governments are required to submit reports detailing their compliance with the obligations of the conventions they have ratified. Conventions that have not been ratified by member states have the same legal force as recommendations. In 1998, the 86th International Labour Conference adopted the Declaration on Fundamental Principles and Rights at Work. This declaration contains four fundamental policies: the right of workers to associate freely and bargain collectively; the end of forced and compulsory labour; the end of child labour; and the end of unfair discrimination among workers. The ILO asserts that its members have an obligation to work towards fully respecting these principles, embodied in the relevant ILO conventions. The ILO conventions that embody the fundamental principles have now been ratified by most member states. Protocols This device is employed for making conventions more flexible or for amplifying obligations by amending or adding provisions on different points. Protocols are always linked to a convention; although they are international treaties, they do not exist on their own. As with conventions, protocols can be ratified. Recommendations Recommendations do not have the binding force of conventions and are not subject to ratification. Recommendations may be adopted at the same time as conventions to supplement the latter with additional or more detailed provisions. In other cases recommendations may be adopted separately and may address issues separate from particular conventions. History Origins While the ILO was established as an agency of the League of Nations following World War I, its founders had made great strides in social thought and action before 1919. The core members all knew one another from earlier private professional and ideological networks, in which they exchanged knowledge, experiences, and ideas on social policy.
Prewar "epistemic communities", such as the International Association for Labour Legislation (IALL), founded in 1900, and political networks, such as the socialist Second International, were a decisive factor in the institutionalization of international labour politics. In the post–World War I euphoria, the idea of a "makeable society" was an important catalyst behind the social engineering of the ILO architects. As a new discipline, international labour law became a useful instrument for putting social reforms into practice. The utopian ideals of the founding members—social justice and the right to decent work—were changed by diplomatic and political compromises made at the Paris Peace Conference of 1919, showing the ILO's balance between idealism and pragmatism. Over the course of the First World War, the international labour movement proposed a comprehensive programme of protection for the working classes, conceived as compensation for labour's support during the war. Post-war reconstruction and the protection of labour unions occupied the attention of many nations during and immediately after World War I. In Great Britain, the Whitley Commission, a subcommittee of the Reconstruction Commission, recommended in its July 1918 Final Report that "industrial councils" be established throughout the world. The British Labour Party had issued its own reconstruction programme in the document titled Labour and the New Social Order. In February 1918, the third Inter-Allied Labour and Socialist Conference (representing delegates from Great Britain, France, Belgium and Italy) issued its report, advocating an international labour rights body, an end to secret diplomacy, and other goals. And in December 1918, the American Federation of Labor (AFL) issued its own distinctively apolitical report, which called for the achievement of numerous incremental improvements via the collective bargaining process. 
IFTU Bern Conference As the war drew to a close, two competing visions for the post-war world emerged. The first was offered by the International Federation of Trade Unions (IFTU), which called for a meeting in Bern, Switzerland, in July 1919. The Bern meeting would consider both the future of the IFTU and the various proposals which had been made in the previous few years. The IFTU also proposed including delegates from the Central Powers as equals. Samuel Gompers, president of the AFL, boycotted the meeting, wanting the Central Powers delegates in a subservient role as an admission of guilt for their countries' role in bringing about war. Instead, Gompers favoured a meeting in Paris which would consider President Woodrow Wilson's Fourteen Points only as a platform. Despite the American boycott, the Bern meeting went ahead as scheduled. In its final report, the Bern Conference demanded an end to wage labour and the establishment of socialism. If these ends could not be immediately achieved, then an international body attached to the League of Nations should enact and enforce legislation to protect workers and trade unions. Commission on International Labour Legislation Meanwhile, the Paris Peace Conference sought to dampen public support for communism. Subsequently, the Allied Powers agreed that clauses should be inserted into the emerging peace treaty protecting labour unions and workers' rights, and that an international labour body be established to help guide international labour relations in the future. The advisory Commission on International Labour Legislation was established by the Peace Conference to draft these proposals. The Commission met for the first time on 1 February 1919, and Gompers was elected as the chairman. Two competing proposals for an international body emerged during the Commission's meetings. The British proposed establishing an international parliament to enact labour laws which each member of the League would be required to implement. 
Each nation would have two delegates to the parliament, one each from labour and management. An international labour office would collect statistics on labour issues and enforce the new international laws. Philosophically opposed to the concept of an international parliament and convinced that international standards would lower the few protections achieved in the United States, Gompers proposed that the international labour body be authorized only to make recommendations and that enforcement be left up to the League of Nations. Despite vigorous opposition from the British, the American proposal was adopted. Gompers also set the agenda for the draft charter protecting workers' rights. The Americans made 10 proposals. Three were adopted without change: That labour should not be treated as a commodity; that all workers had the right to a wage sufficient to live on; and that women should receive equal pay for equal work. A proposal protecting the freedom of speech, press, assembly, and association was amended to include only freedom of association. A proposed ban on the international shipment of goods made by children under the age of 16 was amended to ban goods made by children under the age of 14. A proposal to require an eight-hour work day was amended to require the eight-hour work day or the 40-hour work week (an exception was made for countries where productivity was low). Four other American proposals were rejected. Meanwhile, international delegates proposed three additional clauses, which were adopted: One or more days for weekly rest; equality of laws for foreign workers; and regular and frequent inspection of factory conditions. The Commission issued its final report on 4 March 1919, and the Peace Conference adopted it without amendment on 11 April. The report became Part XIII of the Treaty of Versailles. 
Interwar period The first annual conference, referred to as the International Labour Conference (ILC), began on 29 October 1919 at the Pan American Union Building in Washington, D.C. and adopted the first six International Labour Conventions, which dealt with hours of work in industry, unemployment, maternity protection, night work for women, minimum age, and night work for young persons in industry. The prominent French socialist Albert Thomas became its first director-general. Despite open disappointment and sharp critique, the revived International Federation of Trade Unions (IFTU) quickly adapted itself to this mechanism. The IFTU increasingly oriented its international activities around the lobby work of the ILO. At the time of establishment, the U.S. government was not a member of the ILO, as the US Senate had rejected the covenant of the League of Nations, and the United States could not join any of its agencies. Following the election of Franklin Delano Roosevelt to the U.S. presidency, the new administration made renewed efforts to join the ILO without League membership. On 19 June 1934, the U.S. Congress passed a joint resolution authorizing the president to join the ILO without joining the League of Nations as a whole. On 22 June 1934, the ILO adopted a resolution inviting the U.S. government to join the organization. On 20 August 1934, the U.S. government responded positively and took its seat at the ILO. Wartime and the United Nations During the Second World War, when Switzerland was surrounded by German troops, ILO director John G. Winant made the decision to leave Geneva. In August 1940, the government of Canada officially invited the ILO to be housed at McGill University in Montreal. Forty staff members were transferred to the temporary offices and continued to work from McGill until 1948. The ILO became the first specialized agency of the United Nations system after the demise of the League in 1946.
Its constitution, as amended, includes the Declaration of Philadelphia (1944) on the aims and purposes of the organization. Cold War era Beginning in the late 1950s the organization was under pressure to make provisions for the potential membership of ex-colonies which had become independent; in the Director-General's report of 1963 the needs of the potential new members were first recognized. The tensions produced by these changes in the world environment negatively affected the established politics within the organization, and they were the precursor to the organization's eventual problems with the USA. In July 1970, the United States withdrew 50% of its financial support to the ILO following the appointment of an assistant director-general from the Soviet Union. This appointment (by the ILO's British director-general, C. Wilfred Jenks) drew particular criticism from AFL–CIO president George Meany and from Congressman John E. Rooney. However, the funds were eventually paid. On 12 June 1975, the ILO voted to grant the Palestine Liberation Organization observer status at its meetings. Representatives of the United States and Israel walked out of the meeting. The U.S. House of Representatives subsequently decided to withhold funds. The United States gave notice of full withdrawal on 6 November 1975, stating that the organization had become politicized. The United States also suggested that representation from communist countries was not truly "tripartite"—including government, workers, and employers—because of the structure of these economies. The withdrawal became effective on 1 November 1977. The United States returned to the organization in 1980 after extracting some concessions from the organization. It was partly responsible for the ILO's shift away from a human rights approach and towards support for the Washington Consensus.
Economist Guy Standing wrote that "the ILO quietly ceased to be an international body attempting to redress structural inequality and became one promoting employment equity". In 1981, the government of Poland declared martial law. It interrupted the activities of Solidarność and detained many of its leaders and members. The ILO Committee on Freedom of Association filed a complaint against Poland at the 1982 International Labour Conference. A Commission of Inquiry established to investigate found that Poland had violated ILO Conventions No. 87 on freedom of association and No. 98 on trade union rights, which the country had ratified in 1957. The ILO and many other countries and organizations put pressure on the Polish government, which finally gave legal status to Solidarność in 1989. During that same year, a roundtable discussion between the government and Solidarność agreed on the terms of relegalization of the organization under ILO principles. The government also agreed to hold the first free elections in Poland since the Second World War. Offices ILO headquarters The ILO is headquartered in Geneva, Switzerland. In its first months of existence in 1919, its offices were located in London, only to move to Geneva in the summer of 1920. The first seat in Geneva was on the Pregny hill in the Ariana estate, in the building that used to host the Thudicum boarding school and is currently the headquarters of the International Committee of the Red Cross. As the organization grew, the Office relocated to a purpose-built headquarters on the shores of Lake Léman, designed by Georges Epitaux and inaugurated in 1926 (currently the seat of the World Trade Organization). During the Second World War the Office was temporarily relocated to McGill University in Montreal, Canada. The current seat of the ILO's headquarters is located on the Pregny hill, not far from its initial seat. The building, a biconcave rectangular
|
a privately held company specializing in medical vocabularies Isomaltooligosaccharide, a mixture of short-chain carbohydrates which has a digestion-resistant property SS Imo, an 1889 ship involved in the Halifax Explosion Idiopathic Massive Osteolysis, a name for Gorham's disease imo.im, a video calling and instant messaging app IMO (in my opinion),
|
International Meteorological Organization Irish Medical Organisation, the main organization for doctors in the Republic of Ireland IMO number, a unique identity number issued to seacraft (pattern "1234567") Imo State, Nigeria Icelandic Meteorological Office Intelligent Medical Objects,
|
Northern dialects of Anglo-Saxon, particularly Northumbrian, which also serve as the basis of Northern English dialects such as those of Yorkshire and Newcastle upon Tyne. Northumbria was within the Danelaw and therefore experienced greater influence from Norse than did the Southern dialects. As the political influence of London grew, the Chancery version of the language developed into a written standard across Great Britain, further progressing in the modern period as Scotland became united with England as a result of the Acts of Union of 1707. English was introduced to Ireland twice—a medieval introduction that led to the development of the now-extinct Yola dialect, and a modern introduction in which Hiberno-English largely replaced Irish as the most widely spoken language during the 19th century, following the Act of Union of 1800. Received Pronunciation (RP) is generally viewed as a 19th-century development and is not reflected in North American English dialects (except the affected Transatlantic accent), which are based on 18th-century English. The establishment of the first permanent English-speaking colony in North America in 1607 was a major step towards the globalisation of the language. British English was only partially standardised when the American colonies were established. Isolated from each other by the Atlantic Ocean, the dialects in England and the colonies began evolving independently. The British colonisation of Australia starting in 1788 brought the English language to Oceania. By the 19th century, the standardisation of British English was more settled than it had been in the previous century, and this relatively well-established English was brought to Africa, Asia and New Zealand. It developed both as the language of English-speaking settlers from Britain and Ireland, and as the administrative language imposed on speakers of other languages in the various parts of the British Empire. 
The first form can be seen in New Zealand English, and the latter in Indian English. In Europe, English received a more central role particularly since 1919, when the Treaty of Versailles was composed not only in French, the common language of diplomacy at the time, but, at the special request of American president Woodrow Wilson, also in English – a major milestone in the globalisation of English. The English-speaking regions of Canada and the Caribbean are caught between historical connections with the UK and the Commonwealth and geographical and economic connections with the U.S. In some respects they tend to follow British standards, whereas in others, especially commercial ones, they follow the U.S. standard. English as a global language Braj Kachru divides the use of English into three concentric circles. The inner circle is the traditional base of English and includes countries such as the United Kingdom and Ireland and the anglophone populations of the former British colonies of the United States, Australia, New Zealand, South Africa, Canada, and various islands of the Caribbean, Indian Ocean, and Pacific Ocean. In the outer circle are those countries where English has official or historical importance ("special significance"). This includes most of the countries of the Commonwealth of Nations (the former British Empire), including populous countries such as India, Pakistan, and Nigeria; and others, such as the Philippines, under the sphere of influence of English-speaking countries. English in this circle is used for official purposes such as business, news broadcasts, schools, and air traffic. Some countries in this circle have made English their national language. Here English may serve as a useful lingua franca between ethnic and language groups. Higher education, the legislature and judiciary, national commerce, and so on, may all be carried out predominantly in English.
The expanding circle refers to those countries where English has no official role but is nonetheless important for certain functions, e.g., international business and tourism. By the twenty-first century, non-native English speakers had come to outnumber native speakers by a factor of three, according to the British Council. Darius Degher, a professor at Malmö University in Sweden, uses the term decentered English to describe this shift, along with attendant changes in what is considered important to English users and learners. The Scandinavian language area and the Netherlands have near-complete bilingualism between their native languages and English as a second language. Elsewhere in Europe, although not universally, knowledge of English is still rather common among non-native speakers. In many cases this leads to accents in which the native languages alter the pronunciation of the spoken English in these countries. Research on English as a lingua franca in the sense of "English in the Expanding Circle" is comparatively recent. Linguists who have been active in this field are Jennifer Jenkins, Barbara Seidlhofer, Christiane Meierkord and Joachim Grzega. English as a lingua franca in foreign language teaching English as an additional language (EAL) is usually based on the standards of either American English or British English, as well as incorporating foreign terms. English as an international language (EIL) is EAL with an emphasis on learning different major dialect forms; in particular, it aims to equip students with the linguistic tools to communicate internationally. Roger Nunn considers different types of competence in relation to the teaching of English as an International Language, arguing that linguistic competence has yet to be adequately addressed in recent considerations of EIL.
Several models of "simplified English" have been suggested for teaching English as a foreign language: Basic English, developed by Charles Kay Ogden (and later also I. A. Richards) in the 1930s, with a recent revival initiated by Bill Templer; Threshold Level English, developed by van Ek and Alexander; Globish, developed by Jean-Paul Nerrière; and Basic Global English, developed by Joachim Grzega. Furthermore, Randolph Quirk and Gabriele Stein thought about a Nuclear English, which, however, has never been fully developed. With reference to the term "Globish", Robert McCrum has used it to mean "English as global language", while Jean-Paul Nerrière uses it for a constructed language. Basic Global English Basic Global English, or BGE, is a concept of global English initiated by the German linguist Joachim Grzega. It evolved from the idea of creating a type of English that can be learned more easily than regular British or American English and that serves as a tool for successful global communication. BGE is guided by creating "empathy and tolerance" between speakers in a global context. This applies to the context of global communication, where different speakers with different mother tongues come together. BGE aims to develop this competence as quickly as possible. English language teaching is almost always tied to a corresponding culture; e.g., learners either deal with American English and therefore with American culture, or British English and therefore with British culture. Basic Global English seeks to solve this problem by creating one collective version of English. Additionally, its advocates promote it as a system suited for self-teaching as well as classroom teaching. BGE is based on 20 elementary grammar rules that provide a certain degree of variation. For example, regularly as well as irregularly formed verbs are accepted. Pronunciation rules are not as strict as in British or American English, so there is a certain degree of variation for the learners.
Exceptions that cannot be used are pronunciations that would be harmful to mutual understanding and therefore minimize the success of communication. Basic Global English is based on a 750-word vocabulary. Additionally, every learner has to acquire the knowledge of 250 additional words. These words can be chosen freely, according to the specific needs and interests of the learner. BGE provides not only basic language skills but also so-called "Basic Politeness Strategies". These include creating a positive atmosphere, accepting an offer with "Yes, please" or refusing with "No, thank you", and small talk topics to choose and to avoid. Basic Global English has been tested in two elementary schools in Germany. For the practical test of BGE, 12 lessons covered half of a school year. After the BGE teaching, students could answer questions about themselves, their family, their hobbies, etc. Additionally, they could form questions themselves about the same topics. Besides that, they also learned the numbers from 1 to 31 and vocabulary including things in their school bag and in their classroom. The students as well as the parents had a positive impression of the project.
444–445) It especially means English words and phrases generally understood throughout the English-speaking world as opposed to localisms.
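The controlled vocabulary at the heart of Basic Global English described above (a 750-word core plus 250 learner-chosen words) lends itself to a simple mechanical check. The sketch below is purely illustrative: the word list is a deliberately tiny stand-in, not the real BGE list, and the function and variable names are my own.

```python
import re

# Tiny stand-in for BGE's 750-word core list (hypothetical sample)
CORE_VOCABULARY = {"i", "you", "have", "a", "dog", "and", "like", "my", "school"}

def coverage(text, vocabulary=CORE_VOCABULARY):
    """Fraction of word tokens in `text` covered by `vocabulary`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    known = sum(1 for t in tokens if t in vocabulary)
    return known / len(tokens)

# A learner's 250 freely chosen words simply extend the core list:
learner_words = {"guitar", "football"}
full_vocabulary = CORE_VOCABULARY | learner_words
```

A text written entirely within the combined list scores 1.0; any word outside it lowers the score, which is one way a teacher or self-learner could verify that material stays inside the controlled variety.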
In Foreign Language & How to Learn One (2005), Trimnell argues that the international version of English is only adequate for communicating basic ideas. For complex discussions and business/technical situations, English is not an adequate communication tool for non-native speakers of the language. Trimnell also asserts that native English-speakers have become "dependent on the language skills of others" by placing their faith in international English. Appropriation theory Some reject both what they call "linguistic imperialism" and David Crystal's theory of the neutrality of English. They argue that the phenomenon of the global spread of English is better understood in the framework of appropriation (e.g., Spichtinger 2000), that is, English used for local purposes around the world. Demonstrators in non-English-speaking countries often use signs in English to convey their demands to TV audiences around the globe, for example. In English-language teaching, Bobda shows how Cameroon has moved away from a mono-cultural, Anglo-centered way of teaching English and has gradually appropriated teaching material to a Cameroonian context. This includes non-Western topics, such as the rule of Emirs, traditional medicine, and polygamy (1997: 225). Kramsch and Sullivan (1996) describe how Western methodology and textbooks have been appropriated to suit local Vietnamese culture. The Pakistani textbook "Primary Stage English" includes lessons such as Pakistan My Country, Our Flag, and Our Great Leader (Malik 1993: 5, 6, 7), which might sound jingoistic to Western ears. Within the native culture, however, establishing a connection between English Language Teaching (ELT), patriotism, and Muslim faith is seen as one of the aims of ELT. The Punjab Textbook Board openly states: "The board ... takes care, through these books to inoculate in the students a love of the Islamic values and awareness to guard the ideological frontiers of your [the students] home lands." (Punjab Text Book Board 1997).
Many Englishes Many difficult choices must be made if further standardisation of English is pursued. These include whether to adopt a current standard or to move towards a more neutral but artificial one. A true International English might supplant both current American and British English as the variety of English for international communication, leaving them as local dialects; alternatively, it might rise from a merger of General American and standard British English, with an admixture of other varieties of English, and generally replace all of them. We may, in due course, all need to be in control of two standard Englishes—the one which gives us our national and local identity, and the other which puts us in touch with the rest of the human race. In effect, we may all need to become bilingual in our own language. — David Crystal (1988: p. 265) This is the situation long faced by many users of English who possess a "non-standard" dialect of English as their birth tongue but have also learned to write (and perhaps also speak) a more standard dialect. (This phenomenon is known in linguistics as diglossia.) Many academics publish material in journals requiring different varieties of English and change style and spellings as necessary without great difficulty. As far as spelling is concerned, the differences between American and British usage became noticeable due to the first influential lexicographers (dictionary writers) on each side of the Atlantic. Samuel Johnson's dictionary of 1755 greatly favoured Norman-influenced spellings such as centre and colour; on the other hand, Noah Webster's first guide to American spelling, published in 1783, preferred spellings like center and the Latinate color. The differences in strategy and philosophy between Johnson and Webster are largely responsible for the main division in English spelling that exists today. However, these differences are extremely minor.
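The Johnson/Webster split described above is regular enough at the word level that a simple substitution table can illustrate it. The mapping below is a tiny illustrative sample, not a complete list, and the function name is my own.

```python
import re

# Illustrative sample only; a real converter needs a far larger table
BRITISH_TO_AMERICAN = {
    "centre": "center",
    "colour": "color",
    "favour": "favor",
}

def americanize(text):
    # Substitute whole lowercase words only; all other text passes through
    return re.sub(
        r"[a-z]+",
        lambda m: BRITISH_TO_AMERICAN.get(m.group(0), m.group(0)),
        text,
    )
```

Note that a word like "central" is matched as a whole token and left untouched, which is why whole-word matching, rather than raw substring replacement, is the right design here.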
Spelling is but a small part of the differences between dialects of English, and may not even reflect dialect differences at all (except in phonetically spelled dialogue). International English refers to much more than an agreed spelling pattern. Dual standard Two approaches to International English are the individualistic and inclusive approach and the new dialect approach. The individualistic approach gives control to individual authors to write and spell as they wish (within purported standard conventions) and to accept the validity of differences. The Longman Grammar of Spoken and Written English, published in 1999, is a descriptive study of both American and British English in which each chapter follows individual spelling conventions according to the preference of the main editor of that chapter. The new dialect approach appears in The Cambridge Guide to English Usage (Peters, 2004), which attempts to avoid any language bias and accordingly uses an idiosyncratic international spelling system of mixed American and British forms (but tending to prefer the American English spellings). See also African English Business English Commonwealth English English as a second or foreign language English for specific purposes English-medium education Esperanto Euro English International auxiliary language Translanguaging Notes References Acar, A. (2006). "Models, Norms and Goals for English as an International Language Pedagogy and Task Based Language Teaching and Learning.", The Asian EFL Journal, Volume 8. Issue 3, Article 9. Albu, Rodica (2005). "Using English(es). Introduction to the Study of Present-day English Varieties & Terminological Glossary", 3rd edition. Iasi: Demiurg. Berger, Lutz, Joachim Grzega, and Christian Spannagel, eds. Lernen durch Lehren im Fokus: Berichte von LdL-Einsteigern und LdL-Experten: epubli, 2011. Print. Biber, Douglas; Johansson, Stig; Leech, Geoffrey; Conrad, Susan; Finnegan, Edward (1999). Longman Grammar of Spoken and Written English. 
Harlow, Essex: Pearson Education. . Bobda, Augustin Simo (1997) "Sociocultural Constraints in EFL Teaching in Cameroon." In: Pütz, Martin (ed.) The cultural Context in Foreign Language Teaching. Frankfurt a.M.: Lang. 221–240. Bosso, Rino (2018). “First steps in exploring computer-mediated English as a lingua franca”. In Martin-Rubió, Xavier (ed.). Contextualising English as a lingua franca: from data to insights. Newcastle upon Tyne: Cambridge Scholars, 10–35. Crystal, David (1988). The English Language. London: Penguin. . ————— (1997). English as a Global Language. Cambridge: Cambridge University Press. . Erling, Elizabeth J. (2000). "International/Global/World English: Is a Consensus Possible?", Postgraduate Conference Proceedings, The University of Edinburgh, Department of Applied Linguistics. (Postscript.) Grzega, Joachim (2005), "Reflection on Concepts of English for Europe: British English, American English, Euro-English, Global English", Journal for EuroLinguistiX 2: 44–64 Grzega, Joachim (2005), “Towards Global English via Basic Global English (BGE): Socioeconomic and Pedagogic Ideas for a European and Global Language (with Didactic Examples for Native Speakers of German), Journal for EuroLinguistiX 2: 65–164. (For Basic Global English see also the press releases accessible at the Basic Global English website) Grzega, Joachim, and Marion Schöner. “Basic Global English (BGE) as a Way for Children to Acquire Global Communicative Competence: Report on Elementary School Project.” Journal for EuroLinguistiX 4 (2007): 5–18. Print. Grzega, Joachim. “Globish and Basic Global English (BGE): Two Alternatives for a Rapid Acquisition of Communicative Competence in a Globalized World?” Journal for EuroLinguistiX 3 (2006): 1–13. Print. Grzega, Joachim. “LdL im Englischunterricht an Grund- und Hauptschulen.” Lernen durch Lehren im Fokus: Berichte von LdL-Einsteigern und LdL-Experten. Ed. Lutz Berger, Joachim Grzega, and Christian Spannagel: epubli, 2011. 39–46. Print. 
Grzega, Joachim. “Towards Global English Via Basic Global English (BGE): Socioeconomic and Pedagogic Ideas for a European and Global Language (with Didactic Examples for Native Speakers of German).” Journal for EuroLinguistiX 2 (2005): 65–164. Print. House, Juliane (2002), “Pragmatic Competence in Lingua Franca English”, in: Knapp, Karlfried / Meierkord, Christiane (eds.), Lingua Franca Communication, 245–267, Frankfurt (Main): Peter Lang. Jenkins, Jennifer (2003), World Englishes, London: Routledge. Kachru, Braj (1985), "Standards, Codification and Sociolinguistic Realism", in: Quirk, Randolph (ed.), English in the World, 11–34, Cambridge: Cambridge University Press. Kachru, Braj (1986). The Alchemy of English: The Spread, Functions, and Models of Non-native Englishes. Chicago: University of Illinois Press. Kramsch, Claire and Patricia Sullivan (1996). "Appropriate Pedagogy". ELT Journal 50/3: 199–212. Malik, S.A. (1993). Primary Stage English. Lahore: Tario Brothers. McArthur, T. (1992). The Oxford Companion to the English Language. Oxford: Oxford University Press. ————— (2001). "World English and World Englishes: Trends, tensions, varieties, and standards", Language Teaching Vol. 34, issue 1. Available in PDF format at Cambridge: Language Teaching: Sample article and Learning and Teacher Support Centre: McArthur. ————— (2002). Oxford Guide to World English. Oxford: Oxford University Press. Mechan-Schmidt, Frances. "Basic Instincts: Frances Mechan-Schmidt discovers a new teaching method that reduces English to just a thousand words." The Linguist 48.2 (2009): 18–19. Print. Meierkord, Christiane (1996), Englisch als Medium der interkulturellen Kommunikation: Untersuchungen zum non-native/non-native-speakers-Diskurs, Frankfurt (Main) etc.: Lang. Nerrière, Jean-Paul and Hon, David (2009), Globish The World Over, IGI, Paris. Nerrière in Globish (Video) Ogden, Charles K. (1934), The System of Basic English, New York: Harcourt, Brace & Co.
Paredes, Xoán M. and da Silva Mendes, S. (2002). "The Geography of Languages: a strictly geopolitical issue? The case of 'international English'", Chimera 17:104–112, University College Cork, Ireland (PDF) Peters, Pam (2004). The Cambridge Guide to English Usage. Cambridge: Cambridge University Press. . Phillipson, Robert (1992). Linguistic Imperialism. Oxford: Oxford University Press. . Quirk, Randolph (1981), “International Communication and the Concept of Nuclear English”, in: Smith, Larry E. (ed.), English for Cross-Cultural Communication, 151–165, London: Macmillan. Seidlhofer, Barbara (2004), “Research Perspectives on Teaching English as a Lingua Franca”, Annual Review of Applied Linguistics 24: 209–239. Spichtinger, David (2000). "The Spread of English and its Appropriation." Diplomarbeit zur Erlangung des Magistergrades der Philosophie eingereicht an der Geisteswissenschaftlichen Fakultät
|
mission is "to promote the education of the public in the study of Africa and its languages and cultures". Its operations include seminars, journals, monographs, edited volumes and stimulating scholarship within Africa. Publications The IAI has been involved in scholarly publishing since 1927. Scholars whose work has been published by the institute include Emmanuel Akyeampong, Samir Amin, Karin Barber, Alex de Waal, Patrick Chabal, Mary Douglas, E.E. Evans-Pritchard, Jack Goody, Jane Guyer, Monica Hunter, Bronislaw Malinowski, Z.K. Matthews, D.A. Masolo, Achille Mbembe, Thomas Mofolo, John Middleton, Simon Ottenberg, J.D.Y. Peel, Mamphela Ramphele, Isaac Schapera, Monica Wilson and V.Y. Mudimbe. IAI publications fall into a number of series, notably International African Library and International African Seminars. The International African Library is published from volume 41 (2011) by Cambridge University Press; volumes 7–40 are available from Edinburgh University Press. There are 49 volumes. Archives The archives of the International African Institute are held at the Archives Division of the Library of the London School of Economics. An online catalogue of
these papers is available. History Africa alphabet In 1928, the IAI (then IIALC) published an "Africa Alphabet" to facilitate standardization of Latin-based writing systems for African languages. Prize for African language literature, 1929–50 From April 1929 to 1950, the IAI offered prizes
|
Art and Ideas Inter-American Institute for Global Change Research International African Institute International Association for Identification Israel Aerospace Industries (Ha-Taasiya Ha-Avirit) Islamic Army in Iraq Independent Administrative Institution Intelligent Actuator (International Automation Industry), Japanese
|
IGF-1 expression is required for achieving maximal growth. Gene knockout studies in mice have confirmed this, though other animals are likely to regulate the expression of these genes in distinct ways. While IGF-2 may be primarily fetal in action, it is also essential for development and function of organs such as the brain, liver, and kidney. Factors that are thought to cause variation in the levels of GH and IGF-1 in the circulation include an individual's genetic make-up, the time of day, age, sex, exercise status, stress levels, nutrition level, body mass index (BMI), disease state, race, estrogen status, and xenobiotic intake. IGF-1 is involved in regulating neural development, including neurogenesis, myelination, synaptogenesis, dendritic branching, and neuroprotection after neuronal damage. Increased serum levels of IGF-1 in children have been associated with higher IQ. IGF-1 shapes the development of the cochlea through controlling apoptosis. Its deficit can cause hearing loss. Its serum level also underlies a correlation between short height and reduced hearing ability, particularly around 3–5 years of age and at age 18 (late puberty). IGF receptors The IGFs are known to bind the IGF-1 receptor, the insulin receptor, the IGF-2 receptor, the insulin-related receptor and possibly other receptors. The IGF-1 receptor is the "physiological" receptor—IGF-1 binds to it with significantly higher affinity than it binds the insulin receptor. Like the insulin receptor, the IGF-1 receptor is a receptor tyrosine kinase—meaning the receptor signals by causing the addition of a phosphate group to particular tyrosines. The IGF-2 receptor only binds IGF-2 and acts as a "clearance receptor"—it activates no intracellular signaling pathways, functioning only as an IGF-2 sequestering agent and preventing IGF-2 signaling. Organs and tissues affected by IGF-1 Since many distinct tissue types express the IGF-1 receptor, IGF-1's effects are diverse.
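The affinity difference between the two receptors can be illustrated with the standard single-site binding model, in which fractional occupancy is [L]/([L] + Kd) and a lower dissociation constant Kd means higher affinity. The Kd values below are placeholders chosen for illustration, not measured constants.

```python
def receptor_occupancy(ligand, kd):
    # Single-site binding model: fraction of receptor bound = [L] / ([L] + Kd).
    # Lower Kd (higher affinity) gives more occupancy at the same ligand level.
    return ligand / (ligand + kd)

# Hypothetical dissociation constants (nM), for illustration only:
KD_IGF1R = 1.0    # IGF-1 at its own receptor: tight binding
KD_INSR = 100.0   # IGF-1 at the insulin receptor: much weaker binding

# At the same ligand concentration, the IGF-1 receptor is far more occupied:
occ_igf1r = receptor_occupancy(10.0, KD_IGF1R)
occ_insr = receptor_occupancy(10.0, KD_INSR)
```

The same model also shows why high IGF-1 concentrations can begin to activate the insulin receptor: even a weak-affinity receptor reaches substantial occupancy once the ligand concentration approaches its Kd.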
It acts as a neurotrophic factor, inducing the survival of neurons. It may promote skeletal muscle hypertrophy by inducing protein synthesis and by blocking muscle atrophy. It is protective for cartilage cells and is associated with activation of osteocytes, and thus may be an anabolic factor for bone. Since at high concentrations it is capable of activating the insulin receptor, it can also complement the effects of insulin. Receptors for IGF-1 are found in vascular smooth muscle, while typical receptors for insulin are not. IGF-binding proteins IGF-1 and IGF-2 are regulated by a family of proteins known as the IGF-binding proteins. These proteins help to modulate IGF action in complex ways, both inhibiting IGF action by preventing binding to the IGF-1 receptor and promoting IGF action, possibly by aiding delivery to the receptor and increasing IGF half-life. Currently, there are seven characterized IGF-binding proteins (IGFBP1 to IGFBP7). There is significant data suggesting that IGFBPs play important roles in addition to their ability to regulate IGFs. IGF-1 and IGFBP-3 are GH dependent, whereas
IGFBP-1 is insulin regulated. IGFBP-1 production from the liver is significantly elevated during insulinopenia, while serum levels of bioactive IGF-1 are increased by insulin. Diseases affected by IGF Studies of recent interest show that the insulin/IGF axis plays an important role in aging. Nematodes, fruit-flies, and other
|
factor Independent Games Festival Internet Governance Forum Identity Governance Framework Inoki Genome Federation International Golf Federation International Genetics Federation International Graphical Federation,
|
meaning of 'private citizen' with the modern meaning 'fool' to conclude that the Greeks used the word to say that it is selfish and foolish not to participate in public life. But this is not how the Greeks used the word. It is certainly true that the Greeks valued civic participation and criticized non-participation. Thucydides quotes Pericles' Funeral Oration as saying: "[we] regard... him who takes no part in these [public] duties not as unambitious but as useless". However, neither he nor any other ancient author uses the word "idiot" to describe non-participants, or in a derogatory sense; its most common use was simply for a private citizen or amateur as opposed to a government official, professional, or expert. The derogatory sense came centuries later, and was unrelated to the political meaning. Disability and early classification and nomenclature In 19th- and early 20th-century medicine and psychology, an "idiot" was a person with a very profound intellectual disability. In the early 1900s, Dr. Henry H. Goddard proposed a classification system for intellectual disability based on the Binet-Simon concept of mental age. Individuals with the lowest mental age level (less than three years) were identified as idiots; imbeciles had a mental age of three to seven years, and morons had a mental age of seven to ten years. The term "idiot" was used to refer to people having an IQ below 30. IQ, or intelligence quotient, was originally determined by dividing a person's mental age, as determined by standardized tests, by their actual age. The concept of mental age has fallen into disfavor, though, and IQ is now determined on the basis of statistical distributions. In the obsolete medical classification (ICD-9, 1977), these people were said to have "profound mental retardation" or "profound mental subnormality" with IQ under 20.
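The ratio definition of IQ and the Goddard mental-age bands described above can be written out directly. This is a sketch of the obsolete historical scheme only; the function names are my own, and the conventional scaling by 100 (implicit in the ratio definition) is assumed.

```python
def ratio_iq(mental_age, chronological_age):
    # Classical ratio IQ: mental age divided by chronological age,
    # scaled by 100 by convention. Modern IQ is instead derived from
    # statistical distributions of test scores.
    return 100.0 * mental_age / chronological_age

def goddard_category(mental_age):
    # Obsolete early-20th-century Goddard bands, by mental age in years:
    # under 3 = "idiot", 3-7 = "imbecile", 7-10 = "moron".
    if mental_age < 3:
        return "idiot"
    if mental_age < 7:
        return "imbecile"
    if mental_age <= 10:
        return "moron"
    return "not classified as deficient"
```

For example, a ten-year-old testing at a mental age of five would score a ratio IQ of 50 under this scheme, consistent with the article's note that "idiot" denoted an IQ below 30 on such ratio scales.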
Regional law United States Until 2007, the California Penal Code Section 26 stated that "Idiots" were one of six types of people who are not capable of committing crimes. In 2007 the code was amended to read "persons who are mentally incapacitated." In 2008, Iowa voters passed a measure replacing "idiot, or insane person" in the State's constitution with "person adjudged mentally incompetent." In several U.S. states, "idiots" do not have the right to vote: Kentucky Section 145 Mississippi Article 12, Section 241 Ohio Article V, Section 6 The constitution of the state of Arkansas was amended in the general election of 2008 to, among other things, repeal a provision (Article 3, Section 5) which had until its repeal prohibited "idiots or insane persons" from voting. In literature
A few authors have used "idiot" characters in novels, plays and poetry. Often these characters
|
which decline to present an allegiance to the House of Saud. Wahhabism is also characterized by its lack of interest in social justice, anticolonialism, or economic equality, themes expounded upon by mainstream Islamists. Historically, Wahhabism was state-sponsored and internationally propagated by Saudi Arabia with the help of funding mainly from Saudi petroleum exports, leading to the "explosive growth" of its influence (and subsequently, the influence of Salafism) from the 1970s (a phenomenon often dubbed Petro-Islam). Today, both Wahhabism and Salafism exert their influence worldwide, and they have indirectly contributed to the upsurge of Salafi Jihadism as well. Militant Islamism/Jihadism Qutbism Qutbism is an ideology formulated by Sayyid Qutb, an influential figure of the Muslim Brotherhood during the 1950s and 1960s, which justifies the use of violence in order to advance Islamist goals. Qutbism is marked by two distinct methodological concepts: one is takfirism, which, in the context of Qutbism, indicates the excommunication of fellow Muslims who are deemed apostates, and the other is "offensive Jihad", a concept which promotes violence in the name of Islam against perceived kuffar (infidels). Based on these two concepts, Qutbism promotes engagement against the state apparatus in order to topple its regime. The fusion of Qutbism and the Salafi movement resulted in the development of Salafi jihadism (see below). Qutbism is considered a product of the extreme repression experienced by Qutb and his fellow Muslim Brothers under the Nasser regime, which resulted from the 1954 Muslim Brothers plot to assassinate Nasser. During the repression, thousands of Muslim Brothers were imprisoned, and many of them, including Qutb, were tortured and held in concentration camps.
Under these conditions, Qutb cultivated his Islamist ideology in his seminal work Ma'alim fi-l-Tariq (Milestones), in which he equated the Muslims within the Nasser regime with secularism and the West, and described them as a regression back to jahiliyyah (the period before the advent of Islam). In this context, he allowed the takfir (an unusual practice before its rejuvenation by Qutb) of said Muslims. Although Qutb was executed before the completion of his ideology, his ideas were disseminated and continuously expanded by later generations, among them Abdullah Yusuf Azzam and Ayman al-Zawahiri, who was a student of Qutb's brother Muhammad Qutb and later became a mentor of Osama bin Laden. Al-Zawahiri praised "the purity of Qutb's character and the torment he had endured in prison," and played an extensive role in the normalization of offensive Jihad within Qutbist discourse. Both al-Zawahiri and bin Laden became the core of Jihadist movements that developed exponentially against the backdrop of the late 20th-century geopolitical crises throughout the Muslim world. Salafi Jihadism Salafi jihadism is a term coined by Gilles Kepel in 2002, referring to an ideology which actively promotes and conducts violence and terrorism in pursuit of the establishment of an Islamic state or a new Caliphate (Deneoux, Guilain (June 2002), "The Forgotten Swamp: Navigating Political Islam", Middle East Policy, pp. 69–71). Today, the term is often simplified to Jihadism or Jihadist movement in popular usage, according to Martin Kramer. It is a hybrid ideology between Qutbism, Salafism, Wahhabism and other minor Islamist strains (القطبية الإخوانية والسرورية قاعدة مناهج السلفية التكفيرية, al-Arab Online, retrieved December 4, 2017). Qutbism, as taught by scholars like Abdullah Azzam, provided the political intellectual underpinnings with concepts like takfirism, while Salafism and Wahhabism provided the religious intellectual input.
Salafi Jihadism makes up a tiny minority of contemporary Islamist movements. Distinct characteristics of Salafi Jihadism noted by Robin Wright include the formal process of taking bay'ah (oath of allegiance) to the leader, which is inspired by Wahhabi teaching. Another characteristic is its flexibility in cutting ties with less-popular movements when it is strategically or financially convenient, exemplified by the relations between al-Qaeda and the al-Nusra Front. Other marked developments of Salafi Jihadism include the concepts of the "near enemy" and the "far enemy". "Near enemy" connotes the despotic regime occupying the Muslim society, and the term was coined by Mohammed Abdul-Salam Farag to justify the assassination of Anwar al-Sadat by the Salafi Jihadi organization Egyptian Islamic Jihad (EIJ) in 1981. Later, the concept of the "far enemy", which connotes the West, was introduced and formally declared by al-Qaeda in 1996 (Al Qaeda grows as its leaders focus on the 'near enemy', The National, retrieved December 3, 2017). Salafi Jihadism emerged during the 1980s, when the Soviet Union invaded Afghanistan. Local mujahideen extracted financial, logistical and military support from Saudi Arabia, Pakistan and the United States. Later, Osama bin Laden established al-Qaeda as a transnational Salafi Jihadi organization in 1988 to capitalize on this financial, logistical and military network and to expand its operations. The ideology saw its rise during the 1990s, when the Muslim world experienced numerous geopolitical crises, notably the Algerian Civil War (1991–2002), the Bosnian War (1992–1995), and the First Chechen War (1994–1996). Within these conflicts, political Islam often acted as a mobilizing factor for the local belligerents, who demanded financial, logistical and military support from al-Qaeda in exchange for active proliferation of the ideology.
After the 1998 bombings of US embassies, the September 11 attacks (2001), and the US-led invasions of Afghanistan (2001) and Iraq (2003), Salafi Jihadism gained momentum. However, it was devastated by US counterterrorism operations, culminating in bin Laden's death in 2011. After the Arab Spring (2011) and the subsequent Syrian Civil War (2011–present), the remnants of the al-Qaeda franchise in Iraq restored their capacity, rapidly developing into the Islamic State of Iraq and the Levant and spreading its influence throughout the conflict zones of the MENA region and the globe. History Predecessor movements Some Islamic revivalist movements and leaders pre-dating Islamism include: Ahmad Sirhindi (~1564–1624), who was part of a reassertion of orthodoxy within Islamic mysticism (Taṣawwuf) and was known to his followers as the 'renovator of the second millennium'. It has been said of Sirhindi that he 'gave to Indian Islam the rigid and conservative stamp it bears today' (Qamar-ul Huda (2003), Striving for Divine Union: Spiritual Exercises for Suhraward Sufis, RoutledgeCurzon, pp. 1–4). Ibn Taymiyyah, a Syrian Islamic jurist of the 13th and 14th centuries who is often quoted by contemporary Islamists. Ibn Taymiyya argued against the shirking of Sharia law, was against practices such as the celebration of Muhammad's birthday, and "he believed that those who ask assistance from the grave of the Prophet or saints, are mushrikin (polytheists), someone who is engaged in shirk." Shah Waliullah of India and Muhammad ibn Abd-al-Wahhab of Arabia were contemporaries who met each other while studying in Mecca. Muhammad ibn Abd-al-Wahhab advocated doing away with later accretions like grave worship and getting back to the letter and the spirit of Islam as preached and practiced by Muhammad. He went on to found Wahhabism.
Shah Waliullah was a forerunner of reformist Islamists like Muhammad Abduh, Muhammad Iqbal and Muhammad Asad in his belief that there was "a constant need for new ijtihad as the Muslim community progressed and expanded and new generations had to cope with new problems" and in his interest in the social and economic problems of the poor. Sayyid Ahmad Barelvi was a disciple and successor of Shah Waliullah's son who emphasized the 'purification' of Islam from un-Islamic beliefs and practices. He anticipated modern militant Islamists by leading an extremist, jihadist movement and attempting to create an Islamic state based on the enforcement of Islamic law. He engaged in several wars against the Sikh Empire in Muslim-majority north-western India; after his death, his followers participated in the Indian Rebellion of 1857. After the defeat of the rebellion, some of Shah Waliullah's followers ceased their involvement in military affairs and founded the Dar al-Ulum seminary in 1867 in the town of Deoband. From the school developed the Deobandi movement, which became the largest philosophical movement of traditional Islamic thought on the subcontinent and led to the establishment of thousands of madrasahs throughout modern-day India, Pakistan and Bangladesh. Early history The end of the 19th century saw the dismemberment of most of the Muslim Ottoman Empire by non-Muslim European colonial powers. The empire spent massive sums on Western civilian and military technology to try to modernize and compete with the encroaching European powers, and in the process went deep into debt to them. In this context, the publications of Jamal ad-Din al-Afghani (1837–97), Muhammad Abduh (1849–1905) and Rashid Rida (1865–1935) preached Islamic alternatives to the political, economic, and cultural decline of the empire.
Muhammad Abduh and al-Afghani formed the beginning of the early Islamist movement. The New Encyclopedia of Islam by Cyril Glasse, Rowman and Littlefield, 2001, p. 19; Historical Dictionary of Islam by Ludwig W. Adamec, Scarecrow Press, 2001, p. 233. Abduh's student, Rashid Rida, is widely regarded as one of "the ideological forefathers" of contemporary Islamist movements. The development of Islamism across the Islamic world was spearheaded by three prominent figures in the 1930s: Rashid Rida, an early leader of the Salafiyya movement and publisher of the widely read magazine Al-Manar; Hassan al-Banna, founder of the Egyptian Muslim Brotherhood; and Mustafa al-Siba'i, founder of the Syrian Muslim Brotherhood. Their ideas included the creation of a truly Islamic society under Sharia law and the rejection of taqlid, the blind imitation of earlier authorities, which they believed deviated from the true messages of Islam. Unlike some later Islamists, the early Salafiyya strongly emphasized the restoration of the Caliphate. Sayyid Rashid Rida The crises experienced across the Muslim world after the collapse of the Ottoman Caliphate re-introduced debates over the theory of an alternative Islamic state into the centre of Muslim religious-political thinking of the early 20th century. A combination of events, such as the secularisation of Turkey, the aggressiveness of Western colonial empires, the setbacks to modernist and liberal movements in Egypt, and the Palestinian crisis, propelled this shift. The modern concept of an Islamic state was first articulated by the Syrian-Egyptian Islamic scholar Muhammad Rashid Rida. As circumstances shifted through further Western cultural and imperial inroads, militant Islamists and fundamentalists stepped up to assert Islamic values using Rida's ideas as the chief vehicle, starting from the 1950s. Rashid Rida played a major role in shaping the revolutionary ideology of the early years of the Egyptian Muslim Brotherhood.
Fundamentalism initially became the meeting-ground between the Salafiyya movement and the Wahhabi movement of Saudi Arabia. These movements later drifted apart, with the Salafiyya increasingly represented by activist and revolutionary trends, and Wahhabism by a purist conservatism characterised by political quietism. Rida's Islamic state emphasized the principle of shura, which would be dominated by the 'ulama, who act as the natural representatives of Muslims. The Salafi proponents of the modern Islamic state conceive it as a testing ground for protecting the moral and cultural integrity of the Muslim Ummah. Rashid Rida played a significant role in forming the ideology of the Muslim Brotherhood and other Sunni Islamist movements across the world. In his influential book al-Khilafa aw al-Imama al-'Uzma ("The Caliphate or the Grand Imamate"), Rashid Rida elaborated on the establishment of his proposed "Islamic state", which emphasised the implementation of Sharia as well as the adoption of an Islamic consultation system (shura) that enshrined the leading role of the ulema (Islamic scholars) in political life. This doctrine would become the blueprint of future Islamist movements. Rida believed that a society that properly obeyed Sharia could successfully emerge as an alternative to both capitalism and the disorder of class-based socialism, since such a society would be unsusceptible to their temptations. In Rida's Caliphate, the Khalifa was to be the supreme head, whose role was to govern by supervising the application of Islamic laws. This was to happen through a partnership between the mujtahid ulema and the 'true caliph', who engage in ijtihad by evaluating the Scriptures and govern through shura (consultation). This Khilafa would also be able to revitalise Islamic civilization, restore political and legal independence to the Muslim umma (community of Muslim believers), and cleanse Islam of the heretical influences of Sufism.
Rashid Rida's Islamic political theory would greatly influence many subsequent Islamic revivalist movements across the Arab world. Rida belonged to the last generation of Islamic scholars who were educated entirely within a traditional Islamic system, and he expressed his views in a self-conscious vernacular that owed nothing to the modern West. The Islamist intellectuals who succeeded Rida, such as Hasan al-Banna, did not measure up to his scholarly credentials. The subsequent generations ushered in the advent of the radical thinker Sayyid Qutb, who, in contrast to Rida, did not have the detailed knowledge of the religious sciences needed to address Muslims authoritatively on Sharia. An intellectual rather than a populist, Qutb rejected the West entirely in the most forceful manner, while simultaneously employing Western terminology to substantiate his beliefs and using the classical sources to bolster his subjective approach to the Scriptures. Muhammad Iqbal Muhammad Iqbal was a philosopher, poet and politician in British India who is widely regarded as having inspired Islamic nationalism and the Pakistan Movement in British India. Iqbal is admired as a prominent classical poet by Pakistani, Iranian, Indian and other international scholars of literature. Though Iqbal is best known as an eminent poet, he is also a highly acclaimed "Islamic philosophical thinker of modern times". While studying law and philosophy in England and Germany, Iqbal became a member of the London branch of the All India Muslim League. He came back to Lahore in 1908. While dividing his time between law practice and philosophical poetry, Iqbal remained active in the Muslim League.
He did not support Indian involvement in World War I and remained in close touch with Muslim political leaders such as Muhammad Ali Johar and Muhammad Ali Jinnah. He was a critic of the mainstream Indian nationalist and secularist Indian National Congress. Iqbal's seven English lectures were published by Oxford University Press in 1934 in a book titled The Reconstruction of Religious Thought in Islam. These lectures dwell on the role of Islam as a religion as well as a political and legal philosophy in the modern age. Iqbal expressed fears that not only would secularism and secular nationalism weaken the spiritual foundations of Islam and Muslim society, but that India's Hindu-majority population would crowd out Muslim heritage, culture and political influence. In his travels to Egypt, Afghanistan, Palestine and Syria, he promoted ideas of greater Islamic political co-operation and unity, calling for the shedding of nationalist differences. Sir Muhammad Iqbal was elected president of the Muslim League in 1930 at its session in Allahabad, as well as for the session in Lahore in 1932. In his Allahabad Address of 29 December 1930, Iqbal outlined a vision of an independent state for the Muslim-majority provinces in northwestern India. This address later inspired the Pakistan movement. Iqbal's thought and vision later influenced many reformist Islamists, e.g., Muhammad Asad, Sayyid Abul Ala Maududi and Ali Shariati. Sayyid Abul Ala Maududi Sayyid Abul Ala Maududi was an important early twentieth-century figure in the Islamic revival in India, and then, after independence from Britain, in Pakistan. Trained as a lawyer, he chose the profession of journalism and wrote about contemporary issues and, most importantly, about Islam and Islamic law. Maududi founded the Jamaat-e-Islami party in 1941 and remained its leader until 1972. However, Maududi had much more impact through his writing than through his political organising.
His extremely influential books (translated into many languages) placed Islam in a modern context and influenced not only conservative ulema but also liberal modernizer Islamists such as al-Faruqi, whose "Islamization of Knowledge" carried forward some of Maududi's key principles. Influenced by the Islamic state theory of Rashid Rida, al-Mawdudi regarded his contemporary situation, in which Muslims increasingly imitated the West in their daily life, as comparable to a modern Jahiliyyah. This Jahiliyyah was responsible for the decline of the Ummah and the erosion of Islamic values. Only by establishing an "Islamic State" that rules by Sharia in its true sense could the modern Jahiliyyah be avoided, by upholding Allah's absolute sovereignty over the world. Maududi believed that Islam was all-encompassing: "Everything in the universe is 'Muslim' for it obeys God by submission to His laws... The man who denies God is called Kafir (concealer) because he conceals by his disbelief what is inherent in his nature and embalmed in his own soul." Maududi also believed that Muslim society could not be Islamic without Sharia, and that Islam required the establishment of an Islamic state. This state should be a "theo-democracy" based on the principles of tawhid (unity of God), risala (prophethood) and khilafa (caliphate). Although Maududi talked about Islamic revolution, by "revolution" he meant not the violence or populist policies of the Iranian Revolution, but the gradual changing of the hearts and minds of individuals from the top of society downward through an educational process, or da'wah. Muslim Brotherhood Roughly contemporaneous with Maududi was the founding of the Muslim Brotherhood in Ismailiyah, Egypt in 1928 by Hassan al-Banna. His was arguably the first, largest and most influential modern Islamic political/religious organization.
Under the motto "the Qur'an is our constitution," it sought Islamic revival through preaching and by providing basic community services including schools, mosques, and workshops. Like Maududi, al-Banna believed in the necessity of government rule based on Sharia law, implemented gradually and by persuasion, and of eliminating all imperialist influence in the Muslim world. Some elements of the Brotherhood, though perhaps against orders, did engage in violence against the government, and its founder al-Banna was assassinated in 1949 in retaliation for the assassination of Egypt's premier Mahmud Fahmi al-Nuqrashi three months earlier. The Brotherhood has suffered periodic repression in Egypt and has been banned several times, in 1948 and several years later following confrontations with Egyptian president Gamal Abdul Nasser, who jailed thousands of members for several years. Despite periodic repression, the Brotherhood has become one of the most influential movements in the Islamic world, particularly in the Arab world. For many years it was described as "semi-legal" and was the only opposition group in Egypt able to field candidates during elections. In the 2011–12 Egyptian parliamentary election, the political parties identified as "Islamist" (the Brotherhood's Freedom and Justice Party, the Salafi Al-Nour Party and the liberal Islamist Al-Wasat Party) won 75% of the total seats. Mohamed Morsi, an Islamist of the Muslim Brotherhood, was the first democratically elected president of Egypt. He was deposed during the 2013 Egyptian coup d'état. Sayyid Qutb Maududi's political ideas influenced Sayyid Qutb, a leading member of the Muslim Brotherhood movement, one of the key philosophers of Islamism and a highly influential thinker of Islamic universalism. Qutb believed things had reached such a state that the Muslim community had literally ceased to exist. It "has been extinct for a few centuries," having reverted to Godless ignorance (Jahiliyya).
To eliminate jahiliyya, Qutb argued, Sharia, or Islamic law, must be established. Sharia law was not only accessible to humans and essential to the existence of Islam, but also all-encompassing, precluding "evil and corrupt" non-Islamic ideologies like communism, nationalism, or secular democracy. Qutb preached that Muslims must engage in a two-pronged attack: converting individuals through peacefully preaching Islam, and waging what he called militant jihad so as to forcibly eliminate the "power structures" of Jahiliyya, not only from the Islamic homeland but from the face of the earth. Qutb was both a member of the Brotherhood and enormously influential in the Muslim world at large. Qutb is considered by some (Fawaz A. Gerges) to be "the founding father and leading theoretician" of modern jihadists such as Osama bin Laden. However, the Muslim Brotherhood in Egypt and in Europe has not embraced his vision of an undemocratic Islamic state and armed jihad, something for which it has been denounced by radical Islamists. Ascendance in international politics Islamic fervor was understood by the United States as a weapon it could use in its Cold War against the Soviet Union and its communist allies, because communism professes atheism. In a September 1957 White House meeting between U.S. President Eisenhower and senior U.S. foreign policy officials, it was agreed to use the communists' lack of religion against them by setting up a secret task force to deliver weapons to Middle East despots, including the Saudi Arabian rulers. "We should do everything possible to stress the 'holy war' aspect" that has currency in the Middle East, President Eisenhower stated in agreement. Six-Day War (1967) The quick and decisive defeat of the Arab troops during the Six-Day War by Israeli troops constituted a pivotal event in the Arab Muslim world.
The defeat, along with the economic stagnation of the defeated countries, was blamed on the secular Arab nationalism of the ruling regimes. A steep and steady decline in the popularity and credibility of secular, socialist and nationalist politics ensued. Ba'athism, Arab socialism, and Arab nationalism suffered, and different democratic and anti-democratic Islamist movements inspired by Maududi and Sayyid Qutb gained ground. Iranian Revolution (1978–1979) The first modern "Islamist state" (with the possible exception of Zia's Pakistan) was established among the Shia of Iran. In a major shock to the rest of the world, Ayatollah Ruhollah Khomeini led the Iranian Revolution of 1979 to overthrow the oil-rich, well-armed, Westernized and pro-American secular monarchy ruled by Shah Muhammad Reza Pahlavi. The views of Ali Shariati, the ideologue of the Iranian Revolution, resembled those of Mohammad Iqbal, the ideological father of the State of Pakistan, but Khomeini's beliefs are perceived to lie somewhere between the beliefs of Shia Islam and those of Sunni Islamic thinkers like Mawdudi and Qutb. He believed that complete imitation of the Prophet Mohammad and his successors such as Ali for the restoration of Sharia law was essential to Islam, that many secular, Westernizing Muslims were actually agents of the West serving Western interests, and that acts such as the "plundering" of Muslim lands were part of a long-term conspiracy against Islam by Western governments. His views differed from those of Sunni scholars in several ways: As a Shia, Khomeini looked to the Imams Ali ibn Abī Tālib and Husayn ibn Ali, but not to the caliphs Abu Bakr, Omar or Uthman. Khomeini talked not about restoring the Caliphate or Sunni Islamic democracy, but about establishing a state where the guardianship of the political system, whether democratic or dictatorial, was performed by Shia jurists (ulama) as the successors of the Shia Imams until the Mahdi returns from occultation.
His concept of velayat-e-faqih ("guardianship of the [Islamic] jurist") held that the leading Shia Muslim cleric in society—which Khomeini's mass of followers believed and chose to be himself—should serve as the supervisor of the state in order to protect or "guard" Islam and Sharia law from "innovation" and "anti-Islamic laws" passed by dictators or democratic parliaments. The revolution was influenced by Marxism through Islamist thought and also by writings that sought either to counter Marxism (Muhammad Baqir al-Sadr's work) or to integrate socialism and Islamism (Ali Shariati's work). A strong wing of the revolutionary leadership was made up of leftists or "radical populists", such as Ali Akbar Mohtashami-Pur. While initial enthusiasm for the Iranian revolution in the Muslim world was intense, it has waned as critics maintain that "purges, executions, and atrocities tarnished its image". The Islamic Republic has maintained its hold on power in Iran in spite of US economic sanctions, and has created or assisted like-minded Shia terrorist groups in Iraq (SCIRI) and Lebanon (Hezbollah), two Muslim countries that also have large Shiite populations. During the 2006 Israel-Lebanon conflict, the Iranian government enjoyed something of a resurgence in popularity among the predominantly Sunni "Arab street," due to its support for Hezbollah and to President Mahmoud Ahmadinejad's vehement opposition to the United States and his call for Israel to vanish. Grand Mosque seizure (1979) The strength of the Islamist movement was manifest in an event which might have seemed sure to turn Muslim public opinion against fundamentalism, but did just the opposite. In 1979 the Grand Mosque in Mecca, Saudi Arabia was seized by an armed fundamentalist group and held for over a week. Scores were killed, including many pilgrim bystanders, in a gross violation of one of the holiest sites in Islam (and one where arms and violence are strictly forbidden).
Instead of prompting a backlash against the movement from which the attackers originated, however, Saudi Arabia, already very conservative, responded by shoring up its fundamentalist credentials with even more Islamic restrictions. Crackdowns followed on everything from shopkeepers who did not close for prayer and newspapers that published pictures of women, to the selling of dolls, teddy bears (images of animate objects are considered haraam), and dog food (dogs are considered unclean). In other Muslim countries, blame and wrath for the seizure were directed not against fundamentalists, but against Islamic fundamentalism's foremost geopolitical enemy: the United States. Ayatollah Khomeini sparked attacks on American embassies when he announced that "it is not beyond guessing that this is the work of criminal American imperialism and international Zionism", despite the fact that the object of the fundamentalists' revolt was the Kingdom of Saudi Arabia, America's major ally in the region. Anti-American demonstrations followed in the Philippines, Turkey, Bangladesh, India, the UAE, Pakistan, and Kuwait. The US Embassy in Libya was burned by protesters chanting pro-Khomeini slogans, and the embassy in Islamabad, Pakistan was burned to the ground. Soviet invasion of Afghanistan (1979–1989) In 1979, the Soviet Union deployed its 40th Army into Afghanistan, attempting to suppress an Islamic rebellion against an allied Marxist regime in the Afghan Civil War. The conflict, pitting indigenous impoverished Muslims (mujahideen) against an anti-religious superpower, galvanized thousands of Muslims around the world to send aid and sometimes to go themselves to fight for their faith. Leading this pan-Islamic effort was the Palestinian sheikh Abdullah Yusuf Azzam. While the military effectiveness of these "Afghan Arabs" was marginal, an estimated 16,000 to 35,000 Muslim volunteers came from around the world to fight in Afghanistan.
When the Soviet Union abandoned the Marxist Najibullah regime and withdrew from Afghanistan in 1989 (the regime finally fell in 1992), the victory was seen by many Muslims as a triumph of Islamic faith over superior military power and technology that could be duplicated elsewhere. The jihadists gained legitimacy and prestige from their triumph, both within the militant community and among ordinary Muslims, as well as the confidence to carry their jihad to other countries where they believed Muslims required assistance. The "veterans of the guerrilla campaign" returning home to Algeria, Egypt, and other countries "with their experience, ideology, and weapons" were often eager to continue armed jihad. The collapse of the Soviet Union itself, in 1991, was seen by many Islamists, including bin Laden, as the defeat of a superpower at the hands of Islam. Concerning the $6 billion in aid given by the US and Pakistan's military training and intelligence support to the mujahideen, bin Laden wrote: "[T]he US has no mentionable role" in "the collapse of the Soviet Union ... rather the credit goes to God and the mujahidin" of Afghanistan. Persian Gulf War (1990–1991) Another factor in the early 1990s that worked to radicalize the Islamist movement was the Gulf War, which brought several hundred thousand US and allied non-Muslim military personnel to Saudi Arabian soil to put an end to Saddam Hussein's occupation of Kuwait. Prior to 1990, Saudi Arabia had played an important role in restraining the many Islamist groups that received its aid. But when Saddam, the secularist and Ba'athist dictator of neighboring Iraq, attacked Kuwait, Western troops came to protect the Saudi monarchy. Islamists accused the Saudi regime of being a puppet of the West.
These accusations resonated with conservative Muslims, and the problem did not go away with Saddam's defeat, since American troops remained stationed in the kingdom and a de facto cooperation with the Palestinian-Israeli peace process developed. Saudi Arabia attempted to compensate for its loss of prestige among these groups by repressing those domestic Islamists who attacked it (bin Laden being a prime example) and by increasing aid to Islamic groups that did not (Islamist madrassas around the world and even some violent Islamist groups), but its pre-war influence on behalf of moderation was greatly reduced. One result was a campaign of attacks on government officials and tourists in Egypt, a bloody civil war in Algeria, and Osama bin Laden's terror attacks, culminating in the 9/11 attacks. 2000s By the beginning of the twenty-first century, "the word secular, a label proudly worn" in the 1960s and 70s, was "shunned" and "used to besmirch" political foes in Egypt and the rest of the Muslim world. Islamists surpassed the small secular opposition parties in terms of "doggedness, courage," "risk-taking" and "organizational skills". In the Middle East and Pakistan, religious discourse dominates societies, the airwaves, and thinking about the world. Radical mosques have proliferated throughout Egypt. Book stores are dominated by works with religious themes ... The demand for sharia, the belief that their governments are unfaithful to Islam and that Islam is the answer to all problems, and the certainty that the West has declared war on Islam; these are the themes that dominate public discussion. Islamists may not control parliaments or government palaces, but they have occupied the popular imagination. Opinion polls in a variety of Islamic countries showed that significant majorities opposed groups like ISIS, but also wanted religion to play a greater role in public life.
"Post-Islamism" By 2020, approximately 40 years after the Islamic overthrow of the Shah of Iran and the seizure of the Grand Mosque by extremists, a number of observers (Olivier Roy, Mustafa Akyol, Nader Hashemi) detected a decline in the vigor and popularity of Islamism. Islamism had been an idealized, utopian concept to compare with the grim reality of the status quo, but in more than four decades it had failed to establish a "concrete and viable blueprint for society" despite repeated efforts (Olivier Roy), and instead had left a less than inspiring track record of its impact on the world (Nader Hashemi). Consequently, in addition to the trend towards moderation by Islamist or formerly Islamist parties (such as the PKS of Indonesia, the AKP of Turkey, and PAS of Malaysia) mentioned above, there has been a social, religious and sometimes political backlash against Islamist rule in countries like Turkey, Iran, and Sudan (Mustafa Akyol). Writing in 2020, Mustafa Akyol argues there has been a strong reaction by many Muslims against political Islam, including a weakening of religious faith, the very thing Islamism was intended to strengthen. He suggests this backlash against Islamism among Muslim youth has come from all the "terrible things" that have happened in the Arab world in the twenty-first century "in the name of Islam", such as the "sectarian civil wars in Syria, Iraq and Yemen". Polls taken by Arab Barometer in six Arab countries (Algeria, Egypt, Tunisia, Jordan, Iraq and Libya) found "Arabs are losing faith in religious parties and leaders." In 2018–19, in all six countries, fewer than 20% of those asked whether they trusted Islamist parties answered in the affirmative. That percentage had fallen in all six countries since the same question was asked in 2012–14. Mosque attendance also declined more than 10 points on average, and the share of Arabs describing themselves as "not religious" rose from 8% in 2013 to 13% in 2018–19.
In Syria, Sham al-Ali reports "rising apostasy among Syrian youths". Writing in 2021, Nader Hashemi notes that in Iraq, Sudan, Tunisia, Egypt, Gaza, Jordan and other places where Islamist parties have come to power or campaigned to, "one general theme stands out: The popular prestige of political Islam has been tarnished by its experience with state power." Even Islamist terrorism was in decline and tended "to be local" rather than pan-Islamic. As of 2021, al-Qaeda consisted of "a bunch of militias" with no effective central command (Fareed Zakaria). Rise of Islamism by country Afghanistan (Taliban) In Afghanistan, the mujahideen's victory against the Soviet Union in the 1980s did not lead to justice and prosperity, due to a vicious and destructive civil war between political and tribal warlords that made Afghanistan one of the poorest countries on earth. In 1992, the Democratic Republic of Afghanistan ruled by communist forces collapsed, and democratic Islamist elements of the mujahideen founded the Islamic State of Afghanistan. In 1996, a more conservative and anti-democratic Islamist movement known as the Taliban rose to power, defeated most of the warlords and took over roughly 80% of Afghanistan. The Taliban were spawned by the thousands of madrasahs the Deobandi movement established for impoverished Afghan refugees, and were supported by governmental and religious groups in neighboring Pakistan. The Taliban differed from other Islamist movements to the point where they might be more properly described as Islamic fundamentalist or neofundamentalist, interested in spreading "an idealized and systematized version of conservative tribal village customs" under the label of Sharia to an entire country. Their ideology was also described as being influenced by Wahhabism and the extremist jihadism of their guest Osama bin Laden. The Taliban considered "politics" to be against Sharia and thus did not hold elections.
They were led by Abdul Ghani Baradar and Mohammed Omar, who was given the title "Amir al-Mu'minin" (Commander of the Faithful) and a pledge of loyalty by several hundred Taliban-selected Pashtun clergy in April 1996. The Taliban were overwhelmingly Pashtun and were accused of not sharing power with the approximately 60% of Afghans who belonged to other ethnic groups (see: Taliban#Ideology and aims). The Taliban's hosting of Osama bin Laden led to an American-organized attack which drove them from power following the 9/11 attacks. The Taliban are still very much alive, fighting a vigorous insurgency with suicide bombings and armed attacks launched against NATO and Afghan government targets. Algeria An Islamist movement influenced by Salafism and the jihad in Afghanistan, as well as by the Muslim Brotherhood, was the FIS or Front Islamique du Salut (Islamic Salvation Front) in Algeria. Founded as a broad Islamist coalition in 1989, it was led by Abbassi Madani and Ali Belhadj, a charismatic young Islamist preacher. Taking advantage of economic failure and unpopular social liberalization and secularization by the ruling leftist-nationalist FLN government, it used its preaching to advocate the establishment of a legal system following Sharia law, economic liberalization and a development program, education in Arabic rather than French, and gender segregation, with women staying home to alleviate the high rate of unemployment among young Algerian men. The FIS won sweeping victories in local elections and appeared set to win the national elections of 1991 when voting was canceled by a military coup d'état. As Islamists took up arms to overthrow the government, the FIS's leaders were arrested and the party became overshadowed by Islamist guerrilla groups, particularly the Islamic Salvation Army, the MIA and the Armed Islamic Group (GIA). A bloody and devastating civil war ensued, in which between 150,000 and 200,000 people were killed over the next decade.
The civil war was not a victory for the Islamists. By 2002 the main guerrilla groups had either been destroyed or had surrendered. The popularity of Islamist parties declined to the point that "the Islamist candidate, Abdallah Jaballah, came a distant third with 5% of the vote" in the 2004 presidential election. Bangladesh Jamaat-e-Islami Bangladesh is the largest Islamist party in the country; it supports the implementation of Sharia law and promotes the country's main right-wing politics. Since 2000, the main political opposition, the Bangladesh Nationalist Party (BNP), has been allied with it and with another Islamic party, Islami Oikya Jote. Some of their leaders and supporters, including former ministers and MPs, have been hanged for alleged war crimes during Bangladesh's struggle for independence and for speaking against the ruling Bangladesh Awami League. Belgium In 2012, the party named Islam had four candidates, and they were elected in Molenbeek and Anderlecht. In 2018, it ran candidates in 28 municipalities. Its policies include requiring schools to offer halal food and allowing women to wear a headscarf anywhere. Another of the Islam Party's goals is to separate men and women on public transportation. The party's president argues this policy will help protect women from sexual harassment. Denmark Islamist movements have grown gradually since the 1990s. The first Islamist groups and networks were predominantly influenced by the countries their members had immigrated from, and those involved had close contact with militant Islamists in the Middle East, South Asia and North Africa. Their first priority was supporting militant groups financially. Since the 1990s, people from the Islamist movements have joined several conflicts to train with or fight alongside Islamist militants. In the 2000s the Islamist movements grew, and by 2014 there were militants among the Islamist movements in Copenhagen, Aarhus and Odense.
Several people from criminal gangs have joined Islamist movements that sympathise with militant Islamism. The militant Islamist movement was estimated to encompass some hundreds of people in 2014. The Danish National Centre for Social Research released a report, commissioned by the Ministry of Children, Integration and Social Affairs, documenting 15 extremist groups operating in Denmark. The majority of these organizations were non-Muslim far-right or far-left groups, but five were Sunni Islamist groups: Hizb ut-Tahrir Denmark, Dawah-bærere (Dawah Carriers), Kaldet til Islam (The Call to Islam), Dawah-centret (The Dawah Centre), and the Muslimsk Ungdomscenter (The Muslim Youth Centre). All of these groups operate in Greater Copenhagen except the Muslimsk Ungdomscenter, which operates in Aarhus. Altogether, roughly 195 to 415 Muslims belong to one of these organizations, most of them young men. Egypt (Jihadism) While Qutb's ideas became increasingly radical during his imprisonment prior to his execution in 1966, the leadership of the Brotherhood, led by Hasan al-Hudaybi, remained moderate and interested in political negotiation and activism. Fringe or splinter movements inspired by the final writings of Qutb in the mid-1960s (particularly the manifesto Milestones, a.k.a. Ma'alim fi-l-Tariq) did, however, develop, and they pursued a more radical direction. By the 1970s, the Brotherhood had renounced violence as a means of achieving its goals. The path of violence and military struggle was then taken up by the Egyptian Islamic Jihad organization, responsible for the assassination of Anwar Sadat in 1981. Unlike earlier anti-colonial movements, the extremist group directed its attacks against what it believed were "apostate" leaders of Muslim states: leaders who held secular leanings or who had introduced or promoted Western/foreign ideas and practices into Islamic societies.
Its views were outlined in a pamphlet written by Muhammad Abd al-Salaam Farag, in which he states: "...there is no doubt that the first battlefield for jihad is the extermination of these infidel leaders and to replace them by a complete Islamic Order..." Another of the Egyptian groups which employed violence in their struggle for Islamic order was al-Gama'a al-Islamiyya (Islamic Group). Victims of their campaign against the Egyptian state in the 1990s included the head of the counter-terrorism police (Major General Raouf Khayrat), a parliamentary speaker (Rifaat al-Mahgoub), dozens of European tourists and Egyptian bystanders, and over 100 Egyptian police. Ultimately the campaign to overthrow the government was unsuccessful, and the major jihadi group, Jamaa Islamiya (or al-Gama'a al-Islamiyya), renounced violence in 2003. Other lesser-known groups include the Islamic Liberation Party, Salvation from Hell and Takfir wal-Hijra; these groups have variously been involved in activities such as attempted assassinations of political figures, arson of video shops and attempted takeovers of government buildings. France
The AP Stylebook entry for Islamist reads as follows: "An advocate or supporter of a political movement that favors reordering government and society in accordance with laws prescribed by Islam. Do not use as a synonym for Islamic fighters, militants, extremists or radicals, who may or may not be Islamists. Where possible, be specific and use the name of militant affiliations: al-Qaida-linked, Hezbollah, Taliban, etc. Those who view the Quran as a political model encompass a wide range of Muslims, from mainstream politicians to militants known as jihadi." Overview Definitions Islamism has been defined as:
- "the belief that Islam should guide social and political as well as personal life", a form of "religionized politics" and an instance of religious fundamentalism
- a "political movement that favors reordering government and society in accordance with laws prescribed by Islam" (from the Associated Press's definition of "Islamist")
- "[the term 'Islamist' has become shorthand for] 'Muslims we don't like'" (from the Council on American–Islamic Relations's complaint about AP's earlier definition of Islamist)
- "a theocratic ideology that seeks to impose any version of Islam over society by law" (Maajid Nawaz, a former Islamist turned critic), subsequently clarified as "the desire to impose any given interpretation of Islam on society"
- "the [Islamic] ideology that guides society as a whole and that [teaches] law must be in conformity with the Islamic sharia"
- a term "used by outsiders to denote a strand of activity which they think justifies their misconception of Islam as something rigid and immobile, a mere tribal affiliation"
- a movement so broad and flexible it reaches out to "everything to everyone" in Islam, making it "unsustainable"
- an alternative social provider to the poor masses; an angry platform for the disillusioned young; a loud trumpet-call announcing "a return to the pure religion" to those seeking an identity; a "progressive, moderate religious platform" for the affluent and liberal; ... and at the extremes, a violent vehicle for rejectionists and radicals
- an Islamic "movement that seeks cultural differentiation from the West and reconnection with the pre-colonial symbolic universe"
- "the organised political trend [...] that seeks to solve modern political problems by reference to Muslim texts [...] the whole body of thought which seeks to invest society with Islam which may be integrationist, but may also be traditionalist, reform-minded or even revolutionary"
- "the active assertion and promotion of beliefs, prescriptions, laws or policies that are held to be Islamic in character"
- a movement of "Muslims who draw upon the belief, symbols, and language of Islam to inspire, shape, and animate political activity," which may contain moderate, tolerant, peaceful activists or those who "preach intolerance and espouse violence"
- "All who seek to Islamize their environment, whether in relation to their lives in society, their family circumstances, or the workplace, may be described as Islamists."
Varieties Islamism takes different forms and spans a wide range of strategies and tactics towards the powers in place ("destruction, opposition, collaboration, indifference") that have varied as "circumstances have changed", and thus is not a united movement. Moderate and reformist Islamists who accept and work within the democratic process include parties like the Tunisian Ennahda Movement. Jamaat-e-Islami of Pakistan is essentially a socio-political and democratic vanguard party, but it has also gained political influence through military coups d'état in the past.
Other Islamist groups like Hezbollah in Lebanon and Hamas in Palestine participate in the democratic and political process as well as armed attacks. Jihadist organizations like al-Qaeda and the Egyptian Islamic Jihad, and groups such as the Taliban, entirely reject democracy, often declaring as kuffar those Muslims who support it (see takfirism), as well as calling for violent/offensive jihad or urging and conducting attacks on a religious basis. Another major division within Islamism is between what Graham E. Fuller has described as the fundamentalist "guardians of the tradition" (Salafis, such as those in the Wahhabi movement) and the "vanguard of change and Islamic reform" centered around the Muslim Brotherhood. Olivier Roy argues that "Sunni pan-Islamism underwent a remarkable shift in the second half of the 20th century" when the Muslim Brotherhood movement and its focus on Islamisation of pan-Arabism was eclipsed by the Salafi movement with its emphasis on "sharia rather than the building of Islamic institutions," and rejection of Shia Islam. Following the Arab Spring, Roy has described Islamism as "increasingly interdependent" with democracy in much of the Arab Muslim world, such that "neither can now survive without the other." While Islamist political culture itself may not be democratic, Islamists need democratic elections to maintain their legitimacy. At the same time, their popularity is such that no government can call itself democratic that excludes mainstream Islamist groups. Relation to Islam The relationship between the notions of Islam and Islamism has been subject to disagreement. Hayri Abaza argues that the failure to distinguish between Islam and Islamism leads many in the West to support illiberal Islamic regimes, to the detriment of progressive moderates who seek to separate religion from politics. 
A writer for the International Crisis Group maintains that "the conception of 'political Islam'" is a creation of Americans to explain the Iranian Islamic Revolution and apolitical Islam was a historical fluke of the "short-lived era of the heyday of secular Arab nationalism between 1945 and 1970", and it is quietist/non-political Islam, not Islamism, that requires explanation. Another source distinguishes Islamist from Islamic "by the fact that the latter refers to a religion and culture in existence over a millennium, whereas the first is a political/religious phenomenon linked to the great events of the 20th century". Islamists have, at least at times, defined themselves as "Islamiyyoun/Islamists" to differentiate themselves from "Muslimun/Muslims". Daniel Pipes describes Islamism as a modern ideology that owes more to European utopian political ideologies and "isms" than to the traditional Islamic religion. Influence Few observers contest the influence of Islamism within the Muslim world. Following the collapse of the Soviet Union, political movements based on the liberal ideology of free expression and democratic rule have led the opposition in other parts of the world such as Latin America, Eastern Europe and many parts of Asia; however "the simple fact is that political Islam currently reigns as the most powerful ideological force across the Muslim world today". People see the unchanging socioeconomic condition in the Muslim world as a major factor. Olivier Roy believes "the socioeconomic realities that sustained the Islamist wave are still here and are not going to change: poverty, uprootedness, crises in values and identities, the decay of the educational systems, the North-South opposition, and the problem of immigrant integration into the host societies". The strength of Islamism also draws from the strength of religiosity in general in the Muslim world. Compared to Western societies, "[w]hat is striking about the Islamic world is that ... 
it seems to have been the least penetrated by irreligion". Where other peoples may look to the physical or social sciences for answers in areas which their ancestors regarded as best left to scripture, in the Muslim world religion has become more encompassing, not less, as "in the last few decades, it has been the fundamentalists who have increasingly represented the cutting edge" of Muslim culture. Writing in 2009, Sonja Zekri described Islamists in Egypt and other Muslim countries as "extremely influential. ... They determine how one dresses, what one eats. In these areas, they are incredibly successful. ... Even if the Islamists never come to power, they have transformed their countries." Political Islamists were described as "competing in the democratic public square in places like Turkey, Tunisia, Malaysia and Indonesia". Types Moderate Islamism Moderate Islamism refers to the emerging Islamist discourses and movements considered to deviate from the traditional Islamist discourses of the mid-20th century. Moderate Islamism is characterized by pragmatic participation within the existing constitutional and political framework, in most cases within democratic institutions. Moderate Islamists make up the majority of contemporary Islamist movements. From a philosophical perspective, their discourses are represented by the reformation or reinterpretation of modern socio-political institutions and values imported from the West, including democracy. This has led to the conception of Islamic forms of such institutions, within which Islamic interpretations are often attempted. In the example of democracy, Islamic democracy has been intellectually developed as an Islamized form of the system. In Islamic democracy, the concept of shura, the tradition of consultation considered a Sunnah of the prophet Muhammad, is invoked to Islamically reinterpret and legitimize the institution of democracy.
Performance, goals, strategies, and outcomes of moderate Islamist movements vary considerably depending on the country and its socio-political and historical context. In terms of performance, most Islamist political parties are in opposition. However, there are a few examples in which they govern or obtain a substantial share of the popular vote, including the National Congress of Sudan, the National Iraqi Alliance of Iraq and the Justice and Development Party (PJD) of Morocco. Their goals also range widely. The Ennahda Movement of Tunisia and the Prosperous Justice Party (PKS) of Indonesia have formally renounced their vision of implementing sharia. In Morocco, the PJD supported King Muhammad VI's Mudawana, a "startlingly progressive family law" which grants women the right to a divorce, raises the minimum age for marriage to 18, and, in the event of separation, stipulates equal distribution of property. By contrast, the National Congress of Sudan has implemented a strict interpretation of sharia with foreign support from conservative states. Movements of the former category are also termed Post-Islamism (see below). Their political outcomes are interdependent with their goals and strategies, in which what analysts call "inclusion-moderation theory" is in effect. Inclusion-moderation theory assumes that the more lenient Islamists become, the less likely their survival will be threatened; similarly, the more accommodating the government is, the less extreme Islamists become. Moderate Islamism within democratic institutions is a relatively recent phenomenon. Throughout the 80s and 90s, major moderate Islamist movements such as the Muslim Brotherhood and Ennahda were excluded from democratic political participation. Islamist movements operating within the state framework came under intense scrutiny during the Algerian Civil War (1991–2002) and after the increase of terrorism in Egypt in the 90s.
Reflecting on these failures, Islamists became increasingly revisionist and receptive to democratic procedures in the 21st century. The possibility of accommodating this new wave of modernist Islamism has been explored among Western intellectuals, with concepts such as the Turkish model being proposed. The concept was inspired by the perceived success of the Turkish Justice and Development Party (AKP), led by Recep Tayyip Erdoğan, in harmonizing Islamist principles within a secular state framework. The Turkish model, however, is considered to have come "unstuck" after recent purges and violations of democratic principles by the Erdoğan government. Critics of the concept hold that Islamist aspirations are fundamentally incompatible with democratic principles, and thus that even moderate Islamists are totalitarian in nature, requiring strong constitutional checks and efforts by mainstream Islam to detach political Islam from public discourse. Post-Islamism Iranian political sociologist Asef Bayat proposed the term Post-Islamism to refer to Islamist movements which departed from the traditional Islamist discourses of the mid-20th century, having found that "following a phase of experimentation", the "appeal, energy, symbols and sources of legitimacy of Islamism" were "exhausted, even among its once-ardent supporters. As such, post-Islamism is not anti-Islamic, but rather reflects a tendency to resecularize religion." This state originally pertained only to Iran, where "post-Islamism is expressed in the idea of fusion between Islam (as a personalized faith) and individual freedom and choice; and post-Islamism is associated with the values of democracy and aspects of modernity". A 2008 Lowy Institute for International Policy paper suggests that the PKS of Indonesia and the AKP of Turkey are post-Islamist.
The characterization can be applied to the Malaysian Islamic Party (PAS), and has been used to describe the "ideological evolution" within Ennahda of Tunisia.<ref>{{cite journal|title=Post-Islamism, ideological evolution and 'la tunisianite of the Tunisian Islamist party al-Nahda|journal=Journal of Political Ideologies|volume=20|number=1|year=2015|pages=27–42|first1=Francesco |last1=Cavatorta|first2=Fabio|last2=Merone|doi=10.1080/13569317.2015.991508|s2cid=143777291}}</ref> Salafi movement The contemporary Salafi movement encompasses a broad range of ultraconservative Islamist doctrines which share the reformist mission of Ibn Taymiyyah. From the perspective of political Islam, the Salafi movement can be broadly categorized into three groups: the quietist (or purist), the activist (or haraki) and the jihadist (Salafi jihadism, see below). The quietist school advocates societal reform through religious education and proselytizing rather than political activism. The activist school, by contrast, encourages political participation within the constitutional and political framework. The jihadist school, inspired by the ideology of Sayyid Qutb (Qutbism, see below), rejects the legitimacy of secular institutions and promotes revolution in order to pave the way for the establishment of a new Caliphate. The quietist Salafi movement stems from the teaching of Nasiruddin Albani, who challenged the notion of taqlid (imitation, conformity to legal precedent) as blind adherence. Quietists warn that political participation can lead to the division of the Muslim community. This school is exemplified by Madkhalism, which is based on the writings of Rabee al-Madkhali. Madkhalism originated in 1990s Saudi Arabia as a reaction against the rise of Salafi activism and the threat of Salafi Jihadism.
It rejects any kind of opposition to secular governance, and was thus endorsed by the authoritarian governments of Egypt and Saudi Arabia during the 90s. The influence of the quietist school has waned significantly in the Middle East in recent years, as governments began incorporating Islamist factions in response to popular demand. The politically active Salafi movement, Salafi activism or harakis, is based on the religious belief that non-violent political activism is required in order to protect God's Divine governance. This means that politics is a field to which Salafi principles must be applied, in the same manner as other aspects of society and life. Salafi activism originated in 50s and 60s Saudi Arabia, where many Muslim Brothers had taken refuge from persecution by the Nasser regime. There, the Muslim Brothers' Islamism synthesized with Salafism and led to the creation of the Salafi activist trend, exemplified by the Sahwa movement of the 80s promulgated by Safar Al-Hawali and Salman al-Ouda. Today, the school makes up the majority of Salafism. There are many active Salafist political parties throughout the Muslim world, including the Al Nour Party of Egypt, Al Islah of Yemen and Al Asalah of Bahrain. Wahhabism The antecedent of the contemporary Salafi movement is Wahhabism, which traces back to the 18th-century reform movement in Najd led by Muhammad ibn Abd al-Wahhab. Although they have different roots, Wahhabism and Salafism are considered to have more or less merged in 1960s Saudi Arabia.Stephane Lacroix, Al-Albani's Revolutionary Approach to Hadith. Leiden University's ISIM Review, Spring 2008, #21. In the process, Salafism was greatly influenced by Wahhabism, and today they share a similar religious outlook. Wahhabism is also described as a Saudi brand of Salafism.
From the political perspective, Wahhabism is marked by its teaching of bay'ah (oath of allegiance), which requires Muslims to present an allegiance to the ruler of the society. Wahhabis have traditionally given their allegiance to the House of Saud, and this has made them apolitical in Saudi Arabia. However, there are small numbers of other strains, including a Salafi Jihadist offshoot, which decline to present an allegiance to the House of Saud. Wahhabism is also characterized by its disinterest in the social justice, anticolonialism and economic equality expounded upon by mainstream Islamists. Historically, Wahhabism was state-sponsored and internationally propagated by Saudi Arabia with the help of funding from mainly Saudi petroleum exports, leading to the "explosive growth" of its influence (and subsequently, the influence of Salafism) from the 70s onward, a phenomenon often dubbed Petro-Islam. Today, both Wahhabism and Salafism exert their influence worldwide, and they have indirectly contributed to the upsurge of Salafi Jihadism as well. Militant Islamism/Jihadism Qutbism Qutbism is an ideology formulated by Sayyid Qutb, an influential figure of the Muslim Brotherhood during the 50s and 60s, which justifies the use of violence to advance Islamist goals. Qutbism is marked by two distinct methodological concepts: one is takfirism, which in the context of Qutbism indicates the excommunication of fellow Muslims who are deemed apostates, and the other is "offensive Jihad", a concept which promotes violence in the name of Islam against perceived kuffar (infidels). Based on these two concepts, Qutbism promotes engagement against the state apparatus in order to topple its regime. The fusion of Qutbism and the Salafi movement resulted in the development of Salafi jihadism (see below).
Qutbism is considered a product of the extreme repression experienced by Qutb and his fellow Muslim Brothers under the Nasser regime, which resulted from the 1954 Muslim Brothers plot to assassinate Nasser. During the repression, thousands of Muslim Brothers were imprisoned, and many of them, including Qutb, were tortured and held in concentration camps. Under these conditions, Qutb cultivated his Islamist ideology in his seminal work Ma'alim fi-l-Tariq (Milestones), in which he equated the Muslims within the Nasser regime with secularism and the West, and described them as having regressed to jahiliyyah (the period before the advent of Islam). In this context, he allowed the takfir (which had been an unusual practice before Qutb revived it) of said Muslims. Although Qutb was executed before the completion of his ideology, his ideas were disseminated and continuously expanded by later generations, among them Abdullah Yusuf Azzam and Ayman Al-Zawahiri, who was a student of Qutb's brother Muhammad Qutb and later became a mentor of Osama bin Laden. Al-Zawahiri was deeply influenced by "the purity of Qutb's character and the torment he had endured in prison," and played an extensive role in the normalization of offensive Jihad within the Qutbist discourse. Both al-Zawahiri and bin Laden became the core of the Jihadist movements which developed rapidly against the backdrop of the late 20th-century geopolitical crises throughout the Muslim world. Salafi Jihadism Salafi jihadism is a term coined by Gilles Kepel in 2002, referring to the ideology which actively promotes and conducts violence and terrorism in pursuit of the establishment of an Islamic state or a new Caliphate.Deneoux, Guilain (June 2002). "The Forgotten Swamp: Navigating Political Islam". Middle East Policy. pp. 69–71. Today, the term is often simplified to Jihadism or the Jihadist movement in popular usage, according to Martin Kramer.
It is a hybrid ideology combining Qutbism, Salafism, Wahhabism and other minor Islamist strains.القطبية الإخوانية والسرورية قاعدة مناهج السلفية التكفيرية. al-Arab Online. Retrieved December 4, 2017. Qutbism, as taught by scholars like Abdullah Azzam, provided the political intellectual underpinnings with concepts like takfirism, while Salafism and Wahhabism provided the religious intellectual input. Salafi Jihadism makes up a tiny minority of contemporary Islamist movements. Distinct characteristics of Salafi Jihadism noted by Robin Wright include the formal process of taking bay'ah (oath of allegiance) to the leader, which is inspired by Wahhabi teaching. Another characteristic is its flexibility in cutting ties with less-popular movements when it is strategically or financially convenient, exemplified by the relations between al-Qaeda and the al-Nusra Front. Other marked developments of Salafi Jihadism include the concepts of the "near enemy" and the "far enemy". "Near enemy" connotes the despotic regimes occupying Muslim societies; the term was coined by Mohammed Abdul-Salam Farag to justify the assassination of Anwar al-Sadat by the Salafi Jihadi organization Egyptian Islamic Jihad (EIJ) in 1981. Later, the concept of the "far enemy", which connotes the West, was introduced and formally declared by al-Qaeda in 1996.Al Qaeda grows as its leaders focus on the 'near enemy'. The National. Retrieved December 3, 2017. Salafi Jihadism emerged during the 80s, when the Soviet Union invaded Afghanistan. Local mujahideen drew financial, logistical and military support from Saudi Arabia, Pakistan and the United States. Later, Osama bin Laden established al-Qaeda as a transnational Salafi Jihadi organization in 1988 to capitalize on this financial, logistical and military network and to expand its operations.
The ideology saw its rise during the 90s, when the Muslim world experienced numerous geopolitical crises, notably the Algerian Civil War (1991–2002), the Bosnian War (1992–1995), and the First Chechen War (1994–1996). Within these conflicts, political Islam often acted as a mobilizing factor for the local belligerents, who demanded financial, logistical and military support from al-Qaeda in exchange for active proliferation of the ideology. After the 1998 bombings of US embassies, the September 11 attacks (2001), and the US-led invasions of Afghanistan (2001) and Iraq (2003), Salafi Jihadism gained momentum. However, it was devastated by US counterterrorism operations, culminating in bin Laden's death in 2011. After the Arab Spring (2011) and the subsequent Syrian Civil War (2011–present), the remnants of the al-Qaeda franchise in Iraq restored their capacity and rapidly developed into the Islamic State of Iraq and the Levant, spreading its influence throughout the conflict zones of the MENA region and the globe. History Predecessor movements Some Islamic revivalist movements and leaders pre-dating Islamism include: Ahmad Sirhindi (~1564–1624), who was part of a reassertion of orthodoxy within Islamic Mysticism (Taṣawwuf) and was known to his followers as the 'renovator of the second millennium'. It has been said of Sirhindi that he 'gave to Indian Islam the rigid and conservative stamp it bears today.'Qamar-ul Huda (2003), Striving for Divine Union: Spiritual Exercises for Suhraward Sufis, RoutledgeCurzon, pp. 1–4. Ibn Taymiyyah, a Syrian Islamic jurist of the 13th and 14th centuries who is often quoted by contemporary Islamists. Ibn Taymiyya argued against the shirking of Sharia law, was against practices such as the celebration of Muhammad's birthday, and "he believed that those who ask assistance from the grave of the Prophet or saints, are mushrikin (polytheists), someone who is engaged in shirk."
Shah Waliullah of India and Muhammad ibn Abd-al-Wahhab of Arabia were contemporaries who met while studying in Mecca. Muhammad ibn Abd-al-Wahhab advocated doing away with later accretions like grave worship and returning to the letter and the spirit of Islam as preached and practiced by Muhammad; he went on to found Wahhabism. Shah Waliullah was a forerunner of reformist Islamists like Muhammad Abduh, Muhammad Iqbal and Muhammad Asad in his belief that there was "a constant need for new ijtihad as the Muslim community progressed and expanded and new generations had to cope with new problems" and in his interest in the social and economic problems of the poor. Sayyid Ahmad Barelvi, a disciple and successor of Shah Waliullah's son, emphasized the 'purification' of Islam from un-Islamic beliefs and practices. He anticipated modern militant Islamists by leading an extremist, jihadist movement and attempting to create an Islamic state based on the enforcement of Islamic law. While he engaged in several wars against the Sikh Empire in Muslim-majority north-western India, his followers participated in the Indian Rebellion of 1857 after his death. After the defeat of the Indian Rebellion, some of Shah Waliullah's followers ceased their involvement in military affairs and founded the Dar al-Ulum seminary in 1867 in the town of Deoband. From the school developed the Deobandi movement, which became the largest philosophical movement of traditional Islamic thought on the subcontinent and led to the establishment of thousands of madrasahs throughout modern-day India, Pakistan and Bangladesh. Early history The end of the 19th century saw the dismemberment of most of the Muslim Ottoman Empire by non-Muslim European colonial powers. The empire spent massive sums on Western civilian and military technology to try to modernize and compete with the encroaching European powers, and in the process went deep into debt to these powers.
In this context, the publications of Jamal ad-Din al-Afghani (1837–97), Muhammad Abduh (1849–1905) and Rashid Rida (1865–1935) preached Islamic alternatives to the political, economic, and cultural decline of the empire. Muhammad Abduh and al-Afghani formed the beginning of the early Islamist movement.The New Encyclopedia of Islam by Cyril Glasse, Rowman and Littlefield, 2001, p. 19; Historical Dictionary of Islam by Ludwig W. Adamec, Scarecrow Press, 2001, p. 233 Abduh's student, Rashid Rida, is widely regarded as one of "the ideological forefathers" of contemporary Islamist movements. The development of Islamism across the Islamic world was spearheaded by three prominent figures in the 1930s: Rashid Rida, early leader of the Salafiyya movement and publisher of the widely read magazine Al-Manar; Hassan al-Banna, founder of the Egyptian Muslim Brotherhood; and Mustafa al-Siba'i, founder of the Syrian Muslim Brotherhood. Their ideas included the creation of a truly Islamic society under sharia law and the rejection of taqlid, the blind imitation of earlier authorities, which they believed deviated from the true messages of Islam. Unlike some later Islamists, the early Salafiyya strongly emphasized the restoration of the Caliphate. Sayyid Rashid Rida The crises experienced across the Muslim world after the collapse of the Ottoman Caliphate would re-introduce debates over the theory of an alternative Islamic state into the centre of Muslim religious-political thinking of the early 20th century. A combination of events, such as the secularisation of Turkey, the aggressiveness of Western colonial empires, the setbacks to modernist and liberal movements in Egypt, and the Palestinian crisis, would propel this shift. The modern concept of an Islamic state was first articulated by the Syrian-Egyptian Islamic scholar Muhammad Rashid Rida.
As circumstances shifted with further Western cultural and imperial inroads, militant Islamists and fundamentalists stepped up to assert Islamic values using Rida's ideas as the chief vehicle, starting from the 1950s. Rashid Rida played a major role in shaping the revolutionary ideology of the early years of the Egyptian Muslim Brotherhood. Fundamentalism initially became the meeting-ground between the Salafiyyah movement and the Wahhabi movement of Saudi Arabia. These movements later drifted apart, with the Salafiyyah increasingly represented by activist and revolutionary trends, and Wahhabism by a purist conservatism characterised by political quietism. Rida's Islamic state emphasized the principle of shura, which would be dominated by the 'Ulama, who act as the natural representatives of Muslims. The Salafi proponents of the modern Islamic state conceive it as a testing ground for protecting the moral and cultural integrity of the Muslim Ummah. Rashid Rida played a significant role in forming the ideology of the Muslim Brotherhood and other Sunni Islamist movements across the world. In his influential book al-Khilafa aw al-Imama al-'Uzma ("The Caliphate or the Grand Imamate"), Rashid Rida elaborated on the establishment of his proposed "Islamic state", which emphasised the implementation of Sharia as well as the adoption of an Islamic consultation system (shura) that enshrined the leading role of the Ulema (Islamic scholars) in political life. This doctrine would become the blueprint of future Islamist movements. Rida believed that societies that properly obeyed Sharia would be able to emerge as successful alternatives to both capitalism and the disorder of class-based socialism, since such a society would be unsusceptible to their temptations. In Rida's Caliphate, the Khalifa was to be the supreme head, whose role was to govern by supervising the application of Islamic laws.
This was to happen through a partnership between the Mujtahid ulema and the 'true caliph', who engage in Ijtihad by evaluating the Scriptures and govern through shura (consultation). This Khilafa would also be able to revitalise Islamic civilization, restore political and legal independence to the Muslim umma (community of Muslim believers), and cleanse Islam of the heretical influences of Sufism. Rashid Rida's Islamic political theory would greatly influence many subsequent Islamic revivalist movements across the Arab world. Rida belonged to the last generation of Islamic scholars who were educated entirely within a traditional Islamic system, and he expressed views in a self-conscious vernacular that owed nothing to the modern West. The Islamist intellectuals who succeeded Rida, such as Hasan al-Banna, would not measure up to his scholarly credentials. The subsequent generations ushered in the radical thinker Sayyid Qutb, who, in contrast to Rida, did not have detailed knowledge of the religious sciences to address Muslims authoritatively on Sharia. An intellectual rather than a populist, Qutb rejected the West entirely in the most forceful manner, while simultaneously employing Western terminology to substantiate his beliefs and using the classical sources to bolster his subjective approach to the Scriptures.

Muhammad Iqbal
Muhammad Iqbal was a philosopher, poet and politician in British India who is widely regarded as having inspired Islamic nationalism and the Pakistan Movement in British India. Iqbal is admired as a prominent classical poet by Pakistani, Iranian, Indian and other international scholars of literature.
Though Iqbal is best known as an eminent poet, he is also a highly acclaimed "Islamic philosophical thinker of modern times". While studying law and philosophy in England and Germany, Iqbal became a member of the London branch of the All India Muslim League. He came back to Lahore in 1908. While dividing his time between law practice and philosophical poetry, Iqbal remained active in the Muslim League. He did not support Indian involvement in World War I and remained in close touch with Muslim political leaders such as Muhammad Ali Johar and Muhammad Ali Jinnah. He was a critic of the mainstream Indian nationalist and secularist Indian National Congress. Iqbal's seven English lectures were published by Oxford University Press in 1934 in a book titled The Reconstruction of Religious Thought in Islam. These lectures dwell on the role of Islam as a religion as well as a political and legal philosophy in the modern age. Iqbal expressed fears that not only would secularism and secular nationalism weaken the spiritual foundations of Islam and Muslim society, but that India's Hindu-majority population would crowd out Muslim heritage, culture and political influence. In his travels to Egypt, Afghanistan, Palestine and Syria, he promoted ideas of greater Islamic political co-operation and unity, calling for the shedding of nationalist differences. Sir Muhammad Iqbal was elected president of the Muslim League in 1930 at its session in Allahabad, as well as for the session in Lahore in 1932. In his Allahabad Address on 29 December 1930, Iqbal outlined a vision of an independent state for the Muslim-majority provinces in northwestern India. This address later inspired the Pakistan movement. The thoughts and vision of Iqbal later influenced many reformist Islamists, e.g. Muhammad Asad, Sayyid Abul Ala Maududi and Ali Shariati.
Sayyid Abul Ala Maududi
Sayyid Abul Ala Maududi was an important early twentieth-century figure in the Islamic revival in India, and then, after independence from Britain, in Pakistan. Trained as a lawyer, he chose the profession of journalism and wrote about contemporary issues and, most importantly, about Islam and Islamic law. Maududi founded the Jamaat-e-Islami party in 1941 and remained its leader until 1972. However, Maududi had much more impact through his writing than through his political organising. His extremely influential books (translated into many languages) placed Islam in a modern context, and influenced not only conservative ulema but liberal modernizer Islamists such as al-Faruqi, whose "Islamization of Knowledge" carried forward some of Maududi's key principles. Influenced by the Islamic state theory of Rashid Rida, al-Mawdudi regarded his contemporary situation, wherein Muslims increasingly imitated the West in their daily life, as comparable to a modern Jahiliyyah. This Jahiliyyah was responsible for the decline of the Ummah and the erosion of Islamic values. Only by establishing the "Islamic state", which rules by sharia in its true sense, could the modern Jahiliyyah be avoided, by upholding Allah's absolute sovereignty over the world. Maududi believed that Islam was all-encompassing: "Everything in the universe is 'Muslim' for it obeys God by submission to His laws... The man who denies God is called Kafir (concealer) because he conceals by his disbelief what is inherent in his nature and embalmed in his own soul." Maududi also believed that Muslim society could not be Islamic without Sharia, and that Islam required the establishment of an Islamic state. This state should be a "theo-democracy", based on the principles of tawhid (unity of God), risala (prophethood) and khilafa (caliphate).
Although Maududi talked about Islamic revolution, by "revolution" he meant not the violence or populist policies of the Iranian Revolution, but the gradual changing of the hearts and minds of individuals, from the top of society downward, through an educational process or da'wah.

Muslim Brotherhood
Roughly contemporaneous with Maududi was the founding of the Muslim Brotherhood in Ismailia, Egypt, in 1928 by Hassan al-Banna. It was arguably the first, largest and most influential modern Islamic political/religious organization. Under the motto "the Qur'an is our constitution", it sought Islamic revival through preaching and also by providing basic community services including schools, mosques, and workshops. Like Maududi, al-Banna believed in the necessity of government rule based on Sharia law, implemented gradually and by persuasion, and of eliminating all imperialist influence in the Muslim world. Some elements of the Brotherhood, though perhaps against orders, did engage in violence against the government, and its founder al-Banna was assassinated in 1949 in retaliation for the assassination of Egypt's premier Mahmud Fahmi al-Nuqrashi three months earlier. The Brotherhood has suffered periodic repression in Egypt and has been banned several times, in 1948 and several years later following confrontations with Egyptian president Gamal Abdel Nasser, who jailed thousands of members for several years. Despite this, the Brotherhood has become one of the most influential movements in the Islamic world, particularly in the Arab world. For many years it was described as "semi-legal" and was the only opposition group in Egypt able to field candidates during elections. In the 2011–12 Egyptian parliamentary election, the political parties identified as "Islamist" (the Brotherhood's Freedom and Justice Party, the Salafi Al-Nour Party and the liberal Islamist Al-Wasat Party) won 75% of the total seats.
Mohamed Morsi, an Islamist of the Muslim Brotherhood, was the first democratically elected president of Egypt. He was deposed during the 2013 Egyptian coup d'état.

Sayyid Qutb
Maududi's political ideas influenced Sayyid Qutb, a leading member of the Muslim Brotherhood movement, one of the key philosophers of Islamism and among the most influential thinkers of Islamic universalism. Qutb believed things had reached such a state that the Muslim community had literally ceased to exist. It "has been extinct for a few centuries," having reverted to Godless ignorance (Jahiliyya). To eliminate jahiliyya, Qutb argued, Sharia, or Islamic law, must be established. Sharia law was not only accessible to humans and essential to the existence of Islam, but also all-encompassing, precluding "evil and corrupt" non-Islamic ideologies like communism, nationalism, or secular democracy. Qutb preached that Muslims must engage in a two-pronged attack of converting individuals through preaching Islam peacefully and also waging what he called militant jihad so as to forcibly eliminate the "power structures" of Jahiliyya—not only from the Islamic homeland but from the face of the earth. Qutb was both a member of the Brotherhood and enormously influential in the Muslim world at large. Qutb is considered by some (Fawaz A. Gerges) to be "the founding father and leading theoretician" of modern jihadists such as Osama bin Laden. However, the Muslim Brotherhood in Egypt and in Europe has not embraced his vision of an undemocratic Islamic state and armed jihad, something for which they have been denounced by radical Islamists.

Ascendance in international politics
Islamic fervor was understood as a weapon that the United States could use in its Cold War against the Soviet Union and its communist allies, because communism professes atheism. In a September 1957 White House meeting between U.S. President Eisenhower and senior U.S.
foreign policy officials, it was agreed to use the communists' lack of religion against them by setting up a secret task force to deliver weapons to Middle East despots, including the Saudi Arabian rulers. "We should do everything possible to stress the 'holy war' aspect" that has currency in the Middle East, President Eisenhower stated in agreement.

Six-Day War (1967)
The quick and decisive defeat of the Arab troops during the Six-Day War by Israeli troops constituted a pivotal event in the Arab Muslim world. The defeat, along with economic stagnation in the defeated countries, was blamed on the secular Arab nationalism of the ruling regimes. A steep and steady
results of his Taxonomy of Educational Objectives—one of the first modern codifications of the learning process. One of the first instructional theorists was Robert M. Gagné, who in 1965 published Conditions of Learning for the Florida State University's Department of Educational Research.

Definition
Instructional theory is different from learning theory. A learning theory describes how learning takes place, and an instructional theory prescribes how to better help people learn. Learning theories often inform instructional theory, and three general theoretical stances take part in this influence: behaviorism (learning as response acquisition), cognitivism (learning as knowledge acquisition), and constructivism (learning as knowledge construction). Instructional theory helps us create conditions that increase the probability of learning. Its goal is to understand the instructional system and to improve the process of instruction.

Overview
Instructional theories identify what instruction or teaching should be like. They outline strategies that an educator may adopt to achieve the learning objectives. Instructional theories are adapted based on the educational content and, more importantly, the learning style of the students. They are used as teaching guidelines/tools by teachers/trainers to facilitate learning. Instructional theories encompass different instructional methods, models and strategies. David Merrill's First Principles of Instruction discusses universal methods of instruction, situational methods and core ideas of the post-industrial paradigm of instruction.

Universal Methods of Instruction:
Task-Centered Principle - instruction should use a progression of increasingly complex whole tasks.
Demonstration Principle - instruction should guide learners through a skill and engage peer discussion/demonstration.
Application Principle - instruction should provide intrinsic or corrective feedback and engage peer-collaboration.
Activation Principle - instruction should build upon prior knowledge and encourage learners to acquire a structure for organizing new knowledge.
Integration Principle - instruction should engage learners in peer-critiques and synthesizing newly acquired knowledge.

Situational Methods:
Based on different approaches to instruction: role play, synectics, mastery learning, direct instruction, discussion, conflict resolution, peer learning, experiential learning, problem-based learning, simulation-based learning.
Based on different learning outcomes: knowledge, comprehension, application, analysis, synthesis, evaluation, affective development, integrated learning.

Core ideas for the Post-industrial Paradigm of Instruction:
Learner centered vs. teacher centered instruction – with respect to the focus, instruction can be based on the capability and style of the learner or the teacher.
Learning by doing vs. teacher presenting – students often learn more by doing rather than simply listening to instructions given by the teacher.
Attainment based vs. time based progress – instruction can be based either on mastery of the concept or on the time spent learning the concept.
Customized vs. standardized instruction – instruction can be different for different learners, or it can be given in general to the entire classroom.
Criterion referenced vs. norm referenced instruction – instruction related to different types of evaluations.
Collaborative vs. individual instruction – instruction can be for a team of students or for individual students.
Enjoyable vs. unpleasant instruction – instruction can create a pleasant learning experience or a negative one (often to enforce discipline); teachers must take care to ensure positive experiences.
Four tasks of instructional theory:
Knowledge selection
Knowledge sequence
Interaction management
Setting of the interaction environment

Critiques
Paulo Freire's work appears to critique instructional approaches that adhere to the knowledge acquisition stance, and his work Pedagogy of the Oppressed has had a broad influence over a generation of American educators with his critique of
Use
Infusoria are used by owners of aquaria to feed fish fry; because of their small size they can be used to rear newly hatched fry of many common aquarium species. Many home aquaria are unable to naturally supply sufficient infusoria for fish-rearing, so hobbyists may create and maintain their own supply cultures or use one of the many commercial cultures available. Infusoria can be cultured by soaking any decomposing vegetative matter, such as papaya skin, in a jar of aged (i.e., chlorine-free) water. The culture starts to proliferate in two to three days, depending on temperature and light received. The water first turns cloudy because of a rise in levels of bacteria, but clears up once the infusoria consume them. At this point, the infusoria are usually visible to the naked eye as small, white motile specks.

See also
Animalcules
typographer, falsely stated that these are not independent French letters on their own, but mere ligatures (like fi or fl), supported by the delegate team from Bull Publishing Company, who regularly did not print French with Œ/œ in their house style at the time. An anglophone delegate from Canada insisted on retaining Œ/œ but was rebuffed by the French delegate and the team from Bull. These code points were soon filled with × and ÷ at the suggestion of the German delegation. Support for French was further reduced when it was again falsely stated that the letter ÿ is "not French", resulting in the absence of the capital Ÿ. In fact, the letter ÿ is found in a number of French proper names, and the capital letter has been used in dictionaries and encyclopedias. These characters were added to ISO/IEC 8859-15:1999. BraSCII matches the original draft. In 1985, Commodore adopted ECMA-94 for its new AmigaOS operating system. The Seikosha MP-1300AI impact dot-matrix printer, used with the Amiga 1000, included this encoding. In 1990, the very first version of Unicode used the code points of ISO-8859-1 as the first 256 Unicode code points. In 1992, the IANA registered the character map ISO_8859-1:1987, more commonly known by its preferred MIME name of ISO-8859-1 (note the extra hyphen over ISO 8859-1), a superset of ISO 8859-1, for use on the Internet. This map assigns the C0 and C1 control codes to the otherwise unassigned code values, thus providing for 256 characters via every possible 8-bit value.

Code page layout

Similar character sets

ISO/IEC 8859-15
ISO/IEC 8859-15 was developed in 1999 as an update of ISO/IEC 8859-1. It provides some characters for French and Finnish text and the euro sign, which are missing from ISO/IEC 8859-1. This required the removal of some infrequently used characters from ISO/IEC 8859-1, including fraction symbols and letter-free diacritics: ¤, ¦, ¨, ´, ¸, ¼, ½, and ¾.
Ironically, three of the newly added characters (Œ, œ, and Ÿ) had already been present in DEC's 1983 Multinational Character Set (MCS), the predecessor to ISO/IEC 8859-1 (1987). Since their original code points were now reused for other purposes, the characters had to be reintroduced under different, less logical code points. ISO-IR-204, a more minor modification, had been registered in 1998, altering ISO-8859-1 by replacing the universal currency sign (¤) with the
euro sign (the same substitution made by ISO-8859-15).

Windows-1252
The popular Windows-1252 character set adds all the missing characters provided by ISO/IEC 8859-15, plus a number of typographic symbols, by replacing the rarely used C1 controls in the range 128 to 159 (hex 80 to 9F). It is very common to mislabel Windows-1252 text as being in ISO-8859-1. A common result was that all the quotes and apostrophes (produced by "smart quotes" in word-processing software) were replaced with question marks or boxes on non-Windows operating systems, making text difficult to read.
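The mislabeling problem is easy to reproduce: the same byte sequence decodes to typographic characters under Windows-1252 but to C1 control codes under ISO-8859-1. A minimal sketch (the byte values are the standard Windows-1252 assignments; Python's "latin-1" codec implements the IANA ISO-8859-1 map):

```python
# Bytes 0x80-0x9F are C1 control codes in ISO-8859-1 but printable
# typographic characters in Windows-1252 -- the root of the mojibake
# described above.
data = b"\x93smart quotes\x94 and an em dash \x97"

as_cp1252 = data.decode("cp1252")    # what the author meant
as_latin1 = data.decode("latin-1")   # what the mislabeled page claims

print(as_cp1252)        # “smart quotes” and an em dash —
print(repr(as_latin1))  # same text, but with invisible C1 controls
```

Decoded as Latin-1, the bytes 0x93, 0x94, and 0x97 become the control characters U+0093, U+0094, and U+0097 instead of curly quotes and an em dash, which most renderers display as boxes or nothing at all.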
Many web browsers and e-mail clients will interpret ISO-8859-1 control codes as Windows-1252 characters, and that behavior was later standardized in HTML5.

Mac Roman
The Apple Macintosh computer introduced a character encoding called Mac Roman in 1984. It was meant to be suitable for Western European desktop publishing. It is a superset of ASCII, and has most of the characters that are in ISO-8859-1 and all the extra characters from Windows-1252, but in a totally different arrangement. The few printable characters that are in ISO 8859-1, but not in this set, are often a source of trouble when editing text on websites using older
maps with most, if not all, bytes assigned. These sets have ISO-8859-n as their preferred MIME name or, in cases where a preferred MIME name is not specified, their canonical name. Many people use the terms ISO/IEC 8859-n and ISO-8859-n interchangeably. ISO/IEC 8859-11 did not get such a charset assigned, presumably because it was almost identical to TIS 620.

Characters
The ISO/IEC 8859 standard is designed for reliable information exchange, not typography; the standard omits symbols needed for high-quality typography, such as optional ligatures, curly quotation marks, dashes, etc. As a result, high-quality typesetting systems often use proprietary or idiosyncratic extensions on top of the ASCII and ISO/IEC 8859 standards, or use Unicode instead. An inexact rule based on practical experience states that if a character or symbol was not already part of a widely used data-processing character set and was also not usually provided on typewriter keyboards for a national language, it did not get in. Hence the directional double quotation marks « and » used for some European languages were included, but not the directional double quotation marks “ and ” used for English and some other languages. French did not get its œ and Œ ligatures because they could be typed as 'oe'. Likewise, Ÿ, needed for all-caps text, was dropped as well. Albeit under different code points, these three characters were later reintroduced with ISO/IEC 8859-15 in 1999, which also introduced the new euro sign character €. Likewise, Dutch did not get the ij and IJ letters, because Dutch speakers had become used to typing these as two letters instead. Romanian did not initially get its Ș/ș and Ț/ț (with comma) letters, because these letters were initially unified with Ş/ş and Ţ/ţ (with cedilla) by the Unicode Consortium, considering the shapes with comma beneath to be glyph variants of the shapes with cedilla. However, the letters with explicit comma below were later added to the Unicode standard and are also in ISO/IEC 8859-16.

Most of the ISO/IEC 8859 encodings provide diacritic marks required for various European languages using the Latin script. Others provide non-Latin alphabets: Greek, Cyrillic, Hebrew, Arabic and Thai. Most of the encodings contain only spacing characters, although the Thai, Hebrew, and Arabic ones do also contain combining characters. The standard makes no provision for the scripts of East Asian languages (CJK), as their ideographic writing systems require many thousands of code points. Although it uses Latin-based characters, Vietnamese does not fit into 96 positions (without using combining diacritics such as in Windows-1258) either. Each Japanese syllabic alphabet (hiragana or katakana, see Kana) would fit, as in JIS X 0201, but like several other alphabets of the world they are not encoded in the ISO/IEC 8859 system.

The parts of ISO/IEC 8859
ISO/IEC 8859 is divided into the following parts: Each part of ISO/IEC 8859 is designed to support languages that often borrow from each other, so the characters needed by each language are usually accommodated by a single part. However, there are some characters and language combinations that are not accommodated without transcriptions. Efforts were made to make conversions as smooth as possible. For example, German has all of its seven special characters at the same positions in all Latin variants (1–4, 9, 10, 13–16), and in many positions the characters only differ in the diacritics between the sets. In particular, variants 1–4 were designed jointly, and have the property that every encoded character appears either at a given position or not at all.

Table
At position 0xA0 there is always the non-breaking space, and 0xAD is mostly the soft hyphen, which only shows at line breaks. Other empty fields are either unassigned or the system used is not able to display them. There are newer additions, as in the ISO/IEC 8859-7:2003 and ISO/IEC 8859-8:1999 versions. LRM stands for left-to-right mark (U+200E) and RLM stands for right-to-left mark (U+200F).

Relationship to Unicode and the UCS
Since 1991, the Unicode Consortium has been working with ISO and IEC to develop the Unicode Standard and ISO/IEC 10646: the Universal Character Set (UCS) in tandem. Newer editions of ISO/IEC 8859 express characters in terms of their Unicode/UCS names and the U+nnnn notation, effectively making each part of ISO/IEC 8859 a Unicode/UCS character encoding scheme that maps a very small subset of the UCS to single 8-bit bytes. The first 256 characters in Unicode and the UCS are identical to those in ISO/IEC 8859-1 (Latin-1). Single-byte character sets including the parts of ISO/IEC 8859 and derivatives of them were favoured throughout the 1990s, having the advantages of being well-established and more easily implemented in software: the equation of one byte to one character is simple and adequate
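The identity between ISO/IEC 8859-1 and the first 256 Unicode code points can be checked directly; a short sketch using Python's "latin-1" codec (which implements the IANA ISO-8859-1 map, C0/C1 controls included):

```python
# Every ISO-8859-1 byte value maps to the Unicode code point with the
# same number, so decoding is a straight pass-through.
for b in range(256):
    ch = bytes([b]).decode("latin-1")
    assert ord(ch) == b

# Round-tripping arbitrary byte values is therefore lossless:
blob = bytes(range(256))
assert blob.decode("latin-1").encode("latin-1") == blob
```

This one-byte-to-one-code-point property is unique to Latin-1 among the 8859 parts; the other parts map their upper halves to scattered Unicode code points.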
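The identity between ISO/IEC 8859-1 and the first 256 Unicode code points can be demonstrated directly; a minimal Python sketch (codec names as accepted by CPython):

```python
# Under ISO/IEC 8859-1 ("latin-1"), every byte value 0x00-0xFF decodes to
# the Unicode code point of equal value, so decoding can never fail.
data = bytes(range(256))
text = data.decode("latin-1")
assert all(ord(ch) == b for ch, b in zip(text, data))

# Other parts assign the same byte differently: 0xE4 is "ä" in 8859-1
# but the Cyrillic letter "ф" in 8859-5.
assert b"\xe4".decode("latin-1") == "ä"
assert b"\xe4".decode("iso8859-5") == "ф"
```

This one-to-one byte-to-code-point property is why Latin-1 decoding is often used as a lossless way to round-trip arbitrary bytes through text.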
invisible heat; in 1681 the pioneering experimenter Edme Mariotte showed that glass, though transparent to sunlight, obstructed radiant heat. In 1800 the astronomer Sir William Herschel discovered that infrared radiation is a type of invisible radiation in the spectrum lower in energy than red light, by means of its effect on a thermometer. Slightly more than half of the energy from the Sun was eventually found, through Herschel's studies, to arrive on Earth in the form of infrared. The balance between absorbed and emitted infrared radiation has an important effect on Earth's climate. Infrared radiation is emitted or absorbed by molecules when changing rotational-vibrational movements. It excites vibrational modes in a molecule through a change in the dipole moment, making it a useful frequency range for study of these energy states for molecules of the proper symmetry. Infrared spectroscopy examines absorption and transmission of photons in the infrared range. Infrared radiation is used in industrial, scientific, military, commercial, and medical applications. Night-vision devices using active near-infrared illumination allow people or animals to be observed without the observer being detected. Infrared astronomy uses sensor-equipped telescopes to penetrate dusty regions of space such as molecular clouds, to detect objects such as planets, and to view highly red-shifted objects from the early days of the universe. Infrared thermal-imaging cameras are used to detect heat loss in insulated systems, to observe changing blood flow in the skin, and to detect the overheating of electrical components. Military and civilian applications include target acquisition, surveillance, night vision, homing, and tracking. Humans at normal body temperature radiate chiefly at wavelengths around 10 μm (micrometers). 
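The figure of roughly 10 μm for human thermal emission follows from Wien's displacement law, λ_max = b/T; a quick Python check (constant from CODATA, temperatures illustrative):

```python
# Wien's displacement law: the peak emission wavelength is inversely
# proportional to absolute temperature, λ_max = b / T.
b = 2.897771955e-3  # Wien's displacement constant, m·K

def peak_wavelength_um(temp_k: float) -> float:
    return b / temp_k * 1e6  # result in micrometers

print(round(peak_wavelength_um(310), 1))   # human body (~37 °C): 9.3 μm
print(round(peak_wavelength_um(5780), 2))  # solar surface: 0.5 μm (visible)
```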
Non-military uses include thermal efficiency analysis, environmental monitoring, industrial facility inspections, detection of grow-ops, remote temperature sensing, short-range wireless communication, spectroscopy, and weather forecasting. Definition and relationship to the electromagnetic spectrum There is no universally accepted definition of the range of infrared radiation. Typically, it is taken to extend from the nominal red edge of the visible spectrum at 700 nanometers (nm) to 1 millimeter (mm). This range of wavelengths corresponds to a frequency range of approximately 430 THz down to 300 GHz. Beyond infrared is the microwave portion of the electromagnetic spectrum. Increasingly, terahertz radiation is counted as part of the microwave band, not infrared, moving the band edge of infrared to 0.1 mm (3 THz). Natural infrared Sunlight, at an effective temperature of 5,780 kelvins (5,510 °C, 9,940 °F), is composed of near-thermal-spectrum radiation that is slightly more than half infrared. At zenith, sunlight provides an irradiance of just over 1 kilowatt per square meter at sea level. Of this energy, 527 watts is infrared radiation, 445 watts is visible light, and 32 watts is ultraviolet radiation. Nearly all the infrared radiation in sunlight is near infrared, shorter than 4 micrometers. On the surface of Earth, at far lower temperatures than the surface of the Sun, some thermal radiation consists of infrared in the mid-infrared region, much longer than in sunlight. However, black-body, or thermal, radiation is continuous: it gives off radiation at all wavelengths. Of these natural thermal radiation processes, only lightning and natural fires are hot enough to produce much visible energy, and fires produce far more infrared than visible-light energy. 
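The quoted band edges can be checked with f = c/λ; a small sketch:

```python
# The infrared band edges: 700 nm corresponds to roughly 430 THz
# and 1 mm to 300 GHz.
c = 299_792_458.0  # speed of light in vacuum, m/s

def frequency_thz(wavelength_m: float) -> float:
    return c / wavelength_m / 1e12

print(round(frequency_thz(700e-9)))      # ≈ 428 THz, the nominal red edge
print(round(frequency_thz(1e-3) * 1e3))  # ≈ 300 GHz, the microwave boundary
```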
Regions within the infrared In general, objects emit infrared radiation across a spectrum of wavelengths, but sometimes only a limited region of the spectrum is of interest because sensors usually collect radiation only within a specific bandwidth. Thermal infrared radiation also has a maximum emission wavelength, which is inversely proportional to the absolute temperature of the object, in accordance with Wien's displacement law. The infrared band is often subdivided into smaller sections, although how the IR spectrum is thereby divided varies between the different areas in which IR is employed. Visible limit Infrared radiation is generally considered to begin with wavelengths longer than those visible to the human eye. However, there is no hard wavelength limit to what is visible, as the eye's sensitivity decreases rapidly but smoothly for wavelengths exceeding about 700 nm. Therefore, wavelengths just longer than that can be seen if they are sufficiently bright, though they may still be classified as infrared according to usual definitions. Light from a near-IR laser may thus appear dim red and can present a hazard since it may actually be quite bright. Even IR at wavelengths up to 1,050 nm from pulsed lasers can be seen by humans under certain conditions. Commonly used sub-division scheme A commonly used sub-division scheme is: NIR and SWIR together are sometimes called "reflected infrared", whereas MWIR and LWIR are sometimes referred to as "thermal infrared". CIE division scheme The International Commission on Illumination (CIE) recommended the division of infrared radiation into the following three bands: ISO 20473 scheme ISO 20473 specifies the following scheme: Astronomy division scheme Astronomers typically divide the infrared spectrum as follows: These divisions are not precise and can vary depending on the publication. The three regions are used for observation of different temperature ranges, and hence different environments in space. 
The most common photometric system used in astronomy allocates capital letters to different spectral regions according to the filters used; I, J, H, and K cover the near-infrared wavelengths; L, M, N, and Q refer to the mid-infrared region. These letters are commonly understood in reference to atmospheric windows and appear, for instance, in the titles of many papers. Sensor response division scheme A third scheme divides up the band based on the response of various detectors: Near-infrared: from 0.7 to 1.0 μm (from the approximate end of the response of the human eye to that of silicon). Short-wave infrared: 1.0 to 3 μm (from the cut-off of silicon to that of the MWIR atmospheric window). InGaAs covers to about 1.8 μm; the less sensitive lead salts cover this region. Cryogenically cooled MCT detectors can cover the region of 1.0–2.5 μm. Mid-wave infrared: 3 to 5 μm (defined by the atmospheric window and covered by indium antimonide, InSb, and mercury cadmium telluride, HgCdTe, and partially by lead selenide, PbSe). Long-wave infrared: 8 to 12, or 7 to 14 μm (this is the atmospheric window covered by HgCdTe and microbolometers). Very-long wave infrared (VLWIR): 12 to about 30 μm, covered by doped silicon. Near-infrared is the region closest in wavelength to the radiation detectable by the human eye; mid- and far-infrared are progressively further from the visible spectrum. Other definitions follow different physical mechanisms (emission peaks vs. bands, water absorption) and the newest follow technical reasons (the common silicon detectors are sensitive to about 1,050 nm, while InGaAs's sensitivity starts around 950 nm and ends between 1,700 and 2,600 nm, depending on the specific configuration). No international standards for these specifications are currently available. The onset of infrared is defined (according to different standards) at various values typically between 700 nm and 800 nm, but the boundary between visible and infrared light is not precisely defined. 
The human eye is markedly less sensitive to light above 700 nm wavelength, so longer wavelengths make insignificant contributions to scenes illuminated by common light sources. However, particularly intense near-IR light (e.g., from IR lasers, IR LED sources, or from bright daylight with the visible light removed by colored gels) can be detected up to approximately 780 nm, and will be perceived as red light. Intense light sources providing wavelengths as long as 1,050 nm can be seen as a dull red glow, causing some difficulty in near-IR illumination of scenes in the dark (usually this practical problem is solved by indirect illumination). Leaves are particularly bright in the near IR, and if all visible light leaks from around an IR-filter are blocked, and the eye is given a moment to adjust to the extremely dim image coming through a visually opaque IR-passing photographic filter, it is possible to see the Wood effect that consists of IR-glowing foliage. Telecommunication bands in the infrared In optical communications, the part of the infrared spectrum that is used is divided into seven bands based on availability of light sources, transmitting/absorbing materials (fibers), and detectors: The C-band is the dominant band for long-distance telecommunication networks. The S and L bands are based on less well established technology, and are not as widely deployed. Heat Infrared radiation is popularly known as "heat radiation", but light and electromagnetic waves of any frequency will heat surfaces that absorb them. Infrared light from the Sun accounts for 49% of the heating of Earth, with the rest being caused by visible light that is absorbed then re-radiated at longer wavelengths. Visible light or ultraviolet-emitting lasers can char paper and incandescently hot objects emit visible radiation. 
Objects at room temperature will emit radiation concentrated mostly in the 8 to 25 μm band, but this is not distinct from the emission of visible light by incandescent objects and ultraviolet by even hotter objects (see black body and Wien's displacement law). Heat is energy in transit that flows due to a temperature difference. Unlike heat transmitted by thermal conduction or thermal convection, thermal radiation can propagate through a vacuum. Thermal radiation is characterized by a particular spectrum of many wavelengths that are associated with emission from an object, due to the vibration of its molecules at a given temperature. Thermal radiation can be emitted from objects at any wavelength, and at very high temperatures such radiation is associated with spectra far above the infrared, extending into visible, ultraviolet, and even X-ray regions (e.g. the solar corona). Thus, the popular association of infrared radiation with thermal radiation is only a coincidence based on typical (comparatively low) temperatures often found near the surface of planet Earth. The concept of emissivity is important in understanding the infrared emissions of objects. This is a property of a surface that describes how its thermal emissions deviate from those of an ideal black body. To further explain, two objects at the same physical temperature may not show the same infrared image if they have differing emissivity. For example, for any pre-set emissivity value, objects with higher emissivity will appear hotter, and those with a lower emissivity will appear cooler (assuming, as is often the case, that the surrounding environment is cooler than the objects being viewed). When an object has less than perfect emissivity, it obtains properties of reflectivity and/or transparency, and so the temperature of the surrounding environment is partially reflected by and/or transmitted through the object. 
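The effect of emissivity on apparent temperature can be sketched numerically with the Stefan–Boltzmann law, M = εσT⁴: a camera that assumes a black body (ε = 1) infers an apparent temperature of ε^(1/4)·T. The helper below is a hypothetical illustration that neglects the reflected background just described:

```python
# Stefan–Boltzmann: radiant exitance M = ε·σ·T⁴. A detector assuming a
# black body (ε = 1) therefore reads T_apparent = ε**0.25 · T.
def apparent_temp_k(true_temp_k: float, emissivity: float) -> float:
    # Simplification: reflected background radiation is ignored.
    return (emissivity ** 0.25) * true_temp_k

t = 300.0  # two objects at the same physical temperature, 300 K
print(round(apparent_temp_k(t, 0.95), 1))  # high-ε surface (e.g. matte paint): 296.2 K
print(round(apparent_temp_k(t, 0.10), 1))  # low-ε surface (e.g. polished metal): 168.7 K
```

The low-emissivity object reads far cooler even though both are at 300 K, which is why thermographic instruments require an emissivity setting.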
If the object were in a hotter environment, then a lower emissivity object at the same temperature would likely appear to be hotter than a more emissive one. For that reason, incorrect selection of emissivity and not accounting for environmental temperatures will give inaccurate results when using infrared cameras and pyrometers. Applications Night vision Infrared is used in night vision equipment when there is insufficient visible light to see. Night vision devices operate through a process involving the conversion of ambient light photons into electrons that are then amplified by a chemical and electrical process and then converted back into visible light. Infrared light sources can be used to augment the available ambient light for conversion by night vision devices, increasing in-the-dark visibility without actually using a visible light source. The use of infrared light and night vision devices should not be confused with thermal imaging, which creates images based on differences in surface temperature by detecting infrared radiation (heat) that emanates from objects and their surrounding environment. Thermography Infrared radiation can be used to remotely determine the temperature of objects (if the emissivity is known). This is termed thermography, or in the case of very hot objects in the NIR or visible it is termed pyrometry. Thermography (thermal imaging) is mainly used in military and industrial applications but the technology is reaching the public market in the form of infrared cameras on cars due to greatly reduced production costs. Thermographic cameras detect radiation in the infrared range of the electromagnetic spectrum (roughly 9,000–14,000 nanometers or 9–14 μm) and produce images of that radiation. Since infrared radiation is emitted by all objects based on their temperatures, according to the black-body radiation law, thermography makes it possible to "see" one's environment with or without visible illumination. 
The amount of radiation emitted by an object increases with temperature, therefore thermography allows one to see variations in temperature (hence the name). Hyperspectral imaging A hyperspectral image is a "picture" containing continuous spectrum through a wide spectral range at each pixel. Hyperspectral imaging is gaining importance in the field of applied spectroscopy particularly with NIR, SWIR, MWIR, and LWIR spectral regions. Typical applications include biological, mineralogical, defence, and industrial measurements. Thermal infrared hyperspectral imaging can be similarly performed using a thermographic camera, with the fundamental difference that each pixel contains a full LWIR spectrum. Consequently, chemical identification of the object can be performed without a need for an external light source such as the Sun or the Moon. Such cameras are typically applied for geological measurements, outdoor surveillance and UAV applications. Other imaging In infrared photography, infrared filters are used to capture the near-infrared spectrum. Digital
cameras often use infrared blockers. Cheaper digital cameras and camera phones have less effective filters and can "see" intense near-infrared, appearing as a bright purple-white color. 
This is especially pronounced when taking pictures of subjects near IR-bright areas (such as near a lamp), where the resulting infrared interference can wash out the image. There is also a technique called 'T-ray' imaging, which is imaging using far-infrared or terahertz radiation. Lack of bright sources can make terahertz photography more challenging than most other infrared imaging techniques. Recently T-ray imaging has been of considerable interest due to a number of new developments such as terahertz time-domain spectroscopy. Tracking Infrared tracking, also known as infrared homing, refers to a passive missile guidance system, which uses the emission from a target of electromagnetic radiation in the infrared part of the spectrum to track it. Missiles that use infrared seeking are often referred to as "heat-seekers" since infrared (IR) is just below the visible spectrum of light in frequency and is radiated strongly by hot bodies. Many objects such as people, vehicle engines, and aircraft generate and retain heat, and as such, are especially visible in the infrared wavelengths of light compared to objects in the background. Heating Infrared radiation can be used as a deliberate heating source. For example, it is used in infrared saunas to heat the occupants. It may also be used in other heating applications, such as to remove ice from the wings of aircraft (de-icing). Infrared radiation is used in cooking, known as broiling or grilling. One energy advantage is that the IR energy heats only opaque objects, such as food, rather than the air around them. Infrared heating is also becoming more popular in industrial manufacturing processes, e.g. curing of coatings, forming of plastics, annealing, plastic welding, and print drying. In these applications, infrared heaters replace convection ovens and contact heating. Efficiency is achieved by matching the wavelength of the infrared heater to the absorption characteristics of the material. 
Cooling A variety of technologies or proposed technologies take advantage of infrared emissions to cool buildings or other systems. The LWIR (8–15 μm) region is especially useful since some radiation at these wavelengths can escape into space through the atmosphere. Communications IR data transmission is also employed in short-range communication among computer peripherals and personal digital assistants. These devices usually conform to standards published by IrDA, the Infrared Data Association. Remote controls and IrDA devices use infrared light-emitting diodes (LEDs) to emit infrared radiation that may be concentrated by a lens into a beam that the user aims at the detector. The beam is modulated, i.e. switched on and off, according to a code which the receiver interprets. Usually very near-IR is used (below 800 nm) for practical reasons. This wavelength is efficiently detected by inexpensive silicon photodiodes, which the receiver uses to convert the detected radiation to an electric current. That electrical signal is passed through a high-pass filter which retains the rapid pulsations due to the IR transmitter but filters out slowly changing infrared radiation from ambient light. Infrared communications are useful for indoor use in areas of high population density. IR does not penetrate walls and so does not interfere with other devices in adjoining rooms. Infrared is the most common way for remote controls to command appliances. Infrared remote control protocols such as RC-5 and SIRC are used for this communication. Free space optical communication using infrared lasers can be a relatively inexpensive way to install a communications link in an urban area operating at up to 4 gigabit/s, compared to the cost of burying fiber optic cable. One hazard is radiation damage to the eye: since the eye cannot detect IR, the blinking or closing of the eyes that would help prevent or reduce damage may not happen. 
Infrared lasers are used to provide the light for optical fiber communications systems. Infrared light with a wavelength around 1,330 nm (least dispersion) or 1,550 nm (best transmission) is the best choice for standard silica fibers. IR data transmission of encoded audio versions of printed signs is being researched as an aid for visually impaired people through the RIAS (Remote Infrared Audible Signage) project. Transmitting IR data from one device to another is sometimes referred to as beaming. Spectroscopy Infrared vibrational spectroscopy (see also near-infrared spectroscopy) is a technique that can be used to identify molecules by analysis of their constituent bonds. Each chemical bond in a molecule vibrates at a frequency characteristic of that bond. A group of atoms in a molecule (e.g., CH2) may have multiple modes of oscillation caused by the stretching and bending motions of the group as a whole. If an oscillation leads to a change in dipole in the molecule then it will absorb a photon that has the same frequency. The vibrational frequencies of most molecules correspond to the frequencies of infrared light. Typically, the technique is used to study organic compounds using light radiation from the mid-infrared, 4,000–400 cm−1. A spectrum of all the frequencies of absorption in a sample is recorded. This can be used to gain information about the sample composition in terms of chemical groups present and also its purity (for example, a wet sample will show a broad O-H absorption around 3200 cm−1). The unit for expressing radiation in this application, cm−1, is the spectroscopic wavenumber. It is the frequency divided by the speed of light in vacuum. Thin film metrology In the semiconductor industry, infrared light can be used to characterize materials such as thin films and periodic trench structures. 
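The wavenumber unit used in the spectroscopy discussion above converts simply to wavelength; a brief sketch:

```python
# Spectroscopic wavenumber: ν̃ (cm⁻¹) = 1/λ with λ in cm, equivalently f/c.
def wavenumber_to_um(nu_cm: float) -> float:
    return 1e4 / nu_cm  # wavelength in micrometers

print(wavenumber_to_um(4000))            # 2.5 μm — one edge of the mid-IR range
print(wavenumber_to_um(400))             # 25.0 μm — the other edge
print(round(wavenumber_to_um(3200), 1))  # broad O-H band: ≈ 3.1 μm
```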
By measuring the reflectance of light from the surface of a semiconductor wafer, the index of refraction (n) and the extinction coefficient (k) can be determined via the Forouhi-Bloomer dispersion equations. The reflectance from the infrared light can also be used to determine the critical dimension, depth, and sidewall angle of high aspect ratio trench structures. Meteorology Weather satellites equipped with scanning radiometers produce thermal or infrared images, which can then enable a trained analyst to determine cloud heights and types, to calculate land and surface water temperatures, and to locate ocean surface features. The scanning is typically in the range 10.3–12.5 μm (IR4 and IR5 channels). Clouds with high and cold tops, such as cyclones or cumulonimbus clouds, are often displayed as red or black; lower, warmer clouds such as stratus or stratocumulus are displayed as blue or grey, with intermediate clouds shaded accordingly. Hot land surfaces are shown as dark-grey or black. One disadvantage of infrared imagery is that low cloud such as stratus or fog can have a temperature similar to the surrounding land or sea surface and does not show up. However, using the difference in brightness of the IR4 channel (10.3–11.5 μm) and the near-infrared channel (1.58–1.64 μm), low cloud can be distinguished, producing a fog satellite picture. The main advantage of infrared is that images can be produced at night, allowing a continuous sequence of weather to be studied. These infrared pictures can depict ocean eddies or vortices and map currents such as the Gulf Stream, which are valuable to the shipping industry. Fishermen and farmers are interested in knowing land and water temperatures to protect their crops against frost or increase their catch from the sea. Even El Niño phenomena can be spotted. Using color-digitized techniques, the gray-shaded thermal images can be converted to color for easier identification of desired information. 
The main water vapour channel at 6.40 to 7.08 μm can be imaged by some weather satellites and shows the amount of moisture in the atmosphere. Climatology In the field of climatology, atmospheric infrared radiation is monitored to detect trends in
and its edge length is 1/φ if its radius is 1. Only a few uniform polytopes have this property, including the four-dimensional 600-cell, the three-dimensional icosidodecahedron, and the two-dimensional decagon. (The icosidodecahedron is the equatorial cross section of the 600-cell, and the decagon is the equatorial cross section of the icosidodecahedron.) These radially golden polytopes can be constructed, with their radii, from golden triangles which meet at the center, each contributing two radii and an edge. Orthogonal projections The icosidodecahedron has four special orthogonal projections, centered on a vertex, an edge, a triangular face, and a pentagonal face. The last two correspond to the A2 and H2 Coxeter planes. Surface area and volume The surface area A and the volume V of the icosidodecahedron of edge length a are A = (5√3 + 3√(25 + 10√5))a² ≈ 29.306a² and V = ((45 + 17√5)/6)a³ ≈ 13.836a³. Spherical tiling The icosidodecahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane. Related polytopes The icosidodecahedron is a rectified dodecahedron and also a rectified icosahedron, existing as the full-edge truncation between these regular solids. The icosidodecahedron contains 12 pentagons of the dodecahedron and 20 triangles of the icosahedron: The icosidodecahedron exists in a sequence of symmetries of quasiregular polyhedra and tilings with vertex configurations (3.n)2, progressing from tilings of the sphere to the Euclidean plane and into the hyperbolic plane. With orbifold notation symmetry of *n32 all of these tilings are Wythoff constructions within a fundamental domain of symmetry, with generator points at the right-angle corner of the domain. Dissection The icosidodecahedron is related to the Johnson solid called a pentagonal orthobirotunda created by two pentagonal rotundae connected as mirror images. 
The icosidodecahedron can therefore be called a pentagonal gyrobirotunda with the gyration between top and bottom halves. Related polyhedra The truncated cube can be turned into an icosidodecahedron by dividing the octagons into two pentagons and two triangles. It has pyritohedral symmetry. Eight uniform star polyhedra share the same vertex arrangement. Of these, two also share
each of the 30 vertices. The icosidodecahedron has 6 central decagons. Projected onto a sphere, they define 6 great circles. Buckminster Fuller used these 6 great circles, along with 15 and 10 others in two other polyhedra, to define his 31 great circles of the spherical icosahedron. Cartesian coordinates Convenient Cartesian coordinates for the vertices of an icosidodecahedron with unit edges are given by the even permutations of: (0, 0, ±φ) and (±1/2, ±φ/2, ±φ²/2), where φ is the golden ratio, (1 + √5)/2. The long radius (center to vertex) of the icosidodecahedron is in the golden ratio to its edge length; thus its radius is φ if its edge length is 1, and its edge length is 1/φ if its radius is 1. 
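The Cartesian coordinates given above can be verified numerically; a Python sketch that builds the 30 vertices from the even permutations and checks the radius-to-edge ratio of φ:

```python
from itertools import product
from math import dist, isclose, sqrt

phi = (1 + sqrt(5)) / 2  # golden ratio

def even_perms(v):
    # The three cyclic (even) permutations of a 3-tuple.
    x, y, z = v
    return [(x, y, z), (y, z, x), (z, x, y)]

verts = set()
for base in [(0.0, 0.0, phi), (0.5, phi / 2, phi ** 2 / 2)]:
    for p in even_perms(base):
        for signs in product((1.0, -1.0), repeat=3):
            verts.add(tuple(s * c for s, c in zip(signs, p)))

verts = sorted(verts)
radius = sqrt(sum(c * c for c in verts[0]))
edge = min(dist(a, b) for i, a in enumerate(verts) for b in verts[i + 1:])
assert len(verts) == 30
assert isclose(radius, phi) and isclose(edge, 1.0)  # long radius / edge = φ
```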
or extended time formats for greater brevity but decreased precision; the resulting reduced precision time formats are: T[hh][mm] in basic format or [hh]:[mm] in extended format, when seconds are omitted. T[hh], when both seconds and minutes are omitted. As of ISO 8601-1:2019 midnight may only be referred to as "00:00", corresponding to the beginning of a calendar day. Earlier versions of the standard allowed "24:00" corresponding to the end of a day, but this is explicitly disallowed by the 2019 revision. A decimal fraction may be added to the lowest order time element present, in any of these representations. A decimal mark, either a comma or a dot (following ISO 80000-1 according to ISO 8601-1:2019, which does not stipulate a preference except within International Standards, but with a preference for a comma according to ISO 8601:2004) is used as a separator between the time element and its fraction. To denote "14 hours, 30 and one half minutes", the seconds figure is omitted and the time is represented as "14:30,5", "T1430,5", "14:30.5", or "T1430.5". There is no limit on the number of decimal places for the decimal fraction. However, the number of decimal places needs to be agreed upon by the communicating parties. For example, in Microsoft SQL Server, the precision of a decimal fraction is 3 for a DATETIME, i.e., "yyyy-mm-ddThh:mm:ss[.mmm]". Time zone designators Time zones in ISO 8601 are represented as local time (with the location unspecified), as UTC, or as an offset from UTC. Local time (unqualified) If no UTC relation information is given with a time representation, the time is assumed to be in local time. While it may be safe to assume local time when communicating in the same time zone, it is ambiguous when used in communicating across different time zones. Even within a single geographic time zone, some local times will be ambiguous if the region observes daylight saving time. 
It is usually preferable to indicate a time zone (zone designator) using the standard's notation. Coordinated Universal Time (UTC) If the time is in UTC, add a Z directly after the time without a space. Z is the zone designator for the zero UTC offset. "09:30 UTC" is therefore represented as "09:30Z" or "T0930Z". "14:45:15 UTC" would be "14:45:15Z" or "T144515Z". The Z suffix in the ISO 8601 time representation is sometimes referred to as "Zulu time" because the same letter is used to designate the Zulu time zone. However the ACP 121 standard that defines the list of military time zones makes no mention of UTC and derives the "Zulu time" from Greenwich Mean Time, which was formerly used as the international civil time standard. GMT is no longer precisely defined by the scientific community and can refer to either UTC or UT1 depending on context. Time offsets from UTC The UTC offset is appended to the time in the same way that 'Z' was above, in the form ±[hh]:[mm], ±[hh][mm], or ±[hh]. Negative UTC offsets describe a time zone west of UTC±00:00, where the civil time is behind (earlier than) UTC, so the zone designator will look like "−03:00", "−0300", or "−03". Positive UTC offsets describe a time zone at or east of UTC±00:00, where the civil time is the same as or ahead of (later than) UTC, so the zone designator will look like "+02:00", "+0200", or "+02". Examples "−05:00" for New York on standard time (UTC−05:00) "−04:00" for New York on daylight saving time (UTC−04:00) "+00:00" (but not "−00:00") for London on standard time (UTC±00:00) "+02:00" for Cairo (UTC+02:00) "+05:30" for Mumbai (UTC+05:30) "+14:00" for Kiribati (UTC+14:00) See List of UTC time offsets for other UTC offsets. To represent a negative offset, ISO 8601 specifies using a minus sign. If the interchange character set is limited and does not have a minus sign character, then the hyphen-minus should be used. 
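These offset designators map directly onto fixed-offset time zone objects; a small illustration in Python (the date is arbitrary and chosen here for the example; note that Python writes the zero offset as "+00:00" rather than "Z", both of which are valid ISO 8601):

```python
from datetime import datetime, timedelta, timezone

# Fixed offsets from the examples above
mumbai = timezone(timedelta(hours=5, minutes=30))   # +05:30
new_york_std = timezone(timedelta(hours=-5))        # -05:00

t = datetime(2024, 3, 1, 9, 30, tzinfo=timezone.utc)
print(t.isoformat())                           # 2024-03-01T09:30:00+00:00
print(t.astimezone(mumbai).isoformat())        # 2024-03-01T15:00:00+05:30
print(t.astimezone(new_york_std).isoformat())  # 2024-03-01T04:30:00-05:00
```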
ASCII does not have a minus sign, so its hyphen-minus character (code 45 decimal or 2D hexadecimal) would be used. If the character set has a minus sign, then that character should be used. Unicode has a minus sign at code point U+2212 (2212 hexadecimal); the HTML character entity invocation is &minus;. The following times all refer to the same moment: "18:30Z", "22:30+04", "1130−0700", and "15:00−03:30". Nautical time zone letters are not used with the exception of Z. To calculate UTC time one has to subtract the offset from the local time, e.g. for "15:00−03:30" do 15:00 − (−03:30) to get 18:30 UTC. An offset of zero, in addition to having the special representation "Z", can also be stated numerically as "+00:00", "+0000", or "+00". However, it is not permitted to state it numerically with a negative sign, as "−00:00", "−0000", or "−00". The section dictating sign usage states that a plus sign must be used for a positive or zero value, and a minus sign for a negative value. Contrary to this rule, RFC 3339, which is otherwise a profile of ISO 8601, permits the use of "-00", with the same denotation as "+00" but a differing connotation. Combined date and time representations A single point in time can be represented by concatenating a complete date expression, the letter "T" as a delimiter, and a valid time expression. For example, "2007-04-05T14:30". In ISO 8601:2004 it was permitted to omit the "T" character by mutual agreement, as in "200704051430", but this provision was removed in ISO 8601-1:2019. Separating date and time parts with other characters such as space is not allowed in ISO 8601, but allowed in its profile RFC 3339. If a time zone designator is required, it follows the combined date and time. For example, "2007-04-05T14:30Z" or "2007-04-05T14:30+02:00". Either basic or extended formats may be used, but both date and time must use the same format. The date expression may be calendar, week, or ordinal, and must use a complete representation. The time may be represented using a specified reduced precision format. 
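The "same moment" equivalence of differently offset representations can be checked mechanically (a sketch; the date is arbitrary, since Python's parser needs a full date-time, and "Z" is written as "+00:00" because datetime.fromisoformat only accepts the "Z" suffix from Python 3.11 onward):

```python
from datetime import datetime

# The text's four equivalent times, given a full (arbitrary) date
same_moment = [
    "2024-03-01T18:30+00:00",   # i.e. 18:30Z
    "2024-03-01T22:30+04:00",
    "2024-03-01T11:30-07:00",
    "2024-03-01T15:00-03:30",
]
stamps = {datetime.fromisoformat(s).timestamp() for s in same_moment}
print(len(stamps))  # 1 -- all four denote the same instant
```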
Durations Durations define the amount of intervening time in a time interval and are represented by the format P[n]Y[n]M[n]DT[n]H[n]M[n]S or P[n]W as shown on the aside. In these representations, the [n] is replaced by the value for each of the date and time elements that follow the [n]. Leading zeros are not required, but the maximum number of digits for each element should be agreed to by the communicating parties. The capital letters P, Y, M, W, D, T, H, M, and S are designators for each of the date and time elements and are not replaced. P is the duration designator (for period) placed at the start of the duration representation. Y is the year designator that follows the value for the number of years. M is the month designator that follows the value for the number of months. W is the week designator that follows the value for the number of weeks. D is the day designator that follows the value for the number of days. T is the time designator that precedes the time components of the representation. H is the hour designator that follows the value for the number of hours. M is the minute designator that follows the value for the number of minutes. S is the second designator that follows the value for the number of seconds. For example, "P3Y6M4DT12H30M5S" represents a duration of "three years, six months, four days, twelve hours, thirty minutes, and five seconds". Date and time elements including their designator may be omitted if their value is zero, and lower-order elements may also be omitted for reduced precision. For example, "P23DT23H" and "P4Y" are both acceptable duration representations. However, at least one element must be present, thus "P" is not a valid representation for a duration of 0 seconds. "PT0S" or "P0D", however, are both valid and represent the same duration. To resolve ambiguity, "P1M" is a one-month duration and "PT1M" is a one-minute duration (note the time designator, T, that precedes the time value). 
The smallest value used may also have a decimal fraction, as in "P0.5Y" to indicate half a year. This decimal fraction may be specified with either a comma or a full stop, as in "P0,5Y" or "P0.5Y". The standard does not prohibit date and time values in a duration representation from exceeding their "carry over points" except as noted below. Thus, "PT36H" could be used as well as "P1DT12H" for representing the same duration. But keep in mind that "PT36H" is not the same as "P1DT12H" when switching from or to daylight saving time. Alternatively, a format for duration based on combined date and time representations may be used by agreement between the communicating parties, either in the basic format PYYYYMMDDThhmmss or in the extended format P[YYYY]-[MM]-[DD]T[hh]:[mm]:[ss]. For example, the first duration shown above would be "P0003-06-04T12:30:05". However, individual date and time values cannot exceed their moduli (e.g. a value of 13 for the month or 25 for the hour would not be permissible). Although the standard describes a duration as part of time intervals, which are discussed in the next section, the duration format (or a subset thereof) is widely used independent of time intervals, as with the Java 8 Duration class. Time intervals A time interval is the intervening time between two time points. The amount of intervening time is expressed by a duration (as described in the previous section). The two time points (start and end) are expressed by either a combined date and time representation or just a date representation. There are four ways to express a time interval: Start and end, such as "2007-03-01T13:00:00Z/2008-05-11T15:30:00Z" Start and duration, such as "2007-03-01T13:00:00Z/P1Y2M10DT2H30M" Duration and end, such as "P1Y2M10DT2H30M/2008-05-11T15:30:00Z" Duration only, such as "P1Y2M10DT2H30M", with additional context information Of these, the first three require two values separated by an interval designator which is usually a solidus (more commonly referred to as a forward slash "/"). 
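The four forms can be told apart mechanically, since a duration component always begins with "P" (a rough sketch, not from the source; no validation of the date-time parts is attempted):

```python
# Classify a time-interval expression into one of the four forms above
def interval_form(s):
    parts = s.split("/")
    if len(parts) == 1:
        return "duration only" if s.startswith("P") else "invalid"
    a, b = parts
    if a.startswith("P"):
        return "duration and end"
    if b.startswith("P"):
        return "start and duration"
    return "start and end"

print(interval_form("2007-03-01T13:00:00Z/2008-05-11T15:30:00Z"))  # start and end
print(interval_form("2007-03-01T13:00:00Z/P1Y2M10DT2H30M"))        # start and duration
print(interval_form("P1Y2M10DT2H30M/2008-05-11T15:30:00Z"))        # duration and end
print(interval_form("P1Y2M10DT2H30M"))                             # duration only
```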
Section 3.2.6 of ISO 8601-1:2019 notes that "A solidus may be replaced by a double hyphen ["--"] by mutual agreement of the communicating partners", and previous versions used notations like "2000--2002". Use of a double hyphen instead of a solidus allows inclusion in computer filenames; in common operating systems, a solidus is a reserved character and is not allowed in a filename. For <start>/<end> expressions, if any elements are missing from the end value, they are assumed to be the same as for the start value, including the time zone. This feature of the standard allows for concise representations of time intervals. For example, the date of a two-hour meeting including the start and finish times could be simply shown as "2007-12-14T13:30/15:30", where "/15:30" implies "/2007-12-14T15:30" (the same date as the start), or the beginning and end dates of a monthly billing period as "2008-02-15/03-14", where "/03-14" implies "/2008-03-14" (the same year as the start). If greater precision is desirable to represent the time interval, then more time elements can be added to the representation. An interval denoted "2007-11-13/15" can start at any time on 2007-11-13 and end at any time on 2007-11-15, whereas "2007-11-13T09:00/15T17:00" includes the start and end times. To explicitly include all of the start and end dates, the interval would be represented as "2007-11-13T00:00/16T00:00". Repeating intervals Repeating intervals are specified in clause "4.5 Recurring time interval". They are formed by adding "R[n]/" to the beginning of an interval expression, where R is used as the letter itself and [n] is replaced by the number of repetitions. Leaving out the value for [n] or specifying a value of -1 means an unbounded number of repetitions. A value of 0 for [n] means the interval is not repeated. If the interval specifies the start (forms 1 and 2 above), then this is the start of the repeating interval. If the interval specifies the end but not the start (form 3 above), then this is the end of the repeating interval. 
For example, to repeat the interval of "P1Y2M10DT2H30M" five times starting at "2008-03-01T13:00:00Z", use "R5/2008-03-01T13:00:00Z/P1Y2M10DT2H30M". Truncated representations ISO 8601:2000 allowed truncation (by agreement), where leading components of a date or time are omitted. Notably, this allowed two-digit years to be used and the ambiguous formats YY-MM-DD and YYMMDD. This provision was removed in ISO 8601:2004. Only the
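Going back to repeating intervals, the R[n]/ repetition prefix described above can be split off with a short helper (a sketch, not from the source; the sample interval strings are illustrative):

```python
import re

def repeat_count(expr):
    """Return the repetition count of an R[n]/ interval; None means unbounded."""
    m = re.match(r"R(-1|\d*)/", expr)
    if not m:
        raise ValueError("not a repeating interval")
    n = m.group(1)
    return None if n in ("", "-1") else int(n)

print(repeat_count("R5/2008-03-01T13:00:00Z/P1Y2M10DT2H30M"))  # 5
print(repeat_count("R/2008-03-01T13:00:00Z/P1Y2M10DT2H30M"))   # None (unbounded)
print(repeat_count("R0/P1Y2M10DT2H30M/2008-05-11T15:30:00Z"))  # 0 (not repeated)
```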
|
was first published in 1988, with updates in 1991, 2000, 2004, and 2019. The standard aims to provide a well-defined, unambiguous method of representing calendar dates and times in worldwide communications, especially to avoid misinterpreting numeric dates and times when such data is transferred between countries with different conventions for writing numeric dates and times. In general, ISO 8601 applies to these representations and formats: dates, in the Gregorian calendar (including the proleptic Gregorian calendar); times, based on the 24-hour timekeeping system, with optional UTC offset; time intervals; and combinations thereof. The standard does not assign specific meaning to any element of the dates/times represented: the meaning of any element depends on the context of its use. Dates and times represented cannot use words that do not have a specified numerical meaning within the standard (thus excluding names of years in the Chinese calendar), or that do not use computer characters (excluding images or sounds). In representations that adhere to the ISO 8601 interchange standard, dates and times are arranged such that the greatest temporal term (typically a year) is placed at the left and each successively lesser term is placed to the right of the previous term. Representations must be written in a combination of Arabic numerals and the specific computer characters (such as "-", ":", "T", "W", "Z") that are assigned specific meanings within the standard; that is, such commonplace descriptors of dates (or parts of dates) as "January", "Thursday", or "New Year's Day" are not allowed in interchange representations within the standard. History The first edition of the ISO 8601 standard was published as ISO 8601:1988 in 1988. It unified and replaced a number of older ISO standards on various aspects of date and time notation: ISO 2014, ISO 2015, ISO 2711, ISO 3307, and ISO 4031. 
It was superseded by a second edition, ISO 8601:2000, in 2000, and by a third edition, ISO 8601:2004, published on 1 December 2004, then withdrawn and revised by ISO 8601-1:2019 and ISO 8601-2:2019 on 25 February 2019. ISO 8601 was prepared by, and is under the direct responsibility of, ISO Technical Committee TC 154. ISO 2014, though superseded, is the standard that originally introduced the all-numeric date notation in most-to-least-significant order [YYYY]-[MM]-[DD]. The ISO week numbering system was introduced in ISO 2015, and the identification of days by ordinal dates was originally defined in ISO 2711. Issued in February 2019, the fourth revision of the standard ISO 8601-1:2019 represents slightly updated contents of the previous ISO 8601:2004 standard, whereas the new ISO 8601-2:2019 defines various extensions such as uncertainties or parts of the Extended Date/Time Format (EDTF). General principles Date and time values are ordered from the largest to smallest unit of time: year, month (or week), day, hour, minute, second, and fraction of second. The lexicographical order of the representation thus corresponds to chronological order, except for date representations involving negative years or time offsets. This allows dates to be naturally sorted by, for example, file systems. Each date and time value has a fixed number of digits that must be padded with leading zeros. Representations can be done in one of two formats: a basic format with a minimal number of separators or an extended format with separators added to enhance human readability. The standard notes that "The basic format should be avoided in plain text." The separator used between date values (year, month, week, and day) is the hyphen, while the colon is used as the separator between time values (hours, minutes, and seconds). For example, the 6th day of the 1st month of the year 2009 may be written as "2009-01-06" in the extended format or simply as "20090106" in the basic format without ambiguity. 
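The basic/extended distinction and the sortability property can be illustrated in a few lines (a Python sketch; the strftime patterns shown are one way to produce the two formats):

```python
from datetime import date

d = date(2009, 1, 6)
extended = d.strftime("%Y-%m-%d")  # hyphen separators
basic = d.strftime("%Y%m%d")       # no separators
print(extended)  # 2009-01-06
print(basic)     # 20090106

# Fixed digit counts with leading zeros make string order match
# chronological order
assert date(2009, 1, 6).isoformat() < date(2009, 1, 7).isoformat()
```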
For reduced precision, any number of values may be dropped from any of the date and time representations, but in the order from the least to the most significant. For example, "2004-05" is a valid ISO 8601 date, which indicates May (the fifth month) 2004. This format will never represent the 5th day of an unspecified month in 2004, nor will it represent a time-span extending from 2004 into 2005. If necessary for a particular application, the standard supports the addition of a decimal fraction to the smallest time value in the representation. Dates The standard uses the Gregorian calendar, which "serves as an international standard for civil use." ISO 8601:2004 fixes a reference calendar date to the Gregorian calendar of 20 May 1875 as the date the Convention du Mètre (Metre Convention) was signed in Paris (the explicit reference date was removed in ISO 8601-1:2019). However, ISO calendar dates before the convention are still compatible with the Gregorian calendar all the way back to the official introduction of the Gregorian calendar on 15 October 1582. Earlier dates, in the proleptic Gregorian calendar, may be used by mutual agreement of the partners exchanging information. The standard states that every date must be consecutive, so usage of the Julian calendar would be contrary to the standard (because at the switchover date, the dates would not be consecutive). Years ISO 8601 prescribes, as a minimum, a four-digit year [YYYY] to avoid the year 2000 problem. It therefore represents years from 0000 to 9999, year 0000 being equal to 1 BC and all others AD. However, years before 1583 are not automatically allowed by the standard. Instead "values in the range [0000] through [1582] shall only be used by mutual agreement of the partners in information interchange." To represent years before 0000 or after 9999, the standard also permits the expansion of the year representation but only by prior agreement between the sender and the receiver. 
An expanded year representation [±YYYYY] must have an agreed-upon number of extra year digits beyond the four-digit minimum, and it must be prefixed with a + or − sign instead of the more common AD/BC (or CE/BCE) notation; by convention 1 BC is labelled +0000, 2 BC is labelled −0001, and so on. Calendar dates Calendar date representations are in the form shown in the adjacent box. [YYYY] indicates a four-digit year, 0000 through 9999. [MM] indicates a two-digit month of the year, 01 through 12. [DD] indicates a two-digit day of that month, 01 through 31. For example, "5 April 1981" may be represented as either "1981-04-05" in the extended format or "19810405" in the basic format. The standard also allows for calendar dates to be written with reduced precision. For example, one may write "1981-04" to mean "1981 April". The 2000 version allowed writing "--04-05" to mean "April 5" but the 2004 version does not allow omitting the year when a month is present. One may simply write "1981" to refer to that year, "198" to refer to the decade from 1980 to 1989 inclusive, or "19"
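The labelling convention for years before AD 1 amounts to astronomical year numbering, which is a one-line conversion (a sketch, not from the source; the output is formatted with a sign and four digits to match the "+0000" example, though the digit count is a matter of agreement between the parties):

```python
def bc_to_iso(year_bc):
    """Convert a 'year n BC' label to its ISO 8601 / astronomical year number."""
    return 1 - year_bc

print(f"{bc_to_iso(1):+05d}")   # +0000  (1 BC)
print(f"{bc_to_iso(2):+05d}")   # -0001  (2 BC)
print(f"{bc_to_iso(46):+05d}")  # -0045  (46 BC)
```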
|
in videogame saga Killzone
Other uses in arts, entertainment, and media
Isa (album), a 2004 album by Enslaved
Isa (comics), of graphic novel series Les Passagers du vent
Isa (film), a 2014 television film
Isa, a dance in music of the Canary Islands
Computing
Industry Standard Architecture, a PC computer bus standard
Instruction set architecture, the specification for data types, registers, instructions, etc. for a given computer hardware architecture
Is-a, a relationship between abstractions in programming
Internet Security and Acceleration, a network router, firewall, antivirus program, VPN server and web cache from Microsoft Corporation
Education
Indian Squash Academy, Chennai, India
Independent Schools Association (Australia), mainly for sports
Independent Schools Association (UK)
Iniciativa de Salud de las Americas, Spanish name for Health Initiative of the Americas
Institute for the Study of the Americas, University of London, England
Instituto Superior de Agronomia, an agronomy faculty in Lisbon, Portugal
Instituto Superior de Arte, an art school in Havana, Cuba
International School Amsterdam, the Netherlands
International School Augsburg
International School of Aleppo, Syria
International School of Athens, Greece
International School of the Americas, San Antonio, Texas
International Studies Association
Islamic Saudi Academy
Finance
Income share agreement, US, a borrowing agreement sometimes used for tuition loans
Individual savings account, UK
International Standards on Auditing
Israel Securities Authority
Government
Independent Safeguarding Authority, former UK child protection agency
Intelligence Services Act 1994, UK
Intelligence Support Activity, of the US Army
Internal Security Agency, secret service and counter-espionage agency in Poland
International Seabed Authority, for mineral-related activities
International Searching Authority, for patents
International Solar Alliance
Interoperability Solutions for European Public Administrations, EU
Invention Secrecy Act
Iranian Space Agency
Israel Security Agency or Shin Bet, Israel
Israel Space Agency
Israel State Archives, the national archive of Israel
Italian Space Agency
Organizations and brands
Industry Super Australia,
|
used for tuition loans
Individual savings account, UK
International Standards on Auditing
Israel Securities Authority
Government
Independent Safeguarding Authority, former UK child protection agency
Intelligence Services Act 1994, UK
Intelligence Support Activity, of the US Army
Internal Security Agency, secret service and counter-espionage agency in Poland
International Seabed Authority, for mineral-related activities
International Searching Authority, for patents
International Solar Alliance
Interoperability Solutions for European Public Administrations, EU
Invention Secrecy Act
Iranian Space Agency
Israel Security Agency or Shin Bet, Israel
Israel Space Agency
Israel State Archives, the national archive of Israel
Italian Space Agency
Organizations and brands
Industry Super Australia, the peak body for Industry superannuation funds in Australia
Information Systems Associates FZE, an aviation software house
Innovative Software Applications, a company taken over by Sorcim in 1982
International Seabed Authority
International Socialist Alternative, an international association of Trotskyist political parties
International Society of Arboriculture, a non-profit botanical organization
International Society of Automation, a non-profit professional organization for engineers, technicians, and students
International Sociological Association
International Soling Association
International Surfing Association, the world governing authority for the sport of surfing
Irish Sailing Association, the governing body for sailing in Ireland
International Slackline Association
Science
|
signature in 2001/02 of 15-year contracts with seven organizations that had applied for specific seabed areas in which they were authorized to explore for polymetallic nodules. In 2006, a German entity was added to the list. These contractors are: Yuzhmorgeologya (Russian Federation); Interoceanmetal Joint Organization (IOM) (Bulgaria, Cuba, Slovakia, Czech Republic, Poland and Russian Federation); the Government of the Republic of Korea; China Ocean Minerals Research and Development Association (COMRA) (China); Deep Ocean Resources Development Company (DORD) (Japan); Institut français de recherche pour l’exploitation de la mer (IFREMER) (France); the Government of India; and the Federal Institute for Geosciences and Natural Resources of Germany. All but one of the current areas of exploration are in the Clarion-Clipperton Zone, in the Equatorial North Pacific Ocean south and southeast of Hawaii. The remaining area, being explored by India, is in the Central Indian Basin of the Indian Ocean. Each area is limited to 150,000 square kilometres, of which half is to be relinquished to the Authority after eight years. Each contractor is required to report once a year on its activities in its assigned area. So far, none of them has indicated any serious move to begin commercial exploitation. In 2008, the Authority received two new applications for authorization to explore for polymetallic nodules, coming for the first time from private firms in developing island nations of the Pacific. Sponsored by their respective governments, they were submitted by Nauru Ocean Resources Inc. and Tonga Offshore Mining Limited. A 15-year exploration contract was granted by the Authority to Nauru Ocean Resources Inc. on 22 July 2011 and to Tonga Offshore Mining Limited on 12 January 2012. 
Fifteen-year exploration contracts for polymetallic nodules were also granted to G-TECH Sea Mineral Resources NV (Belgium) on 14 January 2013; Marawa Research and Exploration Ltd (Kiribati) on 19 January 2015; Ocean Mineral Singapore Pte Ltd on 22 January 2015; UK Seabed Resources Ltd (two contracts, on 8 February 2013 and 29 March 2016 respectively); Cook Islands Investment Corporation on 15 July 2016; and, more recently, China Minmetals Corporation on 12 May 2017. The Authority has signed seven contracts for the exploration for polymetallic sulphides in the South West Indian Ridge, Central Indian Ridge and Mid-Atlantic Ridge with China Ocean Mineral Resources Research and Development Association (18 November 2011); the Government of Russia (29 October 2012); the Government of the Republic of Korea (24 June 2014); Institut français de recherche pour l’exploitation de la mer (Ifremer, France, 18 November 2014); the Federal Institute for Geosciences and Natural Resources of Germany (6 May 2015); the Government of India (26 September 2016); and the Government of the Republic of Poland (12 February 2018). The Authority also holds five contracts for the exploration of cobalt-rich ferromanganese crusts in the Western Pacific Ocean with China Ocean Mineral Resources Research and Development Association (29 April 2014); Japan Oil Gas and Metals National Corporation (JOGMEC, 27 January 2014); the Ministry of Natural Resources and Environment of the Russian Federation (10 March 2015); Companhia De Pesquisa de Recursos Minerais (9 November 2015); and the Government of the Republic of Korea (27 March 2018). Activities The Authority's main legislative accomplishment to date has been the adoption, in the year 2000, of regulations governing exploration for polymetallic nodules. These resources, also called manganese nodules, contain varying amounts of manganese, cobalt, copper and nickel. 
They occur as potato-sized lumps scattered about on the surface of the ocean floor, mainly in the central Pacific Ocean but with some deposits in the Indian Ocean. The Council of the Authority began work, in August 2002, on another set of regulations, covering polymetallic sulphides and cobalt-rich ferromanganese crusts, which are rich sources of such minerals as copper, iron, zinc, silver and gold, as well as cobalt. The sulphides are found around volcanic hot springs, especially in the western Pacific Ocean, while the crusts occur on oceanic ridges and elsewhere at several locations around the world. The Council decided in 2006 to prepare separate sets of regulations for sulphides and for crusts, with priority given to sulphides. It devoted most of its sessions in 2007 and 2008 to this task, but several issues remained unresolved. Chief among these were the definition and configuration of the area to be allocated to contractors for exploration, the fees to be paid to the Authority and the question of how to deal with any overlapping claims that might arise. Meanwhile, the Legal and Technical Commission reported progress on ferromanganese crusts. In addition to its legislative work, the Authority organizes annual workshops on various aspects of seabed exploration, with emphasis on measures to protect the marine environment from any harmful consequences. It disseminates the results of these meetings through publications. Studies over several years covering the key mineral area of the Central Pacific resulted in a technical study on biodiversity, species ranges and gene flow in the abyssal Pacific nodule province, with emphasis on predicting and managing the impacts of deep seabed mining. A workshop at Manoa, Hawaii, in October 2007 produced a rationale and recommendations for the establishment of "preservation reference areas" in the Clarion-Clipperton Zone, where nodule mining would be prohibited in order to leave the natural environment intact. 
The most recent workshop, held at Chennai, India, in February 2008, concerned polymetallic nodule mining technology, with special reference to its current status and the challenges ahead. Contrary to early hopes that seabed mining would generate extensive revenues for both the exploiting countries and the Authority, no technology has yet been developed for gathering deep-sea minerals at costs that can compete with land-based mines. Until recently, the consensus has been that economic mining of the ocean depths might be decades away. Moreover, the United States, with some of the most advanced ocean technology in the world, has not yet ratified the Law of the Sea Convention
|
national zones of Papua New Guinea, Fiji and Tonga. Papua New Guinea was the first country in the world to grant commercial exploration licenses for seafloor massive sulphide deposits when it granted the initial license to Nautilus Minerals in 1997. Japan's new ocean policy emphasizes the need to develop methane hydrate and hydrothermal deposits within Japan's exclusive economic zone and calls for the commercialization of these resources within the next 10 years. Reporting on these developments in his annual report to the Authority in April 2008, Secretary-General Nandan referred also to the upward trend in demand and prices for cobalt, copper, nickel and manganese, the main metals that would be derived from seabed mining, and he noted that technologies being developed for offshore extraction could be adapted for deep sea mining. In its preamble, UNCLOS defines the international seabed area—the part under ISA jurisdiction—as "the seabed and ocean floor and the subsoil thereof, beyond the limits of national jurisdiction". There are no maps annexed to the Convention to delineate this area. Rather, UNCLOS outlines the areas of national jurisdiction, leaving the rest for the international portion. National jurisdiction over the seabed normally leaves off at 200 nautical miles (370 km) seaward from baselines running along the shore, unless a nation can demonstrate that its continental shelf is naturally prolonged beyond that limit, in which case it may claim up to 350 nautical miles (650 km). ISA has no role in determining this boundary. Rather, this task is left to another body established by UNCLOS, the Commission on the Limits of the Continental Shelf, which examines scientific data submitted by coastal states that claim a broader reach. Maritime boundaries between states are generally decided by bilateral negotiation (sometimes with the aid of judicial bodies), not by ISA. 
Recently, there has been much interest in the possibility of exploiting seabed resources in the Arctic Ocean, bordered by Canada, Denmark, Iceland, Norway, Russia and the United States (see Territorial claims in the Arctic). Mineral exploration and exploitation activities in any seabed area not belonging to these states would fall under ISA jurisdiction. Endowment fund In 2006 the Authority established an Endowment Fund to Support Collaborative Marine Scientific Research on the International Seabed Area. The Fund will aid experienced scientists and technicians from developing countries to participate in deep-sea research organized by international and national institutions. A campaign was launched in February 2008 to identify participants, establish a network of cooperating bodies and seek outside funds to augment the initial $3 million endowment from the Authority. The International Seabed Authority Endowment Fund promotes and encourages the conduct of collaborative marine scientific research in the international seabed area through two main activities: By supporting the participation of qualified scientists and technical personnel from developing countries in marine scientific research programmes and activities. By providing opportunities to these scientists to participate in relevant initiatives. The Secretariat of the International Seabed Authority is facilitating these activities by creating and maintaining an ongoing list of opportunities for scientific collaboration, including research cruises, deep-sea sample analysis, and training and internship programmes. This entails building a network of co-operating groups interested in (or presently undertaking) these types of activities and programmes, such as universities, institutions, contractors with the Authority and other entities. The Secretariat is also actively seeking applications from scientists and other technical personnel from developing nations to be considered for assistance under the Fund. 
Application guidelines have been prepared for potential recipients to participate in marine scientific research programmes or other scientific co-operation activity, to enroll in training programmes, and to qualify for technical assistance. An advisory panel will evaluate all incoming applications and make recommendations to the Secretary-General of the International Seabed Authority so that successful applicants may be awarded Fund assistance. To maximize opportunities for and participation in the Fund, the Secretariat is also seeking donations and in-kind contributions to build on the initial investment of US$3 million. This entails raising awareness of the Fund, reporting on its successes and encouraging new activities and participants. Voluntary commitments In 2017, the Authority registered seven voluntary commitments with the UN Oceans Conference for Sustainable Development Goal 14. These were:
OceanAction15467 – Enhancing the role of women in marine scientific research through capacity building
OceanAction15796 – Encouraging dissemination of research results through the ISA Secretary-General Award for Excellence in Deep-Sea Research
OceanAction16538 – Abyssal Initiative for Blue Growth (with UN-DESA)
OceanAction16494 – Fostering cooperation to promote the sustainable development of Africa's deep seabed resources in support of Africa's Blue Economy
OceanAction17746 – Enhancing the assessment of essential ecological functions of the deep sea oceans through long-term underwater oceanographic observatories in the Area
OceanAction17776 – Enhancing deep sea marine biodiversity assessment through the creation of online taxonomic atlases linked to deep sea mining activities in the Area
Controversy The exact nature of the ISA's mission and authority has been questioned by opponents of the Law of the Sea Treaty who are generally skeptical of multilateral engagement by the United States. 
The United States is the only major maritime power that has not ratified the Convention (see United States non-ratification of the UNCLOS), with one of the main anti-ratification arguments being a charge that the ISA is flawed or unnecessary. In its original form, the Convention included certain provisions that some found objectionable, such as: Imposition of permit requirements, fees and taxation on seabed mining; ban on mining absent ISA permission Use of collected money for wealth redistribution in addition to ISA administration Mandatory technology transfer Because of these concerns, the United States pushed for modification of the Convention, obtaining a 1994 Agreement on Implementation that somewhat mitigates them and thus modifies the ISA's authority. Despite this change the United States has not ratified the Convention and so is not a member of ISA, although it sends sizable delegations to participate in meetings as an observer. See also International waters Seabed Arms Control Treaty United Nations Trusteeship Council Antarctic Treaty Secretariat United Nations Office for Outer Space Affairs References External links International Seabed Authority Overview – Convention & Related Agreements. UN: United Nations Convention on the Law of the Sea (1982). Law of
|
with the 8-bit bus of the 8088-based IBM PC, including the IBM PC/XT as well as IBM PC compatibles. Originally referred to as the PC bus (8-bit) or AT bus (16-bit), it was also termed I/O Channel by IBM. The ISA term was coined as a retronym by competing PC-clone manufacturers in the late 1980s or early 1990s as a reaction to IBM attempts to replace the AT-bus with its new and incompatible Micro Channel architecture. The 16-bit ISA bus was also used with 32-bit processors for several years. An attempt to extend it to 32 bits, called Extended Industry Standard Architecture (EISA), was not very successful, however. Later buses such as VESA Local Bus and PCI were used instead, often along with ISA slots on the same mainboard. Derivatives of the AT bus structure were and still are used in ATA/IDE, the PCMCIA standard, Compact Flash, the PC/104 bus, and internally within Super I/O chips. Even though ISA disappeared from consumer desktops many years ago, it is still used in industrial PCs, where certain specialized expansion cards that never transitioned to PCI and PCI Express are used. History The original PC bus was developed by a team led by Mark Dean at IBM as part of the IBM PC project in 1981. It was an 8-bit bus based on the I/O bus of the IBM System/23 Datamaster system - it used the same physical connector, and a similar signal protocol and pinout. A 16-bit version, the IBM AT bus, was introduced with the release of the IBM PC/AT in 1984. The AT bus was a mostly backward compatible extension of the PC bus—the AT bus connector was a superset of the PC bus connector. In 1988, the 32-bit EISA standard was proposed by the "Gang of Nine" group of PC-compatible manufacturers that included Compaq. Compaq created the term "Industry Standard Architecture" (ISA) to replace "PC compatible". 
In the process, they retroactively renamed the AT bus to "ISA" to avoid infringing IBM's trademark on its PC and PC/AT systems (and to avoid giving their major competitor, IBM, free advertisement). IBM designed the 8-bit version as a buffered interface to the motherboard buses of the Intel 8088 (16/8 bit) CPU in the IBM PC and PC/XT, augmented with prioritized interrupts and DMA channels. The 16-bit version was an upgrade for the motherboard buses of the Intel 80286 CPU (and expanded interrupt and DMA facilities) used in the IBM AT, with improved support for bus mastering. The ISA bus was therefore synchronous with the CPU clock, until sophisticated buffering methods were implemented by chipsets to interface ISA to much faster CPUs. ISA was designed to connect peripheral cards to the motherboard and allows for bus mastering. Only the first 16 MB of main memory is addressable. The original 8-bit bus ran from the 4.77 MHz clock of the 8088 CPU in the IBM PC and PC/XT. The original 16-bit bus ran from the CPU clock of the 80286 in IBM PC/AT computers, which was 6 MHz in the first models and 8 MHz in later models. The IBM RT PC also used the 16-bit bus. ISA was also used in some non-IBM compatible machines such as Motorola 68k-based Apollo (68020) and Amiga 3000 (68030) workstations, the short-lived AT&T Hobbit and the later PowerPC-based BeBox. Companies like Dell improved the AT bus's performance but in 1987, IBM replaced the AT bus with its proprietary Micro Channel Architecture (MCA). MCA overcame many of the limitations then apparent in ISA but was also an effort by IBM to regain control of the PC architecture and the PC market. MCA was far more advanced than ISA and had many features that would later appear in PCI. However, MCA was also a closed standard whereas IBM had released full specifications and circuit schematics for ISA. 
Computer manufacturers responded to MCA by developing the Extended Industry Standard Architecture (EISA) and the later VESA Local Bus (VLB). VLB used some electronic parts originally intended for MCA because component manufacturers already were equipped to manufacture them. Both EISA and VLB were backwards-compatible expansions of the AT (ISA) bus. Users of ISA-based machines had to know special information about the hardware they were adding to the system. While a handful of devices were essentially "plug-n-play", this was rare. Users frequently had to configure parameters when adding a new device, such as the IRQ line, I/O address, or DMA channel. MCA had done away with this complication and PCI actually incorporated many of the ideas first explored with MCA, though it was more directly descended from EISA. This trouble with configuration eventually led to the creation of ISA PnP, a plug-n-play system that used a combination of modifications to hardware, the system BIOS, and operating system software to automatically manage resource allocations. In reality, ISA PnP could be troublesome and did not become well-supported until the architecture was in its final days. PCI slots were the first physically-incompatible expansion ports to directly squeeze ISA off the motherboard. At first, motherboards were largely ISA, including a few PCI slots. By the mid-1990s, the two slot types were roughly balanced, and ISA slots soon were in the minority of consumer systems. Microsoft's PC 99 specification recommended that ISA slots be removed entirely, though the system architecture still required ISA to be present in some vestigial way internally to handle the floppy drive, serial ports, etc., which was why the software compatible LPC bus was created. 
ISA slots remained for a few more years, and towards the turn of the century it was common to see systems with an Accelerated Graphics Port (AGP) sitting near the central processing unit, an array of PCI slots, and one or two ISA slots near the end. In late 2008, even floppy disk drives and serial ports were disappearing, and the extinction of vestigial ISA (by then the LPC bus) from chipsets was on the horizon. PCI slots are "rotated" compared to their ISA counterparts—PCI cards were essentially inserted "upside-down," allowing ISA and PCI connectors to squeeze together on the motherboard. Only one of the two connectors can be used in each slot at a time, but this allowed for greater flexibility. The AT Attachment (ATA) hard disk interface is directly descended from the 16-bit ISA of the PC/AT. ATA has its origins in the IBM Personal Computer Fixed Disk and Diskette Adapter, the standard dual-function floppy disk controller and hard disk controller card for the IBM PC AT; the fixed disk controller on this card implemented the register set and the basic command set which became the basis of the ATA interface (and which differed greatly from the interface of IBM's fixed disk controller card for the PC XT). Direct precursors to ATA were third-party ISA hardcards that integrated a hard disk drive (HDD) and a hard disk controller (HDC) onto one card. This was at best awkward and at worst damaging to the motherboard, as ISA slots were not designed to support such heavy devices as HDDs. The next generation of Integrated Drive Electronics drives moved both the drive and controller to a drive bay and used a ribbon cable and a very simple interface board to connect it to an ISA slot.
ATA is basically a standardization of this arrangement plus a uniform command structure for software to interface with the HDC within the drive. ATA has since been separated from the ISA bus and connected directly to the local bus, usually by integration into the chipset, for much higher clock rates and data throughput than ISA could support. ATA has clear characteristics of 16-bit ISA, such as a 16-bit transfer size, signal timing in the PIO modes and the interrupt and DMA mechanisms. ISA bus architecture The PC/XT-bus is an eight-bit ISA bus used by Intel 8086 and Intel 8088 systems in the IBM PC and IBM PC XT in the 1980s. Among its 62 pins were demultiplexed and electrically buffered versions of the 8 data and 20 address lines of the 8088 processor, along with power lines, clocks, read/write strobes, interrupt lines, etc. Power lines included −5 V and ±12 V in order to directly support pMOS and enhancement mode nMOS circuits such as dynamic RAMs among other things. The XT bus architecture uses a single Intel 8259 PIC, giving eight vectorized and prioritized interrupt lines. It has four DMA channels originally provided by the Intel 8237. Three of the DMA channels are brought out to the XT bus expansion slots; of these, 2 are normally already allocated to machine functions (diskette drive and hard disk controller). The PC/AT-bus, a 16-bit (or 80286-) version of the PC/XT bus, was introduced with the IBM PC/AT. This bus was officially termed I/O Channel by IBM. It extends the XT-bus by adding a second shorter edge connector in-line with the eight-bit XT-bus connector, which is unchanged, retaining compatibility with most 8-bit cards. The second connector adds four additional address lines for a total of 24, and 8 additional data lines for a total of 16. It also adds new interrupt lines connected to a second 8259 PIC (connected to one of the lines of the first) and 4 × 16-bit DMA channels, as well as control lines to select 8- or 16-bit transfers.
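The figures in this description can be cross-checked with quick arithmetic. The sketch below assumes the standard PC/AT cascading arrangement implied above (the second 8259 consumes one input of the first, and one DMA channel carries the controller cascade); the cascade bookkeeping is common PC/AT practice rather than something spelled out in the text:

```python
# Address reach: 20 address lines on the XT bus, 24 on the AT bus.
assert 2 ** 20 == 1 * 1024 ** 2    # XT bus: 1 MiB
assert 2 ** 24 == 16 * 1024 ** 2   # AT bus: 16 MiB, the limit noted earlier

def usable_lines(inputs_per_chip: int, chips: int, cascade_inputs: int) -> int:
    """Inputs across all chips, minus the inputs consumed by cascading."""
    return inputs_per_chip * chips - cascade_inputs

# Two 8259 PICs, 8 inputs each; one input of the first carries the cascade.
assert usable_lines(8, 2, 1) == 15   # usable IRQ lines on the AT bus
# Two banks of 8237-style DMA channels, 4 each; one channel for the cascade.
assert usable_lines(4, 2, 1) == 7    # usable DMA channels
```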
The 16-bit AT bus slot originally used two standard edge connector sockets in early IBM PC/AT machines. However, with the popularity of the AT-architecture and the 16-bit ISA bus, manufacturers introduced specialized 98-pin connectors that integrated the two sockets into one unit. These can be found in almost every AT-class PC manufactured after the mid-1980s. The ISA slot connector is typically black (distinguishing it from the brown EISA connectors and white PCI connectors). Number of devices Motherboard devices have dedicated IRQs (not present in the slots). 16-bit devices can use either PC-bus or PC/AT-bus IRQs. It is therefore possible to connect up to 6 devices that use one 8-bit IRQ each and up to 5 devices that use one 16-bit IRQ each. At the same time, up to 4 devices may use one 8-bit DMA channel each, while up to 3 devices can use one 16-bit DMA channel each. Varying bus speeds Originally, the bus clock was synchronous with the CPU clock, resulting in varying bus clock frequencies among the many different IBM "clones" on the market (sometimes as high as 16 or 20 MHz), leading to software or electrical timing problems for certain ISA cards at bus speeds they were not designed for. Later motherboards or integrated chipsets used a separate clock generator, or a clock divider which either fixed the ISA bus frequency at 4, 6, or 8 MHz or allowed the user to adjust the frequency via the BIOS setup. When used at a higher bus frequency, some ISA cards (certain Hercules-compatible video cards, for instance), could show significant performance improvements. 8/16-bit incompatibilities Memory address decoding for the selection of 8 or 16-bit transfer mode was limited to 128 KiB sections, leading to problems when mixing 8- and 16-bit cards as they could not co-exist in the same 128 KiB area. This is because the MEMCS16 line is required to be set based on the value of LA17-23 only. Past and current use ISA is still used today for specialized industrial purposes. 
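The 128 KiB figure follows directly from which address lines take part in the 8/16-bit decision: with only LA17–LA23 decoded, the low 17 address bits are ignored, so every address inside the same 2^17-byte window decodes alike. A small illustrative sketch (the example addresses are hypothetical, not from the text):

```python
WINDOW = 2 ** 17               # 128 KiB: granularity of MEMCS16 decoding
assert WINDOW == 128 * 1024

def window_index(addr: int) -> int:
    # Only LA17..LA23 participate in the 8/16-bit decision, so every
    # address within the same 128 KiB window is treated identically.
    return (addr >> 17) & 0x7F

# Two cards whose memory ranges fall in the same 128 KiB window cannot
# mix 8-bit and 16-bit transfer modes.
assert window_index(0xD0000) == window_index(0xD4000)  # same window: conflict
assert window_index(0xC0000) != window_index(0xE0000)  # separate windows: OK
```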
In 2008 IEI Technologies released a modern motherboard for Intel Core 2 Duo processors which, in addition to other special I/O features, is equipped with two ISA slots. It is marketed to industrial and military users who have invested in expensive specialized ISA bus adaptors, which are not available in PCI bus versions. Similarly, ADEK Industrial Computers is releasing a motherboard in early 2013 for Intel Core i3/i5/i7 processors, which contains one (non-DMA) ISA slot. The PC/104 bus, used in industrial and embedded applications, is a derivative of the ISA bus, utilizing the same signal lines with different connectors. The LPC bus has replaced the ISA bus as the connection to the legacy I/O devices on recent motherboards; while physically quite different, LPC looks just like ISA to software, so that the peculiarities of ISA such as the 16 MiB DMA limit (which corresponds to the full address space of the Intel 80286 CPU used in the original IBM AT) are likely to stick around for a while. ATA As explained in the History section, ISA was the basis for development of the ATA interface, used for ATA (a.k.a. IDE) hard disks. Physically, ATA is essentially a simple subset of ISA, with 16 data bits, support for exactly one IRQ and one DMA channel, and 3 address bits. To this ISA subset, ATA adds two IDE address select ("chip select") lines (i.e. address decodes, effectively equivalent to address bits) and a few unique signal lines specific to ATA/IDE hard disks (such as the Cable Select/Spindle Sync. line.) In addition to the physical interface channel, ATA goes beyond and far outside the scope of ISA by also specifying a set of physical device registers to be implemented on every ATA (IDE) drive and a full set of protocols and device commands for controlling fixed disk drives using these registers. 
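The "3 address bits plus two chip selects" arrangement described above yields two small banks of eight registers per channel; on legacy PC/AT systems the primary channel's command block sat at I/O ports 0x1F0–0x1F7 (the port numbers are standard legacy values supplied here for illustration, not taken from the text):

```python
ADDRESS_BITS = 3
REGS_PER_SELECT = 2 ** ADDRESS_BITS   # each chip select spans 8 registers
assert REGS_PER_SELECT == 8

# Primary-channel command block: base 0x1F0 plus the 3-bit register address.
COMMAND_BLOCK = [0x1F0 + r for r in range(REGS_PER_SELECT)]
assert COMMAND_BLOCK[0] == 0x1F0   # data register
assert COMMAND_BLOCK[7] == 0x1F7   # status/command register
```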
The ATA device registers are accessed using the address bits and address select signals in the ATA physical interface channel, and all operations of ATA hard disks are performed using the ATA-specified protocols through the ATA command set. The earliest versions of the ATA standard featured a few simple protocols and a basic command set comparable to the command sets of MFM and RLL controllers (which preceded ATA controllers), but the latest ATA standards have much more complex protocols and instruction sets that include optional commands and protocols providing such advanced optional-use features as sizable hidden system storage areas, password security locking, and programmable geometry translation. A
|
Reagan Administration expressed concern about unrestrained influence from independent scientists or from United Nations bodies such as the UNEP and WMO. The U.S. government was the main force in shaping the IPCC as an autonomous intergovernmental body in which scientists took part both as experts and as official representatives of their governments, which would produce reports backed by all leading relevant scientists, and which then had to gain consensus agreement from every participating government. In this way, the IPCC was formed as a hybrid between a scientific body and an intergovernmental political organisation. The United Nations formally endorsed the creation of the IPCC in 1988, citing the fact that "[c]ertain human activities could change global climate patterns, threatening present and future generations with potentially severe economic and social consequences", and that "[c]ontinued growth in atmospheric concentrations of 'greenhouse' gases could produce global warming with an eventual rise in sea levels, the effects of which could be disastrous for mankind if timely steps are not taken at all levels". To that end, the IPCC was tasked with reviewing peer-reviewed scientific literature and other relevant publications to provide information on the state of knowledge about climate change and its consequences and impacts. Organization The IPCC does not conduct original research, but produces comprehensive assessments, reports on special topics, and methodologies, based on published literature. Its assessments build on previous reports, highlighting the trajectory towards the latest knowledge; for example, the wording of the reports from the first to the fifth assessment reflects the growing evidence for a changing climate caused by human activity. The IPCC has adopted and published "Principles Governing IPCC Work", which states that the IPCC will assess: the risk of human-induced climate change, its potential impacts, and possible options for prevention.
Pursuant to its governing principles, the IPCC conducts its assessments on a "comprehensive, objective, open and transparent basis" that encompasses all "scientific, technical and socioeconomic information relevant to understanding the scientific basis" of climate change. IPCC reports must be neutral with respect to policy recommendations, but may address the objective scientific, technical and socioeconomic factors relevant to enacting certain policies. The IPCC is currently chaired by Korean economist Hoesung Lee, who has served since 8 October 2015 with the election of the new IPCC Bureau, along with three vice-chairs, Youba Sokona (Mali), Ko Barrett (USA) and Thelma Krug (Brazil). Before this election, the IPCC was led by Vice-Chair Ismail El Gizouli, who was designated acting Chair after the resignation of Rajendra K. Pachauri in February 2015. The previous chairs were Rajendra K. Pachauri, elected in May 2002; Robert Watson in 1997; and Bert Bolin in 1988. The chair is assisted by an elected bureau including vice-chairs and working group co-chairs, and by a secretariat. The Panel itself is composed of representatives appointed by governments. Participation of delegates with appropriate expertise is encouraged. Plenary sessions of the IPCC and IPCC Working Groups are held at the level of government representatives. Non-Governmental and Intergovernmental Organizations admitted as observer organizations may also attend. Sessions of the Panel, IPCC Bureau, workshops, expert and lead authors meetings are by invitation only. About 500 people from 130 countries attended the 48th Session of the Panel in Incheon, Republic of Korea, in October 2018, including 290 government officials and 60 representatives of observer organizations. The opening ceremonies of sessions of the Panel and of Lead Author Meetings are open to media, but otherwise IPCC meetings are closed. The IPCC is structured as follows: IPCC Panel: Meets in plenary session about once a year.
It controls the organization's structure, procedures, and work programme, and accepts and approves IPCC reports. The Panel is the IPCC corporate entity. Chair: Elected by the Panel. Secretariat: Oversees and manages all activities. Supported by UNEP and WMO. Bureau: Elected by the Panel. Chaired by the Chair. Its 34 members include IPCC Vice-Chairs, Co-Chairs of Working Groups and the Task Force, and Vice-Chairs of the Working Groups. It provides guidance to the Panel on the scientific and technical aspects of its work. Working Groups: Each has two Co-Chairs, one from the developed and one from the developing world, and a technical support unit. Sessions of the Working Group approve the Summary for Policymakers of special reports and working group contributions to an assessment report. Each Working Group has a Bureau comprising its Co-Chairs and Vice-Chairs, who are also members of the IPCC Bureau. Working Group I: Assesses scientific aspects of the climate system and climate change. Co-Chairs: Valérie Masson-Delmotte and Panmao Zhai Working Group II: Assesses vulnerability of socioeconomic and natural systems to climate change, consequences, and adaptation options. Co-Chairs: Hans-Otto Pörtner and Debra Roberts Working Group III: Assesses options for limiting greenhouse gas emissions and otherwise mitigating climate change. Co-Chairs: Priyadarshi R. Shukla and Jim Skea Task Force on National Greenhouse Gas Inventories. Co-Chairs: Kiyoto Tanabe and Eduardo Calvo Buendía Task Force Bureau: Comprises the two Co-Chairs, who are also members of the IPCC Bureau, and 12 members. Executive Committee: Comprises the Chair, IPCC Vice-Chairs and the Co-Chairs of the Working Groups and Task Force. Its role includes addressing urgent issues that arise between sessions of the Panel. Funding The IPCC receives funding through a dedicated trust fund, established in 1989 by the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO).
The trust fund receives annual cash contributions by the WMO, UNEP, and IPCC member governments; payments are voluntary and there is no set amount required. Administrative and operational costs, such as for the secretariat and headquarters, are provided by the WMO, which also sets the IPCC's financial regulations and rules. The Panel is responsible for considering and adopting by consensus the annual budget. Assessment reports The IPCC has published five comprehensive assessment reports reviewing the latest climate science, as well as a number of special reports on particular topics. These reports are prepared by teams of relevant researchers selected by the Bureau from government nominations. Expert reviewers from a wide range of governments, IPCC observer organizations and other organizations are invited at different stages to comment on various aspects of the drafts. The IPCC published its First Assessment Report (FAR) in 1990, a supplementary report in 1992, a Second Assessment Report (SAR) in 1995, a Third Assessment Report (TAR) in 2001, a Fourth Assessment Report (AR4) in 2007 and a Fifth Assessment Report (AR5) in 2014. The IPCC is currently preparing its Sixth Assessment Report (AR6), which is being released in stages and will be completed in 2022. Each assessment report is in three volumes, corresponding to Working Groups I, II, and III. It is completed by a synthesis report that integrates the working group contributions and any special reports produced in that assessment cycle. Scope and preparation of the reports The IPCC does not carry out research nor does it monitor climate related data. Lead authors of IPCC reports assess the available information about climate change based on published sources. According to IPCC guidelines, authors should give priority to peer-reviewed sources. Authors may refer to non-peer-reviewed sources (the "grey literature"), provided that they are of sufficient quality. 
Examples of non-peer-reviewed sources include model results, reports from government agencies and non-governmental organizations, and industry journals. Each subsequent IPCC report notes areas where the science has improved since the previous report and also notes areas where further research is required. There are generally three stages in the review process: Expert review (6–8 weeks) Government/expert review Government review of: Summaries for Policymakers Overview Chapters Synthesis Report Review comments are in an open archive for at least five years. There are several types of endorsement which documents receive: Approval. Material has been subjected to detailed, line by line discussion and agreement. Working Group Summaries for Policymakers are approved by their Working Groups. Synthesis Report Summary for Policymakers is approved by Panel. Adoption. Endorsed section by section (and not line by line). Panel adopts Overview Chapters of Methodology Reports. Panel adopts IPCC Synthesis Report. Acceptance. Not been subject to line by line discussion and agreement, but presents a comprehensive, objective, and balanced view of the subject matter. Working Groups accept their reports. Task Force Reports are accepted by the Panel. Working Group Summaries for Policymakers are accepted by the Panel after group approval. The Panel is responsible for the IPCC and its endorsement of Reports allows it to ensure they meet IPCC standards. There have been a range of commentaries on the IPCC's procedures, examples of which are discussed later in the article (see also IPCC Summary for Policymakers). Some of these comments have been supportive, while others have been critical. Some commentators have suggested changes to the IPCC's procedures. Authors Each chapter has a number of authors who are responsible for writing and editing the material. 
A chapter typically has two "coordinating lead authors", ten to fifteen "lead authors", and a somewhat larger number of "contributing authors". The coordinating lead authors are responsible for assembling the contributions of the other authors, ensuring that they meet stylistic and formatting requirements, and reporting to the Working Group chairs. Lead authors are responsible for writing sections of chapters. Contributing authors prepare text, graphs or data for inclusion by the lead authors. Authors for the IPCC reports are chosen from a list of researchers prepared by governments and participating organisations, and by the Working Group/Task Force Bureaux, as well as other experts known through their published work. The choice of authors aims for a range of views, expertise and geographical representation, ensuring representation of experts from developing and developed countries and countries with economies in transition. First assessment report (1990) Second assessment report (1995) Third assessment report (2001) Fourth assessment report (2007) Fifth assessment report (2014) Sixth assessment report (2021/2022) Archiving Papers and electronic files of certain working groups of the IPCC, including reviews and comments on drafts of their Assessment Reports, are archived at the Environmental Science and Public Policy Archives in the Harvard Library. Other reports Special reports In addition to climate assessment reports, the IPCC publishes Special Reports on specific topics. The preparation and approval process for all IPCC Special Reports follows the same procedures as for IPCC Assessment Reports. In 2011, two IPCC Special Reports were finalized: the Special Report on Renewable Energy Sources and Climate Change Mitigation (SRREN) and the Special Report on Managing Risks of Extreme Events and Disasters to Advance Climate Change Adaptation (SREX). Both Special Reports were requested by governments.
Special Report on Emissions Scenarios (SRES) The Special Report on Emissions Scenarios (SRES) is a report by the IPCC which was published in 2000. The SRES contains "scenarios" of future changes in emissions of greenhouse gases and sulfur dioxide. One of the uses of the SRES scenarios is to project future changes in climate, e.g., changes in global mean temperature. The SRES scenarios were used in the IPCC's Third and Fourth Assessment Reports. The SRES scenarios are "baseline" (or "reference") scenarios, which means that they do not take into account any current or future measures to limit greenhouse gas (GHG) emissions (e.g., the Kyoto Protocol to the United Nations Framework Convention on Climate Change). SRES emissions projections are broadly comparable in range to the baseline projections that have been developed by the scientific community. Comments on the SRES There have been a number of comments on the SRES. Parson et al. (2007) stated that the SRES represented "a substantial advance from prior scenarios". At the same time, there have been criticisms of the SRES. The most prominently publicized criticism of SRES focused on the fact that all but one of the participating models compared gross domestic product (GDP) across regions using market exchange rates (MER), instead of the more correct purchasing-power parity (PPP) approach. This criticism is discussed in the main SRES article. Special report on renewable energy sources and climate change mitigation (SRREN) This report assesses existing literature on renewable energy commercialisation for the mitigation of climate change. It was published in 2012 and covers the six most important renewable energy technologies in a transition, as well as their integration into present and future energy systems. 
It also takes into consideration the environmental and social consequences associated with these technologies, the cost and strategies to overcome technical as well as non-technical obstacles to their application and diffusion. More than 130 authors from all over the world contributed to the preparation of the IPCC Special Report on Renewable Energy Sources and Climate Change Mitigation (SRREN) on a voluntary basis, in addition to more than 100 scientists who served as contributing authors. Special Report on managing the risks of extreme events and disasters to advance climate change adaptation (SREX) The report was published in 2012. It assesses the effect that climate change has on the threat of natural disasters and how nations can better manage an expected change in the frequency of occurrence and intensity of severe weather patterns. It aims to become a resource for decision-makers to prepare more effectively for managing the risks of these events. A potentially important area for consideration is also the detection of trends in extreme events and the attribution of these trends to human influence. The full report, 594 pages in length, is available in PDF form. More than 80 authors, 19 review editors, and more than 100 contributing authors from all over the world contributed to the preparation of SREX. Special Report on Global Warming of 1.5 °C (SR15) When the Paris Agreement was adopted, the UNFCCC invited the Intergovernmental Panel on Climate Change to write a special report on "How can humanity prevent the global temperature rise more than 1.5 degrees above pre-industrial level". The completed report, Special Report on Global Warming of 1.5 °C (SR15), was released on 8 October 2018.
Its full title is "Global Warming of 1.5 °C, an IPCC special report on the impacts of global warming of 1.5 °C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty". The finished report summarizes the findings of scientists, showing that maintaining a temperature rise to below 1.5 °C remains possible, but only through "rapid and far-reaching transitions in energy, land, urban and infrastructure..., and industrial systems". Meeting the Paris target of 1.5 °C is possible but would require "deep emissions reductions" and "rapid, far-reaching and unprecedented changes in all aspects of society". In order to achieve the 1.5 °C target, emissions must decline by 45% (relative to 2010 levels) by 2030, reaching net zero by around 2050. Deep reductions in non-CO2 emissions (such as nitrous oxide and methane) will also be required to limit warming to 1.5 °C. Under the pledges of the countries entering the Paris Accord, a sharp rise of 3.1 to 3.7 °C is still expected to occur by 2100. Holding this rise to 1.5 °C avoids the worst effects of a rise of even 2 °C. However, a warming of even 1.5 degrees will still result in large-scale drought, famine, heat stress, species die-off, loss of entire ecosystems, and loss of habitable land, throwing more than 100 million people into poverty. Effects will be most drastic in arid regions including the Middle East and the Sahel in Africa, where some areas are expected to retain fresh water after a 1.5 °C rise in temperatures but to dry up completely if the rise reaches 2 °C.
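The 45%-by-2030 figure implies a steep year-on-year cut. As a back-of-the-envelope illustration (assuming a constant fractional reduction over the 2010–2030 span, which is a simplification of how the report's pathways are actually constructed):

```python
# Falling to 55% of 2010 emissions by 2030 at a constant annual rate r:
#   (1 - r) ** 20 = 0.55   =>   r = 1 - 0.55 ** (1 / 20)
r = 1 - 0.55 ** (1 / 20)
assert abs(r - 0.0294) < 0.001   # roughly a 2.9% reduction every year
```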
Special Report on climate change and land (SRCCL) The final draft of the "Special Report on climate change and land" (SRCCL)—with the full title, "Special Report on climate change, desertification, land degradation, sustainable land management, food security, and greenhouse gas fluxes in terrestrial ecosystems" was published online on 7 August 2019. Special Report on the Ocean and Cryosphere in a Changing Climate (SROCC) The "Special Report on the Ocean and Cryosphere in a Changing Climate" (SROCC) was approved on 25 September 2019 in Monaco. Among other findings, the report concluded that sea level rises could be up to two
|
El Gizouli, who was designated acting Chair after the resignation of Rajendra K. Pachauri in February 2015. The previous chairs were Rajendra K. Pachauri, elected in May 2002; Robert Watson in 1997; and Bert Bolin in 1988. The chair is assisted by an elected bureau including vice-chairs and working group co-chairs, and by a secretariat. The Panel itself is composed of representatives appointed by governments. Participation of delegates with appropriate expertise is encouraged. Plenary sessions of the IPCC and IPCC Working Groups are held at the level of government representatives. Non-Governmental and Intergovernmental Organizations admitted as observer organizations may also attend. Sessions of the Panel, IPCC Bureau, workshops, expert and lead authors meetings are by invitation only. About 500 people from 130 countries attended the 48th Session of the Panel in Incheon, Republic of Korea, in October 2018, including 290 government officials and 60 representatives of observer organizations. The opening ceremonies of sessions of the Panel and of Lead Author Meetings are open to media, but otherwise IPCC meetings are closed. The IPCC is structured as follows: IPCC Panel: Meets in plenary session about once a year. It controls the organization's structure, procedures, and work programme, and accepts and approves IPCC reports. The Panel is the IPCC corporate entity. Chair: Elected by the Panel. Secretariat: Oversees and manages all activities. Supported by UNEP and WMO. Bureau: Elected by the Panel. Chaired by the Chair. Its 34 members include IPCC Vice-Chairs, Co-Chairs of Working Groups and the Task Force, and Vice-Chairs of the Working Groups. It provides guidance to the Panel on the scientific and technical aspects of its work. Working Groups: Each has two Co-Chairs, one from the developed world and one from the developing world, and a technical support unit.
Sessions of the Working Group approve the Summary for Policymakers of special reports and working group contributions to an assessment report. Each Working Group has a Bureau comprising its Co-Chairs and Vice-Chairs, who are also members of the IPCC Bureau. Working Group I: Assesses scientific aspects of the climate system and climate change. Co-Chairs: Valérie Masson-Delmotte and Panmao Zhai Working Group II: Assesses vulnerability of socioeconomic and natural systems to climate change, consequences, and adaptation options. Co-Chairs: Hans-Otto Pörtner and Debra Roberts Working Group III: Assesses options for limiting greenhouse gas emissions and otherwise mitigating climate change. Co-Chairs: Priyadarshi R. Shukla and Jim Skea Task Force on National Greenhouse Gas Inventories. Co-Chairs: Kiyoto Tanabe and Eduardo Calvo Buendía Task Force Bureau: Comprises the two Co-Chairs, who are also members of the IPCC Bureau, and 12 members. Executive Committee: Comprises the Chair, IPCC Vice-Chairs and the Co-Chairs of the Working Groups and Task Force. Its role includes addressing urgent issues that arise between sessions of the Panel. Funding The IPCC receives funding through a dedicated trust fund, established in 1989 by the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO). The trust fund receives annual cash contributions from the WMO, UNEP, and IPCC member governments; payments are voluntary and there is no set amount required. Administrative and operational costs, such as for the secretariat and headquarters, are covered by the WMO, which also sets the IPCC's financial regulations and rules. The Panel is responsible for considering and adopting by consensus the annual budget. Assessment reports The IPCC has published five comprehensive assessment reports reviewing the latest climate science, as well as a number of special reports on particular topics.
These reports are prepared by teams of relevant researchers selected by the Bureau from government nominations. Expert reviewers from a wide range of governments, IPCC observer organizations and other organizations are invited at different stages to comment on various aspects of the drafts. The IPCC published its First Assessment Report (FAR) in 1990, a supplementary report in 1992, a Second Assessment Report (SAR) in 1995, a Third Assessment Report (TAR) in 2001, a Fourth Assessment Report (AR4) in 2007 and a Fifth Assessment Report (AR5) in 2014. The IPCC is currently preparing its Sixth Assessment Report (AR6), which is being released in stages and will be completed in 2022. Each assessment report is in three volumes, corresponding to Working Groups I, II, and III. It is complemented by a synthesis report that integrates the working group contributions and any special reports produced in that assessment cycle. Scope and preparation of the reports The IPCC does not carry out research nor does it monitor climate-related data. Lead authors of IPCC reports assess the available information about climate change based on published sources. According to IPCC guidelines, authors should give priority to peer-reviewed sources. Authors may refer to non-peer-reviewed sources (the "grey literature"), provided that they are of sufficient quality. Examples of non-peer-reviewed sources include model results, reports from government agencies and non-governmental organizations, and industry journals. Each subsequent IPCC report notes areas where the science has improved since the previous report and also notes areas where further research is required. There are generally three stages in the review process: expert review (6–8 weeks); government/expert review; and government review of the Summaries for Policymakers, Overview Chapters, and the Synthesis Report. Review comments are kept in an open archive for at least five years. There are several types of endorsement which documents receive: Approval.
Material has been subjected to detailed, line-by-line discussion and agreement. Working Group Summaries for Policymakers are approved by their Working Groups. The Synthesis Report Summary for Policymakers is approved by the Panel. Adoption. Endorsed section by section (and not line by line). The Panel adopts the Overview Chapters of Methodology Reports. The Panel adopts the IPCC Synthesis Report. Acceptance. The material has not been subject to line-by-line discussion and agreement, but presents a comprehensive, objective, and balanced view of the subject matter. Working Groups accept their reports. Task Force Reports are accepted by the Panel. Working Group Summaries for Policymakers are accepted by the Panel after group approval. The Panel is responsible for the IPCC, and its endorsement of Reports allows it to ensure they meet IPCC standards. There have been a range of commentaries on the IPCC's procedures, examples of which are discussed later in the article (see also IPCC Summary for Policymakers). Some of these comments have been supportive, while others have been critical. Some commentators have suggested changes to the IPCC's procedures. Authors Each chapter has a number of authors who are responsible for writing and editing the material. A chapter typically has two "coordinating lead authors", ten to fifteen "lead authors", and a somewhat larger number of "contributing authors". The coordinating lead authors are responsible for assembling the contributions of the other authors, ensuring that they meet stylistic and formatting requirements, and reporting to the Working Group chairs. Lead authors are responsible for writing sections of chapters. Contributing authors prepare text, graphs or data for inclusion by the lead authors. Authors for the IPCC reports are chosen from a list of researchers prepared by governments and participating organisations, and by the Working Group/Task Force Bureaux, as well as other experts known through their published work.
The choice of authors aims for a range of views, expertise and geographical representation, ensuring representation of experts from developing and developed countries and countries with economies in transition. First assessment report (1990) Second assessment report (1995) Third assessment report (2001) Fourth assessment report (2007) Fifth assessment report (2014) Sixth assessment report (2021/2022) Archiving Papers and electronic files of certain working groups of the IPCC, including reviews and comments on drafts of their Assessment Reports, are archived at the Environmental Science and Public Policy Archives in the Harvard Library. Other reports Special reports In addition to climate assessment reports, the IPCC publishes Special Reports on specific topics. The preparation and approval process for all IPCC Special Reports follows the same procedures as for IPCC Assessment Reports. In 2011, two IPCC Special Reports were finalized: the Special Report on Renewable Energy Sources and Climate Change Mitigation (SRREN) and the Special Report on Managing Risks of Extreme Events and Disasters to Advance Climate Change Adaptation (SREX). Both Special Reports were requested by governments. Special Report on Emissions Scenarios (SRES) The Special Report on Emissions Scenarios (SRES) is a report by the IPCC which was published in 2000. The SRES contains "scenarios" of future changes in emissions of greenhouse gases and sulfur dioxide. One of the uses of the SRES scenarios is to project future changes in climate, e.g., changes in global mean temperature. The SRES scenarios were used in the IPCC's Third and Fourth Assessment Reports. The SRES scenarios are "baseline" (or "reference") scenarios, which means that they do not take into account any current or future measures to limit greenhouse gas (GHG) emissions (e.g., the Kyoto Protocol to the United Nations Framework Convention on Climate Change).
SRES emissions projections are broadly comparable in range to the baseline projections that have been developed by the scientific community. Comments on the SRES There have been a number of comments on the SRES. Parson et al. (2007) stated that the SRES represented "a substantial advance from prior scenarios". At the same time, there have been criticisms of the SRES. The most prominently publicized criticism of SRES focused on the fact that all but one of the participating models compared gross domestic product (GDP) across regions using market exchange rates (MER), instead of the more correct purchasing-power parity (PPP) approach. This criticism is discussed in the main SRES article. Special report on renewable energy sources and climate change mitigation (SRREN) This report assesses existing literature on renewable energy commercialisation for the mitigation of climate change. It was published in 2012 and covers the six most important renewable energy technologies, as well as their integration into present and future energy systems. It also takes into consideration the environmental and social consequences associated with these technologies, the cost, and strategies to overcome technical as well as non-technical obstacles to their application and diffusion. More than 130 authors from all over the world contributed to the preparation of the IPCC Special Report on Renewable Energy Sources and Climate Change Mitigation (SRREN) on a voluntary basis, along with more than 100 scientists who served as contributing authors. Special Report on managing the risks of extreme events and disasters to advance climate change adaptation (SREX) The report was published in 2012. It assesses the effect that climate change has on the threat of natural disasters and how nations can better manage an expected change in the frequency of occurrence and intensity of severe weather patterns.
It aims to become a resource for decision-makers to prepare more effectively for managing the risks of these events. A potentially important area for consideration is also the detection of trends in extreme events and the attribution of these trends to human influence. The full report is 594 pages in length. More than 80 authors, 19 review editors, and more than 100 contributing authors from all over the world contributed to the preparation of SREX. Special Report on Global Warming of 1.5 °C (SR15) When the Paris Agreement was adopted, the UNFCCC invited the Intergovernmental Panel on Climate Change to write a special report on how humanity could prevent a global temperature rise of more than 1.5 degrees above the pre-industrial level. The completed report, Special Report on Global Warming of 1.5 °C (SR15), was released on 8 October 2018. Its full title is "Global Warming of 1.5 °C, an IPCC special report on the impacts of global warming of 1.5 °C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty". The finished report summarizes the findings of scientists, showing that keeping the temperature rise below 1.5 °C remains possible, but only through "rapid and far-reaching transitions in energy, land, urban and infrastructure..., and industrial systems". Meeting the Paris target of 1.5 °C is possible but would require "deep emissions reductions" and "rapid, far-reaching and unprecedented changes in all aspects of society". In order to achieve the 1.5 °C target, emissions must decline by 45% (relative to 2010 levels) by 2030, reaching net zero by around 2050. Deep reductions in non-CO2 emissions (such as nitrous oxide and methane) will also be required to limit warming to 1.5 °C.
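The SR15 milestones above imply a steep trajectory. As a rough illustration only (a straight-line interpolation between the two stated milestones, with 2010 emissions normalized to 1.0; the function name is invented and this is not an IPCC pathway), the implied path can be sketched as:

```python
# Illustrative sketch only: a straight-line pathway through the SR15
# milestones (45% below 2010 levels by 2030, net zero by about 2050).
# Emissions are normalized so that 2010 = 1.0; real pathways are not linear.

def sr15_pathway(year):
    """Normalized emissions implied by linear interpolation between
    the SR15 milestones (2010 = 1.0, 2030 = 0.55, 2050 = 0.0)."""
    if year <= 2010:
        return 1.0
    if year <= 2030:
        return 1.0 - 0.45 * (year - 2010) / 20  # fall to 0.55 by 2030
    if year <= 2050:
        return 0.55 * (2050 - year) / 20        # fall to net zero by 2050
    return 0.0

print(round(sr15_pathway(2030), 2))  # 0.55
print(round(sr15_pathway(2040), 3))  # 0.275
```

Under this linear reading, the post-2030 leg is the steeper one: it must remove the remaining 55% of 2010-level emissions in twenty years (2.75 percentage points of the 2010 baseline per year) versus 2.25 points per year for 2010–2030.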
Under the pledges of the countries entering the Paris Accord, a sharp rise of 3.1 to 3.7 °C is still expected to occur by 2100. Holding this rise to 1.5 °C avoids the worst effects of a rise of even 2 °C. However, a warming of even 1.5 degrees will still result in large-scale drought, famine, heat stress, species die-off, loss of entire ecosystems, and loss of habitable land, throwing more than 100 million people into poverty. Effects will be most drastic in arid regions including the Middle East and the Sahel in Africa, where fresh water is expected to remain in some areas following a 1.5 °C rise in temperatures but to dry up completely if the rise reaches 2 °C. Special Report on climate change and land (SRCCL) The final draft of the "Special Report on climate change and land" (SRCCL), with the full title "Special Report on climate change, desertification, land degradation, sustainable land management, food security, and greenhouse gas fluxes in terrestrial ecosystems", was published online on 7 August 2019. Special Report on the Ocean and Cryosphere in a Changing Climate (SROCC) The "Special Report on the Ocean and Cryosphere in a Changing Climate" (SROCC) was approved on 25 September 2019 in Monaco. Among other findings, the report concluded that sea level rise could be up to two feet higher by the year 2100, even if efforts to reduce greenhouse gas emissions and to limit global warming are successful; coastal cities across the world could see so-called "storm[s] of the century" at least once a year. Methodology reports Within the IPCC, the National Greenhouse Gas Inventory Program develops methodologies to estimate emissions of greenhouse gases. This has been undertaken since 1991 by the IPCC WGI in close collaboration with the Organisation for Economic Co-operation and Development and the International Energy Agency.
The objectives of the National Greenhouse Gas Inventory Program are: to develop and refine an internationally agreed methodology and software for the calculation and reporting of national greenhouse gas emissions and removals; and to encourage the widespread use of this methodology by countries participating in the IPCC and by signatories of the UNFCCC. Revised 1996 IPCC Guidelines for National Greenhouse Gas Inventories The 1996 Guidelines for National Greenhouse Gas Inventories provide the methodological basis for the estimation of national greenhouse gas emissions inventories. Over time these guidelines have been supplemented with good practice reports: Good Practice Guidance and Uncertainty Management in National Greenhouse Gas Inventories and Good Practice Guidance for Land Use, Land-Use Change and Forestry. The 1996 guidelines and the two good practice reports are to be used by parties to the UNFCCC and to the Kyoto Protocol in their annual submissions of national greenhouse gas inventories. 2006 IPCC Guidelines for National Greenhouse Gas Inventories The 2006 IPCC Guidelines for National Greenhouse Gas Inventories is the latest version of these emission estimation methodologies, including a large number of default emission factors. Although the IPCC prepared this new version of the guidelines at the request of the parties to the UNFCCC, the methods have not yet been officially accepted for use in national greenhouse gas emissions reporting under the UNFCCC and the Kyoto Protocol. Other activities The IPCC concentrates its activities on the tasks allotted to it by the relevant WMO Executive Council and UNEP Governing Council resolutions and decisions as well as on actions in support of the UNFCCC process. While the preparation of the assessment reports is a major IPCC function, it also supports other activities, such as the Data Distribution Centre and the National Greenhouse Gas Inventories Programme, required under the UNFCCC.
This involves publishing default emission factors, which are factors used to derive emissions estimates based on the levels of fuel consumption, industrial production and so on. The IPCC also often answers inquiries from the UNFCCC Subsidiary Body for Scientific and Technological Advice (SBSTA). Awards Nobel Peace Prize in 2007 In December 2007, the IPCC was awarded the Nobel Peace Prize "for their efforts to build up and disseminate greater knowledge about man-made climate change, and to lay the foundations for the measures that are needed to counteract such change". The award was shared with former U.S. Vice-President Al Gore for his work on climate change and the documentary An Inconvenient Truth. Criticism There is widespread support for the IPCC in the scientific community, which is reflected in publications by other scientific bodies and experts; however, critiques of the IPCC have been made. Since 2010, the IPCC has come under unprecedented public and political scrutiny. The global IPCC consensus approach has been challenged internally and externally, for example, during the 2009 Climatic Research Unit email controversy ("Climategate"). Some contest it as an information monopoly, with consequences for both the quality and the impact of the IPCC's work. Conservative nature of IPCC reports Some critics have contended that the IPCC reports tend to be conservative by consistently underestimating the pace and impacts of global warming, and report only the "lowest common denominator" findings. On the eve of the publication of the IPCC's Fourth Assessment Report in 2007, a study was published suggesting that temperatures and sea levels had been rising at or above the maximum rates proposed in the IPCC's 2001 Third Assessment Report. The study compared IPCC 2001 projections on temperature and sea level change with observations.
Over the six years studied, the actual temperature rise was near the top end of the range given by IPCC's 2001 projection.
outlets for the PC. More than 190 ComputerLand stores already existed, while Sears was in the process of creating a handful of in-store computer centers for sale of the new product. Reception was overwhelmingly positive, with sales estimates from analysts suggesting billions of dollars in sales over the next few years, and the IBM PC immediately became the talk of the entire computing industry. Dealers were overwhelmed with orders, including customers offering pre-payment for machines with no guaranteed delivery date. By the time the machine was shipping, the term "PC" was becoming a household name. Success Sales exceeded IBM's expectations by as much as 800%, with the company shipping 40,000 PCs a month at one point. The company estimated that 50 to 70% of PCs sold in retail stores went to the home. In 1983 IBM sold more than 750,000 machines, while Digital Equipment Corporation, one of the competitors whose success had spurred IBM to enter the market, had sold only 69,000 machines in that period. Software support from the industry grew rapidly, with the IBM PC nearly instantly becoming the primary target for most microcomputer software development. One publication counted 753 software packages available a year after the PC's release, four times as many as the Macintosh had a year after release. Hardware support also grew rapidly, with 30–40 companies competing to sell memory expansion cards within a year. By 1984, IBM's revenue from the PC market was $4 billion, more than twice that of Apple. A 1983 study of corporate customers found that two thirds of large customers standardizing on one computer chose the PC, compared to 9% for Apple. A 1985 Fortune survey found that 56% of American companies with personal computers used PCs, compared to Apple's 16%. Almost as soon as the PC reached the market, rumors of clones began, and the first PC compatible clone was released in June 1982, less than a year after the PC's debut.
Hardware For low cost and a quick design turnaround time, the hardware design of the IBM PC used entirely "off-the-shelf" parts from third party manufacturers, rather than unique hardware designed by IBM. The PC is housed in a wide, short steel chassis intended to support the weight of a CRT monitor. The front panel is made of plastic, with an opening where one or two disk drives can be installed. The back panel houses a power inlet and switch, a keyboard connector, a cassette connector and a series of tall vertical slots with blank metal panels which can be removed in order to install expansion cards. Internally, the chassis is dominated by a motherboard which houses the CPU, built-in RAM, expansion RAM sockets, and slots for expansion cards. The IBM PC was highly expandable and upgradeable, but the base factory configuration included: Motherboard The PC is built around a single large circuit board called a motherboard which carries the processor, built-in RAM, expansion slots, keyboard and cassette ports, and the various peripheral integrated circuits that connected and controlled the components of the machine. The peripheral chips included an Intel 8259 PIC, an Intel 8237 DMA controller, and an Intel 8253 PIT. The PIT provides clock "ticks" and dynamic memory refresh timing. CPU and RAM The CPU is an Intel 8088, a cost-reduced form of the Intel 8086 which largely retains the 8086's internal 16-bit logic, but exposes only an 8-bit bus. The CPU is clocked at 4.77 MHz, which would eventually become an issue when clones and later PC models offered higher CPU speeds that broke compatibility with software developed for the original PC. The single base clock frequency for the system was 14.31818 MHz, which when divided by 3, yielded the 4.77 MHz for the CPU (which was considered close enough to the then 5 MHz limit of the 8088), and when divided by 4, yielded the required 3.579545 MHz for the NTSC color carrier frequency. 
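The clock-divider scheme described above can be checked with simple arithmetic; this small sketch (illustrative only, in Python) derives both frequencies from the single 14.31818 MHz base clock:

```python
# The IBM PC derives all timing from one 14.31818 MHz crystal:
# divide by 3 for the 8088 CPU clock, divide by 4 for the NTSC
# color subcarrier used by CGA composite output.
BASE_MHZ = 14.31818

cpu_mhz = BASE_MHZ / 3   # 8088 CPU clock
ntsc_mhz = BASE_MHZ / 4  # NTSC color carrier

print(round(cpu_mhz, 2))   # 4.77
print(round(ntsc_mhz, 6))  # 3.579545
```

Deriving both signals from one crystal with integer dividers kept the design cheap, which is why the CPU runs at the slightly odd 4.77 MHz rather than a round 5 MHz.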
The PC motherboard included a second, empty socket, described by IBM simply as an "auxiliary processor socket", although the most obvious use was the addition of an Intel 8087 math coprocessor, which improved floating-point math performance. From the factory the PC was equipped with either 16 kB or 64 kB of RAM. RAM upgrades were provided both by IBM and third parties as expansion cards, and could upgrade the machine to a maximum of 256 kB. ROM BIOS The BIOS is the firmware of the IBM PC, occupying four 2 kB ROM chips on the motherboard. It provides bootstrap code and a library of common functions that all software can use for many purposes, such as video output, keyboard input, disk access, interrupt handling, testing memory, and other functions. IBM shipped several versions of the BIOS throughout the PC's lifespan. Display While most home computers had built-in video output hardware, IBM took the unusual approach of offering two different graphics options, the MDA and CGA cards. The former provided high-resolution monochrome text, but could not display anything except text, while the latter provided medium- and low-resolution color graphics and text. CGA used the same scan rate as NTSC television, allowing it to provide a composite video output which could be used with any compatible television or composite monitor, as well as a direct-drive TTL output suitable for use with any RGBI monitor using an NTSC scan rate. IBM also sold the 5153 color monitor for this purpose, but it was not available at the PC's launch, only arriving in March 1983. MDA scanned at a higher frequency and required a proprietary monitor, the IBM 5151. The card also included a built-in printer port. Both cards could also be installed simultaneously for mixed graphics and text applications. For instance, AutoCAD, Lotus 1-2-3 and other software allowed use of a CGA monitor for graphics and a separate monochrome monitor for text menus.
Third parties went on to provide an enormous variety of aftermarket graphics adapters, such as the Hercules Graphics Card. The software and hardware of the PC, at release, were designed around a single 8-bit adaptation of the ASCII character set, now known as code page 437. Storage The two bays in the front of the machine could be populated with one or two 5.25″ floppy disk drives, storing 160 kB per disk side for a total of 320 kB of storage on one disk. The floppy drives require a controller card inserted in an expansion slot, and connect with a single ribbon cable with two edge connectors. The IBM floppy controller card provides an external 37-pin D-sub connector for attachment of an external disk drive, although IBM did not offer one for purchase until 1986. As was common for home computers of the era, the IBM PC offered a port for connecting a cassette data recorder. Unlike the typical home computer however, this was never a major avenue for software distribution, probably because very few PCs were sold without floppy drives. The port was removed on the very next PC model, the XT. At release, IBM did not offer any hard disk drive option and adding one was difficult - the PC's stock power supply had inadequate power to run a hard drive, the motherboard did not support the BIOS expansion ROMs which were needed to support a hard drive controller, and neither PC DOS nor the BIOS had any support for hard disks. After the XT was released, IBM altered the design of the 5150 to add most of these capabilities, except for the upgraded power supply. At this point adding a hard drive was possible, but required the purchase of the IBM 5161 Expansion Unit, which contained a dedicated power supply and included a hard drive. Although official hard drive support did not exist, the third party market did provide early hard drives that connected to the floppy disk controller, but these required a patched version of PC DOS to support the larger disk sizes.
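The 160 kB and 320 kB figures follow directly from the disk geometry. Assuming the launch-era single-sided PC DOS format of 40 tracks, 8 sectors per track, and 512-byte sectors (the geometry is stated here as an assumption, not taken from the text above), the arithmetic is:

```python
# Capacity of the original IBM PC 5.25" diskette format (assumed
# geometry: 40 tracks x 8 sectors/track x 512 bytes/sector per side).
TRACKS = 40
SECTORS_PER_TRACK = 8
BYTES_PER_SECTOR = 512

side_bytes = TRACKS * SECTORS_PER_TRACK * BYTES_PER_SECTOR
print(side_bytes // 1024)      # 160 kB per side
print(2 * side_bytes // 1024)  # 320 kB for a double-sided disk
```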
Human interface The only option for human interface provided in the base PC was the built-in keyboard port, meant to connect to the included IBM Model F keyboard. The Model F was initially developed for the IBM Datamaster, and was substantially better than the keyboards provided with virtually all home computers on the market at that time in many regards - number of keys, reliability and ergonomics. While some home computers of the time utilized chiclet keyboards or inexpensive mechanical designs, the IBM keyboard provided good ergonomics, reliable and positive tactile key mechanisms and flip-up feet to adjust its angle. Public reception of the keyboard was extremely positive, with some sources describing it as a major selling point of the PC and even as "the best keyboard available on any microcomputer." At release, IBM provided a Game Control Adapter which offered a 15-pin port intended for the connection of up to two joysticks, each having two analog axes and two buttons. Communications Connectivity to other computers and peripherals was initially provided through serial and parallel ports. IBM provided a serial card based on an 8250 UART. The BIOS supports up to two serial ports. IBM provided two different options for connecting Centronics-compatible parallel printers. One was the IBM Printer Adapter, and the other was integrated into the MDA as the IBM Monochrome Display and Printer Adapter. Expansion The expansion capability of the IBM PC was very significant to its success in the market. Some publications highlighted IBM's uncharacteristic decision to publish complete, thorough specifications of the system bus and memory map immediately on release, with the intention of fostering a market of compatible third-party hardware and software. The motherboard includes five 62-pin card edge connectors which are connected to the CPU's I/O lines. 
IBM referred to these as "I/O slots," but after the expansion of the PC clone industry they became retroactively known as the ISA bus. At the back of the machine is a metal panel, integrated into the steel chassis of the system unit, with a series of vertical slots lined up with each card slot. Most expansion cards have a matching metal bracket which slots into one of these openings, serving two purposes. First, a screw inserted through a tab on the bracket into the chassis fastens the card securely in place, preventing the card from wiggling out of place. Second, any ports the card provides for external attachment are bolted to the bracket, keeping them secured in place as well. The PC expansion slots can accept an enormous variety of expansion hardware, adding capabilities such as: Graphics Sound Mouse support Expanded memory Additional serial or parallel ports Networking Connection to proprietary industrial or scientific equipment The market reacted as IBM had intended, and within a year or two of the PC's release the available options for expansion hardware were immense. 5161 Expansion Unit The expandability of the PC was important, but had significant limitations. One major limitation was the inability to install a hard drive, as described above. Another was that there were only five expansion slots, which tended to get filled up by essential hardware - a PC with a graphics card, memory expansion, parallel card and serial card was left with only one open slot, for instance. IBM rectified these problems in the later XT, which included more slots and support for an internal hard drive, but at the same time released the 5161 Expansion Unit, which could be used with either the XT or the original PC. The 5161 connected to the PC system unit using a cable and a card plugged into an expansion slot, and provided a second system chassis with more expansion slots and a hard drive. 
Software IBM initially announced intent to support multiple operating systems: CP/M-86, UCSD p-System, and an in-house product called IBM PC DOS, developed by Microsoft. In practice, IBM's expectation and intent was for the market to primarily use PC DOS; CP/M-86 was not available for six months after the PC's release.
|
a year after the PC's release, four times as many as the Macintosh had a year after release. Hardware support also grew rapidly, with 30–40 companies competing to sell memory expansion cards within a year. By 1984, IBM's revenue from the PC market was $4 billion, more than twice that of Apple. A 1983 study of corporate customers found that two thirds of large customers standardizing on one computer chose the PC, compared to 9% for Apple. A 1985 Fortune survey found that 56% of American companies with personal computers used PCs, compared to Apple's 16%. Almost as soon as the PC reached the market, rumors of clones began, and the first PC compatible clone was released in June 1982, less than a year after the PC's debut. Hardware For low cost and a quick design turnaround time, the hardware design of the IBM PC used entirely "off-the-shelf" parts from third party manufacturers, rather than unique hardware designed by IBM. The PC is housed in a wide, short steel chassis intended to support the weight of a CRT monitor. The front panel is made of plastic, with an opening where one or two disk drives can be installed. The back panel houses a power inlet and switch, a keyboard connector, a cassette connector and a series of tall vertical slots with blank metal panels which can be removed in order to install expansion cards. Internally, the chassis is dominated by a motherboard which houses the CPU, built-in RAM, expansion RAM sockets, and slots for expansion cards. The IBM PC was highly expandable and upgradeable, but the base factory configuration included: Motherboard The PC is built around a single large circuit board called a motherboard which carries the processor, built-in RAM, expansion slots, keyboard and cassette ports, and the various peripheral integrated circuits that connected and controlled the components of the machine. The peripheral chips included an Intel 8259 PIC, an Intel 8237 DMA controller, and an Intel 8253 PIT. 
The PIT provides clock "ticks" and dynamic memory refresh timing. CPU and RAM The CPU is an Intel 8088, a cost-reduced form of the Intel 8086 which largely retains the 8086's internal 16-bit logic, but exposes only an 8-bit bus. The CPU is clocked at 4.77 MHz, which would eventually become an issue when clones and later PC models offered higher CPU speeds that broke compatibility with software developed for the original PC. The single base clock frequency for the system was 14.31818 MHz, which when divided by 3, yielded the 4.77 MHz for the CPU (which was considered close enough to the then 5 MHz limit of the 8088), and when divided by 4, yielded the required 3.579545 MHz for the NTSC color carrier frequency. The PC motherboard included a second, empty socket, described by IBM simply as an "auxiliary processor socket", although the most obvious use was the addition of an Intel 8087 math coprocessor, which improved floating-point math performance. From the factory the PC was equipped with either 16 kB or 64 kB of RAM. RAM upgrades were provided both by IBM and third parties as expansion cards, and could upgrade the machine to a maximum of 256 kB. ROM BIOS The BIOS is the firmware of the IBM PC, occupying four 2 kB ROM chips on the motherboard. It provides bootstrap code and a library of common functions that all software can use for many purposes, such as video output, keyboard input, disk access, interrupt handling, testing memory, and other functions. IBM shipped several versions of the BIOS throughout the PC's lifespan. Display While most home computers had built-in video output hardware, IBM took the unusual approach of offering two different graphics options, the MDA and CGA cards. The former provided high-resolution monochrome text, but could not display anything except text, while the latter provided medium- and low-resolution color graphics and text. 
CGA used the same scan rate as NTSC television, allowing it to provide a composite video output which could be used with any compatible television or composite monitor, as well as a direct-drive TTL output suitable for use with any RGBI monitor using an NTSC scan rate. IBM also sold the 5153 color monitor for this purpose, but it was not available until March 1983. MDA scanned at a higher frequency and required a proprietary monitor, the IBM 5151. The card also included a built-in printer port. Both cards could also be installed simultaneously for mixed graphics and text applications. For instance, AutoCAD, Lotus 1-2-3 and other software allowed use of a CGA monitor for graphics and a separate monochrome monitor for text menus. Third parties went on to provide an enormous variety of aftermarket graphics adapters, such as the Hercules Graphics Card. The software and hardware of the PC, at release, were designed around a single 8-bit adaptation of the ASCII character set, now known as code page 437. Storage The two bays in the front of the machine could be populated with one or two 5.25″ floppy disk drives, storing 160 kB per disk side for a total of 320 kB of storage on one disk. The floppy drives require a controller card inserted in an expansion slot, and connect with a single ribbon cable with two edge connectors. The IBM floppy controller card provides an external 37-pin D-sub connector for attachment of an external disk drive, although IBM did not offer one for purchase until 1986. As was common for home computers of the era, the IBM PC offered a port for connecting a cassette data recorder. Unlike the typical home computer, however, this was never a major avenue for software distribution, probably because very few PCs were sold without floppy drives. The port was removed on the very next PC model, the XT.
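The 160 kB / 320 kB storage figures quoted above follow directly from the disk geometry. The text gives only the totals, so the geometry constants below (40 tracks per side, 8 sectors of 512 bytes per track) are an assumption, taken from the standard PC DOS 1.0 floppy format; a minimal check:

```python
# Capacity of the PC's original 5.25" floppy format (geometry assumed, not from the text).
TRACKS_PER_SIDE = 40
SECTORS_PER_TRACK = 8
BYTES_PER_SECTOR = 512

per_side = TRACKS_PER_SIDE * SECTORS_PER_TRACK * BYTES_PER_SECTOR
print(per_side // 1024, "kB per side")       # 160 kB
print(2 * per_side // 1024, "kB per disk")   # 320 kB
```

Later DOS versions raised capacity by adding a ninth sector per track and, on the AT, more tracks, without changing the sector size.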
At release, IBM did not offer any hard disk drive option, and adding one was difficult: the PC's stock power supply had inadequate power to run a hard drive, the motherboard did not support the BIOS expansion ROMs needed for a hard drive controller, and neither PC DOS nor the BIOS supported hard disks. After the XT was released, IBM altered the design of the 5150 to add most of these capabilities, except for the upgraded power supply. At this point adding a hard drive was possible, but required the purchase of the IBM 5161 Expansion Unit, which contained a dedicated power supply and included a hard drive. Although official hard drive support did not exist, the third party market did provide early hard drives that connected to the floppy disk controller, but required a patched version of PC DOS to support the larger disk sizes. Human interface The only option for human interface provided in the base PC was the built-in keyboard port, meant to connect to the included IBM Model F keyboard. The Model F was initially developed for the IBM Datamaster, and was substantially better than the keyboards provided with virtually all home computers on the market at that time in many regards: number of keys, reliability and ergonomics. While some home computers of the time utilized chiclet keyboards or inexpensive mechanical designs, the IBM keyboard provided good ergonomics, reliable and positive tactile key mechanisms and flip-up feet to adjust its angle. Public reception of the keyboard was extremely positive, with some sources describing it as a major selling point of the PC and even as "the best keyboard available on any microcomputer." At release, IBM provided a Game Control Adapter which offered a 15-pin port intended for the connection of up to two joysticks, each having two analog axes and two buttons. Communications Connectivity to other computers and peripherals was initially provided through serial and parallel ports.
IBM provided a serial card based on an 8250 UART. The BIOS supports up to two serial ports. IBM provided two different options for connecting Centronics-compatible parallel printers. One was the IBM Printer Adapter, and the other was integrated into the MDA as the IBM Monochrome Display and Printer Adapter. Expansion The expansion capability of the IBM PC was very significant to its success in the market. Some publications highlighted IBM's uncharacteristic decision to publish complete, thorough specifications of the system bus and memory map immediately on
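The 8250 UART on the serial card mentioned above derives its baud rates by dividing a reference clock by sixteen times a programmable divisor. The 1.8432 MHz reference is the conventional clock for the PC serial adapter rather than a figure from the text, so treat the constants here as assumptions; the sketch shows how divisor latch values map to common baud rates.

```python
# Baud-rate divisors for an 8250 UART with the conventional 1.8432 MHz reference clock
# (clock value assumed; the text does not state it).
UART_CLOCK_HZ = 1_843_200

def divisor_for(baud: int) -> int:
    """Divisor latch value such that baud = clock / (16 * divisor)."""
    return UART_CLOCK_HZ // (16 * baud)

for baud in (300, 1200, 9600):
    print(baud, "baud -> divisor", divisor_for(baud))
```

Because the divisor must be an integer, only rates that divide 115,200 evenly are exact; this is why 9600, 19200 and similar figures became the standard PC serial speeds.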
|
To correspond with the subdivisions of the English shires into honours or baronies, Irish counties were granted out to the Anglo-Norman noblemen in cantreds, later known as baronies, which in turn were subdivided, as in England, into parishes. Parishes were composed of townlands. However, in many cases, these divisions correspond to earlier, pre-Norman, divisions. While there are 331 baronies in Ireland, and more than a thousand civil parishes, there are around sixty thousand townlands that range in size from one to several thousand hectares. Townlands were often traditionally divided into smaller units called quarters, but these subdivisions are not legally defined. Counties corporate The following towns/cities had charters specifically granting them the status of a county corporate:
County of the Town of Carrickfergus (by 1325)
County of the City of Cork (1608)
County of the Town of Drogheda (1412)
County of the City of Dublin (1548)
County of the Town of Galway (1610)
County of the City of Kilkenny (1610)
County of the City of Limerick (1609)
County of the City of Waterford (1574)
The only entirely new counties created in 1898 were the county boroughs of Londonderry and Belfast. Carrickfergus, Drogheda and Kilkenny were abolished; Galway was also abolished, but recreated in 1986. Exceptions to the county system of control Regional presidencies of Connacht and Munster remained in existence until 1672, with special powers over their subsidiary counties. Tipperary remained a county palatine until the passing of the County Palatine of Tipperary Act 1715, with different officials and procedures from other counties. At the same time, Dublin, until the 19th century, had ecclesiastical liberties with rules outside those applying to the rest of Dublin city and county. Exclaves of the county of Dublin existed in counties Kildare and Wicklow. At least eight other enclaves of one county inside another, or between two others, existed.
The various enclaves and exclaves were merged into neighbouring and surrounding counties, primarily in the mid-19th century under a series of Orders in Council. Evolution of functions The Church of Ireland exercised functions at the level of a civil parish that would later be exercised by county authorities. Vestigial feudal power structures of major old estates remained well into the 18th century. Urban corporations operated under individual royal charters. Management of counties came to be exercised by grand juries. Members of grand juries were local ratepayers who historically held judicial functions, and who took on the maintenance of roads and bridges and the collection of "county cess" taxes. They were usually composed of wealthy "country gentlemen" (i.e. landowners, farmers and merchants): A country gentleman as a member of a Grand Jury...levied the local taxes, appointed the nephews of his old friends to collect them, and spent them when they were gathered in. He controlled the boards of guardians and appointed the dispensary doctors, regulated the diet of paupers, inflicted fines and administered the law at petty sessions. The counties were initially used for judicial purposes, but began to take on some governmental functions in the 17th century, notably with grand juries. 19th and 20th centuries In 1836, the use of counties as local government units was further developed, with grand-jury powers extended under the Grand Jury (Ireland) Act 1836. The traditional county of Tipperary was split into two judicial counties (or ridings) following the establishment of assize courts in 1838. Also in that year, local poor law boards, with a mix of magistrates and elected "guardians", took over the health and social welfare functions of the grand juries. Sixty years later, a more radical reorganisation of local government took place with the passage of the Local Government (Ireland) Act 1898.
This Act established a county council for each of the thirty-three Irish administrative counties. Elected county councils took over the powers of the grand juries. The boundaries of the traditional counties changed on a number of occasions. The 1898 Act changed the boundaries of Counties Galway, Clare, Mayo, Roscommon, Sligo, Waterford, Kilkenny, Meath and Louth, and others. County Tipperary was divided into two regions: North Riding and South Riding. Areas of the cities of Belfast, Cork, Dublin, Limerick, Derry and Waterford were carved from their surrounding counties to become county boroughs in their own right and given powers equivalent to those of administrative counties. Under the Government of Ireland Act 1920, the island was partitioned between Southern Ireland and Northern Ireland. For the purposes of the Act, ... Northern Ireland shall consist of the parliamentary counties of Antrim, Armagh, Down, Fermanagh, Londonderry and Tyrone, and the parliamentary boroughs of Belfast and Londonderry, and Southern Ireland shall consist of so much of Ireland as is not comprised within the said parliamentary counties and boroughs. The county and county borough borders were thus used to determine the line of partition. Southern Ireland shortly afterwards became the Irish Free State. This partition was entrenched in the Anglo-Irish Treaty, ratified in 1922, by which the Irish Free State left the United Kingdom; Northern Ireland exercised its option to remain outside the Free State two days later. Historic and traditional counties Areas that were shired by 1607 and continued as counties until the local government reforms of 1836, 1898 and 2001 are sometimes referred to as "traditional" or "historic" counties. These were distinct from the counties corporate that existed in some of the larger towns and cities, although linked to the county at large for other purposes.
From 1898 to 2001, areas with county councils were known as administrative counties, while the counties corporate were designated as county boroughs. From 2001, local government areas were divided between counties and cities. From 2014, they were divided into counties, cities, and cities and counties. Current usage In the Republic of Ireland In the Republic of Ireland, the traditional counties are, in general, the basis for local government, planning and community development purposes and are still generally respected for other purposes. They are governed by county councils. Administrative borders have been altered to place various towns, originally split between two counties, entirely within one county. At the establishment of the Irish Free State in 1922, there were 27 administrative counties (with County Tipperary divided into the administrative counties of North Tipperary and South Tipperary) and four county boroughs: Dublin, Cork, Limerick and Waterford. Rural districts were abolished by the Local Government Act 1925 and the Local Government (Dublin) Act 1930 amidst widespread allegations of corruption. Under the Local Government Provisional Order Confirmation Act 1976, part of the urban area of Drogheda, which lay in County Meath, was transferred to County Louth on 1 January 1977. This resulted in the land area of County Louth increasing slightly at the expense of County Meath. The possibility of a similar action with regard to Waterford City has been raised in recent years, though opposition from Kilkenny has been strong. In 1985, Galway became a county borough. County Dublin was abolished as an administrative county in 1994 and divided into three administrative counties: Dún Laoghaire–Rathdown, Fingal, and South Dublin. Under the Local Government Act 2001, the county boroughs of Dublin, Cork, Galway, Limerick and Waterford were re-styled as cities, with the same status in law as counties.
The term administrative county was replaced with the term "county". The cities of Limerick and Waterford were merged with their respective counties by the Local Government Reform Act 2014, to form new cities and counties. The same Act also abolished North Tipperary and South Tipperary and re-established County Tipperary as an administrative unit. There are now 31 local government areas: 26 counties, three cities, and two cities and counties. Since 2014, local authorities send representatives to Regional Assemblies overseeing three regions for the purposes of European Structural and Investment Funds: the Southern Region, the Eastern and Midland Region, and the Northern and Western Region. From 1994 to 2014, there were eight Regional Authorities, dissolved under the Local Government Reform Act 2014. As placenames, there is a distinction between the traditional counties, listed as "counties", and those created
|
continually shrinking to encompass Dublin, and parts of Meath, Louth and Kildare. Throughout the rest of Ireland, English rule was upheld by the earls of Desmond, Ormond, and Kildare (all created in the 14th century), with the extension of the county system all but impossible. During the reign of Edward III (1327–77), all franchises, grants and liberties were temporarily revoked, with power passing to the king's sheriffs over the seneschals. This may have been due to the disorganisation caused by the Bruce invasion, as well as the renunciation by the Connaught Burkes of their allegiance to the crown. The Earls of Ulster divided their territory into counties; however, these are not considered part of the Crown's shiring of Ireland. In 1333, the Earldom of Ulster is recorded as consisting of seven counties: Antrim, Blathewyc, Cragferus, Coulrath, del Art, Dun (also known as Ladcathel), and Twescard. Passage to the Crown Of the original lordships or palatine counties: Leinster had passed from Richard de Clare to his daughter, Isabel de Clare, who had married William Marshal, 1st Earl of Pembroke (second creation of title). This marriage was confirmed by King John, with Isabel's lands given to William as consort. The liberty was afterwards divided into five—Carlow, Kildare, Kilkenny, Leix and Wexford—one for each of Marshal's co-heiresses. Meath was divided between the granddaughters of Walter de Lacy: Maud and Margery. Maud's half became the liberty of Trim, and she married Geoffrey de Geneville. Margery's half retained the name Meath, and she married John de Verdon. After the marriage of Maud's daughter Joan to Roger Mortimer, 1st Earl of March, Trim later passed via their descendants to the English Crown. Meath, which had passed to the Talbots, was resumed by Henry VIII under the Statute of Absentees.
Ulster was regranted to the de Lacys from John de Courcy, whilst Connaught, which had been granted to William de Burgh, was at some point divided into the liberties of Connaught and Roscommon. William's grandson Walter de Burgh was in 1264 also made lord of Ulster, bringing both Connaught and Ulster under the same lord. In 1352, Elizabeth de Burgh, 4th Countess of Ulster, married Lionel of Antwerp, a son of King Edward III. Their daughter Philippa married Edmund Mortimer, 3rd Earl of March. Upon the death of Edmund Mortimer, 5th Earl of March in 1425, both lordships were inherited by Richard of York, 3rd Duke of York, and thus passed to the Crown. Tipperary was resumed by King James I; however, under Charles II it was reconstituted in 1662 for James Butler, 1st Duke of Ormonde. With the passing of liberties to the Crown, the number of Counties of the Cross declined, and only one, Tipperary, survived into the Stuart era; the others had ceased to exist by the reign of Henry VIII. Tudor era It was not until the Tudors, specifically the reign of Henry VIII (1509–47), that crown control started to once again extend throughout Ireland. Having declared himself King of Ireland in 1541, Henry VIII went about converting Irish chiefs into feudal subjects of the crown with land divided into districts, which were eventually amalgamated into the modern counties. County boundaries were still ill-defined; however, in 1543 Meath was split into Meath and Westmeath. Around 1545, the Byrnes and O'Tooles, both native septs that had long been a persistent irritant to the English administration of the Pale, petitioned the Lord Deputy of Ireland to turn their district into its own county, Wicklow. However, this was ignored.
During the reigns of the last two Tudor monarchs, Mary I (1553–58) and Elizabeth I (1558–1603), the majority of the work for the foundation of the modern counties was carried out under the auspices of three Lord Deputies: Thomas Radclyffe, 3rd Earl of Sussex, Sir Henry Sydney, and Sir John Perrot. Mary's reign saw the first addition of new counties since the reign of King John. Radclyffe had conquered the districts of Glenmaliry, Irry, Leix, Offaly, and Slewmargy from the O'Moores and O'Connors, and in 1556 a statute decreed that Offaly and part of Glenmaliry would be made into King's County, whilst the rest of Glenmaliry, along with Irry, Leix and Slewmargy, was formed into Queen's County. Radclyffe brought forth legislation to shire all land as yet unshired throughout Ireland and sought to divide the island into six parts—Connaught, Leinster, Meath, Nether Munster, Ulster, and Upper Munster. However, his administrative reign in Ireland was cut short, and it was not until the reign of Mary's successor, Elizabeth, that this legislation was re-adopted. Under Elizabeth, Radclyffe was brought back to implement it. Sydney, during his three tenures as Lord Deputy, created two presidencies to administer Connaught and Munster. He shired Connaught into the counties of Galway, Mayo, Roscommon, and Sligo. In 1565 the territory of the O'Rourkes within Roscommon was made into the county of Leitrim. In an attempt to reduce the importance of the province of Munster, Sydney, using the River Shannon as a natural boundary, took the former kingdom of Thomond (North Munster) and made it into the county of Clare as part of the presidency of Connaught in 1569. A commission headed by Perrot and others in 1571 declared that the territory of Desmond in Munster was to be made a county of itself, and it had its own sheriff appointed; however, in 1606 it was merged with the county of Kerry. In 1575 Sydney made an expedition to Ulster to plan its shiring.
However, nothing came of it. In 1578 the go-ahead was given for turning the districts of the Byrnes and O'Tooles into the county of Wicklow. However, with the outbreak of war in Munster and then Ulster, they resumed their independence. Sydney also sought to split Wexford into two smaller counties, the northern half of which was to be called Ferns, but the matter was dropped as it was considered impossible to properly administer. The territory of the O'Farrells of Annaly, then part of Westmeath, was formed into the county of Longford in 1583 and transferred to Connaught. The Desmond rebellion (1579–83) in Munster stopped Sydney's work, and by the time it had been defeated, Sir John Perrot was Lord Deputy, having been appointed in 1584. Perrot would be most remembered for shiring the only province of Ireland that remained effectively outside of English control, that of Ulster. Prior to his tenure, the only proper county in Ulster was Louth, which had been part of the Pale. There were two other long recognised entities north of Louth—Antrim and Down—that had at one time been "counties" of the Earldom of Ulster and were regarded as apart from the unreformed parts of the province. The date at which Antrim and Down were constituted is unknown. Perrot was recalled in 1588, and the shiring of Ulster would for two decades exist largely on paper, as the territory affected remained firmly outside of English control until the defeat of Hugh O'Neill, Earl of Tyrone, in the Nine Years' War. These counties were: Armagh, Cavan, Coleraine, Donegal, Fermanagh, Monaghan, and Tyrone. Cavan was formed from the territory of the O'Reillys of East Breifne in 1584 and was transferred from Connaught to Ulster.
After O'Neill and his allies fled Ireland in 1607 in the Flight of the Earls, their lands were escheated to the Crown, and the county divisions designed by Perrot were used as the basis for the grants of the subsequent Plantation of Ulster effected by King James I, which officially started in 1609. Around 1600, near the end of Elizabeth's reign, Clare was made an entirely distinct presidency of its own under the Earls of Thomond and would not return to being part of Munster until after the Restoration in 1660. It was not until the subjugation of the Byrnes and O'Tooles by Lord Deputy Sir Arthur Chichester that Wicklow was finally shired, in 1606. This county was one of the last to be created, yet was the closest to the centre of English power in Ireland. County Londonderry was incorporated in 1613 by the merger of County Coleraine with the barony of Loughinsholin (in County Tyrone), the North West Liberties of Londonderry (in County Donegal), and the North East Liberties of Coleraine (in County Antrim). Demarcation of counties and Tipperary Throughout the Elizabethan era and the reign of her successor James I, the exact boundaries of the provinces and the counties they consisted of remained uncertain. In 1598, Meath was considered a province in Hayne's Description of Ireland, and included the counties of Cavan, East Meath, Longford, and Westmeath. This contrasts with George Carew's 1602 survey, in which there were only four provinces, Longford was part of Connaught, Cavan was not mentioned at all, and only three counties were mentioned for Ulster. During Perrot's tenure as Lord President of Munster before he became Lord Deputy, Munster contained as many as eight counties rather than the six it later consisted of. These eight counties were: the five English counties of Cork, Limerick, Kerry, Tipperary, and Waterford; and the three Irish counties of Desmond, Ormond, and Thomond.
Perrot's divisions in Ulster were in the main confirmed by a series of inquisitions between 1606 and 1610 that settled the demarcation of the counties of Connaught and Ulster. John Speed's Description of the Kingdom of Ireland in 1610 showed that there was still a vagueness over which counties constituted the provinces; however, Meath was no longer reckoned a province. By 1616, when the Attorney General for Ireland Sir John Davies departed Ireland, almost all counties had been delimited. The only exception was the county of Tipperary, which still belonged to the palatinate of Ormond. Tipperary would remain an anomaly, being in effect two counties, one palatine, the other of the Cross, until 1715, during the reign of King George I, when an act abolished the "royalties and liberties of the County of Tipperary" and declared "that whatsoever hath been denominated or called Tipperary or Cross Tipperary, shall henceforth be and remain one county forever, under the name of the County of Tipperary." Between 1838 and 2014, County Tipperary was divided into two ridings/counties, North Tipperary and South Tipperary.
|
Jonathan Postel directed the Internet Assigned Numbers Authority (IANA) and its predecessor, which assign Internet addresses. IANA was administered from ISI until a nonprofit organization, ICANN, was created for that purpose in 1998. Other achievements Some of the first Net security applications, and one of the world's first portable computers, also originated at ISI. ISI researchers also created or co-created:
GLOBUS grid computing standard
LOOM knowledge representation language and environment, or LOOM (ontology)
MONARCH supercomputer-on-a-chip
Soar (cognitive architecture) for developing intelligent behavioral systems
Pegasus (workflow management)
In 2011, several ISI natural language experts advised the IBM team that created Watson, the computer that became the first machine to win against human competitors on the Jeopardy! TV show. In 2012, ISI's Kevin Knight spearheaded a successful drive to crack the Copiale cipher, a lengthy encrypted manuscript that had remained unreadable for 250 years. Also in 2012, the USC-Lockheed Martin Quantum Computing Center (QCC) became the first organization to operate a quantum annealing system outside of its manufacturer, D-Wave Systems, Inc. USC, ISI and Lockheed Martin are now performing basic and applied research into quantum computing. A second quantum annealing system is located at NASA Ames Research Center, and is operated jointly by NASA and Google. The USC Andrew and Erna Viterbi School of Engineering was ranked among the nation's top 10 engineering graduate schools by US News & World Report in 2015. Including ISI, USC is ranked first nationally in federal computer science research and development expenditures.
Organizational structure ISI is organized into seven divisions focused on differing areas of research expertise:
Advanced Electronics: MOSIS shared-services integrated circuit research and fabrication, CMOS and post-CMOS concepts, and biomimetics
Computational Systems and Technology: quantum computing; supercomputing; cloud, wireless, reconfigurable and multicore computing; microarchitecture and electronics; science automation technologies; social networks and space systems
Informatics Systems Research: grid computing, information security, service-oriented architectures, imaging and medical informatics that aim to transform healthcare discovery processes, practice and delivery
Artificial Intelligence: artificial intelligence in natural language, machine translation, information integration, education, robotics and other disciplines
Networking and Cybersecurity: internet security research and international testbed, internet measurement and monitoring approaches, and sensor networks that emphasize both networking theory and practice
Space Technology and Systems: space research and hands-on involvement for students through the Space Engineering Research Center, operated jointly by ISI and USC
Vision, Image, Speech and Text Analytics: ISI's Center for Vision, Image, Speech and Text Analytics (VISTA) is an internationally recognized leader in areas such as multimedia signal processing, computer vision, and natural language analysis
Smaller, specialized
|
approached the University of California at Los Angeles about creating an off-campus technology institute, but was told that a decision would take 15 months. He then presented the concept to USC, which approved the proposal in five days. ISI was launched with three employees in 1972. Its first proposal was funded by the Defense Advanced Research Projects Agency (DARPA) in 30 days for $6 million. ISI became one of the earliest nodes on ARPANET, the predecessor to the Internet, and in 1977 figured prominently in a demonstration of its international viability. ISI also helped refine the TCP/IP communications protocols fundamental to Net operations, and researcher Paul Mockapetris developed the now-familiar Domain Name System characterized by .com, .org, .net, .gov, and .edu on which the Net still operates. (The names .com, .org et al. were invented at SRI International, an ongoing collaborator.) Steve Crocker originated the Request for Comments (RFC) series, the written record of the network's technical structure and operation that both documented and shaped the emerging Internet. Another ISI researcher, Danny Cohen, became the first to implement packet voice and packet video over ARPANET, demonstrating the viability of packet switching for real-time applications. Jonathan Postel collaborated in development of TCP/IP, DNS and the SMTP protocol that supports email. He also edited the RFC for nearly three decades until his sudden death in 1998, when ISI colleagues assumed responsibility. The Institute retained that role until 2009.
ISI researchers also created or co-created the:
GLOBUS grid computing standard
LOOM knowledge representation language and environment, or LOOM (ontology)
MONARCH supercomputer-on-a-chip
Soar (cognitive architecture) for developing intelligent behavioral systems
Pegasus (workflow management)
In 2011, several ISI natural language experts advised the IBM team that created Watson, the computer that became the first machine to win against human competitors on the Jeopardy! TV show. In 2012, ISI's Kevin Knight spearheaded a successful drive to crack the Copiale cipher, a lengthy encrypted manuscript that
issues arise. These issues include but are not limited to natural disasters, computer/server malfunction, and physical theft. While paper-based business operations are still prevalent, requiring their own set of information security practices, enterprise digital initiatives are increasingly being emphasized, with information assurance now typically being dealt with by information technology (IT) security specialists. These specialists apply information security to technology (most often some form of computer system). It is worthwhile to note that a computer does not necessarily mean a home desktop. A computer is any device with a processor and some memory. Such devices can range from non-networked standalone devices as simple as calculators, to networked mobile computing devices such as smartphones and tablet computers. IT security specialists are almost always found in any major enterprise/establishment due to the nature and value of the data within larger businesses. They are responsible for keeping all of the technology within the company secure from malicious cyber attacks that often attempt to acquire critical private information or gain control of the internal systems. The field of information security has grown and evolved significantly in recent years. It offers many areas for specialization, including securing networks and allied infrastructure, securing applications and databases, security testing, information systems auditing, business continuity planning, electronic record discovery, and digital forensics. Information security professionals are very stable in their employment. More than 80 percent of professionals had no change in employer or employment over a period of a year, and the number of professionals is projected to grow by more than 11 percent annually from 2014 to 2019.
Threats
Information security threats come in many different forms.
Some of the most common threats today are software attacks, theft of intellectual property, theft of identity, theft of equipment or information, sabotage, and information extortion. Most people have experienced software attacks of some sort. Viruses, worms, phishing attacks, and Trojan horses are a few common examples of software attacks. The theft of intellectual property has also been an extensive issue for many businesses in the information technology (IT) field. Identity theft is the attempt to act as someone else, usually to obtain that person's personal information or to take advantage of their access to vital information through social engineering. Theft of equipment or information is becoming more prevalent today because most devices are mobile, making them prone to theft, and far more desirable as their data capacity increases. Sabotage usually consists of the destruction of an organization's website in an attempt to cause loss of confidence on the part of its customers. Information extortion consists of theft of a company's property or information as an attempt to receive a payment in exchange for returning the information or property to its owner, as with ransomware. There are many ways to help protect yourself from some of these attacks, but one of the most functional precautions is to conduct periodic user-awareness training. The number one threat to any organisation is its own users or internal employees, also called insider threats. Governments, military, corporations, financial institutions, hospitals, non-profit organisations, and private businesses amass a great deal of confidential information about their employees, customers, products, research, and financial status.
Should confidential information about a business' customers or finances or new product line fall into the hands of a competitor or a black hat hacker, a business and its customers could suffer widespread, irreparable financial loss, as well as damage to the company's reputation. From a business perspective, information security must be balanced against cost; the Gordon-Loeb Model provides a mathematical economic approach for addressing this concern. For the individual, information security has a significant effect on privacy, which is viewed very differently in various cultures.
Responses to threats
Possible responses to a security threat or risk are:
reduce/mitigate – implement safeguards and countermeasures to eliminate vulnerabilities or block threats
assign/transfer – place the cost of the threat onto another entity or organization, such as purchasing insurance or outsourcing
accept – evaluate if the cost of the countermeasure outweighs the possible cost of loss due to the threat
History
Since the early days of communication, diplomats and military commanders understood that it was necessary to provide some mechanism to protect the confidentiality of correspondence and to have some means of detecting tampering. Julius Caesar is credited with the invention of the Caesar cipher c. 50 B.C., which was created in order to prevent his secret messages from being read should a message fall into the wrong hands. However, for the most part protection was achieved through the application of procedural handling controls. Sensitive information was marked up to indicate that it should be protected and transported by trusted persons, guarded and stored in a secure environment or strong box. As postal services expanded, governments created official organizations to intercept, decipher, read, and reseal letters (e.g., the U.K.'s Secret Office, founded in 1653).
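The Caesar cipher substitutes each letter with the letter a fixed number of positions further along the alphabet, wrapping at the end. A minimal sketch (the shift of 3 traditionally attributed to Caesar is used for illustration):

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, wrapping within the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return ''.join(out)

ciphertext = caesar("ATTACK AT DAWN", 3)   # -> "DWWDFN DW GDZQ"
plaintext = caesar(ciphertext, -3)         # decryption is the inverse shift
```

With only 25 possible shifts the scheme is trivially breakable by exhaustion, which is one reason protection in this era rested mainly on the procedural handling controls described above.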
In the mid-nineteenth century more complex classification systems were developed to allow governments to manage their information according to the degree of sensitivity. For example, the British Government codified this, to some extent, with the publication of the Official Secrets Act in 1889. Section 1 of the law concerned espionage and unlawful disclosures of information, while Section 2 dealt with breaches of official trust. A public interest defense was soon added to defend disclosures in the interest of the state. A similar law was passed in India in 1889, The Indian Official Secrets Act, which was associated with the British colonial era and used to crack down on newspapers that opposed the Raj's policies. A newer version was passed in 1923 that extended to all matters of confidential or secret information for governance. By the time of the First World War, multi-tier classification systems were used to communicate information to and from various fronts, which encouraged greater use of code making and breaking sections in diplomatic and military headquarters. Encoding became more sophisticated between the wars as machines were employed to scramble and unscramble information. The history of information security proper begins with the establishment of computer security, the need for which appeared during World War II. The volume of information shared by the Allied countries during the Second World War necessitated formal alignment of classification systems and procedural controls. An arcane range of markings evolved to indicate who could handle documents (usually officers rather than enlisted troops) and where they should be stored as increasingly complex safes and storage facilities were developed. The Enigma Machine, which was employed by the Germans to encrypt wartime communications and was eventually broken by Alan Turing, can be regarded as a striking example of creating and using secured information.
Procedures evolved to ensure documents were destroyed properly, and it was the failure to follow these procedures which led to some of the greatest intelligence coups of the war (e.g., the capture of U-570). Various mainframe computers were connected online during the Cold War to complete more sophisticated tasks, in a communication process easier than mailing magnetic tapes back and forth between computer centers. As such, the Advanced Research Projects Agency (ARPA), of the United States Department of Defense, started researching the feasibility of a networked system of communication to trade information within the United States Armed Forces. In 1968, the ARPANET project was formulated by Dr. Larry Roberts; it would later evolve into what is known as the internet. In 1973, important elements of ARPANET security were found by internet pioneer Robert Metcalfe to have many flaws, such as the "vulnerability of password structure and formats; lack of safety procedures for dial-up connections; and nonexistent user identification and authorizations", aside from the lack of controls and safeguards to keep data safe from unauthorized access. Hackers had effortless access to ARPANET, as phone numbers were known by the public. Due to these problems, coupled with the constant violation of computer security, as well as the exponential increase in the number of hosts and users of the system, "network security" was often alluded to as "network insecurity". The end of the twentieth century and the early years of the twenty-first century saw rapid advancements in telecommunications, computing hardware and software, and data encryption. The availability of smaller, more powerful, and less expensive computing equipment made electronic data processing within the reach of small business and home users. The establishment of Transfer Control Protocol/Internetwork Protocol (TCP/IP) in the early 1980s enabled different types of computers to communicate.
These computers quickly became interconnected through the internet. The rapid growth and widespread use of electronic data processing and electronic business conducted through the internet, along with numerous occurrences of international terrorism, fueled the need for better methods of protecting the computers and the information they store, process, and transmit. The academic disciplines of computer security and information assurance emerged along with numerous professional organizations, all sharing the common goals of ensuring the security and reliability of information systems.
Basic principles
Key concepts
The CIA triad of confidentiality, integrity, and availability is at the heart of information security. (The members of the classic InfoSec triad—confidentiality, integrity, and availability—are interchangeably referred to in the literature as security attributes, properties, security goals, fundamental aspects, information criteria, critical information characteristics and basic building blocks.) However, debate continues about whether or not this CIA triad is sufficient to address rapidly changing technology and business requirements, with recommendations to consider expanding on the intersections between availability and confidentiality, as well as the relationship between security and privacy. Other principles such as "accountability" have sometimes been proposed; it has been pointed out that issues such as non-repudiation do not fit well within the three core concepts. The triad seems to have first been mentioned in a NIST publication in 1977. First published in 1992 and revised in 2002, the OECD's Guidelines for the Security of Information Systems and Networks proposed nine generally accepted principles: awareness, responsibility, response, ethics, democracy, risk assessment, security design and implementation, security management, and reassessment.
Building upon those, in 2004 the NIST's Engineering Principles for Information Technology Security proposed 33 principles, from each of which guidelines and practices have been derived. In 1998, Donn Parker proposed an alternative model for the classic CIA triad that he called the six atomic elements of information. The elements are confidentiality, possession, integrity, authenticity, availability, and utility. The merits of the Parkerian Hexad are a subject of debate amongst security professionals. In 2011, The Open Group published the information security management standard O-ISM3. This standard proposed an operational definition of the key concepts of security, with elements called "security objectives", related to access control (9), availability (3), data quality (1), compliance, and technical (4). In 2009, the DoD Software Protection Initiative released the Three Tenets of Cybersecurity, which are System Susceptibility, Access to the Flaw, and Capability to Exploit the Flaw. Neither of these models is widely adopted.
Confidentiality
In information security, confidentiality "is the property, that information is not made available or disclosed to unauthorized individuals, entities, or processes." While similar to "privacy," the two words are not interchangeable. Rather, confidentiality is a component of privacy that works to protect data from unauthorized viewers. Examples of confidentiality of electronic data being compromised include laptop theft, password theft, or sensitive emails being sent to the incorrect individuals.
Integrity
In IT security, data integrity means maintaining and assuring the accuracy and completeness of data over its entire lifecycle. This means that data cannot be modified in an unauthorized or undetected manner. This is not the same thing as referential integrity in databases, although it can be viewed as a special case of consistency as understood in the classic ACID model of transaction processing.
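Data integrity in this sense can be checked mechanically with a cryptographic digest: any modification to the data, however small, produces a different fingerprint. A minimal sketch using SHA-256 (the message and workflow are invented for illustration):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest acting as a fingerprint of the data."""
    return hashlib.sha256(data).hexdigest()

original = b"Transfer $100 to account 12345"
stored = fingerprint(original)      # recorded while the data was still trusted

tampered = b"Transfer $900 to account 12345"
unchanged_ok = fingerprint(original) == stored   # True: data verifies
tamper_found = fingerprint(tampered) != stored   # True: modification detected
```

Note that a bare digest only detects accidental or unauthorized change if the stored fingerprint itself is protected; in practice it is combined with a key (a MAC or digital signature) so an attacker cannot simply recompute it.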
Information security systems typically incorporate controls to ensure their own integrity, in particular protecting the kernel or core functions against both deliberate and accidental threats. Multi-purpose and multi-user computer systems aim to compartmentalize the data and processing such that no user or process can adversely impact another; however, the controls may not always succeed, as we see in incidents such as malware infections, hacks, data theft, fraud, and privacy breaches. More broadly, integrity is an information security principle that involves human/social, process, and commercial integrity, as well as data integrity. As such it touches on aspects such as credibility, consistency, truthfulness, completeness, accuracy, timeliness, and assurance.
Availability
For any information system to serve its purpose, the information must be available when it is needed. This means the computing systems used to store and process the information, the security controls used to protect it, and the communication channels used to access it must be functioning correctly. High availability systems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades. Ensuring availability also involves preventing denial-of-service attacks, such as a flood of incoming messages to the target system, essentially forcing it to shut down. In the realm of information security, availability can often be viewed as one of the most important parts of a successful information security program. Ultimately end-users need to be able to perform job functions; by ensuring availability an organization is able to perform to the standards that its stakeholders expect. This can involve topics such as proxy configurations, outside web access, the ability to access shared drives and the ability to send emails.
Executives oftentimes do not understand the technical side of information security and look at availability as an easy fix, but this often requires collaboration from many different organizational teams, such as network operations, development operations, incident response, and policy/change management. A successful information security team involves many different key roles meshing and aligning so that the CIA triad can be provided effectively.
Non-repudiation
In law, non-repudiation implies one's intention to fulfill their obligations to a contract. It also implies that one party of a transaction cannot deny having received a transaction, nor can the other party deny having sent a transaction. It is important to note that while technology such as cryptographic systems can assist in non-repudiation efforts, the concept is at its core a legal concept transcending the realm of technology. It is not, for instance, sufficient to show that the message matches a digital signature signed with the sender's private key, and thus only the sender could have sent the message, and nobody else could have altered it in transit (data integrity). The alleged sender could in return demonstrate that the digital signature algorithm is vulnerable or flawed, or allege or prove that his signing key has been compromised. The fault for these violations may or may not lie with the sender, and such assertions may or may not relieve the sender of liability, but the assertion would invalidate the claim that the signature necessarily proves authenticity and integrity. As such, the sender may repudiate the message (because authenticity and integrity are pre-requisites for non-repudiation).
Risk management
Broadly speaking, risk is the likelihood that something bad will happen that causes harm to an informational asset (or the loss of the asset). A vulnerability is a weakness that could be used to endanger or cause harm to an informational asset.
A threat is anything (man-made or act of nature) that has the potential to cause harm. The likelihood that a threat will use a vulnerability to cause harm creates a risk. When a threat does use a vulnerability to inflict harm, it has an impact. In the context of information security, the impact is a loss of availability, integrity, and confidentiality, and possibly other losses (lost income, loss of life, loss of real property). The Certified Information Systems Auditor (CISA) Review Manual 2006 defines risk management as "the process of identifying vulnerabilities and threats to the information resources used by an organization in achieving business objectives, and deciding what countermeasures, if any, to take in reducing risk to an acceptable level, based on the value of the information resource to the organization." There are two things in this definition that may need some clarification. First, the process of risk management is an ongoing, iterative process. It must be repeated indefinitely. The business environment is constantly changing and new threats and vulnerabilities emerge every day. Second, the choice of countermeasures (controls) used to manage risks must strike a balance between productivity, cost, effectiveness of the countermeasure, and the value of the informational asset being protected. Furthermore, these processes have limitations as security breaches are generally rare and emerge in a specific context which may not be easily duplicated. Thus, any process and countermeasure should itself be evaluated for vulnerabilities. It is not possible to identify all risks, nor is it possible to eliminate all
protection mechanisms. The building up, layering on, and overlapping of security measures is called "defense in depth." In contrast to a metal chain, which is famously only as strong as its weakest link, the defense in depth strategy aims at a structure where, should one defensive measure fail, other measures will continue to provide protection. Recall the earlier discussion about administrative controls, logical controls, and physical controls. The three types of controls can be used to form the basis upon which to build a defense in depth strategy. With this approach, defense in depth can be conceptualized as three distinct layers or planes laid one on top of the other. Additional insight into defense in depth can be gained by thinking of it as forming the layers of an onion, with data at the core of the onion, people the next outer layer of the onion, and network security, host-based security, and application security forming the outermost layers of the onion. Both perspectives are equally valid, and each provides valuable insight into the implementation of a good defense in depth strategy.
Classification
An important aspect of information security and risk management is recognizing the value of information and defining appropriate procedures and protection requirements for the information. Not all information is equal and so not all information requires the same degree of protection. This requires information to be assigned a security classification. The first step in information classification is to identify a member of senior management as the owner of the particular information to be classified. Next, develop a classification policy. The policy should describe the different classification labels, define the criteria for information to be assigned a particular label, and list the required security controls for each classification.
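A classification policy of the kind just described can be represented directly as a mapping from labels to their required controls. The labels and control names below are invented for illustration, not drawn from any real policy:

```python
# A toy classification policy: each label maps to the controls it demands.
# Both the labels and the control names here are illustrative assumptions.
CLASSIFICATION_POLICY = {
    "Public":       {"integrity_checks"},
    "Sensitive":    {"integrity_checks", "access_logging"},
    "Confidential": {"integrity_checks", "access_logging", "encryption_at_rest"},
}

def required_controls(label: str) -> set:
    """Look up the security controls a classification label requires."""
    if label not in CLASSIFICATION_POLICY:
        raise ValueError(f"unknown classification label: {label}")
    return CLASSIFICATION_POLICY[label]

confidential_controls = required_controls("Confidential")
```

Encoding the policy as data rather than prose makes it auditable and lets access-control tooling reject unknown labels outright.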
Some factors that influence which classification should be assigned include how much value the information has to the organization, how old the information is and whether or not the information has become obsolete. Laws and other regulatory requirements are also important considerations when classifying information. The Information Systems Audit and Control Association (ISACA) and its Business Model for Information Security also serve as a tool for security professionals to examine security from a systems perspective, creating an environment where security can be managed holistically, allowing actual risks to be addressed. The type of information security classification labels selected and used will depend on the nature of the organization, with examples being:
In the business sector, labels such as: Public, Sensitive, Private, Confidential.
In the government sector, labels such as: Unclassified, Unofficial, Protected, Confidential, Secret, Top Secret, and their non-English equivalents.
In cross-sectoral formations, the Traffic Light Protocol, which consists of: White, Green, Amber, and Red.
All employees in the organization, as well as business partners, must be trained on the classification schema and understand the required security controls and handling procedures for each classification. The classification assigned to a particular information asset should be reviewed periodically to ensure the classification is still appropriate for the information and to ensure the security controls required by the classification are in place and are being followed correctly.
Access control
Access to protected information must be restricted to people who are authorized to access the information. The computer programs, and in many cases the computers that process the information, must also be authorized. This requires that mechanisms be in place to control the access to protected information.
The sophistication of the access control mechanisms should be in parity with the value of the information being protected; the more sensitive or valuable the information, the stronger the control mechanisms need to be. The foundation on which access control mechanisms are built starts with identification and authentication. Access control is generally considered in three steps: identification, authentication, and authorization.
Identification
Identification is an assertion of who someone is or what something is. If a person makes the statement "Hello, my name is John Doe" they are making a claim of who they are. However, their claim may or may not be true. Before John Doe can be granted access to protected information it will be necessary to verify that the person claiming to be John Doe really is John Doe. Typically the claim is in the form of a username. By entering that username you are claiming "I am the person the username belongs to".
Authentication
Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to make a withdrawal, he tells the bank teller he is John Doe, a claim of identity. The bank teller asks to see a photo ID, so he hands the teller his driver's license. The bank teller checks the license to make sure it has John Doe printed on it and compares the photograph on the license against the person claiming to be John Doe. If the photo and name match the person, then the teller has authenticated that John Doe is who he claimed to be. Similarly, by entering the correct password, the user is providing evidence that he/she is the person the username belongs to.
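In practice, password authentication is usually implemented by storing a salted, deliberately slow hash of the password rather than the password itself; at login the claimed password is re-hashed and compared. A minimal sketch using PBKDF2 from the Python standard library (the function names and iteration count are illustrative choices, not a prescription):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted, deliberately slow hash to store instead of the password."""
    salt = salt if salt is not None else os.urandom(16)  # fresh random salt per user
    derived = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, derived

def verify_password(password, salt, stored):
    """Re-derive the hash from the claimed password; compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
```

The per-user salt prevents precomputed-table attacks, and the constant-time comparison avoids leaking information through timing.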
There are three different types of information that can be used for authentication:
Something you know: things such as a PIN, a password, or your mother's maiden name
Something you have: a driver's license or a magnetic swipe card
Something you are: biometrics, including palm prints, fingerprints, voice prints, and retina (eye) scans
Strong authentication requires providing more than one type of authentication information (two-factor authentication). The username is the most common form of identification on computer systems today and the password is the most common form of authentication. Usernames and passwords have served their purpose, but they are increasingly inadequate. Usernames and passwords are slowly being replaced or supplemented with more sophisticated authentication mechanisms such as Time-based One-time Password algorithms.
Authorization
After a person, program or computer has successfully been identified and authenticated then it must be determined what informational resources they are permitted to access and what actions they will be allowed to perform (run, view, create, delete, or change). This is called authorization. Authorization to access information and other computing services begins with administrative policies and procedures. The policies prescribe what information and computing services can be accessed, by whom, and under what conditions. The access control mechanisms are then configured to enforce these policies. Different computing systems are equipped with different kinds of access control mechanisms. Some may even offer a choice of different access control mechanisms. The access control mechanism a system offers will be based upon one of three approaches to access control, or it may be derived from a combination of the three approaches. The non-discretionary approach consolidates all access control under a centralized administration.
The access to information and other resources is usually based on the individual's function (role) in the organization or the tasks the individual must perform. The discretionary approach gives the creator or owner of the information resource the ability to control access to those resources. In the mandatory access control approach, access is granted or denied based upon the security classification assigned to the information resource. Examples of common access control mechanisms in use today include role-based access control, available in many advanced database management systems; simple file permissions provided in the UNIX and Windows operating systems; Group Policy Objects provided in Windows network systems; and Kerberos, RADIUS, TACACS, and the simple access lists used in many firewalls and routers. To be effective, policies and other security controls must be enforceable and upheld. Effective policies ensure that people are held accountable for their actions. The U.S. Treasury's guidelines for systems processing sensitive or proprietary information, for example, state that all failed and successful authentication and access attempts must be logged, and all access to information must leave some type of audit trail. Also, the need-to-know principle needs to be in effect when talking about access control. This principle gives access rights to a person to perform their job functions. This principle is used in the government when dealing with different clearances. Even though two employees in different departments have a top-secret clearance, they must have a need-to-know in order for information to be exchanged. Within the need-to-know principle, network administrators grant the employee the least amount of privilege to prevent employees from accessing more than what they are supposed to. Need-to-know helps to enforce the confidentiality-integrity-availability triad. Need-to-know directly impacts the confidential area of the triad.
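Role-based access control, the first mechanism listed above, can be sketched as two mappings: users to roles, and roles to permitted actions. The users, roles, and actions below are invented for illustration:

```python
# Roles map to permitted actions; users map to roles (a non-discretionary model).
# All names here are hypothetical examples.
ROLE_PERMISSIONS = {
    "teller":  {"view_account", "process_deposit"},
    "auditor": {"view_account", "view_audit_log"},
}
USER_ROLES = {"alice": {"teller"}, "bob": {"auditor"}}

def is_authorized(user: str, action: str) -> bool:
    """Grant access only if one of the user's roles permits the action."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

is_authorized("alice", "process_deposit")   # her teller role permits it
is_authorized("alice", "view_audit_log")    # denied: not a teller action
```

Unknown users and unknown roles fall through to an empty permission set, so the default is denial, which is consistent with the least-privilege posture of need-to-know.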
Cryptography Information security uses cryptography to transform usable information into a form that renders it unusable by anyone other than an authorized user; this process is called encryption. Information that has been encrypted (rendered unusable) can be transformed back into its original usable form by an authorized user who possesses the cryptographic key, through the process of decryption. Cryptography is used in information security to protect information from unauthorized or accidental disclosure while the information is in transit (either electronically or physically) and while information is in storage. Cryptography provides information security with other useful applications as well, including improved authentication methods, message digests, digital signatures, non-repudiation, and encrypted network communications. Older, less secure applications such as Telnet and File Transfer Protocol (FTP) are slowly being replaced with more secure applications such as Secure Shell (SSH) that use encrypted network communications. Wireless communications can be encrypted using protocols such as WPA/WPA2 or the older (and less secure) WEP. Wired communications (such as ITU‑T G.hn) are secured using AES for encryption and X.1035 for authentication and key exchange. Software applications such as GnuPG or PGP can be used to encrypt data files and email. Cryptography can introduce security problems when it is not implemented correctly. Cryptographic solutions need to be implemented using industry-accepted solutions that have undergone rigorous peer review by independent experts in cryptography. The length and strength of the encryption key is also an important consideration. A key that is weak or too short will produce weak encryption. The keys used for encryption and decryption must be protected with the same degree of rigor as any other confidential information. 
They must be protected from unauthorized disclosure and destruction, and they must be available when needed. Public key infrastructure (PKI) solutions address many of the problems that surround key management.
Process
The terms "reasonable and prudent person", "due care", and "due diligence" have been used in the fields of finance, securities, and law for many years. In recent years these terms have found their way into the fields of computing and information security. U.S. Federal Sentencing Guidelines now make it possible to hold corporate officers liable for failing to exercise due care and due diligence in the management of their information systems. In the business world, stockholders, customers, business partners, and governments have the expectation that corporate officers will run the business in accordance with accepted business practices and in compliance with laws and other regulatory requirements. This is often described as the "reasonable and prudent person" rule. A prudent person takes due care to ensure that everything necessary is done to operate the business by sound business principles and in a legal, ethical manner. A prudent person is also diligent (mindful, attentive, ongoing) in their due care of the business. In the field of information security, Harris offers the following definitions of due care and due diligence: "Due care are steps that are taken to show that a company has taken responsibility for the activities that take place within the corporation and has taken the necessary steps to help protect the company, its resources, and employees." And, [Due diligence are the] "continual activities that make sure the protection mechanisms are continually maintained and operational." Attention should be paid to two important points in these definitions. First, in due care, steps are taken to show; this means that the steps can be verified, measured, or even produce tangible artifacts.
Second, in due diligence, there are continual activities; this means that people are actually doing things to monitor and maintain the protection mechanisms, and these activities are ongoing. Organizations have a responsibility to practice duty of care when applying information security. The Duty of Care Risk Analysis Standard (DoCRA) provides principles and practices for evaluating risk. It considers all parties that could be affected by those risks. DoCRA helps evaluate whether safeguards are appropriate for protecting others from harm while presenting a reasonable burden. With increased data breach litigation, companies must balance security controls, compliance, and their mission. Security governance The Software Engineering Institute at Carnegie Mellon University, in a publication titled Governing for Enterprise Security (GES) Implementation Guide, defines characteristics of effective security governance. These include:
- An enterprise-wide issue
- Leaders are accountable
- Viewed as a business requirement
- Risk-based
- Roles, responsibilities, and segregation of duties defined
- Addressed and enforced in policy
- Adequate resources committed
- Staff aware and trained
- A development life cycle requirement
- Planned, managed, measurable, and measured
- Reviewed and audited
Incident response plans An incident response plan (IRP) is a group of policies that dictates an organization's reaction to a cyber attack. Once a security breach has been identified, the plan is initiated. It is important to note that there can be legal implications to a data breach. Knowing local and federal laws is critical. Every plan is unique to the needs of the organization, and it can involve skill sets that are not part of an IT team. For example, a lawyer may be included in the response plan to help navigate the legal implications of a data breach.
As mentioned above, every plan is unique, but most plans will include the following: Preparation Good preparation includes the development of an incident response team (IRT). Skills needed by this team include penetration testing, computer forensics, and network security. This team should also keep track of trends in cybersecurity and modern attack strategies. A training program for end users is important as well, since most modern attack strategies target users on the network. Identification This part of the incident response plan determines whether there was a security event. When an end user reports information or an admin notices irregularities, an investigation is launched. An incident log is a crucial part of this step. All of the members of the team should be updating this log to ensure that information flows as fast as possible. If it has been identified that a security breach has occurred, the next step should be activated. Containment In this phase, the IRT works to isolate the areas in which the breach took place in order to limit the scope of the security event. During this phase it is important to preserve information forensically so it can be analyzed later in the process. Containment could be as simple as physically containing a server room or as complex as segmenting a network to prevent the spread of a virus. Eradication This is where the identified threat is removed from the affected systems. This could include deleting malicious files, terminating compromised accounts, or deleting other components. Some events do not require this step; however, it is important to fully understand the event before moving to this step. This will help to ensure that the threat is completely removed. Recovery This stage is where the systems are restored to their original operation. This stage could include the recovery of data, changing user access information, or updating firewall rules or policies to prevent a breach in the future.
Without executing this step, the system could still be vulnerable to future security threats. Lessons Learned In this step, information that has been gathered during this process is used to make future decisions on security. This step is crucial to ensuring that future events are prevented. Using this information to further train admins is critical to the process. This step can also be used to process information that is distributed from other entities who have experienced a security event. Change management Change management is a formal process for directing and controlling alterations to the information processing environment. This includes alterations to desktop computers, the network, servers, and software. The objectives of change management are to reduce the risks posed by changes to the information processing environment and improve the stability and reliability of the processing environment as changes are made. It is not the objective of change management to prevent or hinder necessary changes from being implemented. Any change to the information processing environment introduces an element of risk. Even apparently simple changes can have unexpected effects. One of management's many responsibilities is the management of risk. Change management is a tool for managing the risks introduced by changes to the information processing environment. Part of the change management process ensures that changes are not implemented at inopportune times when they may disrupt critical business processes or interfere with other changes being implemented. Not every change needs to be managed. Some kinds of changes are a part of the everyday routine of information processing and adhere to a predefined procedure, which reduces the overall level of risk to the processing environment. Creating a new user account or deploying a new desktop computer are examples of changes that do not generally require change management.
However, relocating user file shares or upgrading the email server poses a much higher level of risk to the processing environment and is not a normal everyday activity. The critical first steps in change management are (a) defining change (and communicating that definition) and (b) defining the scope of the change system. Change management is usually overseen by a change review board composed of representatives from key business areas, security, networking, systems administrators, database administration, application developers, desktop support, and the help desk. The tasks of the change review board can be facilitated with the use of an automated workflow application. The responsibility of the change review board is to ensure the organization's documented change management procedures are followed. The change management process is as follows: Request: Anyone can request a change. The person making the change request may or may not be the same person that performs the analysis or implements the change. When a request for change is received, it may undergo a preliminary review to determine if the requested change is compatible with the organization's business model and practices, and to determine the amount of resources needed to implement the change. Approve: Management runs the business and controls the allocation of resources; therefore, management must approve requests for changes and assign a priority for every change. Management might choose to reject a change request if the change is not compatible with the business model, industry standards, or best practices. Management might also choose to reject a change request if the change requires more resources than can be allocated for the change. Plan: Planning a change involves discovering the scope and impact of the proposed change; analyzing the complexity of the change; allocating resources; and developing, testing, and documenting both implementation and back-out plans.
The criteria on which a decision to back out will be made need to be defined. Test: Every change must be tested in a safe test environment, which closely reflects the actual production environment, before the change is applied to the production environment. The back-out plan must also be tested. Schedule: Part of the change review board's responsibility is to assist in the scheduling of changes by reviewing the proposed implementation date for potential conflicts with other scheduled changes or critical business activities. Communicate: Once a change has been scheduled, it must be communicated. The communication is to give others the opportunity to remind the change review board about other changes or critical business activities that might have been overlooked when scheduling the change. The communication also serves to make the help desk and users aware that a change is about to occur. Another responsibility of the change review board is to ensure that scheduled changes have been properly communicated to those who will be affected by the change or otherwise have an interest in the change. Implement: At the appointed date and time, the changes must be implemented. Part of the planning process was to develop an implementation plan, a testing plan, and a back-out plan. If the implementation of the change fails, the post-implementation testing fails, or other "drop dead" criteria have been met, the back-out plan should be implemented. Document: All changes must be documented. The documentation includes the initial request for change, its approval, the priority assigned to it, the implementation, testing, and back-out plans, the results of the change review board critique, the date/time the change was implemented, who implemented it, and whether the change was implemented successfully, failed, or was postponed. Post-change review: The change review board should hold a post-implementation review of changes.
It is particularly important to review failed and backed-out changes. The review board should try to understand the problems that were encountered, and look for areas for improvement. Change management procedures that are simple to follow and easy to use can greatly reduce the overall risks created when changes are made to the information processing environment. Good change management procedures improve the overall quality and success of changes as they are implemented. This is accomplished through planning, peer review, documentation, and communication. ISO/IEC 20000, The Visible OPS Handbook: Implementing ITIL in 4 Practical and Auditable Steps, and ITIL all provide valuable guidance on implementing an efficient and effective change management program for information security. Business continuity Business continuity management (BCM) concerns arrangements aiming to protect an organization's critical business functions from interruption due to incidents, or at least minimize the effects. BCM is essential to any organization to keep technology and business in line with current threats to the continuation of business as usual. The BCM should be included in an organization's risk analysis plan to ensure that all of the necessary business functions have what they need to keep going in the event of any type of threat to any business function. It encompasses: Analysis of requirements, e.g., identifying critical business functions, dependencies and potential failure points, potential threats and hence incidents or risks of concern to the organization; Specification, e.g., maximum tolerable outage periods; recovery point objectives (maximum acceptable periods of data loss); Architecture and design, e.g., an appropriate combination of approaches including resilience (e.g.
engineering IT systems and processes for high availability, avoiding or preventing situations that might interrupt the business), incident and emergency management (e.g., evacuating premises, calling the emergency services, triage/situation assessment and invoking recovery plans), recovery (e.g., rebuilding) and contingency management (generic capabilities to deal positively with whatever occurs using whatever resources are available); Implementation, e.g., configuring and scheduling backups, data transfers, etc., duplicating and strengthening critical elements; contracting with service and equipment suppliers; Testing, e.g., business continuity exercises of various types, costs and assurance levels; Management, e.g., defining strategies, setting objectives and goals; planning and directing the work; allocating funds, people and other resources; prioritization relative to other activities; team building, leadership, control, motivation and coordination with other business functions and activities (e.g., IT, facilities, human resources, risk management, information risk and security, operations); monitoring the situation, checking and updating the arrangements when things change; maturing the approach through continuous improvement, learning and appropriate investment; Assurance, e.g., testing against specified requirements; measuring, analyzing, and reporting key parameters; conducting additional tests, reviews and audits for greater confidence that the arrangements will go to plan if invoked. Whereas BCM takes a broad approach to minimizing disaster-related risks by reducing both the probability and the severity of incidents, a disaster recovery plan (DRP) focuses specifically on resuming business operations as quickly as possible after a disaster. A disaster recovery plan, invoked soon after a disaster occurs, lays out the steps necessary to recover critical information and communications technology (ICT) infrastructure. 
Disaster recovery planning includes establishing a planning group, performing risk assessment, establishing priorities, developing recovery strategies, preparing inventories and documentation of the plan, developing verification criteria and procedures, and lastly implementing the plan. Laws and regulations Below is a partial listing of governmental laws and regulations in various parts of the world that have, had, or will have a significant effect on data processing and information security. Important industry sector regulations have also been included when they have a significant impact on information security. The UK Data Protection Act 1998 makes new provisions for the regulation of the processing of information relating to individuals, including the obtaining, holding, use or disclosure of such information. The European Union Data Protection Directive (EUDPD) requires that all E.U. members adopt national regulations to standardize the protection of data privacy for citizens throughout the E.U. The Computer Misuse Act 1990 is an Act of the U.K. Parliament making computer crime (e.g., hacking) a criminal offense. The act has become a model from which several other countries, including Canada and the Republic of Ireland, have drawn inspiration when subsequently drafting their own information security laws. The E.U.'s Data Retention Directive (annulled) required internet service providers and phone companies to keep data on every electronic message sent and phone call made for between six months and two years. The Family Educational Rights and Privacy Act (FERPA) (20 U.S.C. § 1232g; 34 CFR Part 99) is a U.S. Federal law that protects the privacy of student education records. The law applies to all schools that receive funds under an applicable program of the U.S. Department of Education. Generally, schools must have written permission from the parent or eligible student in order to release any information from a student's education record.
The Federal Financial Institutions Examination Council's (FFIEC) security guidelines for auditors specify requirements for online banking security. The Health Insurance Portability and Accountability Act (HIPAA) of 1996 requires the adoption of national standards for electronic health care transactions and national identifiers for providers, health insurance plans, and employers. Additionally, it requires health care providers, insurance providers, and employers to safeguard the security and privacy of health data. The Gramm–Leach–Bliley Act of 1999 (GLBA), also known as the Financial Services Modernization Act of 1999, protects the privacy and security of private financial information that financial institutions collect, hold, and process. Section 404 of the Sarbanes–Oxley Act of 2002 (SOX) requires publicly traded companies to assess the effectiveness of their internal controls for financial reporting in annual reports they submit at the end of each fiscal year. Chief information officers are responsible for the security, accuracy, and reliability of the systems that manage and report the financial data. The act also requires publicly traded companies to engage with independent auditors who must attest to, and report on, the validity of their assessments. The Payment Card Industry Data Security Standard (PCI DSS) establishes comprehensive requirements for enhancing payment account data security. It was developed by the founding payment brands of the PCI Security Standards Council — including American Express, Discover Financial Services, JCB, MasterCard Worldwide, and Visa International — to help facilitate the broad adoption of consistent data security measures on a global basis. The PCI DSS is a multifaceted security standard that includes requirements for security management, policies, procedures, network architecture, software design, and other critical protective measures.
State security breach notification laws (California and many others) require businesses, nonprofits, and state institutions to notify consumers when unencrypted "personal information" may have been compromised, lost, or stolen. The Personal Information Protection and Electronic Documents Act (PIPEDA) of Canada supports and promotes electronic commerce by protecting personal information that is collected, used or disclosed in certain circumstances, by providing for the use of electronic means to communicate or record information or transactions and by amending the Canada Evidence Act, the Statutory Instruments Act and the Statute Revision Act. Greece's Hellenic Authority for Communication Security and Privacy (ADAE) (Law 165/2011) establishes and describes the minimum information security controls that should be deployed by every company which provides electronic communication networks and/or services in Greece in order to protect customers' confidentiality. These include both managerial and technical controls (e.g., log records should be stored for two years). Greece's Hellenic Authority for Communication Security and Privacy (ADAE) (Law 205/2013) concentrates on the protection of the integrity and availability of the services and data offered by Greek telecommunication companies. The law forces these and other related companies to build, deploy, and test appropriate business continuity plans and redundant infrastructures. Culture Describing more than simply how security-aware employees are, information security culture is the ideas, customs, and social behaviors of an organization that impact information security in both positive and negative ways. Cultural concepts can help different segments of the organization work effectively or work against effectiveness towards information security within an organization. The way employees think and feel about security and the actions they take can have a big impact on information security in organizations.
Roer & Petric (2017) identify seven core dimensions of information security culture in organizations: Attitudes: Employees’ feelings and
Income is the consumption and saving opportunity gained by an entity within a specified timeframe, which is generally expressed in monetary terms. Income is difficult to define conceptually and the definition may be different across fields. For example, a person's income in an economic sense may be different from their income as defined by law. An extremely important definition of income is Haig–Simons income, which defines income as consumption plus the change in net worth and is widely used in economics. For households and individuals in the United States, income is defined by tax law as a sum that includes any wage, salary, profit, interest payment, rent, or other form of earnings received in a calendar year. Discretionary income is often defined as gross income minus taxes and other deductions (e.g., mandatory pension contributions), and is widely used as a basis to compare the welfare of taxpayers. In the field of public economics, the concept may comprise the accumulation of both monetary and non-monetary consumption ability, with the former (monetary) being used as a proxy for total income. For a firm, gross income can be defined as the sum of all revenue minus the cost of goods sold. Net income nets out expenses: net income equals revenue minus cost of goods sold, expenses, depreciation, interest, and taxes. Economic definitions Full and Haig–Simons income "Full income" refers to the accumulation of both the monetary and the non-monetary consumption-ability of any given entity, such as a person or a household. According to what the economist Nicholas Barr describes as the "classical definition of income" (the 1938 Haig–Simons definition): "income may be defined as the... sum of (1) the market value of rights exercised in consumption and (2) the change in the value of the store of property rights..." Since the consumption potential of non-monetary goods, such as leisure, cannot be measured, monetary income may be thought of as a proxy for full income. As such, however, it is criticized for being unreliable, i.e. failing to accurately reflect affluence (and thus the consumption opportunities) of any given agent. It omits the utility a person may derive from non-monetary income and, on a macroeconomic level, fails to accurately chart social welfare. According to Barr, "in practice money income as a proportion of total income varies widely and unsystematically." The non-observability of full income prevents a complete characterization of the individual opportunity set, forcing us to use the unreliable yardstick of money income. Factor income In economics, "factor income" is the return accruing for a person, or a nation, derived from the "factors of production": rental income, wages generated by labor, the interest created by capital, and profits from entrepreneurial ventures. In consumer theory, 'income' is another name for the "budget constraint," an amount to be spent on different goods x and y in quantities x and y at prices p_x and p_y. The basic equation for this is p_x·x + p_y·y = income. This equation implies two things. First, buying one more unit of good x implies buying fewer units of good y. So, p_x/p_y is the relative price of a unit of x in terms of the number of units of y given up. Second, if the price of x falls for a fixed p_y and a fixed income, then its relative price falls. The usual hypothesis, the law of demand, is that the quantity demanded of x would increase at the lower price. The analysis can be generalized to more than two goods. The theoretical generalization to more than one period is a multi-period wealth and income constraint. For example, the same person can gain more productive skills or acquire more productive income-earning assets to earn a higher income. In the multi-period case, something might also happen to the economy beyond the control of the individual to reduce (or increase) the flow of income. Changing measured income and its relation to consumption over time might be modeled accordingly, such as in the permanent income hypothesis. Legal definitions Definitions under the Internal Revenue Code 26 U.S. Code § 61 - Gross income defined. There are also some statutory exclusions from income. Definition under US case law Income is an "undeniable accession to wealth, clearly realized, and over which the taxpayer has complete dominion." Commentators say that this is a pretty good definition of income. Taxable income is usually lower than Haig–Simons income. This is because unrealized appreciation (e.g., the increase in the value of stock over the course of a year) is economic income but not taxable income, and because there are many statutory exclusions from taxable income, including workman's compensation, SSI, gifts, child support, and in-kind government transfers. Accounting definitions The International Accounting Standards Board (IASB) uses the following definition: "Income is increases in economic benefits during the accounting period in the form of inflows or enhancements of assets or decreases of liabilities that result in increases in equity, other than those relating to contributions from equity participants." [F.70] (IFRS Framework). According to John Hicks' definition, income "is the maximum amount which can be spent during a period if there is to be an expectation of maintaining intact the capital value of prospective receipts (in money terms)". "Nonincome" Debt Borrowing or repaying money is not income under any definition, for either the borrower or the lender. Interest and forgiveness of debt are income. Psychic income "Nonmonetary joy," such as watching a sunset or having sex, simply is not income. Similarly, nonmonetary suffering, such as heartbreak or labor, is not negative income. This may seem trivial, but the noninclusion of psychic income has important effects in economics and tax policy. It encourages people to find happiness in nonmonetary, nontaxable ways, and means that reported income may overstate or understate the wellbeing of a given individual. Income growth Income per capita has been increasing steadily in most countries. Many factors contribute to people having a higher income, including education, globalisation and favorable political circumstances such as economic freedom and peace. Increases in income also tend to lead to people choosing to work fewer hours. Developed countries (defined as countries with a "developed economy") have higher incomes, while developing countries tend to have lower incomes. Income inequality Income inequality is the extent to which income is distributed in an uneven manner. It can be measured by various methods, including the Lorenz curve and the Gini coefficient. Many economists argue that certain amounts of inequality are necessary and desirable but that excessive inequality leads to efficiency problems and social injustice, thereby necessitating initiatives like the United Nations Sustainable Development Goal 10 aimed at reducing inequality. National income, measured by statistics such as net national income (NNI), measures the total income of individuals, corporations, and government in the economy. For more information see Measures of national income and output. Income in philosophy and ethics Throughout history, many have written about the impact of income on morality and society. Saint Paul wrote 'For the love of money is a root of all kinds of evil:' (1 Timothy
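The two-good budget constraint discussed under consumer theory lends itself to a short worked example. This is an illustrative sketch only; the prices and the income figure are made-up numbers, not data from the text.

```python
# Numerical sketch of the two-good budget constraint:
#   p_x * x + p_y * y = income
# All prices and the income figure below are hypothetical.

def max_affordable_y(income: float, p_x: float, p_y: float, x: float) -> float:
    """Units of good y still affordable after buying x units of good x."""
    return (income - p_x * x) / p_y

income, p_x, p_y = 100.0, 5.0, 2.0

# Relative price: units of y given up per extra unit of x.
relative_price = p_x / p_y                               # 2.5

# Spending the whole budget on different combinations:
bundle_all_y = max_affordable_y(income, p_x, p_y, 0)     # 50.0 units of y
bundle_mixed = max_affordable_y(income, p_x, p_y, 10)    # 25.0 units of y
```

Lowering p_x while p_y and income stay fixed lowers the relative price p_x/p_y, and by the law of demand the quantity of x demanded would then rise.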