his possessions by way of ransom. If however it has already saved its life by self-castration and is again pursued, then it stands up and reveals that it offers no ground for their eager pursuit, and releases the hunters from all further exertions, for they esteem its flesh less. Often however Beavers with testicles intact, after escaping as far away as possible, have drawn in the coveted part, and with great skill and ingenuity tricked their pursuers, pretending that they no longer possessed what they were keeping in concealment." The Loeb Classical Library introduction characterizes the book as "an appealing collection of facts and fables about the animal kingdom that invites the reader to ponder contrasts between human and animal behavior." Aelian's anecdotes on animals rarely depend on direct observation: they are almost entirely taken from written sources, often Pliny the Elder, but also other authors and works now lost, to whom he is thus a valuable witness. He is more attentive to marine life than might be expected, though, and this seems to reflect first-hand personal interest; he often quotes "fishermen". At times he strikes the modern reader as thoroughly credulous, but at others he specifically states that he is merely reporting what is told by others, and even that he does not believe them. Aelian's work is one of the sources of medieval natural history and of the bestiaries of the Middle Ages. The portions of the text that are still extant are badly mangled and garbled and replete with later interpolations. Conrad Gessner (or Gesner), the Swiss scientist and natural historian of the Renaissance, made a Latin translation of Aelian's work to give it a wider European audience. An English translation by A. F. Scholfield has been published in the Loeb Classical Library, 3 vols. (1958–59). Varia Historia The Various History, for the most part preserved only in an abridged form, is Aelian's other well-known work: a miscellany of anecdotes and biographical sketches, lists, pithy maxims, and descriptions of natural wonders and strange local customs, in 14 books, with many surprises for the cultural historian and the mythographer, anecdotes about the famous Greek philosophers, poets, historians, and playwrights, and myths instructively retold. The emphasis is on various moralizing tales about heroes and rulers, athletes and wise men; reports about food and drink, different styles in dress or lovers, local habits in giving gifts or entertainments, or in religious beliefs and death customs; and comments on Greek painting. Aelian gives an account of fly fishing, using lures of red wool and feathers, of lacquerwork, and of serpent worship; essentially, the Various History is a Classical "magazine" in the original senses of that word. He is not perfectly trustworthy in details, and his agenda was heavily influenced by Stoic opinions, perhaps so that his readers will not feel guilty, but Jane Ellen Harrison found survivals of archaic rites mentioned by Aelian very illuminating in her Prolegomena to the Study of Greek Religion (1903, 1922). The first printing was in 1545. The standard modern text is Mervin R. Dilts's, of 1974. Two English translations of the Various History, by Fleming (1576) and Stanley (1665), made Aelian's miscellany available to English readers, but after 1665 no English translation appeared until three translations appeared almost simultaneously: James G. DeVoto, Claudius Aelianus: Ποικίλης Ἱστορίας (Varia Historia), Chicago, 1995; Diane Ostrom Johnson, An English Translation of Claudius Aelianus' "Varia Historia", 1997; and N. G. Wilson, Aelian: Historical Miscellany, in the Loeb Classical Library. Other works Considerable fragments of two other works, On Providence and Divine Manifestations, are preserved in the early medieval encyclopedia, the Suda. Twenty "letters from a farmer" after the manner of Alciphron are also attributed to him. The letters are invented compositions to a fictitious correspondent, a device for vignettes of agricultural and rural life set in Attica, though the mellifluous Aelian once boasted that he had never been outside Italy and had never been aboard a ship (which is at variance, though, with his own statement, de Natura Animalium XI.40, that he had seen the bull Serapis with his own eyes). Conclusions about actual agriculture in the Letters are thus as likely to evoke Latium as Attica. The fragments were edited in 1998 by D. Domingo-Foraste, but are not available in English. The Letters are available in the Loeb Classical Library, translated by Allen Rogers Benner and Francis H. Fobes (1949). See also Historiae animalium by Gessner References Further reading Aelian, On Animals. 3 volumes. Translated by A. F. Scholfield. 1958–9. Loeb Classical Library. Aelian, Historical Miscellany. Translated by Nigel G. Wilson. 1997. Loeb Classical Library. Alciphron, Aelian, and Philostratus, The Letters. Translated by A. R. Benner, F. H. Fobes. 1949. Loeb Classical Library. Aelian, On the Nature of Animals. Translated by Gregory McNamee. 2011. Trinity University Press. Ailianos, Vermischte Forschung. Greek and German by Kai Brodersen. 2018. Sammlung Tusculum. De Gruyter, Berlin & Boston. Ailianos, Tierleben. Greek and German by Kai Brodersen. 2018. Sammlung Tusculum. De Gruyter, Berlin & Boston. Claudius Aelianus, Vom Wesen der Tiere - De natura animalium.
set among the stars as Ursa Major ("the Great Bear"). She was the bear-mother of the Arcadians, through her son Arcas by Zeus. The fourth Galilean moon of Jupiter and a main belt asteroid are named after Callisto. Myth As a follower of Artemis, Callisto, who Hesiod said was the daughter of Lycaon, king of Arcadia, took a vow to remain a virgin, as did all the nymphs of Artemis. According to Hesiod, she was seduced by Zeus, and of the consequences that followed: [Callisto] chose to occupy herself with wild-beasts in the mountains together with Artemis, and, when she was seduced by Zeus, continued some time undetected by the goddess, but afterwards, when she was already with child, was seen by her bathing and so discovered. Upon this, the goddess was enraged and changed her into a beast. Thus she became a bear and gave birth to a son called Arkas. According to the mythographer Apollodorus, Zeus disguised himself as Artemis or Apollo, in order to lure Callisto into his embrace. According to Ovid, it was Jupiter who took the form of Diana so that he might evade his wife Juno's detection, forcing himself upon Callisto while she was separated from Diana and the other nymphs. Callisto's subsequent pregnancy was discovered several months later while she was bathing with Diana and her fellow nymphs. Diana became enraged when she saw that Callisto was pregnant and expelled her from the group. Callisto later gave birth to Arcas. Juno then took the opportunity to avenge her wounded pride and transformed the nymph into a bear. Sixteen years later Callisto, still a bear, encountered her son Arcas hunting in the forest. Just as Arcas was about to kill his own mother with his javelin, Jupiter averted the tragedy by placing mother and son amongst the stars as Ursa Major and Minor, respectively. Juno, enraged that her attempt at revenge had been frustrated, appealed to Tethys that the two might never meet her waters, thus providing a poetic explanation for their circumpolar positions in ancient times. Either Artemis "slew Kallisto with a shot of her silver bow," perhaps urged by the wrath of Juno (Hera) or later Arcas, the eponym of Arcadia, nearly killed his bear-mother, when she had wandered into the forbidden precinct of Zeus. In every case, Zeus placed them both in the sky as the constellations Ursa Major, called Arktos (αρκτος), the "Bear", by Greeks, and Ursa Minor. Origin of the myth The name Kalliste (), "most beautiful", may be recognized as an epithet of the goddess herself, though none of the inscriptions at Athens that record priests of Artemis Kalliste (), date before the third century BCE. Artemis Kalliste was worshiped in Athens in a shrine which lay outside the Dipylon gate, by the side of the road to the Academy. W. S. Ferguson suggested that Artemis Soteira and Artemis Kalliste were joined in a common cult administered by a single priest. The bearlike character of Artemis herself was a feature of the Brauronia. The myth in Catasterismi may be derived from the fact that a set of constellations appear close together in the sky, in and near the Zodiac sign of Libra, namely Ursa Minor, Ursa Major, Boötes, and Virgo. The constellation Boötes, was explicitly identified in the Hesiodic Astronomia () as Arcas, the "Bear-warden" (Arktophylax; ): He is Arkas the son of Kallisto and Zeus, and he lived in the country about Lykaion. 
After Zeus had seduced Kallisto, Lykaon, pretending not to know of the matter, entertained Zeus, as Hesiod says, and set before him on the table the babe [Arkas] which he had cut up. The stars of Ursa Major were all circumpolar in Athens of 400 BCE, and all but the stars in the Great Bear's left foot were circumpolar in Ovid's Rome, in the first century CE. Now, however, due to the precession of the equinoxes, the feet of the Great Bear constellation do sink below the horizon from Rome and especially from Athens; however, Ursa Minor (Arcas) does remain completely above the horizon, even from latitudes as
they survive travel very well, but they were usually not sweet enough to be considered cookies by modern standards. Cookies appear to have their origins in 7th century AD Persia, shortly after the use of sugar became relatively common in the region. They spread to Europe through the Muslim conquest of Spain. By the 14th century, they were common in all levels of society throughout Europe, from royal cuisine to street vendors. The first documented instance of the figure-shaped gingerbread man was at the court of Elizabeth I of England in the 16th century. She had the gingerbread figures made and presented in the likeness of some of her important guests. With global travel becoming widespread at that time, cookies made a natural travel companion, a modernized equivalent of the travel cakes used throughout history. One of the most popular early cookies, which traveled especially well and became known on every continent by similar names, was the jumble, a relatively hard cookie made largely from nuts, sweetener, and water. Cookies came to America through the Dutch in New Amsterdam in the late 1620s. The Dutch word "koekje" was Anglicized to "cookie" or cooky. The earliest reference to cookies in America is in 1703, when "The Dutch in New York provided...'in 1703...at a funeral 800 cookies...'" The most common modern cookie, given its style by the creaming of butter and sugar, was not common until the 18th century. The Industrial Revolution in Britain and the consumers it created saw cookies (biscuits) become products for the masses, and firms such as Huntley & Palmers (formed in 1822), McVitie's (formed in 1830) and Carr's (formed in 1831) were all established. The decorative biscuit tin, invented by Huntley & Palmers in 1831, saw British cookies exported around the world. In 1891, Cadbury filed a patent for a chocolate-coated cookie. Classification Cookies are broadly classified according to how they are formed or made, including at least these categories: Bar cookies consist of batter or other ingredients that are poured or pressed into a pan (sometimes in multiple layers) and cut into cookie-sized pieces after baking. In British English, bar cookies are known as "tray bakes". Examples include brownies, fruit squares, and bars such as date squares. Drop cookies are made from a relatively soft dough that is dropped by spoonfuls onto the baking sheet. During baking, the mounds of dough spread and flatten. Chocolate chip cookies (Toll House cookies), oatmeal raisin (or other oatmeal-based) cookies, and rock cakes are popular examples of drop cookies. This may also include thumbprint cookies, for which a small central depression is created with a thumb or small spoon before baking to contain a filling, such as jam or a chocolate chip. In the UK, the term "cookie" often refers only to this particular type of product. Filled cookies are made from a rolled cookie dough filled with a fruit, jam or confectionery filling before baking. Hamantashen are a filled cookie. Molded cookies are also made from a stiffer dough that is molded into balls or cookie shapes by hand before baking. Snickerdoodles and peanut butter cookies are examples of molded cookies. Some cookies, such as hermits or biscotti, are molded into large flattened loaves that are later cut into smaller cookies. No-bake cookies are made by mixing a filler, such as cereal or nuts, into a melted confectionery binder, shaping into cookies or bars, and allowing to cool or harden. Oatmeal clusters and rum balls are no-bake cookies. 
Pressed cookies are made from a soft dough that is extruded from a cookie press into various decorative shapes before baking. Spritzgebäck is an example of a pressed cookie. Refrigerator cookies (also known as icebox cookies) are made from a stiff dough that is refrigerated to make the raw dough even stiffer before cutting and baking. The dough is typically shaped into cylinders which are sliced into round cookies before baking. Pinwheel cookies and those made by Pillsbury are representative. Rolled cookies are made from a stiffer dough that is rolled out and cut into shapes with a cookie cutter. Gingerbread men are an example. Sandwich cookies are rolled or pressed cookies that are assembled as a sandwich with a sweet filling. Fillings include marshmallow, jam,
Reception Leah Ettman from Nutrition Action has criticized the high calorie count and fat content of supersized cookies, which are extra large cookies; she cites the Panera Kitchen Sink Cookie, a supersized chocolate chip cookie, which measures 5 1/2 inches in diameter and has 800 calories. For busy people who eat breakfast cookies in the morning, Kate Bratskeir from the Huffington Post recommends lower-sugar cookies filled with "heart-healthy nuts and fiber-rich oats". A book on nutrition by Paul Insel et al. notes that "low-fat" or "diet cookies" may have the same number of calories as regular cookies, due to added sugar. Popular culture There are a number of slang usages of the term "cookie". The slang use of "cookie" to mean a person, "especially an attractive woman", is attested to in print since 1920. The catchphrase "that's the way the cookie crumbles", which means "that's just the way things happen", is attested to in print in 1955. Other slang terms include "smart cookie" and "tough cookie." According to The Cambridge International Dictionary of Idioms, a smart cookie is "someone who is clever and good at dealing with difficult situations." The word "cookie" has been vulgar slang for "vagina" in the US since 1970. The word "cookies" is used to refer to the contents of the stomach, often in reference to vomiting (e.g., "pop your cookies", a 1960s expression, or "toss your cookies", a 1970s expression). The expression "cookie cutter", in addition to referring literally to a culinary device used to cut rolled cookie dough into shapes, is also used metaphorically to refer to items or things "having the same configuration or look as many others" (e.g., a "cookie cutter tract house") or to label something as "stereotyped or formulaic" (e.g., an action movie filled with "generic cookie cutter characters"). "Cookie duster" is a whimsical expression for a mustache. Cookie Monster is a Muppet on the long-running children's television show Sesame Street. He is best known for his voracious appetite for cookies and his famous eating phrases, such as "Me want cookie!", "Me eat cookie!" (or simply "COOKIE!"), and "Om nom nom nom" (said through a mouth full of food). Notable varieties Alfajor Angel Wings (Chruściki) Animal cracker Anzac biscuit Berger cookie Berner Haselnusslebkuchen Biscotti Biscuit rose de Reims Black and white cookie Blondie Bourbon biscuit Brownie Butter cookie Chocolate chip cookie Chocolate-coated graham cracker Chocolate-coated marshmallow treat Congo bar Digestive biscuit Fat rascal Fattigmann Flies graveyard Florentine biscuit Fortune cookie Fruit squares and bars (date, fig, lemon, raspberry, etc.)
Ginger snap Gingerbread house Gingerbread man Graham cookie Hamentashen Hobnob biscuit Joe Frogger Jumble Kifli Koulourakia Krumkake Linzer cookie Macaroon Mexican wedding cake Meringue Nice biscuit Oatmeal raisin cookie Pastelito Peanut butter blossom cookie Peanut butter cookie Pepparkakor Pfeffernüsse Pizzelle Polvorón Qurabiya Rainbow cookie Ranger Cookie Rich tea Riposteria Rosette Rum ball Rusk Russian tea cake Rock cake Sablé Sandbakelse Şekerpare Shortbread Snickerdoodle Speculoos Springerle Spritzgebäck (Spritz) Stroopwafel Sugar cookie Tea biscuit Toruń gingerbread Tuile Wafer Windmill cookie Gallery Related pastries and confections Acıbadem kurabiyesi Animal crackers Berliner (pastry) Bun Candy Cake Churro Cracker (food) Cupcake Danish pastry Doughnut Funnel cake Galette Graham cracker Hershey's Cookies 'n' Creme Kit Kat Halvah Ladyfinger (biscuit) Lebkuchen Mille-feuille Marzipan Mille-feuille (Napoleon) Moon pie Pastry Palmier Petit four Rum ball S'more Snack cake Tartlet Teacake Teething biscuit Whoopie pie Manufacturers Arnott's Biscuits Bahlsen Burton's Foods D.F. Stauffer Biscuit Company DeBeukelaer Famous Amos (Division of Ferrero) Fazer Fox's Biscuits Interbake Foods Jules Destrooper Keebler Lance Lotte Confectionery (Division of Lotte) Lotus Bakeries McKee Foods Meiji Seika Kaisha Ltd. Mrs. Fields Nabisco (Division of Mondelēz International) Nestlé Northern Foods Otis Spunkmeyer (Division of Aryzta) Pillsbury (Division of General Mills) Pinnacle Foods Pepperidge Farm (Division of Campbell Soup Company) Royal Dansk (Division of Kelsen Group) Sunshine Biscuits (historical) United Biscuits Walkers Shortbread Utz Brands Product lines and brands Animal Crackers (Nabisco, Keebler, Cadbury, Bahlsen, others) Anna's (Lotus) Archway Cookies (Lance) Barnum's Animals (Nabisco) Betty Crocker (General Mills, cookie mixes) Biscoff (Lotus) Chips Ahoy! (Nabisco) Chips Deluxe (Keebler) Danish Butter Cookies (Royal Dansk) Duncan Hines (Pinnacle, cookie mixes) Famous Amos (Kellogg) Fig Newton (Nabisco) Fox's Biscuits (Northern) Fudge Shoppe (Keebler) Girl Scout cookie (Keebler, Interbake) Hello Panda (Meiji) Hit (Bahlsen) Hydrox (Sunshine, discontinued by Keebler) Jaffa Cakes (McVitie) Jammie Dodgers (United) Koala's March (Lotte) Leibniz-Keks (Bahlsen) Little Debbie (McKee) Lorna Doone (Nabisco) Maryland Cookies (Burton's) McVitie's (United) Milano (Pepperidge Farm) Nilla Wafers (Nabisco) Nutter Butter (Nabisco) Oreo (Nabisco) Pillsbury (General Mills, cookie mixes) Pecan Sandies (Keebler) Peek Freans (United) Pirouline (DeBeukelaer) Stauffer's (Meiji) Stella D'Oro (Lance) Sunshine
script via an HTTP POST request, they are passed to the script's standard input. The script can then read these environment variables or data from standard input and adapt to the Web browser's request. Example The following Perl program shows all the environment variables passed by the Web server:

#!/usr/bin/env perl

=head1 DESCRIPTION

printenv — a CGI program that just prints its environment

=cut

print "Content-Type: text/plain\n\n";
for my $var ( sort keys %ENV ) {
    printf "%s=\"%s\"\n", $var, $ENV{$var};
}

If a Web browser issues a request for the environment variables at http://example.com/cgi-bin/printenv.pl/foo/bar?var1=value1&var2=with%20percent%20encoding, a 64-bit Windows 7 Web server running cygwin returns the following information: Some, but not all, of these variables are defined by the CGI standard. Some, such as PATH_INFO, QUERY_STRING, and the ones starting with HTTP_, pass information along from the HTTP request. From the environment, it can be seen that the Web browser is Firefox running on a Windows 7 PC, the Web server is Apache running on a system that emulates Unix, and the CGI script is named cgi-bin/printenv.pl. The program could then generate any content, write that to standard output, and the Web server will transmit it to the browser. The following are environment variables passed to CGI programs:

Server specific variables:
SERVER_SOFTWARE: name/version of HTTP server.
SERVER_NAME: host name of the server, may be dot-decimal IP address.
GATEWAY_INTERFACE: CGI/version.

Request specific variables:
SERVER_PROTOCOL: HTTP/version.
SERVER_PORT: TCP port (decimal).
REQUEST_METHOD: name of HTTP method (see above).
PATH_INFO: path suffix, if appended to URL after program name and a slash.
PATH_TRANSLATED: corresponding full path as supposed by server, if PATH_INFO is present.
SCRIPT_NAME: relative path to the program, like /cgi-bin/script.cgi.
QUERY_STRING: the part of URL after the ? character. The query string may be composed of name=value pairs separated with ampersands (such as var1=val1&var2=val2...) when used to submit form data transferred via the GET method as defined by HTML application/x-www-form-urlencoded.
REMOTE_HOST: host name of the client, unset if server did not perform such lookup.
REMOTE_ADDR: IP address of the client (dot-decimal).
AUTH_TYPE: identification type, if applicable.
REMOTE_USER: used for certain AUTH_TYPEs.
REMOTE_IDENT: see ident, only if server performed such lookup.
CONTENT_TYPE: Internet media type of input data if PUT or POST method are used, as provided via HTTP header.
CONTENT_LENGTH: similarly, size of input data (decimal, in octets) if provided via HTTP header.

Variables passed by user agent (HTTP_ACCEPT, HTTP_ACCEPT_LANGUAGE, HTTP_USER_AGENT, HTTP_COOKIE and possibly others) contain values of the corresponding HTTP headers and therefore have the same sense. The program returns the result to the Web server in the form of standard output, beginning with a header and a blank line. The header is encoded in the same way as an HTTP header and must include the MIME type of the document returned. The headers, supplemented by the Web server, are generally forwarded with the response back to the user. Here is a simple CGI program written in Python 3, along with the HTML that handles a simple addition problem.
add.html:

<!DOCTYPE html>
<html>
  <body>
    <form action="add.cgi" method="POST">
      <fieldset>
        <legend>Enter two numbers to add</legend>
        <label>First Number: <input type="number" name="num1"></label><br/>
        <label>Second Number: <input type="number" name="num2"></label><br/>
      </fieldset>
      <button>Add</button>
    </form>
  </body>
</html>

add.cgi:

#!/usr/bin/env python3

import cgi, cgitb
cgitb.enable()

input_data = cgi.FieldStorage()

print('Content-Type: text/html') # HTML is following
print('')                        # Leave a blank line
print('<h1>Addition Results</h1>')
try:
    num1 = int(input_data["num1"].value)
    num2 = int(input_data["num2"].value)
except:
    print('<output>Sorry, the script cannot turn your inputs into numbers (integers).</output>')
    raise SystemExit(1)
print('<output>{0} + {1} = {2}</output>'.format(num1, num2, num1 + num2))

This Python 3 CGI program gets the inputs from the HTML and adds the two numbers together. Deployment A Web server that supports CGI can be configured to interpret a URL that it serves as a reference to a CGI script. A common convention is to have a cgi-bin/ directory at the base of the directory tree and treat all executable files within this directory (and no other, for security) as CGI scripts. Another popular convention is to use filename extensions; for instance, if CGI scripts are consistently given the extension .cgi, the Web server can be configured to interpret all such files as CGI scripts. While convenient, and required by many prepackaged scripts, it opens the server to attack if a remote user can upload executable code with the proper extension. In the case of HTTP PUT or POSTs, the user-submitted data are provided to the program via the standard input. The Web server creates a subset of the environment variables passed to it and adds details pertinent to the HTTP environment. Uses CGI is often used to process input information from the user and produce the appropriate output. An example of a CGI program is one implementing a wiki. If the user agent requests the name of an entry, the Web server executes the CGI program. The CGI program retrieves the source of that entry's page (if one exists), transforms it into HTML, and prints the result. The Web server receives the output from the CGI program and transmits it to the user agent. Then if the user agent clicks the "Edit page" button, the CGI program populates an HTML textarea or other editing control with the page's contents. Finally if the user agent clicks the "Publish page" button, the CGI program transforms the updated HTML into the source of that entry's page and saves it. Security CGI programs run, by default, in the security context of the Web server. When first introduced a number of example scripts were provided with the reference distributions of the NCSA, Apache and CERN Web servers to show how shell scripts or C programs could be coded to make use of the new CGI. One such example script was a CGI program called PHF that implemented a simple phone book. In common with a number of other scripts at the time, this script made use of a function: escape_shell_cmd(). The function was supposed to sanitize its argument, which came from user input and then pass the input to the Unix shell, to be run in the security context of the Web server. The script did not correctly sanitize all input and allowed new lines to be passed to the shell,
collection – files that can be sent to Web browsers connected to this server. For example, if the Web server has the domain name example.com, and its document collection is stored at /usr/local/apache/htdocs/ in the local file system, then the Web server will respond to a request for http://example.com/index.html by sending to the browser the (pre-written) file /usr/local/apache/htdocs/index.html. For pages constructed on the fly, the server software may defer requests to separate programs and relay the results to the requesting client (usually, a Web browser that displays the page to the end user). In the early days of the Web, such programs were usually small and written in a scripting language; hence, they were known as scripts. Such programs usually require some additional information to be specified with the request. For instance, if Wikipedia were implemented as a script, one thing the script would need to know is whether the user is logged in and, if logged in, under which name. The content at the top of a Wikipedia page depends on this information. HTTP provides ways for browsers to pass such information to the Web server, e.g. as part of the URL. The server software must then pass this information through to the script somehow. Conversely, upon returning, the script must provide all the information required by HTTP for a response to the request: the HTTP status of the request, the document content (if available), the document type (e.g. HTML, PDF, or plain text), et cetera. Initially, different server software would use different ways to exchange this information with scripts. As a result, it wasn't possible to write scripts that would work unmodified for different server software, even though the information being exchanged was the same. Therefore, it was decided to specify a way for exchanging this information: CGI (the Common Gateway Interface, as it defines a common way for server software to interface with scripts). Webpage generating programs invoked by server software that operate according to the CGI specification are known as CGI scripts. This specification was quickly adopted and is still supported by all well-known server software, such as Apache, IIS, and (with an extension) node.js-based servers. An early use of CGI scripts was to process forms. In the beginning of HTML, HTML forms typically had an "action" attribute and a button designated as the "submit" button. When the submit button is pushed the URI specified in the "action" attribute would be sent to the server with the data from the form sent as a query string. If the "action" specifies a CGI script then the CGI script would be executed and it then produces an HTML page. Using CGI scripts A Web server allows its owner to configure which URLs shall be handled by which CGI scripts. This is usually done by marking a new directory within the document collection as containing CGI scripts – its name is often cgi-bin. For example, /usr/local/apache/htdocs/cgi-bin could be designated as a CGI directory on the Web server. When a Web browser requests a URL that points to a file within the CGI directory (e.g., http://example.com/cgi-bin/printenv.pl/with/additional/path?and=a&query=string), then, instead of simply sending that file (/usr/local/apache/htdocs/cgi-bin/printenv.pl) to the Web browser, the HTTP server runs the specified script and passes the output of the script to the Web browser. 
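The directory and extension conventions just described are typically expressed in the Web server's own configuration. As a rough sketch only, an Apache httpd setup for the example document root used above might look like the following; the exact paths and the Apache 2.4-style directives here are illustrative assumptions, not part of the original text.

# Map URLs beginning with /cgi-bin/ to a directory whose files are executed as CGI scripts.
ScriptAlias /cgi-bin/ "/usr/local/apache/htdocs/cgi-bin/"
<Directory "/usr/local/apache/htdocs/cgi-bin">
    Options +ExecCGI
    Require all granted
</Directory>

# Alternative convention: files ending in .cgi are treated as CGI scripts
# (in directories where the ExecCGI option is enabled).
AddHandler cgi-script .cgi

With either configuration, a request for http://example.com/cgi-bin/printenv.pl causes the server to execute the script and return its output rather than the file's contents, as described above.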
That is, anything that the script sends to standard output is passed to the Web client instead of being shown on-screen in a terminal window. As remarked above, the CGI specification defines how additional information passed with the request is passed to the script. For instance, if a slash and additional directory name(s) are appended to the URL immediately after the name of the script (in this example, /with/additional/path), then that path is stored in the PATH_INFO environment variable before the script is called. If parameters are sent to the script via an HTTP GET request (a question mark appended to the URL, followed by param=value pairs; in the example, ?and=a&query=string), then those parameters are stored in the QUERY_STRING environment variable before the script is called.
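To make those two variables concrete, here is a minimal, hypothetical Python 3 CGI script (not part of the original example set) that simply echoes the extra path and the parsed query string back to the client; the standard library's urllib.parse.parse_qs performs the name=value splitting described above.

#!/usr/bin/env python3
# Hypothetical sketch: echo PATH_INFO and QUERY_STRING back to the client.
import os
from urllib.parse import parse_qs

# Extra path appended after the script name, e.g. "/with/additional/path".
path_info = os.environ.get("PATH_INFO", "")
# Raw query string, e.g. "and=a&query=string".
query_string = os.environ.get("QUERY_STRING", "")
# parse_qs turns "and=a&query=string" into {"and": ["a"], "query": ["string"]}.
params = parse_qs(query_string)

# A CGI response begins with a header block followed by a blank line.
print("Content-Type: text/plain")
print("")
print("PATH_INFO    =", path_info)
print("QUERY_STRING =", query_string)
for name, values in sorted(params.items()):
    print(f"{name} = {', '.join(values)}")

Requested as http://example.com/cgi-bin/echo.cgi/with/additional/path?and=a&query=string (assuming it is installed under one of the conventions sketched earlier), it would report the path suffix /with/additional/path and the two decoded parameters.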
better, ended tribal governments. In addition, it proposed the end of communal, tribal lands. Continuing the struggle over land and assimilation, the US proposed ending the tribal lands held in common and allotting lands to tribal members in severalty (individually). The US declared land in excess of the registered households' needs to be "surplus" to the tribe, and took it for sale to new European-American settlers. In addition, individual ownership meant that Native Americans could sell their individual plots. This would also enable new settlers to buy land from those Native Americans who wished to sell. The US government set up the Dawes Commission to manage the land allotment policy; it registered members of the tribe and made allocations of lands. Beginning in 1894, the Dawes Commission registered Choctaw and other families of the Indian Territory, so that the former tribal lands could be properly distributed among them. The final list included 18,981 citizens of the Choctaw Nation, 1,639 Mississippi Choctaw, and 5,994 former slaves (and descendants of former slaves), most held by Choctaws in the Indian/Oklahoma Territory. (At the same time, the Dawes Commission registered members of the other Five Civilized Tribes for the same purpose. The Dawes Rolls have become important records for proving tribal membership.) Following completion of the land allotments, the US proposed to end tribal governments of the Five Civilized Tribes and admit the two territories jointly as a state. Territory transition to Oklahoma statehood (1889) The establishment of Oklahoma Territory following the Civil War was made possible by land cessions required of the Five Civilized Tribes, who had supported the Confederacy. The government used its railroad access to the Oklahoma Territory to stimulate development there. The Indian Appropriations Bill of 1889 included an amendment by Illinois Representative William McKendree Springer that authorized President Benjamin Harrison to open the two million acres (8,000 km²) of Oklahoma Territory for settlement, resulting in the Land Run of 1889. The Choctaw Nation was overwhelmed with new settlers and could not regulate their activities. In the late 19th century, Choctaws suffered almost daily from violent crimes, murders, thefts, and assaults by whites and by other Choctaws. Intense factionalism divided the traditionalist "Nationalists" and pro-assimilation "Progressives," who fought for control. In 1905, delegates of the Five Civilized Tribes met at the Sequoyah Convention to write a constitution for an Indian-controlled state. They wanted to have Indian Territory admitted as the State of Sequoyah. Although they took a thoroughly developed proposal to Washington, DC, seeking approval, eastern states' representatives opposed it, not wanting to have two new western states created in the area; the Republicans feared that both would be Democrat-dominated, as the territories had a southern tradition of settlement. President Theodore Roosevelt, a Republican, ruled that the Oklahoma and Indian territories had to be jointly admitted as one state, Oklahoma. To achieve this, tribal governments had to end and all residents accept state government. Many of the leading Native American representatives from the Sequoyah Convention participated in the new state convention. Its constitution was based on many elements of the one developed for the State of Sequoyah. In 1906 the U.S. dissolved the governments of the Five Civilized Tribes. This action was part of continuing negotiations by Native Americans and European Americans over the best proposals for the future. The Choctaw Nation continued to protect resources not stipulated in treaty or law. On November 16, 1907, Oklahoma was admitted to the union as the 46th state. Mississippi Choctaw Delegation to Washington (1914) By 1907, the Mississippi Choctaw were in danger of becoming extinct. The Dawes Commission had sent a large number of the Mississippi Choctaws to Indian Territory, and only 1,253 members remained.
Meetings were held in April and May 1913 to try to find a solution to this problem. Wesley Johnson was elected chief of the newly formed Mississippi, Alabama, and Louisiana Choctaw Council at the May 1913 meeting. After some deliberation, the council selected delegates to send to Washington, D.C. to bring attention to their plight. Historian Robert Bruce Ferguson wrote in his 2015 article that: In late January 1914, Chief Wesley Johnson and his delegates (Culbertson Davis and Emil John) traveled to Washington, D. C. ... While they were in Washington, Johnson, Davis, and John met with numerous senators & representatives and persuaded the federals to bring the Choctaw case before Congress. On February 5th, their mission culminated with the meeting of President Woodrow Wilson. Culbertson Davis presented a beaded Choctaw belt as a token of goodwill to the President. Nearly two years after the trip to Washington, the Indian Appropriations Act of May 18, 1916 was passed. A stipulation allowed $1,000 for an investigation into the Mississippi Choctaws' condition. John R. T. Reeves was to "investigate the condition of the Indians living in Mississippi and report to Congress ... as to their needs for additional land and school facilities ..." Reeves submitted his report on November 6, 1916. Hearing at Union, Mississippi In March 1917, federal representatives held hearings, attended by around 100 Choctaws, to examine the needs of the Mississippi Choctaws. Some of the congressmen who presided over the hearings were Charles D. Carter of Oklahoma, William W. Hastings of Oklahoma, Carl T. Hayden of Arizona, John N. Tillman of Arkansas, and William W. Venable of Mississippi. These hearings resulted in improvements such as better access to health care, housing, and schools. After Cato H. Sells investigated the Choctaws' condition, the U.S. Bureau of Indian Affairs established the Choctaw Agency on October 8, 1918. The Choctaw Agency was based in Philadelphia, Mississippi, the center of Indian activity. Dr. Frank J. McKinley was its first superintendent, and he was also the physician. Before 1916, six Indian schools operated in three counties: two in Leake, three in Neshoba, and one in Newton. The names of those schools were Tubby Rock Indian School, Calcutta Indian School, Revenue Indian School, Red Water Indian School, and Gum Springs Indian School; the Newton Indian school's name is not known. The agency established new schools in the following Indian communities: Bogue Chitto, Bogue Homo, Conehatta, Pearl River, Red Water, Standing Pine, and Tucker. Under segregation, few schools were open to Choctaw children, whom the white southerners classified as non-whites. The Mississippi Choctaws' progress might have continued had it not been dramatically interrupted by world events. World War I slowed down progress for the Indians as Washington's bureaucracy focused on the war. Some Mississippi Choctaws also served during the war. The Spanish influenza also slowed progress, as many Choctaws were killed by the world-wide epidemic. World War I (1918) In the closing days of World War I, a group of Oklahoma Choctaws serving in the U.S. Army used their native language as the basis for secret communication among Americans, as Germans could not understand it. They are now called the Choctaw Code Talkers. The Choctaws were the Native American innovators who served as code talkers. Captain Lawrence, a company commander, overheard Solomon Louis and Mitchell Bobb conversing in the Choctaw language.
He learned there were eight Choctaw men in the battalion. Fourteen Choctaw Indian men in the Army's 36th Division trained to use their language for military communications. Their communications, which could not be understood by Germans, helped the American Expeditionary Force win several key battles in the Meuse-Argonne Campaign in France, during the last big German offensive of the war. Within 24 hours after the US Army started using the Choctaw speakers, they had turned the tide of battle by controlling their communications. In less than 72 hours, the Germans were retreating and the Allies were on full attack. The 14 Choctaw Code Talkers were Albert Billy, Mitchell Bobb, Victor Brown, Ben Caterby, James Edwards, Tobias Frazer, Ben Hampton, Solomon Louis, Pete Maytubby, Jeff Nelson, Joseph Oklahombi, Robert Taylor, Calvin Wilson, and Captain Walter Veach. More than 70 years passed before the contributions of the Choctaw Code Talkers were fully recognized. On November 3, 1989, in recognition of the important role the Choctaw Code Talkers played during World War I, the French government presented the Chevalier de l'Ordre National du Mérite (the Knight of the National Order of Merit) to the Choctaw Code Talkers. The US Army again used Choctaw speakers for coded communications during World War II. Reorganization (1934) During the Great Depression and the Roosevelt Administration, officials began numerous initiatives to alleviate some of the social and economic conditions in the South. The 1933 Special Narrative Report described the dismal state of welfare of the Mississippi Choctaws, whose population by 1930 had slightly increased to 1,665 people. John Collier, the US Commissioner of Indian Affairs (head of what is now the BIA), had worked for a decade on Indian affairs and had been developing ideas to change federal policy. He used the report as instrumental support to re-organize the Mississippi Choctaw as the Mississippi Band of Choctaw Indians. This enabled them to establish their own tribal government and gain a beneficial relationship with the federal government. In 1934, President Franklin Roosevelt signed into law the Indian Reorganization Act. This law proved critical for the survival of the Mississippi Choctaw. Baxter York, Emmett York, and Joe Chitto worked on gaining recognition for the Choctaw. They realized that the only way to gain recognition was to adopt a constitution. A rival organization, the Mississippi Choctaw Indian Federation, opposed tribal recognition because of fears of dominance by the Bureau of Indian Affairs (BIA). They disbanded after leaders of the opposition were moved to another jurisdiction. The first Mississippi Band of Choctaw Indians tribal council members were Baxter and Emmett York, with Joe Chitto as the first chairperson. With the tribe's adoption of a government, in 1944 the Secretary of the Interior declared that lands would be held in trust for the Choctaw of Mississippi. Lands in Neshoba and surrounding counties were set aside as a federal Indian reservation. Eight communities were included in the reservation land: Bogue Chitto, Bogue Homa, Conehatta, Crystal Ridge, Pearl River, Red Water, Tucker, and Standing Pine. Under the Indian Reorganization Act, the Mississippi Choctaws re-organized on April 20, 1945 as the Mississippi Band of Choctaw Indians. This gave them some independence from the Democrat-dominated state government, which continued with enforcement of racial segregation and discrimination.
World War II (1941) World War II was a significant turning point for Choctaws and Native Americans in general. Although the Treaty of Dancing Rabbit Creek stated Mississippi Choctaws had U.S. citizenship, they had become associated with "colored people" as non-white in a state that had imposed racial segregation under Jim Crow laws. State services for Native Americans were non-existent. The state was poor and still dependent on agriculture. In its system of segregation, services for minorities were consistently underfunded. The state constitution and voter registration rules dating from the turn of the 20th century kept most Native Americans from voting, making them ineligible to serve on juries or to be candidates for local or state offices. They were without political representation. A Mississippi Choctaw veteran stated, "Indians were not supposed to go in the military back then ... the military was mainly for whites. My category was white instead of Indian. I don't know why they did that. Even though Indians weren't citizens of this country, couldn't register to vote, didn't have a draft card or anything, they took us anyway." Van Barfoot, a Choctaw from Mississippi, who was a sergeant and later a second lieutenant in the U.S. Army, 157th Infantry, 45th Infantry Division, received the Medal of Honor. Barfoot was commissioned a second lieutenant after he destroyed two German machine gun nests, took 17 prisoners, and disabled an enemy tank. Lt. Colonel Edward E. McClish from Oklahoma was a guerrilla leader in the Philippines. Post-Reorganization The first Mississippi Band of Choctaw Indians regular tribal council meeting was held on July 10, 1945. The members were Joe Chitto (Chairman), J.C. Allen (Vice Chairman), Nicholas Bell (Secretary Treasurer), Tom Bell, Preatice Jackson, Dempsey Morris, Woodrow W. Jackson, Lonnie Anderson, Joseph Farve, Phillip Farve, Will Wilson, Hensley Gibson, Will Jimmie, Baxter York, Ennis Martin, and Jimpson McMillan. After World War II, pressure in Congress mounted to reduce Washington's authority on Native American lands and liquidate the government's responsibilities to them. In 1953 the House of Representatives passed Resolution 108, proposing an end to federal services for 13 tribes deemed ready to handle their own affairs. The same year, Public Law 280 transferred jurisdiction over tribal lands to state and local governments in five states. Within a decade Congress terminated federal services to more than sixty groups despite intense opposition by Indians. Congress settled on a policy to terminate tribes as quickly as possible. Out of concern for the isolation of many Native Americans in rural areas, the federal government created relocation programs to cities to try to expand their employment opportunities. Indian policy experts hoped to expedite assimilation of Native Americans to the larger American society, which was becoming urban. In 1959, the Choctaw Termination Act was passed. Unless repealed by the federal government, the Choctaw Nation of Oklahoma would effectively be terminated as a sovereign nation as of August 25, 1970. President John F. Kennedy halted further termination in 1961 and decided against implementing additional terminations. He did enact some of the last terminations in process, such as with the Ponca. Both presidents Lyndon Johnson and Richard Nixon repudiated termination of the federal government's relationship with Native American tribes. 
Mississippi Choctaw Self-Determination era The Choctaw people continued to struggle economically due to bigotry, cultural isolation, and lack of jobs. The Choctaw, who for 150 years had been neither white nor black, were "left where they had always been"—in poverty. Will D. Campbell, a Baptist minister and Civil Rights activist, witnessed the destitution of the Choctaw. He would later write, "the thing I remember the most ... was the depressing sight of the Choctaws, their shanties along the country roads, grown men lounging on the dirt streets of their villages in demeaning idleness, sometimes drinking from a common bottle, sharing a roll-your-own cigarette, their half-clad children a picture of hurting that would never end." With reorganization and establishment of tribal government, however, over the next decades they took control of "schools, health care facilities, legal and judicial systems, and social service programs." The Choctaws witnessed the social forces that brought Freedom Summer and its after effects to their ancient homeland. The civil rights movement produced significant social change for the Choctaw in Mississippi, as their civil rights were enhanced. Prior to the Civil Rights Act of 1964, most jobs were given to whites, then blacks. Donna Ladd wrote that a Choctaw, now in her 40s, remembers "as a little girl, she thought that a 'white only' sign in a local store meant she could only order white, or vanilla, ice cream. It was a small story, but one that shows how a third race can easily get left out of the attempts for understanding." On June 21, 1964 James Chaney, Andrew Goodman, and Michael Schwerner (renowned civil rights workers) disappeared; their remains were later found in a newly constructed dam. A crucial turning point in the FBI investigation came when the charred remains of the murdered civil rights workers' station wagon was found on a Mississippi Choctaw reservation. Two Choctaw women, who were in the back seat of a deputy's patrol car, said they witnessed the meeting of two conspirators who expressed their desire to "beat-up" the boys. The end of legalized racial segregation permitted the Choctaws to participate in public institutions and facilities that had been reserved exclusively for white patrons. Phillip Martin, who had served in the U. S. Army in Europe during World War II, returned to visit his former Neshoba County, Mississippi home. After seeing the poverty of his people, he decided to stay to help. Martin served as chairperson in various Choctaw committees up until 1977. Martin was elected as Chief of the Mississippi Band of Choctaw Indians. He served a total of 30 years, being re-elected until 2007. Martin died in Jackson, Mississippi, on February 4, 2010. He was eulogized as a visionary leader, who had lifted his people out of poverty with businesses and casinos built on tribal land. 1960s to present In the social changes around the civil rights era, between 1965 and 1982 many Choctaw Native Americans renewed their commitments to the value of their ancient heritage. Working to celebrate their own strengths and exercise appropriate rights; they dramatically reversed the trend toward abandonment of Indian culture and tradition. During the 1960s, Community Action programs connected with Native Americans were based on citizen participation. In the 1970s, the Choctaw repudiated the extremes of Indian activism. The Oklahoma Choctaw sought a local grassroots solution to reclaim their cultural identity and sovereignty as a nation. 
The Mississippi Choctaw would lay the foundations of business ventures. Federal policy under President Richard M. Nixon encouraged giving tribes more authority for self-determination, within a policy of federal recognition. Realizing the damage that had been done by termination of tribal status, he ended the federal emphasis of the 1950s on termination of certain tribes' federally recognized status and relationships with the federal government: Soon after this, Congress passed the landmark Indian Self-Determination and Education Assistance Act of 1975; this completed a 15-year period of federal policy reform with regard to American Indian tribes. The legislation authorized processes by which tribes could negotiate contracts with the BIA to manage directly more of their education and social service programs. In addition, it provided direct grants to help tribes develop plans for assuming such responsibility. It also provided for Indian parents' participation on local school boards. Beginning in 1979 the Mississippi Choctaw tribal council worked on a variety of economic development initiatives, first geared toward attracting industry to the reservation. They had many people available to work, natural resources, and no state or federal taxes. Industries have included automotive parts, greeting cards, direct mail and printing, and plastic-molding. The Mississippi Band of Choctaw Indians is one of the state's largest employers, running 19 businesses and employing 7,800 people. Starting with New Hampshire in 1963, numerous state governments began to operate lotteries and other gambling in order to raise money for government services, often promoting the programs by promising to earmark revenues to fund education, for instance. In 1987 the Supreme Court of the United States ruled that federally recognized tribes could operate gaming facilities on reservations, as this was sovereign territory, and be free from state regulation. As tribes began to develop gaming, starting with bingo, in 1988 the U.S. Congress enacted the Indian Gaming Regulatory Act (IGRA). It set the broad terms for Native American tribes to operate casinos, requiring that they do so only in states that had already authorized private gaming. Since then development of casino gaming has been one of the chief sources for many tribes of new revenues. The Choctaw Nation of Oklahoma developed gaming operations and a related resort: the Choctaw Casino Resort and Choctaw Casino Bingo are their popular gaming destinations in Durant. Located near the Oklahoma-Texas border, these sites attract residents of Southern Oklahoma and North Texas. The largest regional population base from which they draw is the Dallas-Fort Worth Metroplex. The Mississippi Band of Choctaw Indians (MBCI) unsuccessfully sought state agreement to develop gaming under the Ray Mabus administration. But in 1992 Mississippi Governor Kirk Fordice gave permission for the MBCI to develop Class III gaming. They have developed one of the largest casino resorts in the nation; it is located in Philadelphia, Mississippi near the Pearl River. The Silver Star Casino opened its doors in 1994. The Golden Moon Casino opened in 2002. The casinos are collectively known as the Pearl River Resort. After nearly two hundred years, the Choctaw have regained control of the ancient sacred site of Nanih Waiya. Mississippi protected the site for years as a state park. In 2006, the state legislature passed a bill to return Nanih Waiya to the Choctaw. 
Jack Abramoff and Indian casino lobbying In the second half of the 1990s, lobbyist Jack Abramoff was employed by Preston Gates Ellis & Rouvelas Meeds LLP, the lobbying arm in Washington, DC of the Preston Gates & Ellis LLP law firm based in Seattle, Washington. In 1995, Abramoff began representing Native American tribes who wanted to develop gambling casinos, starting with the Mississippi Band of Choctaw Indians. The Choctaw originally had lobbied the federal government directly, but beginning in 1994, they found that many of the congressional members who had responded to their issues had either retired or been defeated in the "Republican Revolution" of the 1994 elections. Nell Rogers, the tribe's specialist on legislative affairs, had a friend who was familiar with the work of Abramoff and his father as Republican activists. The tribe contacted Preston Gates, and soon after hired the firm and Abramoff. Abramoff succeeded in gaining defeat of a Congressional bill to apply the unrelated business income tax (UBIT) to Native American casinos; it was sponsored by Reps. Bill Archer (R-TX) and Ernest Istook (R-OK). Since the matter involved taxation, Abramoff enlisted help from Grover Norquist, a Republican acquaintance from college, and his Americans for Tax Reform (ATR). The bill was eventually defeated in 1996 in the Senate, due in part to grassroots work by ATR. The Choctaw paid $60,000 in fees to Abramoff. According to Washington Business Forward, a lobbying trade magazine, Representative Tom DeLay was also a major figure in achieving defeat of the bill. The fight strengthened Abramoff's alliance with him. Purporting to represent Native Americans before Congress and state governments in the developing field of gaming, Jack Abramoff and Michael Scanlon used fraudulent means to gain $15 million in total payments from the Mississippi Band of Choctaw Indians. After Congressional oversight hearings were held in 2004 on the lobbyists' activities, federal criminal charges were brought against Abramoff and Scanlon. In an e-mail sent January 29, 2002, Abramoff had written to Scanlon, "I have to meet with the monkeys from the Choctaw tribal council." On January 3, 2006, Abramoff pleaded guilty to three felony counts: conspiracy, fraud, and tax evasion. The charges were based principally on his lobbying activities in Washington on behalf of Native American tribes. In addition, Abramoff and other defendants were required to make restitution of at least $25 million that had been defrauded from clients, most notably the Native American tribes. 2011 Federal Bureau of Investigation raid In July 2011, agents from the FBI "seized" Pearl River Resort informational assets. The Los Angeles Times reported that the Indians were "faced with infighting over a disputed election for tribal chief and an FBI investigation targeting the tribe's casinos." State-recognized tribes Two US states recognize tribes that are not recognized by the US federal government. Alabama recognizes the MOWA Band of Choctaw Indians, who have a 600-acre reservation in southwestern Alabama and a total enrolled population of 3,600. The tribe has the last Indian school in Alabama, Calcedeaver, in Mount Vernon, Mobile County, Alabama. Louisiana recognizes the Choctaw-Apache Tribe of Ebarb, the Clifton Choctaw, and the Louisiana Choctaw Tribe. In the 2010 Census In the 2010 US Census, there were people who identified as Choctaw living in every state of the Union.
better, ended tribal governments. In addition, it proposed the end of communal, tribal lands. Continuing the struggle over land and assimilation, the US proposed the end to the tribal lands held in common, and allotment of lands to tribal members in severalty (individually). The US declared land in excess of the registered households' needs to be "surplus" to the tribe, and took it for sale to new European-American settlers. In addition, individual ownership meant that Native Americans could sell their individual plots. This would also enable new settlers to buy land from those Native Americans who wished to sell. The US government set up the Dawes Commission to manage the land allotment policy; it registered members of the tribe and made allocations of lands. Beginning in 1894, the Dawes Commission registered Choctaw and other families of the Indian Territory, so that the former tribal lands could be properly distributed among them. The final list included 18,981 citizens of the Choctaw Nation, 1,639 Mississippi Choctaw, and 5,994 former slaves (and descendants of former slaves), most held by Choctaws in the Indian/Oklahoma Territory. (At the same time, the Dawes Commission registered members of the other Five Civilized Tribes for the same purpose. The Dawes Rolls have become important records for proving tribal membership.) Following completion of the land allotments, the US proposed to end tribal governments of the Five Civilized Tribes and admit the two territories jointly as a state. Territory transition to Oklahoma statehood (1889) Oklahoma Territory was established after the Civil War on land that the Five Civilized Tribes, who had supported the Confederacy, were required to cede. The government used its railroad access to the Oklahoma Territory to stimulate development there. The Indian Appropriations Bill of 1889 included an amendment by Illinois Representative William McKendree Springer that authorized President Benjamin Harrison to open two million acres (8,000 km²) of Oklahoma Territory for settlement, resulting in the Land Run of 1889. The Choctaw Nation was overwhelmed with new settlers and could not regulate their activities. In the late 19th century, Choctaws suffered almost daily from violent crimes, murders, thefts and assaults, from whites and from other Choctaws. Intense factionalism divided the traditionalist "Nationalists" and pro-assimilation "Progressives," who fought for control. In 1905, delegates of the Five Civilized Tribes met at the Sequoyah Convention to write a constitution for an Indian-controlled state. They wanted to have Indian Territory admitted as the State of Sequoyah. Although they took a thoroughly developed proposal to Washington, DC, seeking approval, eastern states' representatives opposed it, not wanting two new western states created in the area; the Republicans feared that both would be Democrat-dominated, as the territories had a southern tradition of settlement. President Theodore Roosevelt, a Republican, ruled that the Oklahoma and Indian territories had to be jointly admitted as one state, Oklahoma. To achieve this, tribal governments had to end and all residents accept state government. Many of the leading Native American representatives from the Sequoyah Convention participated in the new state convention. Its constitution was based on many elements of the one developed for the State of Sequoyah. In 1906 the U.S. dissolved the governments of the Five Civilized Tribes. 
This action was part of continuing negotiations by Native Americans and European Americans over the best proposals for the future. The Choctaw Nation continued to protect resources not stipulated in treaty or law. On November 16, 1907, Oklahoma was admitted to the union as the 46th state. Mississippi Choctaw Delegation to Washington (1914) By 1907, the Mississippi Choctaw were in danger of becoming extinct. The Dawes Commission had sent a large number of the Mississippi Choctaws to Indian Territory, and only 1,253 members remained. Meetings were held in April and May 1913 to try to find a solution to this problem. Wesley Johnson was elected chief of the newly formed Mississippi, Alabama, and Louisiana Choctaw Council at the May 1913 meeting. After some deliberation, the council selected delegates to send to Washington, D.C. to bring attention to their plight. Historian Robert Bruce Ferguson wrote in his 2015 article: "In late January 1914, Chief Wesley Johnson and his delegates (Culbertson Davis and Emil John) traveled to Washington, D. C. ... While they were in Washington, Johnson, Davis, and John met with numerous senators & representatives and persuaded the federals to bring the Choctaw case before Congress. On February 5th, their mission culminated with the meeting of President Woodrow Wilson. Culbertson Davis presented a beaded Choctaw belt as a token of goodwill to the President." Nearly two years after the trip to Washington, the Indian Appropriations Act of May 18, 1916, was passed. A stipulation allowed $1,000 for an investigation of the Mississippi Choctaws' condition. John R. T. Reeves was to "investigate the condition of the Indians living in Mississippi and report to Congress ... as to their needs for additional land and school facilities ..." Reeves submitted his report on November 6, 1916. Hearing at Union, Mississippi In March 1917, federal representatives held hearings, attended by around 100 Choctaws, to examine the needs of the Mississippi Choctaws. Some of the congressmen who presided over the hearings were: Charles D. Carter of Oklahoma, William W. Hastings of Oklahoma, Carl T. Hayden of Arizona, John N. Tillman of Arkansas, and William W. Venable of Mississippi. These hearings resulted in improvements such as better access to health care, housing, and schools. After Cato H. Sells investigated the Choctaws' condition, the U. S. Bureau of Indian Affairs established the Choctaw Agency on October 8, 1918. The Choctaw Agency was based in Philadelphia, Mississippi, the center of Indian activity. Dr. Frank J. McKinley was its first superintendent, and he was also the physician. Before 1916, six Indian schools operated in three counties: two in Leake, three in Neshoba, and one in Newton. The names of those schools were: Tubby Rock Indian School, Calcutta Indian School, Revenue Indian School, Red Water Indian School, and Gum Springs Indian School. The Newton Indian school's name is not known. The agency established new schools in the following Indian communities: Bogue Chitto, Bogue Homo, Conehatta, Pearl River, Red Water, Standing Pine, and Tucker. Under segregation, few schools were open to Choctaw children, whom the white southerners classified as non-whites. The Mississippi Choctaws' progress might have continued had it not been dramatically interrupted by world events. World War I slowed down progress for the Indians as Washington's bureaucracy focused on the war. Some Mississippi Choctaws also served during the war. 
The Spanish Influenza also slowed progress as many Choctaws were killed by the world-wide epidemic. World War I (1918) In the closing days of World War I, a group of Oklahoma Choctaws serving in the U.S. Army used their native language as the basis for secret communication among Americans, as Germans could not understand it. They are now called the Choctaw Code Talkers. The Choctaws were the Native American pioneers of code talking. Captain Lawrence, a company commander, overheard Solomon Louis and Mitchell Bobb conversing in the Choctaw language. He learned there were eight Choctaw men in the battalion. Fourteen Choctaw Indian men in the Army's 36th Division trained to use their language for military communications. Their communications, which could not be understood by Germans, helped the American Expeditionary Force win several key battles in the Meuse-Argonne Campaign in France, during the final Allied offensive of the war. Within 24 hours after the US Army started using the Choctaw speakers, they turned the tide of battle by controlling their communications. In less than 72 hours, the Germans were retreating and the Allies were on full attack. The 14 Choctaw Code Talkers were Albert Billy, Mitchell Bobb, Victor Brown, Ben Caterby, James Edwards, Tobias Frazer, Ben Hampton, Solomon Louis, Pete Maytubby, Jeff Nelson, Joseph Oklahombi, Robert Taylor, Calvin Wilson, and Captain Walter Veach. More than 70 years passed before the contributions of the Choctaw Code Talkers were fully recognized. On November 3, 1989, in recognition of the important role the Choctaw Code Talkers played during World War I, the French government presented the Chevalier de L'Ordre National du Mérite (the Knight of the National Order of Merit) to the Choctaw Code Talkers. The US Army again used Choctaw speakers for coded language during World War II. Reorganization (1934) During the Great Depression and the Roosevelt Administration, officials began numerous initiatives to alleviate some of the social and economic conditions in the South. The 1933 Special Narrative Report described the dismal state of welfare of Mississippi Choctaws, whose population by 1930 had slightly increased to 1,665 people. John Collier, the US Commissioner for Indian Affairs (the office now known as the BIA), had worked for a decade on Indian affairs and been developing ideas to change federal policy. He used the report as instrumental support to re-organize the Mississippi Choctaw as the Mississippi Band of Choctaw Indians. This enabled them to establish their own tribal government, and gain a beneficial relationship with the federal government. In 1934, President Franklin Roosevelt signed into law the Indian Reorganization Act. This law proved critical for survival of the Mississippi Choctaw. Baxter York, Emmett York, and Joe Chitto worked on gaining recognition for the Choctaw. They realized that the only way to gain recognition was to adopt a constitution. A rival organization, the Mississippi Choctaw Indian Federation, opposed tribal recognition because of fears of dominance by the Bureau of Indian Affairs (BIA). They disbanded after leaders of the opposition were moved to another jurisdiction. The first Mississippi Band of Choctaw Indians tribal council members were Baxter and Emmett York, with Joe Chitto as the first chairperson. With the tribe's adoption of a government, in 1944 the Secretary of the Interior declared that land would be held in trust for the Choctaw of Mississippi. 
Lands in Neshoba and surrounding counties were set aside as a federal Indian reservation. Eight communities were included in the reservation land: Bogue Chitto, Bogue Homa, Conehatta, Crystal Ridge, Pearl River, Red Water, Tucker, and Standing Pine. Under the Indian Reorganization Act, the Mississippi Choctaws re-organized on April 20, 1945 as the Mississippi Band of Choctaw Indians. This gave them some independence from the Democrat-dominated state government, which continued with enforcement of racial segregation and discrimination. World War II (1941) World War II was a significant turning point for Choctaws and Native Americans in general. Although the Treaty of Dancing Rabbit Creek stated Mississippi Choctaws had U.S. citizenship, they had become associated with "colored people" as non-white in a state that had imposed racial segregation under Jim Crow laws. State services for Native Americans were non-existent. The state was poor and still dependent on agriculture. In its system of segregation, services for minorities were consistently underfunded. The state constitution and voter registration rules dating from the turn of the 20th century kept most Native Americans from voting, making them ineligible to serve on juries or to be candidates for local or state offices. They were without political representation. A Mississippi Choctaw veteran stated, "Indians were not supposed to go in the military back then ... the military was mainly for whites. My category was white instead of Indian. I don't know why they did that. Even though Indians weren't citizens of this country, couldn't register to vote, didn't have a draft card or anything, they took us anyway." Van Barfoot, a Choctaw from Mississippi, who was a sergeant and later a second lieutenant in the U.S. Army, 157th Infantry, 45th Infantry Division, received the Medal of Honor. Barfoot was commissioned a second lieutenant after he destroyed two German machine gun nests, took 17 prisoners, and disabled an enemy tank. Lt. Colonel Edward E. McClish from Oklahoma was a guerrilla leader in the Philippines. Post-Reorganization The first Mississippi Band of Choctaw Indians regular tribal council meeting was held on July 10, 1945. The members were Joe Chitto (Chairman), J.C. Allen (Vice Chairman), Nicholas Bell (Secretary Treasurer), Tom Bell, Preatice Jackson, Dempsey Morris, Woodrow W. Jackson, Lonnie Anderson, Joseph Farve, Phillip Farve, Will Wilson, Hensley Gibson, Will Jimmie, Baxter York, Ennis Martin, and Jimpson McMillan. After World War II, pressure in Congress mounted to reduce Washington's authority on Native American lands and liquidate the government's responsibilities to them. In 1953 the House of Representatives passed Resolution 108, proposing an end to federal services for 13 tribes deemed ready to handle their own affairs. The same year, Public Law 280 transferred jurisdiction over tribal lands to state and local governments in five states. Within a decade Congress terminated federal services to more than sixty groups despite intense opposition by Indians. Congress settled on a policy to terminate tribes as quickly as possible. Out of concern for the isolation of many Native Americans in rural areas, the federal government created relocation programs to cities to try to expand their employment opportunities. Indian policy experts hoped to expedite assimilation of Native Americans to the larger American society, which was becoming urban. In 1959, the Choctaw Termination Act was passed. 
Unless repealed by the federal government, the Choctaw Nation of Oklahoma would effectively be terminated as a sovereign nation as of August 25, 1970. President John F. Kennedy halted further termination in 1961 and decided against implementing additional terminations. He did enact some of the last terminations in process, such as with the Ponca. Both presidents Lyndon Johnson and Richard Nixon repudiated termination of the federal government's relationship with Native American tribes. Mississippi Choctaw Self-Determination era The Choctaw people continued to struggle economically due to bigotry, cultural isolation, and lack of jobs. The Choctaw, who for 150 years had been neither white nor black, were "left where they had always been"—in poverty. Will D. Campbell, a Baptist minister and Civil Rights activist, witnessed the destitution of the Choctaw. He would later write, "the thing I remember the most ... was the depressing sight of the Choctaws, their shanties along the country roads, grown men lounging on the dirt streets of their villages in demeaning idleness, sometimes drinking from a common bottle, sharing a roll-your-own cigarette, their half-clad children a picture of hurting that would never end." With reorganization and establishment of tribal government, however, over the next decades they took control of "schools, health care facilities, legal and judicial systems, and social service programs." The Choctaws witnessed the social forces that brought Freedom Summer and its after effects to their ancient homeland. The civil rights movement produced significant social change for the Choctaw in Mississippi, as their civil rights were enhanced. Prior to the Civil Rights Act of 1964, most jobs were given to whites, then blacks. Donna Ladd wrote that a Choctaw, now in her 40s, remembers "as a little girl, she thought that a 'white only' sign in a local store meant she could only order white, or vanilla, ice cream. It was a small story, but one that shows how a third race can easily get left out of the attempts for understanding." On June 21, 1964 James Chaney, Andrew Goodman, and Michael Schwerner (renowned civil rights workers) disappeared; their remains were later found in a newly constructed dam. A crucial turning point in the FBI investigation came when the charred remains of the murdered civil rights workers' station wagon was found on a Mississippi Choctaw reservation. Two Choctaw women, who were in the back seat of a deputy's patrol car, said they witnessed the meeting of two conspirators who expressed their desire to "beat-up" the boys. The end of legalized racial segregation permitted the Choctaws to participate in public institutions and facilities that had been reserved exclusively for white patrons. Phillip Martin, who had served in the U. S. Army in Europe during World War II, returned to visit his former Neshoba County, Mississippi home. After seeing the poverty of his people, he decided to stay to help. Martin served as chairperson in various Choctaw committees up until 1977. Martin was elected as Chief of the Mississippi Band of Choctaw Indians. He served a total of 30 years, being re-elected until 2007. Martin died in Jackson, Mississippi, on February 4, 2010. He was eulogized as a visionary leader, who had lifted his people out of poverty with businesses and casinos built on tribal land. 
1960s to present In the social changes around the civil rights era, between 1965 and 1982 many Choctaw Native Americans renewed their commitments to the value of their ancient heritage. Working to celebrate their own strengths and exercise appropriate rights, they dramatically reversed the trend toward abandonment of Indian culture and tradition. During the 1960s, Community Action programs connected with Native Americans were based on citizen participation. In the 1970s, the Choctaw repudiated the extremes of Indian activism. The Oklahoma Choctaw sought a local grassroots solution to reclaim their cultural identity and sovereignty as a nation. The Mississippi Choctaw would lay the foundations of business ventures. Federal policy under President Richard M. Nixon encouraged giving tribes more authority for self-determination, within a policy of federal recognition. Realizing the damage that had been done by termination of tribal status, he ended the federal emphasis of the 1950s on termination of certain tribes' federally recognized status and relationships with the federal government. Soon after this, Congress passed the landmark Indian Self-Determination and Education Assistance Act of 1975; this completed a 15-year period of federal policy reform with regard to American Indian tribes. The legislation authorized processes by which tribes could negotiate contracts with the BIA to manage directly more of their education and social service programs. In addition, it provided direct grants to help tribes develop plans for assuming such responsibility. It also provided for Indian parents' participation on local school boards. Beginning in 1979, the Mississippi Choctaw tribal council worked on a variety of economic development initiatives, first geared toward attracting industry to the reservation. They had many people available to work, natural resources, and no state or federal taxes. Industries have included automotive parts, greeting cards, direct mail and printing, and plastic-molding. The Mississippi Band of Choctaw Indians is one of the state's largest employers, running 19 businesses and employing 7,800 people. Starting with New Hampshire in 1963, numerous state governments began to operate lotteries and other gambling in order to raise money for government services, often promoting the programs by promising to earmark revenues to fund education, for instance. In 1987 the Supreme Court of the United States ruled that federally recognized tribes could operate gaming facilities on reservations, as this was sovereign territory, and be free from state regulation. As tribes began to develop gaming, starting with bingo, in 1988 the U.S. Congress enacted the Indian Gaming Regulatory Act (IGRA). It set the broad terms for Native American tribes to operate casinos, requiring that they do so only in states that had already authorized private gaming. Since then, the development of casino gaming has been one of the chief sources of new revenue for many tribes. The Choctaw Nation of Oklahoma developed gaming operations and a related resort: the Choctaw Casino Resort and Choctaw Casino Bingo are their popular gaming destinations in Durant. Located near the Oklahoma-Texas border, these sites attract residents of Southern Oklahoma and North Texas. The largest regional population base from which they draw is the Dallas-Fort Worth Metroplex. The Mississippi Band of Choctaw Indians (MBCI) unsuccessfully sought state agreement to develop gaming under the Ray Mabus administration. 
But in 1992 Mississippi Governor Kirk Fordice gave permission for the MBCI to develop Class III gaming. They have developed one of the largest casino resorts in the nation; it is located in Philadelphia, Mississippi near the Pearl River. The Silver Star Casino opened its doors in 1994. The Golden Moon Casino opened in 2002. The casinos are collectively known as the Pearl River Resort. After nearly two hundred years, the Choctaw have regained control of the ancient sacred site of Nanih Waiya. Mississippi protected the site for years as a state park. In 2006, the state legislature passed a bill to return Nanih Waiya to the Choctaw. Jack Abramoff and Indian casino lobbying In the second half of the 1990s, lobbyist Jack Abramoff was employed by Preston Gates Ellis & Rouvelas Meeds LLP, the lobbying arm in Washington, DC of the Preston Gates & Ellis LLP law firm based in Seattle, Washington. In 1995, Abramoff began representing Native American tribes who wanted to develop gambling casinos, starting with the Mississippi Band of Choctaw Indians. The Choctaw originally had lobbied the federal government directly, but beginning in 1994, they found that many of the congressional members who had responded to their issues had either retired or were defeated in the "Republican Revolution" of the 1994 elections. Nell Rogers, the tribe's specialist on legislative affairs, had a friend who was familiar with the work of Abramoff and his father as Republican activists. The tribe contacted Preston Gates, and soon after hired the firm and Abramoff. Abramoff succeeded in gaining defeat of a Congressional bill to use the unrelated business income tax (UBIT) to tax Native American casinos; it was sponsored by Reps. Bill Archer (R-TX) and Ernest Istook (R-OK). Since the matter involved taxation, Abramoff enlisted help from Grover Norquist, a Republican acquaintance from college, and his Americans for Tax Reform (ATR). The bill was eventually defeated in 1996 in the Senate, due in part to grassroots work by ATR. The Choctaw paid $60,000 in fees to Abramoff. According to Washington Business Forward, a lobbying trade magazine, Senator Tom DeLay was also a major figure in achieving defeat of the bill. The fight strengthened Abramoff's alliance with him. Purporting to represent Native Americans before Congress and state governments in the developing field of gaming, Jack Abramoff and Michael Scanlon used fraudulent means to gain profits of $15 million in total payments from the Mississippi Band of Choctaw Indians. After Congressional oversight hearings were held in 2004 on the lobbyists' activities, federal criminal charges were brought against Abramoff and Scanlon. In an e-mail sent January 29, 2002, Abramoff had written to Scanlon, "I have to meet with the monkeys from the Choctaw tribal council." On January 3, 2006, Abramoff pleaded guilty to three felony counts — conspiracy, fraud, and tax evasion. The charges were based principally on his lobbying activities in Washington on behalf of Native American tribes. In addition, Abramoff and other defendants must make restitution of at least $25 million that was defrauded from clients, most notably the Native American tribes. 2011 Federal Bureau of Investigation raid In July 2011, agents from the FBI "seized" Pearl River Resort informational assets. The Los Angeles Times reported that the Indians are "faced with infighting over a disputed election for tribal chief and an FBI investigation targeting the tribe's casinos." 
State-recognized tribes Two US states recognize tribes that are not recognized by the US federal government. Alabama recognizes the MOWA Band of Choctaw Indians, who have a 600-acre reservation in southwestern Alabama and a total enrolled population of 3,600. The tribe has the last Indian school in Alabama, Calcedeaver, located in Mount Vernon, Mobile County. Louisiana recognizes the Choctaw-Apache Tribe of Ebarb, Clifton Choctaw, and Louisiana Choctaw Tribe. In the 2010 Census In the 2010 US Census, there were people who identified as Choctaw living in every state of the Union. The states with the largest Choctaw populations were Oklahoma (79,006), Texas (24,024), California (23,403), Mississippi (9,260), Arkansas (4,840), and Alabama (4,513). Culture The Choctaw people are believed to have coalesced in the 17th century, perhaps from peoples from Alabama and the Plaquemine culture. Their culture continued to evolve in the Southeast. The Choctaw practiced head flattening as a ritual adornment, but the practice eventually fell out of favor. Some of their communities had extensive trade and interaction with Europeans, and contact with people from Spain, France, and England greatly shaped their culture as well. After the United States was formed and its settlers began to move into the Southeast, the Choctaw were among the Five Civilized Tribes, who adopted some of the settlers' ways. They transitioned to yeoman farming methods, and accepted European Americans and African Americans into their society. In mid-summer the Mississippi Band of Choctaw Indians celebrates its traditional culture during the Choctaw Indian Fair, with ball games, dancing, cooking, and entertainment. Clans Within the Choctaws were two distinct moieties: Imoklashas (elders) and Inhulalatas (youth). Each moiety had several clans or Iskas; it is estimated there were about 12 Iskas altogether. The people had a matrilineal kinship system, with children born into the clan or iska of the mother and taking their social status from it. In this system, their maternal uncles had important roles. Identity was established first by moiety and iska, so a Choctaw identified first as Imoklasha or Inhulata, and second as Choctaw. Children belonged to the Iska of their mother. The following were some major districts: Okla Hannalli (people of six towns), Okla Tannap (people from the other side), and Okla Fayala (people who are widely dispersed). By the early 1930s, the anthropologist John Swanton wrote of the Choctaw: "[T]here are only the faintest traces of groups with truly totemic designations, the animal and plant names which occur seeming not to have had a totemic connotation." Swanton wrote, "Adam Hodgson ... told ... that there were tribes or families among the Indians, somewhat similar to the Scottish clans; such as, the Panther family, the Bird family, Raccoon Family, the Wolf family." The following are possible totemic clan designations: Wind, Bear, Deer, Wolf, Panther, Holly Leaf, Bird, Raccoon, and Crawfish. Games Choctaw stickball, the oldest field sport in North America, was also known as the "little brother of war" because of its roughness and substitution for war. When disputes arose between Choctaw communities, stickball provided a civil way to settle issues. The stickball games would involve as few as twenty or as many as 300 players. The goal posts could be from a few hundred feet to a few miles apart. Goal posts were sometimes located within each
pickle bean, orca bean or yin yang bean Places Calypso Deep, the deepest point of the Mediterranean Sea Calypso, an area and a cave in Gozo, Malta Calypso, North Carolina, a town in the United States Ships French submarine Calypso HMS Calypso, the name of a number of British Royal Navy ships MS The Calypso, a cruise ship MV Calypsoland, a ferry of E. Zammit & Sons Ltd and later the Gozo Channel Line between 1969 and 1984 RV Calypso, a 1942 British minesweeper and later an oceanographic research ship operated by Jacques-Yves Cousteau UNS Calypso, fictitious starship commanded by the player in the 1994 sci-fi strategy game Alien Legacy USS Calypso, the name of various United States Navy ships Other Calypso, a character from the Twisted Metal video game series Calypso, a name given by Sunita Williams to the CST-100 Starliner spacecraft used for the Boeing Orbital Flight Test in 2019 Related spellings 53 Kalypso, an asteroid "Calipso" (song), a 2019 song by Charlie Charles and Dardust featuring Sfera Ebbasta, Mahmood and Fabri Fibra CALIPSO, NASA's "Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations" satellite Chalypso, a dance Kalypso Media, a video game developer Kalypso (software), an open source modelling software
refer to: Books "Calypso" (Ulysses episode), an episode in James Joyce's novel Ulysses Calypso (book), 2018 essay collection by David Sedaris Calypso (comics), Marvel Comics character Calypso, Camp Half-Blood Chronicles Companies and brands Calypso (camera), an underwater camera — a precursor to the Nikonos camera Calypso (electronic ticketing system), an electronic ticketing system for public transport Calypso (email client), later called Courier Calypso Park, a Canadian theme waterpark Calypso Technology, an American financial services application software company Ultracraft Calypso, a Belgian light aircraft design Entertainment Music Calypso music, a genre of Trinidadian folk music Calypso (album), by Harry Belafonte Banda Calypso, a Brazilian musical duo Songs "Calypso" (John Denver song), written as a tribute to Jacques-Yves Cousteau and his research ship Calypso "Calypso" (Luis Fonsi and Stefflon Don song), 2018 "Calypso" (Spiderbait song), a 1997 single by Australian alt-rock band Spiderbait "Calypso", a song by France Gall "Calypso", a song by Jean-Michel Jarre from Waiting for Cousteau "Calypso", a song
affinity in attempts to explain how heat is evolved during combustion reactions. The term affinity has been used figuratively since c. 1600 in discussions of structural relationships in chemistry, philology, etc., and reference to "natural attraction" is from 1616. "Chemical affinity", historically, has referred to the "force" that causes chemical reactions, as well as, more generally and earlier, the "tendency to combine" of any pair of substances. The broad definition, used generally throughout history, is that chemical affinity is that whereby substances enter into or resist decomposition. The modern term chemical affinity is a somewhat modified variation of its eighteenth-century precursor "elective affinity" or elective attractions, a term that was used by the 18th century chemistry lecturer William Cullen. Whether Cullen coined the phrase is not clear, but his usage seems to predate most others, although it rapidly became widespread across Europe, and was used in particular by the Swedish chemist Torbern Olof Bergman throughout his 1775 book. Affinity theories were used in one way or another by most chemists from around the middle of the 18th century into the 19th century to explain and organise the different combinations into which substances could enter and from which they could be retrieved. Antoine Lavoisier, in his famed 1789 Traité Élémentaire de Chimie (Elements of Chemistry), refers to Bergman's work and discusses the concept of elective affinities or attractions. According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Reactions by Gilbert N. Lewis and Merle Randall led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world. According to Prigogine, the modern thermodynamic concept of affinity was introduced and developed by Théophile de Donder. Goethe used the concept in his novel Elective Affinities (1809). Visual representations The affinity concept was very closely linked to the visual representation of substances on a table. The first-ever affinity table, which was based on displacement reactions, was published in 1718 by the French chemist Étienne François Geoffroy. Geoffroy's name is best known in connection with these tables of "affinities" (tables des rapports), which were first presented to the French Academy of Sciences in 1718 and 1720. During the 18th century many versions of the table were proposed, with leading chemists like Torbern Bergman in Sweden and Joseph Black in Scotland adapting it to accommodate new chemical discoveries. All the tables were essentially lists, prepared by collating observations on the actions of substances one upon another, showing the varying degrees of affinity exhibited by analogous bodies for different reagents. Crucially, the table was the central graphic tool used to teach chemistry to students, and its visual arrangement was often combined with other kinds of diagrams. Joseph Black, for example, used the table in combination with chiastic and circlet diagrams to visualise the core principles of chemical affinity. Affinity tables were used throughout Europe until the early 19th century, when they were displaced by affinity concepts introduced by Claude Berthollet. Modern conceptions In chemical physics and physical chemistry, chemical affinity is the electronic property by which dissimilar chemical species are capable of forming
chemical compounds. Chemical affinity can also refer to the tendency of an atom or compound to combine by chemical reaction with atoms or compounds of unlike composition. In modern terms, we relate affinity to the phenomenon whereby certain atoms or molecules have the tendency to aggregate or bond. For example, in the 1919 book Chemistry of Human Life physician George W. Carey states that, "Health depends on a proper amount of iron phosphate Fe3(PO4)2 in the blood, for the molecules of this salt have chemical affinity for oxygen and carry it to all parts of the organism." In this antiquated context, chemical affinity is sometimes found synonymous with the term "magnetic attraction". Many writings, up
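For reference, the thermodynamic definition that eventually replaced the older notion is the one associated with De Donder, mentioned above; the following formulation is standard textbook material rather than something stated explicitly in the text itself. The affinity A of a reaction is defined from the Gibbs free energy G and the extent of reaction ξ:

\[
A \;=\; -\left(\frac{\partial G}{\partial \xi}\right)_{T,p},
\qquad\text{so that}\qquad
dG \;=\; -A\,d\xi \quad (T,\,p\ \text{constant}).
\]

A reaction proceeds spontaneously in the direction for which A dξ > 0, and chemical equilibrium corresponds to A = 0; this is the sense in which the historical language of "affinity" was absorbed into the modern language of free energy.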
comets with high orbital inclinations and small perihelion distances is generally to reduce the perihelion distance to very small values. Hale–Bopp has about a 15% chance of eventually becoming a sungrazing comet through this process. Scientific results Comet Hale–Bopp was observed intensively by astronomers during its perihelion passage, and several important advances in cometary science resulted from these observations. The dust production rate of the comet was very high (up to 2.0×10⁶ kg/s), which may have made the inner coma optically thick. Based on the properties of the dust grains—high temperature, high albedo and strong 10 μm silicate emission feature—the astronomers concluded that the dust grains are smaller than observed in any other comet. Hale–Bopp showed the highest ever linear polarization detected for any comet. Such polarization is the result of solar radiation being scattered by the dust particles in the coma of the comet and depends on the nature of the grains. It further confirms that the dust grains in the coma of comet Hale–Bopp were smaller than inferred in any other comet. Sodium tail One of the most remarkable discoveries was that the comet had a third type of tail. In addition to the well-known gas and dust tails, Hale–Bopp also exhibited a faint sodium tail, only visible with powerful instruments with dedicated filters. Sodium emission had been previously observed in other comets, but had not been shown to come from a tail. Hale–Bopp's sodium tail consisted of neutral atoms (not ions), and extended to some 50 million kilometres in length. The source of the sodium appeared to be the inner coma, although not necessarily the nucleus. There are several possible mechanisms for generating a source of sodium atoms, including collisions between dust grains surrounding the nucleus, and "sputtering" of sodium from dust grains by ultraviolet light. It is not yet established which mechanism is primarily responsible for creating Hale–Bopp's sodium tail, and the narrow and diffuse components of the tail may have different origins. While the comet's dust tail roughly followed the path of the comet's orbit and the gas tail pointed almost directly away from the Sun, the sodium tail appeared to lie between the two. This implies that the sodium atoms are driven away from the comet's head by radiation pressure. Deuterium abundance The abundance of deuterium in comet Hale–Bopp in the form of heavy water was found to be about twice that of Earth's oceans. If Hale–Bopp's deuterium abundance is typical of all comets, this implies that although cometary impacts are thought to be the source of a significant amount of the water on Earth, they cannot be the only source. Deuterium was also detected in many other hydrogen compounds in the comet. The ratio of deuterium to normal hydrogen was found to vary from compound to compound, which astronomers believe suggests that cometary ices were formed in interstellar clouds, rather than in the solar nebula. Theoretical modelling of ice formation in interstellar clouds suggests that comet Hale–Bopp formed at temperatures of around 25–45 kelvins. Organics Spectroscopic observations of Hale–Bopp revealed the presence of many organic chemicals, several of which had never been detected in comets before. These complex molecules may exist within the cometary nucleus, or might be synthesised by reactions in the comet. Detection of argon Hale–Bopp was the first comet where the noble gas argon was detected. 
Noble gases are chemically inert and range from low to high volatility. Since different noble gases have different sublimation temperatures, and do not interact chemically with other elements, they can be used for probing the temperature histories of the cometary ices. Krypton has a sublimation temperature of 16–20 K and was found to be depleted more than 25 times relative to the solar abundance, while argon, with its higher sublimation temperature, was enriched relative to the solar abundance. Together these observations indicate that the interior of Hale–Bopp has always been colder than 35–40 K, but has at some point been warmer than 20 K. Unless the solar nebula was much colder and richer in argon than generally believed, this suggests that the comet formed beyond Neptune in the Kuiper belt region and then migrated outward to the Oort cloud. Rotation Comet Hale–Bopp's activity and outgassing were not spread uniformly over its nucleus, but
long as the Great Comet of 1811, the previous record holder. Accordingly, Hale–Bopp was dubbed the great comet of 1997. Discovery The comet was discovered independently on July 23, 1995, by two observers, Alan Hale and Thomas Bopp, both in the United States. Hale had spent many hundreds of hours searching for comets without success, and was tracking known comets from his driveway in New Mexico when he chanced upon Hale–Bopp just after midnight. The comet had an apparent magnitude of 10.5 and lay near the globular cluster M70 in the constellation of Sagittarius. Hale first established that there was no other deep-sky object near M70, and then consulted a directory of known comets, finding that none were known to be in this area of the sky. Once he had established that the object was moving relative to the background stars, he emailed the Central Bureau for Astronomical Telegrams, the clearing house for astronomical discoveries. Bopp did not own a telescope. He was out with friends near Stanfield, Arizona, observing star clusters and galaxies when he chanced across the comet while at the eyepiece of his friend's telescope. He realized he might have spotted something new when, like Hale, he checked his star maps to determine if any other deep-sky objects were known to be near M70, and found that there were none. He alerted the Central Bureau for Astronomical Telegrams through a Western Union telegram. Brian G. Marsden, who had run the bureau since 1968, laughed, "Nobody sends telegrams anymore. I mean, by the time that telegram got here, Alan Hale had already e-mailed us three times with updated coordinates." The following morning, it was confirmed that this was a new comet, and it was given the designation C/1995 O1. The discovery was announced in International Astronomical Union circular 6187. Early observation Hale–Bopp's orbital position was calculated as 7.2 astronomical units (AU) from the Sun, placing it between Jupiter and Saturn and by far the greatest distance from Earth at which a comet had been discovered by amateurs. Most comets at this distance are extremely faint, and show no discernible activity, but Hale–Bopp already had an observable coma. A precovery image taken at the Anglo-Australian Telescope in 1993 was found to show the then-unnoticed comet some 13 AU from the Sun, a distance at which most comets are essentially unobservable. (Halley's Comet was more than 100 times fainter at the same distance from the Sun.) Analysis indicated later that its comet nucleus was 60±20 kilometres in diameter, approximately six times the size of Halley's Comet. Its great distance and surprising activity indicated that comet Hale–Bopp might become very bright when it reached perihelion in 1997. However, comet scientists were wary – comets can be extremely unpredictable, and many have large outbursts at great distance only to diminish in brightness later. Comet Kohoutek in 1973 had been touted as a 'comet of the century' and turned out to be unspectacular. Perihelion Hale–Bopp became visible to the naked eye in May 1996, and although its rate of brightening slowed considerably during the latter half of that year, scientists were still cautiously optimistic that it would become very bright. It was too closely aligned with the Sun to be observable during December 1996, but when it reappeared in January 1997 it was already bright enough to be seen by anyone who looked for it, even from large cities with light-polluted skies. 
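As an aside (a standard relation in astronomy, not part of the article text above), the brightness comparisons quoted for the discovery can be made concrete with the magnitude scale, which is logarithmic:

\[
m_1 - m_2 \;=\; -2.5\,\log_{10}\!\frac{F_1}{F_2},
\]

so an object 100 times fainter appears exactly 5 magnitudes fainter. Given Hale–Bopp's discovery magnitude of about 10.5, Halley's Comet, described above as more than 100 times fainter at the same distance, would have appeared fainter than roughly magnitude 15.5, far below naked-eye visibility.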
The Internet was a growing phenomenon at the time, and numerous websites that tracked the comet's progress and provided daily images from around the world became extremely popular. The Internet played a large role in encouraging the unprecedented public interest in comet Hale–Bopp. As the comet approached the Sun, it continued to brighten, shining at 2nd magnitude in February, and showing a growing pair of tails, the blue gas tail pointing straight away from the Sun and the yellowish dust tail curving away along its orbit. On March 9, a solar eclipse in China, Mongolia and eastern Siberia allowed observers there to see the comet in the daytime. Hale–Bopp had its closest approach to Earth on March 22, 1997, at a distance of 1.315 AU. As it passed perihelion on April 1, 1997, the comet developed into a spectacular sight. It shone brighter than any star in the sky except Sirius, and its dust tail stretched 40–45 degrees across the sky. The comet was visible well before the sky got fully dark each night, and while many great comets are very close to the Sun as they pass perihelion, comet Hale–Bopp was visible all night to northern hemisphere observers. After perihelion After its perihelion passage, the comet moved into the southern celestial hemisphere. The comet was much less impressive to southern hemisphere observers than it had been in the northern hemisphere, but southerners were able to see the comet gradually fade from view during the second half of 1997. The last naked-eye observations were reported in December 1997, which meant that the comet had remained visible without aid for 569 days, or about 18 and a half months. The previous record had been set by the Great Comet of 1811, which was visible to the naked eye for about 9 months. The comet continued to fade as it receded, but was still tracked by astronomers. In October 2007, 10 years after the perihelion and at a distance of 25.7 AU from the Sun, the comet was still active, as indicated by the detection of a CO-driven coma. Herschel Space Observatory images taken in 2010 suggest comet Hale–Bopp is covered in a fresh frost layer. Hale–Bopp
from the public or from other people affected by it. In a political sense, conspiracy refers to a group of people united in the goal of usurping, altering or overthrowing an established political power. Depending on the circumstances, a conspiracy may also be a crime, or a civil wrong. The term generally implies wrongdoing or illegality on the part of the conspirators, as people would not need to conspire to engage in activities that were lawful and ethical, or to which no one would object. There are some coordinated activities that people engage in with secrecy that are not generally thought of as conspiracies. For example, intelligence agencies such as the American CIA and the British MI6 necessarily make plans in secret to spy on suspected enemies of their respective countries, but this kind of activity is generally not considered to be a conspiracy so long as their goal is to fulfill their official functions, and not something like improperly enriching themselves. Similarly, the coaches of competing sports teams routinely meet behind closed doors to plan game strategies and specific plays designed to defeat their opponents, but this activity is not considered a conspiracy because this is considered a legitimate part of the sport. Furthermore, a conspiracy must be engaged in knowingly. The continuation of social traditions that work to the advantage of certain groups and to the disadvantage of certain other groups, though possibly unethical, is not a conspiracy if participants in the practice are not carrying it forward for the purpose of perpetuating this advantage.
On the other hand, if the intent of carrying out a conspiracy exists, then there is a conspiracy even if the details are never agreed to aloud by the participants. CIA covert operations, for instance, are by their very nature hard to prove definitively but research into the agency's work, as well as revelations by former CIA employees, has suggested several cases where the agency tried to influence events. Between 1947 and 1989, the United States tried to change other nations' governments 72 times. During the Cold War, 26 of the U.S.' covert operations successfully brought a U.S.-backed government to power; the remaining 40 failed. A "conspiracy theory" is a belief that a conspiracy has actually been decisive in producing a political event of which the theorists strongly disapprove. Political scientist Michael
Ecology Flora Subsoil water in Cholistan is typically brackish, and unsuitable for most plant growth. Native trees, shrubs, and grasses are drought tolerant. There are 131 plant species in Cholistan from 89 genera and 24 families. 
The most common of them are Prosopis cineraria, Haloxylon salicornicum, and Cenchrus ciliaris. A man-made forest called Dingarh was developed by the Pakistan Council of Research in Water Resources (PCRWR) on more than 100 ha. Dunes were fixed and stabilized by mechanical and vegetative means, and the area is now covered with trees, orchards of zizyphus and date palms, and grassland grown with collected rainwater and saline groundwater. Fauna The wildlife of the Cholistan desert mostly consists of migratory birds, especially the Houbara bustard, which migrates to the area in winter. The species is best known during the hunting season, even though it is endangered in Pakistan (vulnerable globally), according to the IUCN Red List. Its population has decreased from 4,746 in 2001 to just
Few livelihood opportunities aside from livestock farming are available in the region. Agricultural farming away from the irrigated regions of Lower Cholistan is unavailable due to the lack of a steady water supply. Camels in particular are prized in Cholistan for their meat and milk, use as transportation, and for entertainment such as racing and camel dancing. Two types of camels are found in Cholistan. The Marrecha, or Mahra, is used for transportation or racing and dancing. The Berella is used for milk production, and can produce 10–15 liters of milk per day per animal. Camels are of major importance in satisfying the area's needs for cottage industry as well as for milk, meat, and fat. Because of the nomadic way of life, the main wealth of the people is their cattle, which are bred for sale, milked, or shorn for their wool. Moreover, isolated as they were, the people had to depend upon themselves for all their needs, such as food, clothing, and items of daily use. All their crafts thus initially stemmed from necessity, but later they began exporting their goods to other places as well. The estimated number of livestock in the desert areas is 1.6 million. Cotton and wool products Cholistan produces a superior type of carpet wool compared to that produced in other parts of Pakistan. From this wool they knit beautiful carpets, rugs, and other woolen items. This includes blankets, which are a local necessity, as the desert is not always dust and heat; winter nights here are very cold too, usually below freezing. Khes and pattu are also manufactured with wool or cotton. Khes is a form of blanket with a field of black and white, while pattu has a white ground base. Cholistan now sells the wool, for it brings maximum profit. Textiles Cotton textiles have always been a hallmark of the craft of the Indus Valley civilization. Various kinds of khaddar cloth are made for local consumption, and fine khaddar bedclothes and coarse lungies are woven here. A beautiful cloth called Sufi is also woven of silk and cotton, or with a cotton warp and silk weft. Gargas are made in numerous patterns and colors, with complicated embroidery, mirror work, and patchwork. Ajrak is another specialty of Cholistan. It is a special and delicate printing technique, applied on both sides of the cloth in indigo blue and red patterns covering the base cloth. Cotton turbans and shawls are also made here. Chunri is another form of dopatta, with innumerable colors and patterns such as dots, squares, and circles. People As per the 1998 Census of Pakistan, the population totaled 128,019, with a 2015 estimate of 229,071; about 70% live in Lesser Cholistan. The average household size is 6.65. Local crafts As mentioned above, the Indus Valley has always been occupied by wandering nomadic tribes, who are fond of isolated areas, as such areas allow them to lead lives free of foreign intrusion, enabling them to establish their own individual and unique cultures. Until the era of Mughal rule, Cholistan had also been isolated from outside influence. During the rule of the Mughal Emperor Akbar, it became a proper productive unit. The entire area was ruled by a host of kings who securely guarded their frontiers. The rulers were great patrons of art, and the various crafts underwent a simultaneous and parallel development, influencing each other. Masons, stone carvers, artisans, artists, and designers started rebuilding the old cities and new sites, and with that flourished new courts, paintings, weaving, and pottery. 
The fields of architecture, sculpture, terracotta, and pottery developed greatly in this phase. Camel products Camels are highly valued by the desert dwellers. They are not only useful for transportation and load carrying; their skin and wool are also valuable. Camel wool is spun and woven into beautiful woolen blankets known as falsies and into stylish and durable rugs. Camel leather is also used to make caps, goblets, and expensive lampshades. Leather work Leather work is another important local cottage industry owing to the large number of livestock here. Besides the products mentioned above, the khusa (shoe) is a specialty of this area. Cholistani khusas are famous for their quality of workmanship, variety, and richness of design, especially when stitched and embroidered with golden or brightly colored threads. Jewellery The people of Cholistan are fond of jewellery, especially gold
East Anglia. The following year, having obtained tribute from the East Anglian King Edmund, the Great Army moved north, seizing York, chief city of the Northumbrians. The Great Army defeated an attack on York by the two rivals for the Northumbrian throne, Osberht and Ælla, who had put aside their differences in the face of a common enemy. Both would-be kings were killed in the failed assault, probably on 21 March 867. Following this, the leaders of the Great Army are said to have installed one Ecgberht as king of the Northumbrians. Their next target was Mercia where King Burgred, aided by his brother-in-law King Æthelred of Wessex, drove them off. While the kingdoms of East Anglia, Mercia and Northumbria were under attack, other Viking armies were active in the far north. Amlaíb and Auisle (Ásl or Auðgísl), said to be his brother, brought an army to Fortriu and obtained tribute and hostages in 866. Historians disagree as to whether the army returned to Ireland in 866, 867 or even in 869. Late sources of uncertain reliability state that Auisle was killed by Amlaíb in 867 in a dispute over Amlaíb's wife, the daughter of Cináed. It is unclear whether, if accurate, this woman should be identified as a daughter of Cináed mac Ailpín, and thus Causantín's sister, or as a daughter of Cináed mac Conaing, king of Brega. While Amlaíb and Auisle were in north Britain, the Annals of Ulster record that Áed Findliath, High King of Ireland, took advantage of their absence to destroy the longphorts along the northern coasts of Ireland. Áed Findliath was married to Causantín's sister Máel Muire. She later married Áed's successor Flann Sinna. Her death is recorded in 913. In 870, Amlaíb and Ívarr attacked Dumbarton Rock, where the River Leven meets the River Clyde, the chief place of the kingdom of Alt Clut, south-western neighbour of Pictland. The siege lasted four months before the fortress fell to the Vikings who returned to Ireland with many prisoners, "Angles, Britons and Picts", in 871. Archaeological evidence suggests that Dumbarton Rock was largely abandoned and that Govan replaced it as the chief place of the kingdom of Strathclyde, as Alt Clut was later known. King Artgal of Alt Clut did not long survive these events, being killed "at the instigation" of Causantín son of Cináed two
that he had not received the amount of territory that was his due as the eldest son. Annoyed that Constans had received Thrace and Macedonia after the death of Dalmatius, Constantine demanded that Constans hand over the African provinces, to which he agreed in order to maintain a fragile peace. Soon, however, they began quarreling over which parts of the African provinces belonged to Carthage, and thus Constantine, and which belonged to Italy, and therefore Constans. Further complications arose when Constans came of age and Constantine, who had grown accustomed to dominating his younger brother, would not relinquish the guardianship. In 340 Constantine marched into Italy at the head of his troops to claim territory from Constans. Constans, at that time in Dacia, detached and sent a select and disciplined body of his Illyrian troops, stating that he would follow them in person with the remainder of his forces. Constantine was engaged in military operations and was killed by Constans's generals in an ambush outside Aquileia. Constans then took control of his deceased brother's realm.
to his death in a failed invasion of Italy in 340. Career The eldest son of Constantine the Great and Fausta, Constantine II was born in Arles in February 316 and raised as a Christian. Caesar On 1 March 317, he was made Caesar. In 323, at the age of seven, he took part in his father's campaign against the Sarmatians. At age ten, he became commander of Gaul, following the death of his half-brother Crispus. An inscription dating to 330 records the title of Alamannicus, so it is probable that his generals won a victory over the Alamanni. His military career continued when Constantine I made him field commander during the 332 campaign against the Goths. Augustus Following the death of his father in 337, Constantine II initially became emperor jointly with his brothers Constantius II and Constans, with the empire divided between them and their cousins, the caesars Dalmatius and Hannibalianus. This arrangement barely survived Constantine I's death, as his sons arranged the slaughter of most of the rest of the family by the army. As a result, the three brothers gathered together in Pannonia and there, on 9 September 337, divided the Roman world among themselves. Constantine, proclaimed Augustus by the troops, received Gaul, Britannia and Hispania. He was soon involved in the struggle between factions rupturing the unity of the Christian Church.
858) was the first of the family recorded as a king, but as king of the Picts. This change of title, from king of the Picts to king of Alba, is part of a broader transformation of Pictland, and the origins of the Kingdom of Alba are traced to Constantine's lifetime. His reign, like those of his predecessors, was dominated by the actions of Viking rulers in the British Isles, particularly the Uí Ímair ("the grandsons of Ímar", or Ivar the Boneless). During Constantine's reign the rulers of the southern kingdoms of Wessex and Mercia, later the Kingdom of England, extended their authority northwards into the disputed kingdoms of Northumbria. At first, the southern rulers allied with him against the Vikings, but in 934 Æthelstan, unprovoked, invaded Scotland both by sea and land with a huge retinue that included four Welsh kings. He ravaged southern Alba, but there is no record of any battles, and he had withdrawn by September. Three years later, in 937, probably in retaliation for the invasion of Alba, King Constantine allied with Olaf Guthfrithson, King of Dublin, and Owain, King of Strathclyde, but they were defeated at the Battle of Brunanburh. In 943 Constantine abdicated the throne and retired to the Céli Dé (Culdee) monastery of St Andrews, where he died in 952. He was succeeded by his predecessor's son Malcolm I (Máel Coluim mac Domnaill). Constantine's reign of 43 years, exceeded in Scotland only by that of King William the Lion before the Union of the Crowns in 1603, is believed to have played a defining part in the gaelicisation of Pictland, in which his patronage of the Irish Céli Dé monastic reformers was a significant factor. During his reign the words "Scots" and "Scotland" are first used to mean part of what is now Scotland. The earliest evidence for the ecclesiastical and administrative institutions which would last until the Davidian Revolution also appears at this time. Sources Compared to neighbouring Ireland and Anglo-Saxon England, few records of 9th- and 10th-century events in Scotland survive. The main local source from the period is the Chronicle of the Kings of Alba, a list of kings from Kenneth MacAlpin (died 858) to Kenneth II (Cináed mac Maíl Coluim, died 995). The list survives in the Poppleton Manuscript, a 13th-century compilation. Originally simply a list of kings with reign lengths, it acquired the other details contained in the Poppleton Manuscript version in the 10th and 12th centuries. In addition to this, later king lists survive. The earliest genealogical records of the descendants of Kenneth MacAlpin may date from the end of the 10th century, but their value lies more in their context, and the information they provide about the interests of those for whom they were compiled, than in the unreliable claims they contain. For narrative history the principal sources are the Anglo-Saxon Chronicle and the Irish annals. The evidence from charters created in the Kingdom of England provides occasional insight into events in Scotland. While Scandinavian sagas describe events in 10th-century Britain, their value as sources of historical narrative, rather than documents of social history, is disputed. Mainland European sources rarely concern themselves with affairs in any part of the British Isles, and even less commonly with events in Scotland, but the life of Saint Cathróe of Metz, a work of hagiography written in Germany at the end of the 10th century, provides plausible details of the saint's early life in north Britain. 
While the sources for north-eastern Britain, the lands of the kingdom of Northumbria and the former Pictland, are limited and late, those for the areas on the Irish Sea and Atlantic coasts—the modern regions of north-west England and all of northern and western Scotland—are non-existent, and archaeology and toponymy are of primary importance. Pictland from Constantín mac Fergusa to Constantine I The dominant kingdom in eastern Scotland before the Viking Age was the northern Pictish kingdom of Fortriu on the shores of the Moray Firth. By the 9th century, the Gaels of Dál Riata (Dalriada) were subject to the kings of Fortriu of the family of Constantín mac Fergusa (Constantine son of Fergus). Constantín's family dominated Fortriu after 789 and perhaps, if Constantín was a kinsman of Óengus I of the Picts (Óengus son of Fergus), from around 730. The dominance of Fortriu came to an end in 839 with a defeat by Viking armies reported by the Annals of Ulster in which King Uen of Fortriu and his brother Bran, Constantín's nephews, together with the king of Dál Riata, Áed mac Boanta, "and others almost innumerable" were killed. These deaths led to a period of instability lasting a decade as several families attempted to establish their dominance in Pictland. By around 848 Kenneth MacAlpin had emerged as the winner. Later national myth made Kenneth MacAlpin the creator of the kingdom of Scotland, the founding of which was dated from 843, the year in which he was said to have destroyed the Picts and inaugurated a new era. The historical record for 9th-century Scotland is meagre, but the Irish annals and the 10th-century Chronicle of the Kings of Alba agree that Kenneth was a Pictish king, and call him "king of the Picts" at his death. The same style is used of Kenneth's brother Donald I (Domnall mac Ailpín) and sons Constantine I (Constantín mac Cináeda) and Áed (Áed mac Cináeda). The kingdom ruled by Kenneth's descendants—older works used the name House of Alpin to describe them but descent from Kenneth was the defining factor, Irish sources referring to Clann Cináeda meic Ailpín ("the Clan of Kenneth MacAlpin")—lay to the south of the previously dominant kingdom of Fortriu, centred in the lands around the River Tay. The extent of Kenneth's nameless kingdom is uncertain, but it certainly extended from the Firth of Forth in the south to the Mounth in the north. Whether it extended beyond the mountainous spine of north Britain—Druim Alban—is unclear. The core of the kingdom was similar to the old counties of Mearns, Forfar, Perth, Fife, and Kinross. Among the chief ecclesiastical centres named in the records are Dunkeld, probably seat of the bishop of the kingdom, and Cell Rígmonaid (modern St Andrews). Kenneth's son Constantine died in 876, probably killed fighting against a Viking army that had come north from Northumbria in 874. According to the king lists, he was counted the 70th and last king of the Picts in later times. Britain and Ireland at the end of the 9th century In 899 Alfred the Great, king of Wessex, died leaving his son Edward the Elder as ruler of England south of the River Thames and his daughter Æthelflæd and son-in-law Æthelred ruling the western, English part of Mercia. The situation in the Danish kingdoms of eastern England is less clear. King Eohric was probably ruling in East Anglia, but no dates can reliably be assigned to the successors of Guthfrith of York in Northumbria. 
It is known that Guthfrith was succeeded by Sigurd and Cnut, although whether these men ruled jointly or one after the other is uncertain. Northumbria may have been divided by this time between the Viking kings in York and the local rulers, perhaps represented by Eadulf, based at Bamburgh who controlled the lands from the River Tyne or River Tees to the Forth in the north. In Ireland, Flann Sinna, married to Constantine's aunt Máel Muire, was dominant. The years around 900 represented a period of weakness among the Vikings and Norse-Gaels of Dublin. They are reported to have been divided between two rival leaders. In 894 one group left Dublin, perhaps settling on the Irish Sea coast of Britain between the River Mersey and the Firth of Clyde. The remaining Dubliners were expelled in 902 by Flann Sinna's son-in-law Cerball mac Muirecáin, and soon afterwards appeared in western and northern Britain. To the south-west of Constantine's lands lay the kingdom of Strathclyde. This extended north into the Lennox, east to the River Forth, and south into the Southern Uplands. In 900 it was probably ruled by King Dyfnwal. The situation of the Gaelic kingdoms of Dál Riata in western Scotland is uncertain. No kings are known by name after Áed mac Boanta. The Frankish Annales Bertiniani may record the conquest of the Inner Hebrides, the seaward part of Dál Riata, by Northmen in 849. In addition to these, the arrival of new groups of Vikings from northern and western Europe was still commonplace. Whether there were Viking or Norse-Gael kingdoms in the Western Isles or the Northern Isles at this time is debated. Early life Áed, Constantine's father, succeeded Constantine's uncle and namesake Constantine I in 876 but was killed in 878. Áed's short reign is glossed as being of no importance by most king lists. Although the date of his birth is nowhere recorded, Constantine II cannot have been born any later than the year after his father's death, i.e., 879. His name may suggest that he was born a few years earlier, during the reign of his uncle Constantine I. After Áed's death, there is a two-decade gap until the death of Donald II (Domnall mac Constantín) in 900 during which nothing is reported in the Irish annals. The entry for the reign between Áed and Donald II is corrupt in the Chronicle of the Kings of Alba, and in this case the Chronicle is at variance with every other king list. According to the Chronicle, Áed was followed by Eochaid, a grandson of Kenneth MacAlpin, who is somehow connected with Giric, but all other lists say that Giric ruled after Áed and make great claims for him. Giric is not known to have been a kinsman of Kenneth's, although it has been suggested that he was related to him by marriage. The major changes in Pictland which began at about this time have been associated by Alex Woolf and Archie Duncan with Giric's reign. Woolf suggests that Constantine and his younger brother Donald may have passed Giric's reign in exile in Ireland where their aunt Máel Muire was wife of two successive High Kings of Ireland, Áed Findliath and Flann Sinna. Giric died in 889. If he had been in exile, Constantine may have returned
to Pictland where his cousin Donald II became king. Donald's reputation is suggested by the epithet dasachtach, a word used of violent madmen and mad bulls, attached to him in the 11th-century writings of Flann Mainistrech, echoed by his description in the Prophecy of Berchan as "the rough one who will think relics and psalms of little worth". Wars with the Viking kings in Britain and Ireland continued during Donald's reign and he was probably killed fighting yet more Vikings at Dunnottar in the Mearns in 900. Constantine succeeded him as king. Vikings and bishops The earliest event recorded in the Chronicle of the Kings of Alba in Constantine's reign is an attack by Vikings and the plundering of Dunkeld "and all Albania" in his third year. This is the first use of the word Albania, the Latin form of the Old Irish Alba, in the Chronicle which until then describes the lands ruled by the descendants of Cináed as Pictavia. These Norsemen could have been some of those who were driven out of Dublin in 902, or were the same group who had defeated Domnall in 900. 
The Chronicle states that the Northmen were killed in Srath Erenn, which is confirmed by the Annals of Ulster, which record the death of Ímar grandson of Ímar and many others at the hands of the men of Fortriu in 904. This Ímar was the first of the Uí Ímair, the grandsons of Ímar, to be reported; three more grandsons of Ímar appear later in Constantín's reign. The Fragmentary Annals of Ireland contain an account of the battle, and this attributes the defeat of the Norsemen to the intercession of Saint Columba following fasting and prayer. An entry in the Chronicon Scotorum under the year 904 may possibly contain a corrupted reference to this battle. The next event reported by the Chronicle of the Kings of Alba is dated to 906. This records that: King Constantine and Bishop Cellach met at the Hill of Belief near the royal city of Scone and pledged themselves that the laws and disciplines of the faith, and the laws of churches and gospels, should be kept pariter cum Scottis. The meaning of this entry, and its significance, have been the subject of debate. The phrase pariter cum Scottis in the Latin text of the Chronicle has been translated in several ways. William Forbes Skene and Alan Orr Anderson proposed that it should be read as "in conformity with the customs of the Gaels", relating it to the claims in the king lists that Giric liberated the church from secular oppression and adopted Irish customs. It has been read as "together with the Gaels", suggesting either public participation or the presence of Gaels from the western coasts as well as the people of the east coast. Finally, it is suggested that it was the ceremony that followed "the custom of the Gaels" and not the agreements. The idea that this gathering agreed to uphold Irish laws governing the church has suggested that it was an important step in the gaelicisation of the lands east of Druim Alban. Others have proposed that the ceremony in some way endorsed Constantine's kingship, prefiguring later royal inaugurations at Scone. Alternatively, if Bishop Cellach was appointed by Giric, it may be that the gathering was intended to heal a rift between king and church. Return of the Uí Ímair Following the events at Scone, there is little of substance reported for a decade. A story in the Fragmentary Annals of Ireland, perhaps referring to events sometime after 911, claims that Queen Æthelflæd, who ruled in Mercia, allied with the Irish and northern rulers against the Norsemen on the Irish sea coasts of Northumbria. The Annals of Ulster record the defeat of an Irish fleet from the kingdom of Ulaid by Vikings "on the coast of England" at about this time. In this period the Chronicle of the Kings of Alba reports the death of Cormac mac Cuilennáin, king of Munster, in the eighth year of Constantine's reign. This is followed by an undated entry which was formerly read as "In his time Domnall [i.e. Dyfnwal], king of the [Strathclyde] Britons died, and Domnall son of Áed was elected". This was thought to record the election of a brother of Constantine named Domnall to the kingship of the Britons of Strathclyde and was seen as early evidence of the domination of Strathclyde by the kings of Alba. The entry in question is now read as "...Dyfnwal... and Domnall son of Áed king of Ailech died", this Domnall being a son of Áed Findliath who died in 915. Finally, the deaths of Flann Sinna and Niall Glúndub are recorded. There are more reports of Viking fleets in the Irish Sea from 914 onwards. 
By 916 fleets under Sihtric Cáech and Ragnall, said to be grandsons of Ímar (that is, they belonged to the same Uí Ímair kindred as the Ímar who was killed in 904), were very active in Ireland. Sihtric inflicted a heavy defeat on the armies of Leinster and retook Dublin in 917. The following year Ragnall appears to have returned across the Irish Sea intent on establishing himself as king at York. The only precisely dated event in the summer of 918 is the death of Queen Æthelflæd on 12 June 918 at Tamworth, Staffordshire. Æthelflæd had been negotiating with the Northumbrians to obtain their submission, but her death put an end to this and her successor, her brother Edward the Elder, was occupied with securing control of Mercia. The northern part of Northumbria, and perhaps the whole kingdom, had probably been ruled by Ealdred son of Eadulf since 913. Faced with Ragnall's invasion, Ealdred came north seeking assistance from Constantine. The two advanced south to face Ragnall, and this led to a battle somewhere on the banks of the River Tyne, probably at Corbridge where Dere Street crosses the river. The Battle of Corbridge appears to have been indecisive; the Chronicle of the Kings of Alba is alone in giving Constantine the victory. The report of the battle in the Annals of Ulster says that none of the kings or mormaers among the men of Alba were killed. This is the first surviving use of the word mormaer; other than the knowledge that Constantine's kingdom had its own bishop or bishops and royal villas, this is the only hint at the institutions of the kingdom. After Corbridge, Ragnall enjoyed only a short respite. In the south, Alfred's son Edward had rapidly secured control of Mercia and had a burh constructed at Bakewell in the Peak District from which his armies could easily strike north. An army from Dublin led by Ragnall's kinsman Sihtric struck at north-western Mercia in 919, but in 920 or 921 Edward met with Ragnall and other kings. The Anglo-Saxon Chronicle states that these kings "chose Edward as father and lord". Among the other kings present were
of the Constantinian dynasty. His reputation flourished during the lifetime of his children and for centuries after his reign. The medieval church held him up as a paragon of virtue, while secular rulers invoked him as a prototype, a point of reference and the symbol of imperial legitimacy and identity. Beginning with the Renaissance, there were more critical appraisals of his reign, due to the rediscovery of anti-Constantinian sources. Trends in modern and recent scholarship have attempted to balance the extremes of previous scholarship. Sources Constantine was a ruler of major importance, and has always been a controversial figure. The fluctuations in his reputation reflect the nature of the ancient sources for his reign. These are abundant and detailed, but they have been strongly influenced by the official propaganda of the period and are often one-sided; no contemporaneous histories or biographies dealing with his life and rule have survived. The nearest replacement is Eusebius's Vita Constantini—a mixture of eulogy and hagiography written between AD 335 and circa AD 339—that extols Constantine's moral and religious virtues. The Vita creates a contentiously positive image of Constantine, and modern historians have frequently challenged its reliability. The fullest secular life of Constantine is the anonymous Origo Constantini, a work of uncertain date, which focuses on military and political events to the neglect of cultural and religious matters. Lactantius' De mortibus persecutorum, a political Christian pamphlet on the reigns of Diocletian and the Tetrarchy, provides valuable but tendentious detail on Constantine's predecessors and early life. The ecclesiastical histories of Socrates, Sozomen, and Theodoret describe the ecclesiastic disputes of Constantine's later reign. Written during the reign of Theodosius II (AD 408–450), a century after Constantine's reign, these ecclesiastical historians obscure the events and theologies of the Constantinian period through misdirection, misrepresentation, and deliberate obscurity. The contemporary writings of the orthodox Christian Athanasius, and the ecclesiastical history of the Arian Philostorgius also survive, though their biases are no less firm. The epitomes of Aurelius Victor (De Caesaribus), Eutropius (Breviarium), Festus (Breviarium), and the anonymous author of the Epitome de Caesaribus offer compressed secular political and military histories of the period. Although not Christian, the epitomes paint a favourable image of Constantine but omit reference to Constantine's religious policies. The Panegyrici Latini, a collection of panegyrics from the late third and early fourth centuries, provide valuable information on the politics and ideology of the tetrarchic period and the early life of Constantine. Contemporary architecture, such as the Arch of Constantine in Rome and palaces in Gamzigrad and Córdoba, epigraphic remains, and the coinage of the era complement the literary sources. Early life Flavius Valerius Constantinus, as he was originally named, was born in the city of Naissus (today Niš, Serbia), part of the Dardania province of Moesia on 27 February, probably AD 272. His father was Flavius Constantius, who was born in Dacia Ripensis, and a native of the province of Moesia. Constantine probably spent little time with his father who was an officer in the Roman army, part of the Emperor Aurelian's imperial bodyguard. 
Being described as a tolerant and politically skilled man, Constantius advanced through the ranks, earning the governorship of Dalmatia from Emperor Diocletian, another of Aurelian's companions from Illyricum, in 284 or 285. Constantine's mother was Helena, a Greek woman of low social standing from Helenopolis of Bithynia. It is uncertain whether she was legally married to Constantius or merely his concubine. His main language was Latin, and during his public speeches he needed Greek translators. In July AD 285, Diocletian declared Maximian, another colleague from Illyricum, his co-emperor. Each emperor would have his own court, his own military and administrative faculties, and each would rule with a separate praetorian prefect as chief lieutenant. Maximian ruled in the West, from his capitals at Mediolanum (Milan, Italy) or Augusta Treverorum (Trier, Germany), while Diocletian ruled in the East, from Nicomedia (İzmit, Turkey). The division was merely pragmatic: the empire was called "indivisible" in official panegyric, and both emperors could move freely throughout the empire. In 288, Maximian appointed Constantius to serve as his praetorian prefect in Gaul. Constantius left Helena to marry Maximian's stepdaughter Theodora in 288 or 289. Diocletian divided the Empire again in AD 293, appointing two caesars (junior emperors) to rule over further subdivisions of East and West. Each would be subordinate to their respective augustus (senior emperor) but would act with supreme authority in his assigned lands. This system would later be called the Tetrarchy. Diocletian's first appointee for the office of Caesar was Constantius; his second was Galerius, a native of Felix Romuliana. According to Lactantius, Galerius was a brutal, animalistic man. Although he shared the paganism of Rome's aristocracy, he seemed to them an alien figure, a semi-barbarian. On 1 March, Constantius was promoted to the office of caesar, and dispatched to Gaul to fight the rebels Carausius and Allectus. In spite of meritocratic overtones, the Tetrarchy retained vestiges of hereditary privilege, and Constantine became the prime candidate for future appointment as caesar as soon as his father took the position. Constantine went to the court of Diocletian, where he lived as his father's heir presumptive. In the East Constantine received a formal education at Diocletian's court, where he learned Latin literature, Greek, and philosophy. The cultural environment in Nicomedia was open, fluid, and socially mobile; in it, Constantine could mix with intellectuals both pagan and Christian. He may have attended the lectures of Lactantius, a Christian scholar of Latin in the city. Because Diocletian did not completely trust Constantius—none of the Tetrarchs fully trusted their colleagues—Constantine was held as something of a hostage, a tool to ensure Constantius' best behavior. Constantine was nonetheless a prominent member of the court: he fought for Diocletian and Galerius in Asia and served in a variety of tribunates; he campaigned against barbarians on the Danube in AD 296 and fought the Persians under Diocletian in Syria (AD 297), as well as under Galerius in Mesopotamia (AD 298–299). By late AD 305, he had become a tribune of the first order, a tribunus ordinis primi. Constantine had returned to Nicomedia from the eastern front by the spring of AD 303, in time to witness the beginnings of Diocletian's "Great Persecution", the most severe persecution of Christians in Roman history. 
In late 302, Diocletian and Galerius sent a messenger to the oracle of Apollo at Didyma with an inquiry about Christians. Constantine could recall his presence at the palace when the messenger returned, when Diocletian accepted his court's demands for universal persecution. On 23 February AD 303, Diocletian ordered the destruction of Nicomedia's new church, condemned its scriptures to the flames, and had its treasures seized. In the months that followed, churches and scriptures were destroyed, Christians were deprived of official ranks, and priests were imprisoned. It is unlikely that Constantine played any role in the persecution. In his later writings, he would attempt to present himself as an opponent of Diocletian's "sanguinary edicts" against the "Worshippers of God", but nothing indicates that he opposed it effectively at the time. Although no contemporary Christian challenged Constantine for his inaction during the persecutions, it remained a political liability throughout his life. On 1 May AD 305, Diocletian, as a result of a debilitating sickness taken in the winter of AD 304–305, announced his resignation. In a parallel ceremony in Milan, Maximian did the same. Lactantius states that Galerius manipulated the weakened Diocletian into resigning, and forced him to accept Galerius' allies in the imperial succession. According to Lactantius, the crowd listening to Diocletian's resignation speech believed, until the last moment, that Diocletian would choose Constantine and Maxentius (Maximian's son) as his successors. It was not to be: Constantius and Galerius were promoted to augusti, while Severus and Maximinus, Galerius' nephew, were appointed their caesars respectively. Constantine and Maxentius were ignored. Some of the ancient sources detail plots that Galerius made on Constantine's life in the months following Diocletian's abdication. They assert that Galerius assigned Constantine to lead an advance unit in a cavalry charge through a swamp on the middle Danube, made him enter into single combat with a lion, and attempted to kill him in hunts and wars. Constantine always emerged victorious: the lion emerged from the contest in a poorer condition than Constantine; Constantine returned to Nicomedia from the Danube with a Sarmatian captive to drop at Galerius' feet. It is uncertain how much these tales can be trusted. In the West Constantine recognized the implicit danger in remaining at Galerius' court, where he was held as a virtual hostage. His career depended on being rescued by his father in the west. Constantius was quick to intervene. In the late spring or early summer of AD 305, Constantius requested leave for his son to help him campaign in Britain. After a long evening of drinking, Galerius granted the request. Constantine's later propaganda describes how he fled the court in the night, before Galerius could change his mind. He rode from post-house to post-house at high speed, hamstringing every horse in his wake. By the time Galerius awoke the following morning, Constantine had fled too far to be caught. Constantine joined his father in Gaul, at Bononia (Boulogne) before the summer of AD 305. From Bononia, they crossed the Channel to Britain and made their way to Eboracum (York), capital of the province of Britannia Secunda and home to a large military base. Constantine was able to spend a year in northern Britain at his father's side, campaigning against the Picts beyond Hadrian's Wall in the summer and autumn. 
Constantius' campaign, like that of Septimius Severus before it, probably advanced far into the north without achieving great success. Constantius had become severely sick over the course of his reign, and died on 25 July 306 in Eboracum. Before dying, he declared his support for raising Constantine to the rank of full augustus. The Alamannic king Chrocus, a barbarian taken into service under Constantius, then proclaimed Constantine as augustus. The troops loyal to Constantius' memory followed him in acclamation. Gaul and Britain quickly accepted his rule; Hispania, which had been in his father's domain for less than a year, rejected it. Constantine sent Galerius an official notice of Constantius' death and his own acclamation. Along with the notice, he included a portrait of himself in the robes of an augustus. The portrait was wreathed in bay. He requested recognition as heir to his father's throne, and passed off responsibility for his unlawful ascension on his army, claiming they had "forced it upon him". Galerius was put into a fury by the message; he almost set the portrait and messenger on fire. His advisers calmed him, and argued that outright denial of Constantine's claims would mean certain war. Galerius was compelled to compromise: he granted Constantine the title "caesar" rather than "augustus" (the latter office went to Severus instead). Wishing to make it clear that he alone gave Constantine legitimacy, Galerius personally sent Constantine the emperor's traditional purple robes. Constantine accepted the decision, knowing that it would remove doubts as to his legitimacy. Early rule Constantine's share of the Empire consisted of Britain, Gaul, and Spain, and he commanded one of the largest Roman armies which was stationed along the important Rhine frontier. He remained in Britain after his promotion to emperor, driving back the tribes of the Picts and securing his control in the northwestern dioceses. He completed the reconstruction of military bases begun under his father's rule, and he ordered the repair of the region's roadways. He then left for Augusta Treverorum (Trier) in Gaul, the Tetrarchic capital of the northwestern Roman Empire. The Franks learned of Constantine's acclamation and invaded Gaul across the lower Rhine over the winter of AD 306–307. He drove them back beyond the Rhine and captured Kings Ascaric and Merogais; the kings and their soldiers were fed to the beasts of Trier's amphitheatre in the adventus (arrival) celebrations which followed. Constantine began a major expansion of Trier. He strengthened the circuit wall around the city with military towers and fortified gates, and he began building a palace complex in the northeastern part of the city. To the south of his palace, he ordered the construction of a large formal audience hall and a massive imperial bathhouse. He sponsored many building projects throughout Gaul during his tenure as emperor of the West, especially in Augustodunum (Autun) and Arelate (Arles). According to Lactantius, Constantine followed a tolerant policy towards Christianity, although he was not yet a Christian himself. He probably judged it a more sensible policy than open persecution and a way to distinguish himself from the "great persecutor" Galerius. He decreed a formal end to persecution and returned to Christians all that they had lost during them. 
Constantine was largely untried and had a hint of illegitimacy about him; he relied on his father's reputation in his early propaganda, which gave as much coverage to his father's deeds as to his. His military skill and building projects, however, soon gave the panegyrist the opportunity to comment favourably on the similarities between father and son, and Eusebius remarked that Constantine was a "renewal, as it were, in his own person, of his father's life and reign". Constantinian coinage, sculpture, and oratory also show a new tendency for disdain towards the "barbarians" beyond the frontiers. He minted a coin issue after his victory over the Alemanni which depicts weeping and begging Alemannic tribesmen, "the Alemanni conquered" beneath the phrase "Romans' rejoicing". There was little sympathy for these enemies; as his panegyrist declared, "It is a stupid clemency that spares the conquered foe." Maxentius' rebellion Following Galerius' recognition of Constantine as caesar, Constantine's portrait was brought to Rome, as was customary. Maxentius mocked the portrait's subject as the son of a harlot and lamented his own powerlessness. Maxentius, envious of Constantine's authority, seized the title of emperor on 28 October AD 306. Galerius refused to recognize him but failed to unseat him. Galerius sent Severus against Maxentius, but during the campaign, Severus' armies, previously under command of Maxentius' father Maximian, defected, and Severus was seized and imprisoned. Maximian, brought out of retirement by his son's rebellion, left for Gaul to confer with Constantine in late AD 307. He offered to marry his daughter Fausta to Constantine and elevate him to augustan rank. In return, Constantine would reaffirm the old family alliance between Maximian and Constantius and offer support to Maxentius' cause in Italy. Constantine accepted and married Fausta in Trier in late summer AD 307. Constantine now gave Maxentius his meagre support, offering Maxentius political recognition. Constantine remained aloof from the Italian conflict, however. Over the spring and summer of AD 307, he had left Gaul for Britain to avoid any involvement in the Italian turmoil; now, instead of giving Maxentius military aid, he sent his troops against Germanic tribes along the Rhine. In AD 308, he raided the territory of the Bructeri, and made a bridge across the Rhine at Colonia Agrippinensium (Cologne). In AD 310, he marched to the northern Rhine and fought the Franks. When not campaigning, he toured his lands advertising his benevolence and supporting the economy and the arts. His refusal to participate in the war increased his popularity among his people and strengthened his power base in the West. Maximian returned to Rome in the winter of AD 307–308, but soon fell out with his son. In early AD 308, after a failed attempt to usurp Maxentius' title, Maximian returned to Constantine's court. On 11 November AD 308, Galerius called a general council at the military city of Carnuntum (Petronell-Carnuntum, Austria) to resolve the instability in the western provinces. In attendance were Diocletian, briefly returned from retirement, Galerius, and Maximian. Maximian was forced to abdicate again and Constantine was again demoted to caesar. Licinius, one of Galerius' old military companions, was appointed augustus in the western regions. 
The new system did not last long: Constantine refused to accept the demotion, and continued to style himself as augustus on his coinage, even as other members of the Tetrarchy referred to him as a caesar on theirs. Maximinus was frustrated that he had been passed over for promotion while the newcomer Licinius had been raised to the office of augustus and demanded that Galerius promote him. Galerius offered to call both Maximinus and Constantine "sons of the augusti", but neither accepted the new title. By the spring of AD 310, Galerius was referring to both men as augusti. Maximian's rebellion In AD 310, a dispossessed Maximian rebelled against Constantine while Constantine was away campaigning against the Franks. Maximian had been sent south to Arles with a contingent of Constantine's army, in preparation for any attacks by Maxentius in southern Gaul. He announced that Constantine was dead, and took up the imperial purple. In spite of a large donative pledge to any who would support him as emperor, most of Constantine's army remained loyal to their emperor, and Maximian was soon compelled to leave. Constantine soon heard of the rebellion, abandoned his campaign against the Franks, and marched his army up the Rhine. At Cabillunum (Chalon-sur-Saône), he moved his troops onto waiting boats to row down the slow waters of the Saône to the quicker waters of the Rhone. He disembarked at Lugdunum (Lyon). Maximian fled to Massilia (Marseille), a town better able to withstand a long siege than Arles. It made little difference, however, as loyal citizens opened the rear gates to Constantine. Maximian was captured and reproved for his crimes. Constantine granted some clemency, but strongly encouraged his suicide. In July AD 310, Maximian hanged himself. In spite of the earlier rupture in their relations, Maxentius was eager to present himself as his father's devoted son after his death. He began minting coins with his father's deified image, proclaiming his desire to avenge Maximian's death. Constantine initially presented the suicide as an unfortunate family tragedy. By AD 311, however, he was spreading another version. According to this, after Constantine had pardoned him, Maximian planned to murder Constantine in his sleep. Fausta learned of the plot and warned Constantine, who put a eunuch in his own place in bed. Maximian was apprehended when he killed the eunuch and was offered suicide, which he accepted. Along with using propaganda, Constantine instituted a damnatio memoriae on Maximian, destroying all inscriptions referring to him and eliminating any public work bearing his image. The death of Maximian required a shift in Constantine's public image. He could no longer rely on his connection to the elder Emperor Maximian, and needed a new source of legitimacy. In a speech delivered in Gaul on 25 July AD 310, the anonymous orator reveals a previously unknown dynastic connection to Claudius II, a 3rd-century emperor famed for defeating the Goths and restoring order to the empire. Breaking away from tetrarchic models, the speech emphasizes Constantine's ancestral prerogative to rule, rather than principles of imperial equality. The new ideology expressed in the speech made Galerius and Maximian irrelevant to Constantine's right to rule. Indeed, the orator emphasizes ancestry to the exclusion of all other factors: "No chance agreement of men, nor some unexpected consequence of favor, made you emperor," the orator declares to Constantine. 
Britain, Gaul, and Spain, and he commanded one of the largest Roman armies which was stationed along the important Rhine frontier. He remained in Britain after his promotion to emperor, driving back the tribes of the Picts and securing his control in the northwestern dioceses. He completed the reconstruction of military bases begun under his father's rule, and he ordered the repair of the region's roadways. He then left for Augusta Treverorum (Trier) in Gaul, the Tetrarchic capital of the northwestern Roman Empire. The Franks learned of Constantine's acclamation and invaded Gaul across the lower Rhine over the winter of AD 306–307. He drove them back beyond the Rhine and captured Kings Ascaric and Merogais; the kings and their soldiers were fed to the beasts of Trier's amphitheatre in the adventus (arrival) celebrations which followed. Constantine began a major expansion of Trier. He strengthened the circuit wall around the city with military towers and fortified gates, and he began building a palace complex in the northeastern part of the city. To the south of his palace, he ordered the construction of a large formal audience hall and a massive imperial bathhouse. He sponsored many building projects throughout Gaul during his tenure as emperor of the West, especially in Augustodunum (Autun) and Arelate (Arles). According to Lactantius, Constantine followed a tolerant policy towards Christianity, although he was not yet a Christian himself. He probably judged it a more sensible policy than open persecution and a way to distinguish himself from the "great persecutor" Galerius. He decreed a formal end to the persecutions and returned to Christians all that they had lost during them. Constantine was largely untried and had a hint of illegitimacy about him; he relied on his father's reputation in his early propaganda, which gave as much coverage to his father's deeds as to his own. His military skill and building projects, however, soon gave the panegyrist the opportunity to comment favourably on the similarities between father and son, and Eusebius remarked that Constantine was a "renewal, as it were, in his own person, of his father's life and reign". Constantinian coinage, sculpture, and oratory also show a new tendency for disdain towards the "barbarians" beyond the frontiers. He minted a coin issue after his victory over the Alemanni which depicts weeping and begging Alemannic tribesmen, "the Alemanni conquered" beneath the phrase "Romans' rejoicing". There was little sympathy for these enemies; as his panegyrist declared, "It is a stupid clemency that spares the conquered foe." Maxentius' rebellion Following Galerius' recognition of Constantine as caesar, Constantine's portrait was brought to Rome, as was customary. Maxentius mocked the portrait's subject as the son of a harlot and lamented his own powerlessness. Maxentius, envious of Constantine's authority, seized the title of emperor on 28 October AD 306. Galerius refused to recognize him but failed to unseat him. Galerius sent Severus against Maxentius, but during the campaign, Severus' armies, previously under the command of Maxentius' father Maximian, defected, and Severus was seized and imprisoned. Maximian, brought out of retirement by his son's rebellion, left for Gaul to confer with Constantine in late AD 307. He offered to marry his daughter Fausta to Constantine and elevate him to augustan rank. In return, Constantine would reaffirm the old family alliance between Maximian and Constantius and offer support to Maxentius' cause in Italy. 
Constantine accepted and married Fausta in Trier in late summer AD 307. Constantine now gave Maxentius his meagre support, offering Maxentius political recognition. Constantine remained aloof from the Italian conflict, however. Over the spring and summer of AD 307, he had left Gaul for Britain to avoid any involvement in the Italian turmoil; now, instead of giving Maxentius military aid, he sent his troops against Germanic tribes along the Rhine. In AD 308, he raided the territory of the Bructeri, and made a bridge across the Rhine at Colonia Agrippinensium (Cologne). In AD 310, he marched to the northern Rhine and fought the Franks. When not campaigning, he toured his lands advertising his benevolence and supporting the economy and the arts. His refusal to participate in the war increased his popularity among his people and strengthened his power base in the West. Maximian returned to Rome in the winter of AD 307–308, but soon fell out with his son. In early AD 308, after a failed attempt to usurp Maxentius' title, Maximian returned to Constantine's court. On 11 November AD 308, Galerius called a general council at the military city of Carnuntum (Petronell-Carnuntum, Austria) to resolve the instability in the western provinces. In attendance were Diocletian, briefly returned from retirement, Galerius, and Maximian. Maximian was forced to abdicate again and Constantine was again demoted to caesar. Licinius, one of Galerius' old military companions, was appointed augustus in the western regions. The new system did not last long: Constantine refused to accept the demotion, and continued to style himself as augustus on his coinage, even as other members of the Tetrarchy referred to him as a caesar on theirs. Maximinus was frustrated that he had been passed over for promotion while the newcomer Licinius had been raised to the office of augustus and demanded that Galerius promote him. Galerius offered to call both Maximinus and Constantine "sons of the augusti", but neither accepted the new title. By the spring of AD 310, Galerius was referring to both men as augusti. Maximian's rebellion In AD 310, a dispossessed Maximian rebelled against Constantine while Constantine was away campaigning against the Franks. Maximian had been sent south to Arles with a contingent of Constantine's army, in preparation for any attacks by Maxentius in southern Gaul. He announced that Constantine was dead, and took up the imperial purple. In spite of a large donative pledge to any who would support him as emperor, most of Constantine's army remained loyal to their emperor, and Maximian was soon compelled to leave. Constantine soon heard of the rebellion, abandoned his campaign against the Franks, and marched his army up the Rhine. At Cabillunum (Chalon-sur-Saône), he moved his troops onto waiting boats to row down the slow waters of the Saône to the quicker waters of the Rhone. He disembarked at Lugdunum (Lyon). Maximian fled to Massilia (Marseille), a town better able to withstand a long siege than Arles. It made little difference, however, as loyal citizens opened the rear gates to Constantine. Maximian was captured and reproved for his crimes. Constantine granted some clemency, but strongly encouraged his suicide. In July AD 310, Maximian hanged himself. In spite of the earlier rupture in their relations, Maxentius was eager to present himself as his father's devoted son after his death. He began minting coins with his father's deified image, proclaiming his desire to avenge Maximian's death. 
Constantine initially presented the suicide as an unfortunate family tragedy. By AD 311, however, he was spreading another version. According to this, after Constantine had pardoned him, Maximian planned to murder Constantine in his sleep. Fausta learned of the plot and warned Constantine, who put a eunuch in his own place in bed. Maximian was apprehended when he killed the eunuch and was offered suicide, which he accepted. Along with using propaganda, Constantine instituted a damnatio memoriae on Maximian, destroying all inscriptions referring to him and eliminating any public work bearing his image. The death of Maximian required a shift in Constantine's public image. He could no longer rely on his connection to the elder Emperor Maximian, and needed a new source of legitimacy. In a speech delivered in Gaul on 25 July AD 310, the anonymous orator reveals a previously unknown dynastic connection to Claudius II, a 3rd-century emperor famed for defeating the Goths and restoring order to the empire. Breaking away from tetrarchic models, the speech emphasizes Constantine's ancestral prerogative to rule, rather than principles of imperial equality. The new ideology expressed in the speech made Galerius and Maximian irrelevant to Constantine's right to rule. Indeed, the orator emphasizes ancestry to the exclusion of all other factors: "No chance agreement of men, nor some unexpected consequence of favor, made you emperor," the orator declares to Constantine. The oration also moves away from the religious ideology of the Tetrarchy, with its focus on twin dynasties of Jupiter and Hercules. Instead, the orator proclaims that Constantine experienced a divine vision of Apollo and Victory granting him laurel wreaths of health and a long reign. In the likeness of Apollo, Constantine recognized himself as the saving figure to whom would be granted "rule of the whole world", as the poet Virgil had once foretold. The oration's religious shift is paralleled by a similar shift in Constantine's coinage. In his early reign, the coinage of Constantine advertised Mars as his patron. From AD 310 on, Mars was replaced by Sol Invictus, a god conventionally identified with Apollo. There is little reason to believe that either the dynastic connection or the divine vision are anything other than fiction, but their proclamation strengthened Constantine's claims to legitimacy and increased his popularity among the citizens of Gaul. Civil wars War against Maxentius By the middle of AD 310, Galerius had become too ill to involve himself in imperial politics. His final act survives: a letter to provincials posted in Nicomedia on 30 April AD 311, proclaiming an end to the persecutions, and the resumption of religious toleration. He died soon after the edict's proclamation, destroying what little remained of the tetrarchy. Maximinus mobilized against Licinius, and seized Asia Minor. A hasty peace was signed on a boat in the middle of the Bosphorus. While Constantine toured Britain and Gaul, Maxentius prepared for war. He fortified northern Italy, and strengthened his support in the Christian community by allowing it to elect a new Bishop of Rome, Eusebius. Maxentius' rule was nevertheless insecure. His early support dissolved in the wake of heightened tax rates and depressed trade; riots broke out in Rome and Carthage; and Domitius Alexander was able to briefly usurp his authority in Africa. By AD 312, he was a man barely tolerated, not one actively supported, even among Christian Italians. 
In the summer of AD 311, Maxentius mobilized against Constantine while Licinius was occupied with affairs in the East. He declared war on Constantine, vowing to avenge his father's "murder". To prevent Maxentius from forming an alliance against him with Licinius, Constantine forged his own alliance with Licinius over the winter of AD 311–312, and offered him his sister Constantia in marriage. Maximinus considered Constantine's arrangement with Licinius an affront to his authority. In response, he sent ambassadors to Rome, offering political recognition to Maxentius in exchange for military support. Maxentius accepted. According to Eusebius, inter-regional travel became impossible, and there was military buildup everywhere. There was "not a place where people were not expecting the onset of hostilities every day". Constantine's advisers and generals cautioned against a preemptive attack on Maxentius; even his soothsayers recommended against it, stating that the sacrifices had produced unfavourable omens. Constantine, displaying a spirit that left a deep impression on his followers and inspired some to believe that he had some form of supernatural guidance, ignored all these cautions. Early in the spring of AD 312, Constantine crossed the Cottian Alps with a quarter of his army, a force numbering about 40,000. The first town his army encountered was Segusium (Susa, Italy), a heavily fortified town that shut its gates to him. Constantine ordered his men to set fire to its gates and scale its walls. He took the town quickly. Constantine ordered his troops not to loot the town, and advanced with them into northern Italy. At the approach to the west of the important city of Augusta Taurinorum (Turin, Italy), Constantine met a large force of heavily armed Maxentian cavalry. In the ensuing battle Constantine's army encircled Maxentius' cavalry, flanked them with his own cavalry, and dismounted them with blows from his soldiers' iron-tipped clubs. Constantine's armies emerged victorious. Turin refused to give refuge to Maxentius' retreating forces, opening its gates to Constantine instead. Other cities of the north Italian plain sent Constantine embassies of congratulation for his victory. He moved on to Milan, where he was met with open gates and jubilant rejoicing. Constantine rested his army in Milan until mid-summer AD 312, when he moved on to Brixia (Brescia). Brescia's army was easily dispersed, and Constantine quickly advanced to Verona, where a large Maxentian force was camped. Ruricius Pompeianus, general of the Veronese forces and Maxentius' praetorian prefect, was in a strong defensive position, since the town was surrounded on three sides by the Adige. Constantine sent a small force north of the town in an attempt to cross the river unnoticed. Ruricius sent a large detachment to counter Constantine's expeditionary force, but was defeated. Constantine's forces successfully surrounded the town and laid siege. Ruricius gave Constantine the slip and returned with a larger force to oppose Constantine. Constantine refused to let up on the siege, and sent only a small force to oppose him. In the desperately fought encounter that followed, Ruricius was killed and his army destroyed. Verona surrendered soon afterwards, followed by Aquileia, Mutina (Modena), and Ravenna. The road to Rome was now wide open to Constantine. Maxentius prepared for the same type of war he had waged against Severus and Galerius: he sat in Rome and prepared for a siege. 
He still controlled Rome's praetorian guards, was well-stocked with African grain, and was surrounded on all sides by the seemingly impregnable Aurelian Walls. He ordered all bridges across the Tiber cut, reportedly on the counsel of the gods, and left the rest of central Italy undefended; Constantine secured that region's support without challenge. Constantine progressed slowly along the Via Flaminia, allowing the weakness of Maxentius to draw his regime further into turmoil. Maxentius' support continued to weaken: at chariot races on 27 October, the crowd openly taunted Maxentius, shouting that Constantine was invincible. Maxentius, no longer certain that he would emerge from a siege victorious, built a temporary boat bridge across the Tiber in preparation for a field battle against Constantine. On 28 October AD 312, the sixth anniversary of his reign, he approached the keepers of the Sibylline Books for guidance. The keepers prophesied that, on that very day, "the enemy of the Romans" would die. Maxentius advanced north to meet Constantine in battle. Constantine adopts the Greek letters Chi Rho for Christ's initials Maxentius' forces were still twice the size of Constantine's, and he organized them in long lines facing the battle plain with their backs to the river. Constantine's army arrived on the field bearing unfamiliar symbols on their standards and their shields. According to Lactantius, "Constantine was directed in a dream to cause the heavenly sign to be delineated on the shields of his soldiers, and so to proceed to battle. He did as he had been commanded, and he marked on their shields the letter Χ, with a perpendicular line drawn through it and turned round thus at the top, being the cipher of Christ. Having this sign (☧), his troops stood to arms." Eusebius describes a vision that Constantine had while marching at midday in which "he saw with his own eyes the trophy of a cross of light in the heavens, above the sun, and bearing the inscription, In Hoc Signo Vinces" ("In this sign thou shalt conquer"). In Eusebius's account, Constantine had a dream the following night in which Christ appeared with the same heavenly sign and told him to make an army standard in the form of the labarum. Eusebius is vague about when and where these events took place, but the episode enters his narrative before the war begins against Maxentius. He describes the sign as Chi (Χ) traversed by Rho (Ρ) to form ☧, representing the first two letters of the Greek word Christos (Χριστός). A medallion was issued at Ticinum in AD 315 which shows Constantine wearing a helmet emblazoned with the Chi Rho, and coins issued at Siscia in AD 317/318 repeat the image. The figure was otherwise rare and is uncommon in imperial iconography and propaganda before the 320s. It was not completely unknown, however, being an abbreviation of the Greek word chrēston (good), having previously appeared on the coins of Ptolemy III Euergetes I (247–222 BC). Constantine deployed his own forces along the whole length of Maxentius' line. He ordered his cavalry to charge, and they broke Maxentius' cavalry. He then sent his infantry against Maxentius' infantry, pushing many into the Tiber where they were slaughtered and drowned. The battle was brief, and Maxentius' troops were broken before the first charge. His horse guards and praetorians initially held their position, but they broke under the force of a Constantinian cavalry charge; they also broke ranks and fled to the river. 
Maxentius rode with them and attempted to cross the bridge of boats (Ponte Milvio), but he was pushed into the Tiber and drowned by the mass of his fleeing soldiers. In Rome Constantine entered Rome on 29 October AD 312, and staged a grand adventus in the city which was met with jubilation. Maxentius' body was fished out of the Tiber and decapitated, and his head was paraded through the streets for all to see. After the ceremonies, the disembodied head was sent to Carthage, and Carthage offered no further resistance. Unlike his predecessors, Constantine neglected to make the trip to the Capitoline Hill and perform customary sacrifices at the Temple of Jupiter. However, he did visit the Senatorial Curia Julia, and he promised to restore its ancestral privileges and give it a secure role in his reformed government; there would be no revenge against Maxentius' supporters. In response, the Senate decreed him the "title of the first name", which meant that his name would be listed first in all official documents, and they acclaimed him as "the greatest Augustus". He issued decrees returning property that was lost under Maxentius, recalling political exiles, and releasing Maxentius' imprisoned opponents. An extensive propaganda campaign followed, during which Maxentius' image was purged from all public places. He was written up as a "tyrant" and set against an idealized image of Constantine the "liberator". Eusebius is the best representative of this strand of Constantinian propaganda. Maxentius' rescripts were declared invalid, and the honours that he had granted to leaders of the Senate were also invalidated. Constantine also attempted to remove Maxentius' influence on Rome's urban landscape. All structures built by him were rededicated to Constantine, including the Temple of Romulus and the Basilica of Maxentius. At the focal point of the basilica, a stone statue of Constantine was erected holding the Christian labarum in its hand. Its inscription bore the message which the statue illustrated: By this sign, Constantine had freed Rome from the yoke of the tyrant. Constantine also sought to upstage Maxentius' achievements. For example, the Circus Maximus was redeveloped so that its seating capacity was 25 times larger than that of Maxentius' racing complex on the Via Appia. Maxentius' strongest military supporters were neutralized when Constantine disbanded the Praetorian Guard and Imperial Horse Guard. The tombstones of the Imperial Horse Guard were ground up and used in a basilica on the Via Labicana, and their former base was redeveloped into the Lateran Basilica on 9 November AD 312—barely two weeks after Constantine captured the city. The Legio II Parthica was removed from Albano Laziale, and the remainder of Maxentius' armies were sent to do frontier duty on the Rhine. Wars against Licinius In the following years, Constantine gradually consolidated his military superiority over his rivals in the crumbling Tetrarchy. In 313, he met Licinius in Milan to secure their alliance by the marriage of Licinius and Constantine's half-sister Constantia. During this meeting, the emperors agreed on the so-called Edict of Milan, officially granting full tolerance to Christianity and all religions in the Empire. The document had special benefits for Christians, legalizing their religion and granting them restoration for all property seized during Diocletian's persecution. 
It repudiated past methods of religious coercion and used only general terms to refer to the divine sphere—"Divinity" and "Supreme Divinity", summa divinitas. The conference was cut short, however, when news reached Licinius that his rival Maximinus had crossed the Bosporus and invaded European territory. Licinius departed and eventually defeated Maximinus, gaining control over the entire eastern half of the Roman Empire. Relations between the two remaining emperors deteriorated, as Constantine suffered an assassination attempt at the hands of a character that Licinius wanted elevated to the rank of Caesar; Licinius, for his part, had Constantine's statues in Emona destroyed. In either AD 314 or 316, the two Augusti fought against one another at the Battle of Cibalae, with Constantine being victorious. They clashed again at the Battle of Mardia in 317, and agreed to a settlement in which Constantine's sons Crispus and Constantine II, and Licinius' son Licinianus were made caesars. After this arrangement, Constantine ruled the dioceses of Pannonia and Macedonia and took residence at Sirmium, whence he could wage war on the Goths and Sarmatians in 322, and on the Goths in 323, defeating and killing their leader Rausimod. In the year 320, Licinius allegedly reneged on the religious freedom promised by the Edict of Milan in 313 and began to oppress Christians anew, generally without bloodshed, but resorting to confiscations and sacking of Christian office-holders. Although this characterization of Licinius as anti-Christian is somewhat doubtful, the fact is that he seems to have been far less open in his support of Christianity than Constantine. Licinius was therefore prone to see the Church as a force more loyal to Constantine than to the Imperial system in general, an explanation offered by the Church historian Sozomen. This dubious arrangement eventually became a challenge to Constantine in the West, climaxing in the great civil war of 324. Constantine's Christian eulogists present the war as a battle between Christianity and paganism; Licinius, aided by Gothic mercenaries, represented the past and ancient paganism, while Constantine and his Franks marched under the standard of the labarum. Outnumbered, but fired by their zeal, Constantine's army emerged victorious in the Battle of Adrianople. Licinius fled across the Bosphorus and appointed Martinian, his magister officiorum, as nominal Augustus in the West, but Constantine next won the Battle of the Hellespont, and finally the Battle of Chrysopolis on 18 September 324. Licinius and Martinian surrendered to Constantine at Nicomedia on the promise that their lives would be spared: they were sent to live as private citizens in Thessalonica and Cappadocia respectively, but in 325 Constantine accused Licinius of plotting against him and had them both arrested and hanged; Licinius' son (the son of Constantine's half-sister) was killed in 326. Thus Constantine became the sole emperor of the Roman Empire. Later rule Foundation of Constantinople Diocletian had chosen Nicomedia in the East as his capital during the Tetrarchy—not far from Byzantium, well situated to defend Thrace, Asia, and Egypt, all of which had required his military attention. Constantine had recognized the shift of the center of gravity of the Empire from the remote and depopulated West to the richer cities of the East, and the military strategic importance of protecting the Danube
that it can be referenced between languages and tools, making it easy to work with code written in a language the developer is not using. The Common Language Specification (CLS) A set of base rules to which any language targeting the CLI should conform in order to interoperate with other CLS-compliant languages. The CLS rules are a subset of the Common Type System. The Virtual Execution System (VES) The VES loads and executes CLI-compatible programs, using the metadata to combine separately generated pieces of code at runtime. All compatible languages compile to Common Intermediate Language (CIL), which is an intermediate language that is abstracted from the platform hardware. When the code is executed, the platform-specific VES will compile the CIL to the machine language according to the specific hardware and operating system. Standardization and licensing In August 2000, Microsoft, Hewlett-Packard, Intel, and others worked to standardize CLI. By December 2001, it was ratified by Ecma, with ISO standardization following in April 2003. Microsoft and its partners hold patents for CLI. Ecma and ISO require that all patents essential to implementation be made available under "reasonable and non-discriminatory (RAND) terms." It is common for RAND licensing to require some royalty payment, which could be a cause for concern with Mono. As of January 2013, neither Microsoft nor its partners have identified any patents essential to CLI implementations subject to RAND terms. As of July 2009, Microsoft added C# and CLI to the list of specifications that the Microsoft Community Promise applies to, so anyone can safely implement specified editions of the standards without fearing a patent lawsuit from Microsoft. To
of default .NET installations. However, the conformance clause of the CLI allows for extending the supported profile by adding new methods and types to classes, as well as deriving from new namespaces. But it does not allow for adding new members to interfaces. This means that the features of the CLI can be used and extended, as long as the conforming profile implementation does not change the behavior of a program intended to run on that profile, while allowing for unspecified behavior from programs written specifically for that implementation. In 2012, Ecma and ISO published the new edition of the CLI standard, which is not covered by the Community Promise. Implementations .NET Framework is Microsoft's original commercial implementation of the CLI. It only supports Windows. It was superseded by .NET in November 2020. .NET is the free and open-source multi-platform successor to .NET Framework, released under the MIT License. .NET Compact Framework is Microsoft's commercial implementation of the CLI for portable devices and Xbox 360. .NET Micro Framework is an open source implementation of the CLI for resource-constrained devices. Mono is an alternative open source implementation of CLI and accompanying technologies, mainly
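The execution model described above, in which language compilers emit platform-neutral CIL and the VES turns it into native code at run time, can be pictured with a small illustrative sketch. The following toy interpreter is written in Python purely for illustration: the opcode names echo real CIL instructions (ldc.i4, add, ret), but the interpreter itself is an invented stand-in for the VES, which in a real CLI implementation compiles CIL to machine code for the specific hardware rather than interpreting it.

```python
# Toy model of a stack-based intermediate language, loosely modelled on CIL.
# This is NOT part of any real CLI implementation; it only illustrates why an
# intermediate language can stay independent of the underlying hardware
# until execution time.

def execute(instructions):
    stack = []
    for op, *args in instructions:
        if op == "ldc.i4":        # push a 32-bit integer constant
            stack.append(args[0])
        elif op == "add":         # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "ret":         # return the value on top of the stack
            return stack.pop()
        else:
            raise ValueError(f"unknown opcode: {op}")

# Roughly the instruction stream a compiler might emit for "return 2 + 3":
program = [("ldc.i4", 2), ("ldc.i4", 3), ("add",), ("ret",)]
print(execute(program))  # prints 5
```

In an actual CLI stack the same division of labour applies: any compliant compiler can produce the instruction stream, and any conforming VES can execute it on its own platform.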
The number of nations playing Test cricket increased gradually over time, with the addition of West Indies in 1928, New Zealand in 1930, India in 1932, and Pakistan in 1952. However, international cricket continued to be played as bilateral Test matches over three, four or five days. In the early 1960s, English county cricket teams began playing a shortened version of cricket which only lasted for one day. Starting in 1962 with a four-team knockout competition known as the Midlands Knock-Out Cup, and continuing with the inaugural Gillette Cup in 1963, one-day cricket grew in popularity in England. A national Sunday League was formed in 1969. The first One-Day International match was played on the fifth day of a rain-aborted Test match between England and Australia at Melbourne in 1971, to fill the time available and as compensation for the frustrated crowd. It was a forty-over game with eight balls per over. In the late 1970s, Kerry Packer established the rival World Series Cricket (WSC) competition. It introduced many of the now commonplace features of One Day International cricket, including coloured uniforms, matches played at night under floodlights with a white ball and dark sight screens, and, for television broadcasts, multiple camera angles, effects microphones to capture sounds from the players on the pitch, and on-screen graphics. The first of the matches with coloured uniforms was the WSC Australians in wattle gold versus WSC West Indians in coral pink, played at VFL Park in Melbourne on 17 January 1979. The success and popularity of the domestic one-day competitions in England and other parts of the world, as well as the early One-Day Internationals, prompted the ICC to consider organising a Cricket World Cup. Prudential World Cups (1975–1983) The inaugural Cricket World Cup was hosted in 1975 by England, the only nation able to put forward the resources to stage an event of such magnitude at the time. The 1975 tournament started on 7 June. The first three events were held in England and officially known as the Prudential Cup after the sponsors Prudential plc. The matches consisted of 60 six-ball overs per team, played during the daytime in traditional form, with the players wearing cricket whites and using red cricket balls. Eight teams participated in the first tournament: Australia, England, India, New Zealand, Pakistan, and the West Indies (the six Test nations at the time), together with Sri Lanka and a composite team from East Africa. One notable omission was South Africa, who were banned from international cricket due to apartheid. The tournament was won by the West Indies, who defeated Australia by 17 runs in the final at Lord's. Roy Fredericks of the West Indies became the first batsman to be dismissed hit wicket in a One-Day International, during the 1975 World Cup final. The 1979 World Cup saw the introduction of the ICC Trophy competition to select non-Test playing teams for the World Cup, with Sri Lanka and Canada qualifying. The West Indies won a second consecutive World Cup tournament, defeating the hosts England by 92 runs in the final. At a meeting which followed the World Cup, the International Cricket Conference agreed to make the competition a quadrennial event. The 1983 event was hosted by England for a third consecutive time. By this stage, Sri Lanka had become a Test-playing nation, and Zimbabwe qualified through the ICC Trophy. A fielding circle, set at a fixed distance from the stumps, was introduced; four fieldsmen needed to be inside it at all times. The teams faced each other twice, before moving into the knock-outs. India was crowned champions after upsetting the West Indies by 43 runs in the final. 
Different champions (1987–1996) India and Pakistan jointly hosted the 1987 tournament, the first time that the competition was held outside England. The games were reduced from 60 to 50 overs per innings, the current standard, because of the shorter daylight hours in the Indian subcontinent compared with England's summer. Australia won the championship by defeating England by 7 runs in the final, the closest margin in a World Cup final until the 2019 edition between England and New Zealand. The 1992 World Cup, held in Australia and New Zealand, introduced many changes to the game, such as coloured clothing, white balls, day/night matches, and a change to the fielding restriction rules. The South African cricket team participated in the event for the first time, following the fall of the apartheid regime and the end of the international sports boycott. Pakistan overcame a dismal start in the tournament to eventually defeat England by 22 runs in the final and emerge as winners. The 1996 championship was held in the Indian subcontinent for a second time, with the inclusion of Sri Lanka as host for some of its group stage matches. In the semi-final, Sri Lanka, heading towards a crushing victory over India at Eden Gardens after the hosts lost eight wickets while scoring 120 runs in pursuit of 252, were awarded victory by default after crowd unrest broke out in protest against the Indian performance. Sri Lanka went on to win their maiden championship by defeating Australia by seven wickets in the final at Lahore. Australian treble (1999–2007) In 1999 the event was hosted by England, with some matches also being held in Scotland, Ireland, Wales and the Netherlands. Twelve teams contested the World Cup. Australia qualified for the semi-finals after reaching their target in their Super 6 match against South Africa off the final over of the match. They then proceeded to the final after a tied semi-final, also against South Africa, in which a mix-up between South African batsmen Lance Klusener and Allan Donald saw Donald drop his bat and become stranded mid-pitch, where he was run out. In the final, Australia dismissed Pakistan for 132 and then reached the target in less than 20 overs and with eight wickets in hand. South Africa, Zimbabwe and Kenya hosted the 2003 World Cup. The number of teams participating in the event increased from twelve to fourteen. Kenya's victories over Sri Lanka and Zimbabwe, among others – and a forfeit by the New Zealand team, which refused to play in Kenya because of security concerns – enabled Kenya to reach the semi-finals, the best result by an associate. In the final, Australia made 359 runs for the loss of two wickets, the largest ever total in a final, defeating India by 125 runs. In 2007 the tournament was hosted by the West Indies and expanded to sixteen teams. Following Pakistan's upset loss to World Cup debutants Ireland in the group stage, Pakistani coach Bob Woolmer was found dead in his hotel room. Jamaican police had initially launched a murder investigation into Woolmer's death but later confirmed that he died of heart failure. Australia defeated Sri Lanka in the final by 53 runs (D/L) in farcical light conditions, extending their undefeated run in the World Cup to 29 matches and winning three straight championships. Hosts triumph (2011–2019) India, Sri Lanka and Bangladesh together hosted the 2011 World Cup. 
Pakistan were stripped of their hosting rights following the terrorist attack on the Sri Lankan cricket team in 2009, with the games originally scheduled for Pakistan redistributed to the other host countries. The number of teams participating in the World Cup was reduced to fourteen. Australia lost their final group stage match against Pakistan on 19 March 2011, ending an unbeaten streak of 35 World Cup matches, which had begun on 23 May 1999. India won their second World Cup title by beating Sri Lanka by six wickets in the final at Wankhede Stadium in Mumbai, making India the first country to win the World Cup at home. This was also the first time that two Asian countries faced each other in a World Cup Final. Australia and New Zealand jointly hosted the 2015 World Cup. The number of participants remained at fourteen. Ireland was the most successful Associate nation with a total of three wins in the tournament. New Zealand beat South Africa in a thrilling first semi-final to qualify for their maiden World Cup final. Australia defeated New Zealand by seven wickets in the final at Melbourne to lift the World Cup for the fifth time. The 2019 World Cup was hosted by England and Wales. The number of participants was reduced to ten. New Zealand defeated India in the first semi-final, which was pushed over to the reserve day due to rain. England defeated the defending champions, Australia, in the second semi-final. Neither finalist had previously won the World Cup. In the final, the scores were tied at 241 after 50 overs and the match went to a super over, after which the scores were again tied at 15. The World Cup was won by England, whose boundary count was greater than New Zealand's. Format Qualification From the first World Cup in 1975 up to the 2019 World Cup, the majority of teams taking part qualified automatically. Until the 2015 World Cup this was mostly through having Full Membership of the ICC, and for the 2019 World Cup this was mostly through ranking position in the ICC ODI Championship. From the second World Cup in 1979 up to the 2019 World Cup, the teams that qualified automatically were joined by a small number of others who qualified for the World Cup through the qualification process. The first qualifying tournament was the ICC Trophy; later, the process expanded with pre-qualifying tournaments. For the 2011 World Cup, the ICC World Cricket League replaced the past pre-qualifying processes, and the name "ICC Trophy" was changed to "ICC World Cup Qualifier". The World Cricket League was the qualification system provided to allow the Associate and Affiliate members of the ICC more opportunities to qualify. The number of teams qualifying varied throughout the years. From the 2023 World Cup onwards, only the host nation(s) will qualify automatically. All countries will participate in a series of leagues to determine qualification, with automatic promotion and relegation between divisions from one World Cup cycle to the next. Tournament The format of the Cricket World Cup has changed greatly over the course of its history. Each of the first four tournaments was played by eight teams, divided into two groups of four. The competition consisted of two stages, a group stage and a knock-out stage. The four teams in each group played each other in the round-robin group stage, with the top two teams in each group progressing to the semi-finals. The winners of the semi-finals played against each other in the final. 
With South Africa returning in the fifth tournament in 1992 as a result of the end of the apartheid boycott, nine teams played each other once in the
in Ottawa, by Prime Minister of Canada Pierre Trudeau, but reminiscent of the excursions to Chequers or Dorneywood in the days of the Prime Ministers' Conferences. Only the head of the delegation and their spouse and one additional person attend the retreats. The additional person may be of any capacity (personal, political, security, etc.) but only has occasional and intermittent access to the head of the delegation. It is usually at the retreat where, isolated from their advisers, the heads resolve the most intransigent issues: leading to the Gleneagles Agreement in 1977, the Lusaka Declaration in 1979, the Langkawi Declaration in 1989, the Millbrook Programme in 1995, the Aso Rock Declaration in 2003, and the Colombo Declaration on Sustainable, Inclusive and Equitable Development in 2013. The 'fringe' of civil society organisations, including the Commonwealth Family and local groups, adds a cultural dimension to the event, and brings the CHOGM a higher media profile and greater acceptance by the local population. First officially recognised at Limassol in 1993, these events, spanning a longer period than the meeting itself, have, to an extent, preserved the length of the CHOGM: but only in the cultural sphere. Other meetings, such as those of the Commonwealth Ministerial Action Group, Commonwealth Business Council, and respective foreign ministers, have also dealt with business away from the heads of government themselves. As the scope of the CHOGM has expanded beyond the meetings of the heads of governments themselves, the CHOGMs have become progressively shorter, and their business compacted into less time. The 1971 CHOGM lasted for nine days, and the 1977 and 1991 CHOGMs for seven days each. However, Harare's epochal CHOGM was the last to last a week; the 1993 CHOGM lasted for five days, and the contentious 1995 CHOGM for only three-and-a-half. The 2005 and subsequent conferences were held over two to two-and-a-half days. However, recent CHOGMs have also featured several days of pre-summit Commonwealth Forums on business, women, and youth, as well as the Commonwealth People's Forum and meetings of foreign ministers. Issues During the 1980s, CHOGMs were dominated by calls for the Commonwealth to impose sanctions on South Africa to pressure the country to end apartheid. The division between Britain under Margaret Thatcher's government, which resisted the call for sanctions, and the African Commonwealth countries and the rest of the Commonwealth was at times intense and led to speculation that the organisation might collapse. According to one of Margaret Thatcher's former aides, Mrs. Thatcher, very privately, used to say that CHOGM stood for "Compulsory Handouts to Greedy Mendicants." According to his daughter, Denis Thatcher also referred to CHOGM as standing for 'C**ns Holidaying on Government Money'. In 2011, British Prime Minister David Cameron informed the British House of Commons that his proposal to reform the rules governing royal succession, a change which would require the approval of all sixteen Commonwealth realms, was approved at the 28–30 October CHOGM in Perth, subsequently referred to as the Perth Agreement. Rwanda – which is due to be the organisation's next Chair-in-Office – joined the Commonwealth in 2009 despite the Commonwealth Human Rights Initiative's (CHRI) finding that "the state of governance and human rights in Rwanda does not satisfy Commonwealth standards", and that it "does not therefore qualify for admission". 
Both the CHRI and Human Rights Watch have found that respect for democracy and human rights in Rwanda has declined since the
Gleneagles Agreement in 1977, the Lusaka Declaration in 1979, the Langkawi Declaration in 1989, the Millbrook Programme in 1995, the Aso Rock Declaration in 2003, and the Colombo Declaration on Sustainable, Inclusive and Equitable Development in 2013. The 'fringe' of civil society organisations, including the Commonwealth Family and local groups, adds a cultural dimension to the event, and brings the CHOGM a higher media profile and greater acceptance by the local population. First officially recognised at Limassol in 1993, these events, spanning a longer period than the meeting itself, have, to an extent, preserved the length of the CHOGM: but only in the cultural sphere. Other meetings, such as those of the Commonwealth Ministerial Action Group, Commonwealth Business Council, and respective foreign ministers, have also dealt with business away from the heads of government themselves. As the scope of the CHOGM has expanded beyond the meetings of the heads of governments themselves, the CHOGMs have become progressively shorter, and their business compacted into less time. The 1971 CHOGM lasted for nine days, and the 1977 and 1991 CHOGMs for seven days each. However, Harare's epochal CHOGM was the last to last a week; the 1993 CHOGM lasted for five days, and the contentious 1995 CHOGM for only three-and-a-half. The 2005 and subsequent conferences were held over two to two-and-a-half-days. However, recent CHOGMs have also featured several days of pre-summit Commonwealth Forums on business, women, youth, as well as the Commonwealth People's Forum and meetings of foreign ministers. Issues During the 1980s, CHOGMs were dominated by calls for the Commonwealth to impose sanctions on South Africa to pressure the country to end apartheid. The division between Britain, during the government of Margaret Thatcher which resisted the call for sanctions and African Commonwealth countries, and the rest of the Commonwealth was intense at times and led to speculation that the organisation might collapse. According to one of Margaret Thatcher's former aides, Mrs. Thatcher, very privately, used to say that CHOGM stood for "Compulsory Handouts to Greedy Mendicants." According to his daughter, Denis Thatcher also referred to CHOGM as standing for 'C**ns Holidaying on Government Money'. In 2011, British Prime Minister David Cameron informed the British House of Commons that his proposals to reform the rules governing royal succession, a change which would require the approval of all sixteen Commonwealth realms, was approved at the 28–30 October CHOGM in Perth, subsequently referred to as the Perth Agreement. Rwanda – which is due to be the organsisation's next Chair-in-Office – joined the Commonwealth in 2009 despite the Commonwealth Human Rights Initiative's (CHRI) finding that "the state of governance and human rights in Rwanda does not satisfy Commonwealth standards", and that it "does not therefore qualify for admission". Both the CHRI and Human Rights Watch have found that respect for democracy and human rights in Rwanda has declined since the country joined the Commonwealth. There have been calls for the Commonwealth to stand up for democracy and human rights in Rwanda at the 2021 CHOGM. Agenda Under the Millbrook Commonwealth Action Programme, each CHOGM is responsible for renewing the remit of the Commonwealth Ministerial Action Group, whose responsibility it is to uphold the Harare Declaration on the core political principles of the Commonwealth. 
Incidents A bomb exploded at the Sydney Hilton Hotel, the venue for the February 1978 Commonwealth Heads of Government Regional Meeting. Twelve foreign heads of government were staying in the hotel at the time. Most delegates were evacuated by Royal Australian Air Force helicopters and the meeting was moved to Bowral, protected by 800 soldiers of the Australian Army. As the convocation of heads of government and permanent Commonwealth staff and experts, CHOGMs are the highest institution of action in the Commonwealth, and rare occasions on which Commonwealth leaders all come together. CHOGMs have been the venues of many of the Commonwealth's most dramatic events.
The Methods of the Sima (司馬法) (Sima Fa), attributed to Sima Rangju.
Wei Liaozi (尉繚子), attributed to Wei Liao.
The Three Strategies of Huang Shigong (黃石公三略), attributed to Jiang Ziya.
The Thirty-Six Stratagems, recently recovered.
Legalism
Guanzi, attributed to Guan Zhong.
Deng Xizi (fragment)
The Book of Lord Shang, attributed to Shang Yang.
Hanfeizi, attributed to Han Fei.
Shenzi, attributed to Shen Buhai; all but one chapter is lost.
The Canon of Laws, attributed to Li Kui.
Medicine
Huangdi Neijing
Nan Jing
Miscellaneous
Yuzi (fragment)
Mozi, attributed to the philosopher of the same name, Mozi.
Yinwenzi (fragment)
Shenzi, attributed to Shen Dao. It originally consisted of ten volumes and forty-two chapters, of which all but seven chapters have been lost.
Heguanzi
Gongsun longzi
Guiguzi
The Lüshi Chunqiu, an encyclopedic compilation of ancient classics edited by Lü Buwei.
Shizi, attributed to Shi Jiao.
Mythology
The Classic of Mountains and Seas (Shan Hai Jing), a compilation of early geography and myths from various locations.
Tale of King Mu, Son of Heaven
Taoism
Dao De Jing, attributed to Laozi.
Guan Yinzi (fragment)
The Liezi (or Classic of the Perfect Emptiness), attributed to Lie Yukou.
Zhuangzi, attributed to the philosopher of the same name, Zhuangzi.
Wenzi
Poetry
After 206 BC
The Twenty-Four Histories, a collection of authoritative histories of China for various dynasties:
The Records of the Grand Historian by Sima Qian
The Book of Han by Ban Gu
The Book of Later Han by Fan Ye
The Records of Three Kingdoms by Chen Shou
The Book of Jin by Fang Xuanling
The Book of Song by Shen Yue
The Book of Southern Qi by Xiao Zixian
The Book of Liang by Yao Silian
The Book of Chen by Yao Silian
The History of the Southern Dynasties by Li Yanshou
The Book of Wei by Wei Shou
The Book of Zhou by Linghu Defen
The Book of Northern Qi by Li Baiyao
The History of the Northern Dynasties by Li Yanshou
The Book of Sui by Wei Zheng
The Old Book of Tang by Liu Xu
The New Book of Tang by Ouyang Xiu
The Old History of Five Dynasties by Xue Juzheng
The New History of Five Dynasties by Ouyang Xiu
The History of Song by Toqto'a
The History of Liao by Toqto'a
The History of Jin by Toqto'a
The History of Yuan by Song Lian
The History of Ming by Zhang Tingyu
The Draft History of Qing by Zhao Erxun is usually referred to as the 25th classic of history records
The New History of Yuan by Ke Shaomin is sometimes referred to as the 26th classic of history records
The Chronicles of Huayang, an old record of ancient history and tales of southwestern China, attributed to Chang Qu.
The Biographies of Exemplary Women, a biographical collection of exemplary women in ancient China, compiled by Liu Xiang.
The Spring and Autumn Annals of the Sixteen Kingdoms, a historical record of the Sixteen Kingdoms, attributed to Cui Hong, is lost.
The Shiming, a dictionary compiled by Liu Xi by the end of the 2nd century.
A New Account of the Tales of the World, a collection of historical anecdotes and character sketches of some 600 literati, musicians, and painters.
The Thirty-Six Strategies, a military strategy book attributed to Tan Daoji.
The Literary Mind and the Carving of Dragons (Wen Xin Diao Long), a review book on ancient Chinese literature and writings by Liu Xie.
The Commentary on the Water Classic, a book on the hydrology of rivers in China attributed to the great geographer Li Daoyuan.
The Dialogues between Li Jing and Tang Taizong, a military strategy book attributed to Li Jing.
The Comprehensive Mirror for Aid in Government (Zizhi Tongjian), with Sima Guang as its main editor.
The Spring and Autumn Annals of Wu and Yue, a historical record of the states of Wu and Yue during the Spring and Autumn period, attributed to Zhao Ye.
The Zhenguan Zhengyao, a record of governance strategies and leadership of Emperor Taizong of Tang, attributed to Wu Jing.
The Jiaoshi Yilin, a work modelled after the I Ching, composed during the Western Han Dynasty and attributed to Jiao Yanshou.
The Nine Chapters on the Mathematical Art, a Chinese mathematics book composed by several generations of Han Dynasty scholars.
The Thousand Character Classic, attributed to Zhou Xingsi.
The Treatise on Astrology of the Kaiyuan Era, compiled by Gautama Siddha, a Chinese encyclopedia on astrology and divination.
The Shitong, written by Liu Zhiji, a work on historiography.
The Tongdian, written by Du You, a contemporary text focused on the Tang dynasty.
The Tang Huiyao, compiled by Wang Pu, a text based on the institutional history of the Tang dynasty.
The Great Tang Records on the Western Regions, compiled by Bianji; an account of Xuanzang's journey.
The Miscellaneous Morsels from Youyang, written by Duan Chengshi, records fantastic stories, anecdotes, and exotic customs.
The Four Great Books of Song, a term referring to the four large compilations during the beginning of the Song dynasty:
The Taiping Yulan, a leishu encyclopedia.
The Taiping Guangji, a collection of folk tales and theology.
The Wenyuan Yinghua, an anthology of poetry, odes, songs and other writings.
The Cefu Yuangui, a leishu encyclopedia of political essays, autobiographies, memorials and decrees.
The Dream Pool Essay, a collection of essays on science, technology, military strategies, history, politics, music and arts, written by Shen Kuo.
The Exploitation of the Works of Nature, an encyclopedia compiled by Song Yingxing.
The Compendium of Materia Medica, a classic book of medicine written by Li Shizhen.
The Siku Quanshu, the largest compilation of literature in Chinese history.
The New Songs from the Jade Terrace, a poetry collection from the Six Dynasties period.
The Quan Tangshi, or Collected Tang Poems, compiled during the Qing dynasty, published AD 1705.
The Xiaolin Guangji, a collection of jokes compiled during the Qing dynasty.
See also
Chinese literature
Imperial examination
List of early Chinese texts
Kaicheng Stone Classics
Seven Military Classics
Old Texts
Sinology
Thomas Francis Wade
Herbert Giles
Lionel Giles
Frederic H. Balfour
Notes
Bibliography
Primary Sources
References
Online
Endymion Wilkinson. Chinese History: A New Manual. Cambridge, Massachusetts: Harvard University Asia Center, Harvard-Yenching Institute Monograph Series. New Edition; Second, Revised printing March 2013. See esp. pp. 365–377, Ch. 28, "The Confucian Classics."
External links
Chinese Text Project (English Chinese) (Chinese
The inbound call centre is a new and increasingly popular service for many types of healthcare facilities, including large hospitals. Inbound call centres can be outsourced or managed in-house. These healthcare call centres are designed to help streamline communications, enhance patient retention and satisfaction, reduce expenses and improve operational efficiencies. Hospitality Many large hospitality companies such as the Hilton Hotels Corporation and Marriott International make use of call centres to manage reservations. These are known in the industry as "central reservations offices". Staff members at these call centres take calls from clients wishing to make reservations or other inquiries via a public number, usually a 1-800 number. These centres may operate as many as 24 hours per day, seven days a week, depending on the call volume the chain receives. Evaluation Mathematical theory Queueing theory is a branch of mathematics in which models of service systems have been developed. A call centre can be seen as a queueing network, and results from queueing theory, such as the probability that an arriving customer needs to wait before starting service, are useful for provisioning capacity. (Erlang's C formula is such a result for an M/M/c queue, and approximations exist for an M/G/k queue; a short illustrative calculation appears at the end of this passage.) Statistical analysis of call centre data has suggested arrivals are governed by an inhomogeneous Poisson process and jobs have a log-normal service time distribution. Simulation algorithms are increasingly being used to model call arrival, queueing and service levels. Call centre operations have been supported by mathematical models beyond queueing, with operations research, which considers a wide range of optimisation problems seeking to reduce waiting times while keeping server utilisation and therefore efficiency high. Criticism Call centres have received criticism for low pay rates and restrictive working practices for employees, which have been deemed a dehumanising environment. Other research illustrates how call centre workers develop ways to counter or resist this environment by integrating local cultural sensibilities or embracing a vision of a new life. Most call centres provide electronic reports that outline performance metrics, quarterly highlights and other information about the calls made and received. This has the benefit of helping the company to plan the workload and time of its employees. However, it has also been argued that such close monitoring breaches the human right to privacy. Complaints are often logged by callers who find the staff do not have enough skill or authority to resolve problems, as well as appearing apathetic. These concerns are due to a business process that exhibits levels of variability, because the experience a customer gets and the results a company achieves on a given call depend upon the quality of the agent. Call centres are beginning to address this by using agent-assisted automation to standardise the process all agents use. However, more popular alternatives use personality- and skills-based approaches. The various challenges encountered by call operators are discussed by several authors. Media portrayals Indian call centres have been the focus of several documentary films, including the 2004 film Thomas L. Friedman Reporting: The Other Side of Outsourcing, the 2005 films John and Jane, Nalini by Day, Nancy by Night, and 1-800-India: Importing a White-Collar Economy, and the 2006 film Bombay Calling, among others.
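The Erlang C result mentioned under Evaluation above can be made concrete with a few lines of code. This is a minimal sketch, not part of the original text: it assumes a steady-state M/M/c model and uses made-up figures (100 calls per hour, a three-minute average handle time, eight agents); a real provisioning exercise would also account for abandonment and time-varying arrivals.

```python
from math import factorial, exp

def erlang_c(arrival_rate, mean_handle_time, agents):
    """Probability that an arriving call must wait (Erlang C, M/M/c queue)."""
    a = arrival_rate * mean_handle_time              # offered load in Erlangs
    if a >= agents:
        return 1.0                                   # overloaded: every call waits
    top = (a ** agents / factorial(agents)) * (agents / (agents - a))
    bottom = sum(a ** k / factorial(k) for k in range(agents)) + top
    return top / bottom

def service_level(arrival_rate, mean_handle_time, agents, target_wait):
    """Fraction of calls answered within target_wait seconds."""
    pw = erlang_c(arrival_rate, mean_handle_time, agents)
    mu = 1.0 / mean_handle_time
    return 1.0 - pw * exp(-(agents * mu - arrival_rate) * target_wait)

# Illustrative numbers only: 100 calls/hour, 180 s average handle time, 8 agents.
lam = 100 / 3600.0   # calls per second
aht = 180.0          # seconds
print(round(erlang_c(lam, aht, 8), 3))          # probability a caller waits (~0.17)
print(round(service_level(lam, aht, 8, 20), 3)) # fraction answered within 20 s (~0.88)
```

With these assumed figures the model suggests roughly one caller in six would queue and about 88% of calls would be answered within 20 seconds; adding or removing an agent shifts both numbers sharply, which is the practical point of the formula.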
An Indian call centre is also the subject of the 2006 film Outsourced and a key location in the 2008 film, Slumdog Millionaire. The 2014 BBC fly on the wall documentary series The Call Centre gave an often distorted although humorous view of life in a Welsh call centre. See also Automatic call distributor Business process outsourcing Call management List of call centre companies Predictive dialling Operator messaging Queue management system Skills based routing Virtual queue The Call Centre, a BBC fly-on-the-wall documentary at a Welsh call centre References Further reading Cusack M., "Online Customer Care", American Society for Quality (ASQ) Press, 2000. Cleveland B., "Call Center Management on Fast Forward", ICMI Press, 2006. Kennedy I., Call centres, School of Electrical and Information Engineering, University of the Witwatersrand, 2003. Masi D.M.B., Fischer M.J., Harris C.M., Numerical Analysis of Routing Rules for Call centres, Telecommunications Review, 1998, noblis.org HSE website Psychosocial risk factors in call centres: An evaluation of work design and well-being. Reena Patel, Working the Night Shift: Women in India's Call Center Industry (Stanford University Press; 2010) 219 pages; traces changing views of "women's work" in India under globalization. Fluss, Donna, "The Real-Time Contact centre", 2005 AMACOM Wegge, J., van Dick, R., Fisher, G., Wecking, C., & Moltzen, K. (2006, January). Work motivation, organisational identification, and well-being in call centre work. Work & Stress, 20(1), 60–83. Legros, B. (2016). Unintended consequences of optimizing a queue discipline for a service level defined by a percentile of the waiting time. Operations Research Letters, 44(6), 839–845. External links Mandelbaum, Avishai Call Centers (Centres) Research Bibliography with Abstracts. Faculty of Industrial Engineering and Management, Technion-Israel Institute of Technology. Computer telephony integration Telemarketing
more supervisor stations. It can be independently operated or networked with additional centers, often linked to a corporate computer network, including mainframes, microcomputers/servers and LANs. Increasingly, the voice and data pathways into the center are linked through a set of new technologies called computer telephony integration. The contact center is a central point from which all customer contacts are managed. Through contact centers, valuable information about the company is routed to the appropriate people, contacts are tracked and data is gathered. It is generally a part of the company's customer relationship management infrastructure. The majority of large companies use contact centers as a means of managing their customer interactions. These centers can be operated either by an in-house department or by outsourcing customer interaction to a third-party agency (known as outsourced call centres). History Answering services, as known from the 1960s through the 1980s, and somewhat earlier and later, involved a business that specifically provided the service. Primarily by the use of an off-premises extension (OPX) for each subscribing business, connected at a switchboard at the answering service business, the answering service would answer the otherwise unattended phones of the subscribing businesses with a live operator. The live operator could take messages or relay information, doing so with greater human interactivity than a mechanical answering machine. Although undoubtedly more costly (the human service, the cost of setting up and paying the phone company for the OPX on a monthly basis), it had the advantage of being more ready to respond to the unique needs of after-hours callers. The answering service operators also had the option of calling the client and alerting them to particularly important calls. The origins of call centers date back to the 1960s with the UK-based Birmingham Press and Mail, which installed Private Automated Business Exchanges (PABX) to have rows of agents handling customer contacts. By 1973, call centers received mainstream attention after Rockwell International patented its Galaxy Automatic Call Distributor (GACD) for a telephone booking system as well as the popularization of telephone headsets as seen on televised NASA Mission Control Center events. During the late 1970s, call center technology expanded to include telephone sales, airline reservations, and banking systems. The term "call center" was first published and recognised by the Oxford English Dictionary in 1983. The 1980s saw the development of toll-free telephone numbers to increase the efficiency of agents and overall call volume. Call centers increased with the deregulation of long-distance calling and growth in information-dependent industries. As call centres expanded, unionisation occurred in North America to gain members including the Communications Workers of America and the United Steelworkers. In Australia, the National Union of Workers represents unionised workers; their activities form part of the Australian labour movement. In Europe, Uni Global Union of Switzerland is involved in assisting unionisation in this realm and in Germany Vereinte Dienstleistungsgewerkschaft represents call centre workers. During the 1990s, call centres expanded internationally and developed into two additional subsets of communication, contact centres, and outsourced bureau centres.
A contact centre is defined as a coordinated system of people, processes, technologies, and strategies that provides access to information, resources, and expertise, through appropriate channels of communication, enabling interactions that create value for the customer and organization. In contrast to in-house management, outsourced bureau contact centres are a model of contact centre that provides services on a "pay per use" basis. The overheads of the contact centre are shared by many clients, thereby supporting a very cost-effective model, especially for low volumes of calls. The modern contact centre includes automated call blending of inbound and outbound calls as well as predictive dialing capabilities, dramatically increasing agents' productivity. The latest implementations, with more complex systems, require highly skilled operational and management staff who can use multichannel online and offline tools to improve customer interactions. Technology Call centre technologies include: speech recognition software, which allows Interactive Voice Response (IVR) systems to handle the first levels of customer support; text mining; natural language processing to allow better customer handling; agent training via interactive scripting and automatic mining using best practices from past interactions; support automation; and many other technologies to improve agent productivity and customer satisfaction. Automatic lead selection or lead steering is also intended to improve efficiencies, both for inbound and outbound campaigns. This allows inbound calls to be directly routed to the appropriate agent for the task, whilst minimising wait times and long lists of irrelevant options for people calling in. For outbound calls, lead selection allows management to designate what type of leads go to which agent based on factors including skill, socioeconomic factors, past performance, and percentage likelihood of closing a sale per lead. The universal queue standardises the processing of communications across multiple technologies such as fax, phone, and email. The virtual queue provides callers with an alternative to waiting on hold when no agents are available to handle inbound call demand. Premises-based technology Historically, call centres have been built on Private branch exchange (PBX) equipment owned, hosted, and maintained by the call centre operator. The PBX can provide functions such as automatic call distribution, interactive voice response, and skills-based routing (a minimal illustrative sketch of skills-based routing appears at the end of this passage). Virtual call centre In a virtual call centre model, the call centre operator (business) pays a monthly or annual fee to a vendor that hosts the call centre telephony and data equipment in its own facility or in the cloud. In this model, the operator does not own, operate or host the equipment on which the call centre runs. Agents connect to the vendor's equipment through traditional PSTN telephone lines, or over voice over IP. Calls to and from prospects or contacts originate from or terminate at the vendor's data centre, rather than at the call centre operator's premises. The vendor's telephony equipment (at times data servers) then connects the calls to the call centre operator's agents. Virtual call centre technology allows people to work from home or any other location instead of in a traditional, centralised, call centre location, which increasingly allows people 'on the go' or with physical or other disabilities to work from desired locations, that is, without leaving their homes.
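Skills-based routing, mentioned above under premises-based technology, can be illustrated with a toy matcher. The sketch below is purely hypothetical: the agent names, skill tags and scoring rule are invented for illustration, and a production automatic call distributor would also weigh queue time, agent occupancy and service-level targets.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    skills: set          # e.g. {"billing", "spanish"} (illustrative tags)
    busy: bool = False

def route_call(required_skills, agents):
    """Pick the free agent covering the most required skills; None if nobody qualifies."""
    candidates = [a for a in agents if not a.busy and required_skills & a.skills]
    if not candidates:
        return None      # call stays in the queue (or overflows to another group)
    return max(candidates, key=lambda a: len(required_skills & a.skills))

# Hypothetical agent pool and one incoming call needing billing help in Spanish
pool = [Agent("A", {"billing"}), Agent("B", {"billing", "spanish"}), Agent("C", {"tech"})]
best = route_call({"billing", "spanish"}, pool)
print(best.name if best else "queued")   # -> "B"
```

The design point is simply that routing is a matching problem between call attributes and agent attributes; real systems layer priorities, wait-time bounds and overflow rules on top of this basic match.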
The only required equipment is Internet access and a workstation. Companies increasingly prefer virtual call centre services due to the cost advantage. Companies can start their call centre business immediately without installing the basic infrastructure like Dialer, ACD and
who instructed him to keep careful records of his observations. Messier's first documented observation was that of the Mercury transit of 6 May 1753, followed by his observations journals at Cluny Hotel and at the French Navy observatories. In 1764, Messier was made a fellow of the Royal Society; in 1769, he was elected a foreign member of the Royal Swedish Academy of Sciences; and on 30 June 1770, he was elected to the French Academy of Sciences. Messier discovered 13 comets: C/1760 B1 (Messier) C/1763 S1 (Messier) C/1764 A1 (Messier) C/1766 E1 (Messier) C/1769 P1 (Messier) D/1770 L1 (Lexell) C/1771 G1 (Messier) C/1773 T1 (Messier) C/1780 U2 (Messier) C/1788 W1 (Messier) C/1793 S2 (Messier) C/1798 G1 (Messier) C/1785 A1 (Messier-Méchain) He also co-discovered Comet C/1801 N1, a discovery shared with several other observers including Pons, Méchain, and Bouvard. (Comet Pons-Messier-Méchain-Bouvard) Near the end of his life, Messier self-published a booklet connecting the great comet of 1769 to the birth of Napoleon, who was in power at the time of publishing. According to Maik Meyer: Messier is buried in Père Lachaise Cemetery, Paris, in Section 11. The grave is faintly inscribed,
in the area of the sky Messier could observe, from the north celestial pole to a declination of about −35.7° . They are not organized scientifically by object type, or by location. The first version of Messier's catalogue contained 45 objects and was published in 1774 in the journal of the French Academy of Sciences in Paris. In addition to his own discoveries, this version included objects previously observed by other astronomers, with only 17 of the 45 objects being discovered by Messier himself. By 1780 the catalog had increased to 80 objects. The final version of the catalogue was published in 1781, in the 1784 issue of Connaissance des Temps. The final list of Messier objects had grown to 103. On several occasions between 1921 and 1966, astronomers and historians discovered evidence of another seven objects that were observed either by Messier or by Méchain, shortly after the final version was published. These seven objects, M 104 through M 110, are accepted by astronomers as "official" Messier objects. The objects' Messier designations, from M 1 to M 110, are still used by professional and amateur astronomers today and their relative brightness makes them popular objects in the amateur astronomical community. Legacy The lunar crater Messier and the asteroid 7359 Messier were named in his honour. See also Deep-sky object List of
absorbed the Cemetery H people and gave rise to the Painted Grey Ware culture (to 1400 BC). Together with the Gandhara grave culture and the Ochre Coloured Pottery culture, the Cemetery H culture is considered by some scholars as a factor in the formation of the Vedic civilization. Features The distinguishing features of this culture include:
The use of cremation of human remains. The bones were stored in painted pottery burial urns. This is completely different from the Indus civilization, where bodies were buried in wooden coffins. The urn burials and the "grave skeletons" were nearly contemporaneous.
Reddish pottery, painted in black with antelopes, peacocks etc., and sun or star motifs, with different surface treatments from the earlier period.
Expansion of settlements into the east.
Rice became a main crop.
Apparent breakdown of the widespread trade of the Indus civilization, with materials such as marine shells no longer used.
Continued use of mud brick for building.
Some of the designs painted on the Cemetery H funerary urns have been interpreted through the lens of Vedic mythology: for instance, peacocks with hollow bodies and a small human form inside, which has been interpreted as the souls of the dead, and a hound that can be seen as the hound of Yama, the god of death. This may indicate the introduction of new religious beliefs during this period, but the archaeological evidence does not support the hypothesis that the Cemetery H people were the destroyers of the Harappan cities. Archaeology Cremation in India is first attested in the Cemetery H culture, a practice previously described in the Vedas. The Rigveda contains a reference to the emerging practice, in RV 10.15.14, where the forefathers "both cremated (agnidagdhá-) and uncremated (ánagnidagdha-)" are invoked. See also
Chronological dating
Phases in archaeology
Pottery in the Indian subcontinent
Periodisation of the Indus Valley Civilisation
Ahar-Banas culture (3000 – 1500 BCE)
Late Harappan Phase of IVC (1900 - 1500 BCE)
Cemetery H culture in Punjab
Jhukar-Jhangar culture in Punjab
Rangpur culture in Gujarat
Vedic period
Kuru Kingdom (1200 – c. 500 BCE)
OCP (2000-1500 BCE)
Copper Hoard Culture (2800-1500 BCE), may or may not be independent of Vedic culture
References Sources External links
http://www.harappa.com harappa.com
https://web.archive.org/web/20060908052731/http://pubweb.cc.u-tokai.ac.jp/indus/english/3_1_01.html journal
Archaeological cultures of South Asia Bronze Age cultures of
Statistical, Demographic and Actuarial Sciences. Under fascism In 1926, he was appointed President of the Central Institute of Statistics in Rome. This he organised as a single centre for Italian statistical services. He was a close intimate of Mussolini throughout the 1920s. He resigned from his position within the institute in 1932. In 1927 he published a treatise entitled The Scientific Basis of Fascism. In 1929, Gini founded the Italian Committee for the Study of Population Problems (Comitato italiano per lo studio dei problemi della popolazione) which, two years later, organised the first Population Congress in Rome. A eugenicist as well as a demographer, Gini led an expedition to survey Polish populations, among them the Karaites. Gini was throughout the 1920s a supporter of fascism, and expressed his hope that Nazi Germany and Fascist Italy would emerge as victors in WW2. However, he never supported any measure of exclusion of the Jews. Milestones during the rest of his career include:
In 1933 – vice president of the International Sociological Institute.
In 1934 – president of the Italian Genetics and Eugenics Society.
In 1935 – president of the International Federation of Eugenics Societies in Latin-language Countries.
In 1937 – president of the Italian Sociological Society.
In 1941 – president of the Italian Statistical Society.
In 1957 – Gold Medal for outstanding service to the Italian School.
In 1962 – National Member of the Accademia dei Lincei.
Italian Unionist Movement On October 12, 1944, Gini joined with the Calabrian activist Santi Paladino, and fellow-statistician Ugo Damiani to found the Italian Unionist Movement, for which the emblem was the Stars and Stripes, the Italian flag and a world map. According to the three men, the Government of the United States should annex all free and democratic nations worldwide, thereby transforming itself into a world government, and allowing Washington, D.C. to maintain Earth in a perpetual condition of peace. The party existed up to 1948 but had little success and its aims were not supported by the United States. Organicism and nations Gini was a proponent of organicism and saw nations as organic in nature. Gini shared the view held by Oswald Spengler that populations go through a cycle of birth, growth, and decay. Gini claimed that nations at a primitive level have a high birth rate, but, as they evolve, the upper classes' birth rate drops, while the lower classes' birth rate, though higher, inevitably declines as their stronger members emigrate, die in war, or enter the upper classes. If a nation continues on this path without resistance, Gini claimed the nation would enter a final decadent stage where the nation would degenerate, as marked by a decreasing birth rate, decreasing cultural output, and the lack of imperial conquest. At this point, the decadent nation with its aging population can be overrun by a more youthful and vigorous nation. Gini's organicist theories of nations and natality are believed to have influenced policies of Italian Fascism. Honours The following honorary degrees were conferred upon him: Economics by the Catholic University of the Sacred Heart in Milan (1932), Sociology by the University of Geneva (1934), Sciences by Harvard University (1936), and Social Sciences by the University of Cordoba, Argentina (1963). Partial bibliography
Il sesso dal punto di vista statistica: le leggi della produzione dei sessi (1908)
Sulla misura della concentrazione e della variabilità dei caratteri (1914)
Quelques considérations au sujet de la construction des nombres indices des prix et des questions analogues (1924)
Memorie di metodologia statistica. Vol. 1: Variabilità e Concentrazione (1955)
Memorie di metodologia statistica. Vol. 2: Transvariazione (1960)
References External links
Biography of Corrado Gini at the Metron, the statistics journal he founded.
Paper on "Corrado Gini and Italian Statistics under Fascism" by Giovanni Favero, June 2002
A. Forcina and G. M. Giorgi, "Early Gini's Contributions to Inequality Measurement and Statistical Inference." JEHPS, March 2005
Another photograph
1884 births 1965 deaths People from Motta di Livenza Italian sociologists Italian eugenicists Italian fascists Italian
caused along the length of the crankshaft by the cylinders farthest from the output end acting on the torsional elasticity of the metal. History Crank mechanism Han China The earliest hand-operated cranks appeared in China during the Han Dynasty (202 BC-220 AD). They were used for silk-reeling, hemp-spinning, for the agricultural winnowing fan, in the water-powered flour-sifter, for hydraulic-powered metallurgic bellows, and in the well windlass. The rotary winnowing fan greatly increased the efficiency of separating grain from husks and stalks. However, the potential of the crank of converting circular motion into reciprocal motion never seems to have been fully realized in China, and the crank was typically absent from such machines until the turn of the 20th century. Roman Empire A crank in the form of an eccentrically-mounted handle of the rotary handmill appeared in 5th century BC Celtiberian Spain and ultimately spread across the Roman Empire. A Roman iron crank dating to the 2nd century AD was excavated in Augusta Raurica, Switzerland. The crank-operated Roman mill is dated to the late 2nd century. Evidence for the crank combined with a connecting rod appears in the Hierapolis mill, dating to the 3rd century; they are also found in stone sawmills in Roman Syria and Ephesus dating to the 6th century. The pediment of the Hierapolis mill shows a waterwheel fed by a mill race powering via a gear train two frame saws which cut blocks by the way of some kind of connecting rods and cranks. The crank and connecting rod mechanisms of the other two archaeologically-attested sawmills worked without a gear train. Water-powered marble saws in Germany were mentioned by the late 4th century poet Ausonius; about the same time, these mill types seem also to be indicated by Gregory of Nyssa from Anatolia. Medieval Europe A rotary grindstone operated by a crank handle is shown in the Carolingian manuscript Utrecht Psalter; the pen drawing of around 830 goes back to a late antique original. Cranks used to turn wheels are also depicted or described in various works dating from the tenth to thirteenth centuries. The first depictions of the compound crank in the carpenter's brace appear between 1420 and 1430 in northern European artwork. The rapid adoption of the compound crank can be traced in the works of an unknown German engineer writing on the state of military technology during the Hussite Wars: first, the connecting-rod, applied to cranks, reappeared; second, double-compound cranks also began to be equipped with connecting-rods; and third, the flywheel was employed for these cranks to get them over the 'dead-spot'. The concept was much improved by the Italian engineer and writer Roberto Valturio in 1463, who devised a boat with five sets, where the parallel cranks are all joined to a single power source by one connecting-rod, an idea also taken up by his compatriot Italian painter Francesco di Giorgio. The crank had become common in Europe by the early 15th century, as seen in the works of the military engineer Konrad Kyeser (1366–after 1405). Devices depicted in Kyeser's Bellifortis include cranked windlasses for spanning siege crossbows, cranked chain of buckets for water-lifting and cranks fitted to a wheel of bells. Kyeser also equipped the Archimedes' screws for water-raising with a crank handle, an innovation which subsequently replaced the ancient practice of working the pipe by treading. 
Pisanello painted a piston-pump driven by a water-wheel and operated by two simple cranks and two connecting-rods. The 15th century also saw the introduction of cranked rack-and-pinion devices, called cranequins, which were fitted to the crossbow's stock as a means of exerting even more force while spanning the missile weapon. In the textile industry, cranked reels for winding skeins of yarn were introduced. Crankshaft Medieval Near East The non-manual crank appears in several of the hydraulic devices described by the Banū Mūsā brothers in their 9th-century Book of Ingenious Devices. These automatically operated cranks appear in several devices, two of which contain an action which approximates to that of a crankshaft, anticipating Al-Jazari's invention by several centuries and its first appearance in Europe by over five centuries. However, the automatic crank described by the Banu Musa would not have allowed a full rotation, but only a small modification was required to convert it to a crankshaft. Arab engineer Al-Jazari (1136–1206), in the Artuqid Sultanate, described a crank and connecting rod system in a rotating machine in two of his water-raising machines. The author Sally Ganchy identified a crankshaft in his twin-cylinder pump mechanism, including both the crank and shaft mechanisms. Renaissance Europe The Italian physician Guido da Vigevano (c. 1280−1349), planning for a new crusade, made illustrations for a paddle boat and war carriages that were propelled by manually turned compound cranks and gear wheels, identified as an early crankshaft prototype by Lynn Townsend White. The Luttrell Psalter, dating to around 1340, describes a grindstone which was rotated by two cranks, one at each end of its axle; the geared hand-mill, operated either with one or two cranks, appeared later in the 15th century. Around 1480, the early medieval rotary grindstone was improved with a treadle and crank mechanism. Cranks mounted on push-carts first appear in a German engraving of 1589. Crankshafts were also described by Leonardo da Vinci (1452–1519) and a Dutch farmer and windmill owner by the name of Cornelis Corneliszoon van Uitgeest in 1592. His wind-powered sawmill used a crankshaft to convert a windmill's circular motion into a back-and-forward motion powering the saw. Corneliszoon was granted a patent for his crankshaft in 1597. Modern Europe From the 16th century onwards, evidence of cranks and connecting rods integrated into machine design becomes abundant in the technological treatises of the period: Agostino Ramelli's The Diverse and Artifactitious Machines of 1588 depicts eighteen examples, a number that rises in the Theatrum Machinarum Novum by Georg Andreas Böckler to 45 different machines. Cranks were formerly common on some machines in the early 20th century; for example almost all phonographs before the 1930s were powered by clockwork motors wound with cranks. Reciprocating piston engines use cranks to convert the linear piston motion into rotational motion. Internal combustion engines of early 20th century automobiles were usually started with hand cranks, before electric starters came into general use. The 1918 Reo owner's manual describes how to hand crank the automobile: First: Make sure the gear shifting lever is in neutral position. Second: The clutch pedal is unlatched and the clutch engaged. The brake pedal is pushed forward as far as possible setting brakes on the rear wheel.
Third: See that spark control lever, which is the short lever located on top of the steering wheel on the right side, is back as far as possible toward the driver and the long lever, on top of the steering column controlling the carburetor, is pushed forward about one inch from its retarded position. Fourth: Turn ignition switch to point marked "B" or "M" Fifth: Set the carburetor control on the steering column to the point marked "START." Be sure there is gasoline in the carburetor. Test for this by pressing down on the small pin projecting from the front of the bowl until the carburetor floods. If it fails to flood it shows that the fuel is not being delivered to the carburetor properly and the motor cannot be expected to start. See instructions on page 56 for filling the vacuum tank. Sixth: When it is certain the carburetor has a supply of fuel, grasp the handle of starting crank, push in endwise to engage ratchet with crank shaft pin and turn over the motor by giving a quick upward pull. Never push down, because if for any reason the motor should kick back, it would endanger the operator. Internal combustion engines Large engines are usually multicylinder to reduce pulsations from individual firing strokes, with more than one piston attached to a complex crankshaft. Many small engines, such as those found in mopeds or garden machinery, are single cylinder and use only a single piston, simplifying crankshaft design. A crankshaft is subjected to enormous stresses, potentially equivalent of several tonnes of force. The crankshaft is connected to the fly-wheel (used to smooth out shock and convert
to the 120 degree spacing of the crankshaft. The same engine, however, can be made to provide evenly spaced power pulses by using a crankshaft with an individual crank throw for each cylinder, spaced so that the pistons are actually phased 120° apart, as in the GM 3800 engine (a short numerical sketch of this phasing and the resulting firing intervals appears at the end of this passage). While most production V8 engines use four crank throws spaced 90° apart, high-performance V8 engines often use a "flat" crankshaft with throws spaced 180° apart, essentially resulting in two straight-four engines running on a common crankcase. The difference can be heard as the flat-plane crankshafts result in the engine having a smoother, higher-pitched sound than cross-plane (for example, IRL IndyCar Series compared to NASCAR Sprint Cup Series, or a Ferrari 355 compared to a Chevrolet Corvette). This type of crankshaft was also used on early types of V8 engines. See the main article on crossplane crankshafts. Engine balance For some engines it is necessary to provide counterweights for the reciprocating mass of each piston and connecting rod to improve engine balance. These are typically cast as part of the crankshaft but, occasionally, are bolt-on pieces. While counterweights add a considerable amount of weight to the crankshaft, they provide a smoother-running engine and allow higher RPM levels to be reached. Flying arms In some engine configurations, the crankshaft contains direct links between adjacent crank pins, without the usual intermediate main bearing. These links are called flying arms. This arrangement is sometimes used in V6 and V8 engines, as it enables the engine to be designed with different V angles than would otherwise be required to create an even firing interval, while still using fewer main bearings than would normally be required with a single piston per crank throw. This arrangement reduces weight and engine length at the expense of less crankshaft rigidity. Rotary aircraft engines Some early aircraft engines were a rotary engine design, where the crankshaft was fixed to the airframe and instead the cylinders rotated with the propeller. Radial engines The radial engine is a reciprocating-type internal combustion engine configuration in which the cylinders point outward from a central crankshaft like the spokes of a wheel. It resembles a stylized star when viewed from the front, and is called a "star engine" (German Sternmotor, French Moteur en étoile) in some languages. The radial configuration was very commonly used in aircraft engines before turbine engines became predominant. Construction Crankshafts can be monolithic (made in a single piece) or assembled from several pieces. Monolithic crankshafts are most common, but some smaller and larger engines use assembled crankshafts. Forging, casting, and machining Crankshafts can be forged from a steel bar, usually through roll forging, or cast in ductile steel. Today more and more manufacturers tend to favor the use of forged crankshafts due to their lighter weight, more compact dimensions and better inherent damping. With forged crankshafts, vanadium microalloyed steels are mostly used as these steels can be air cooled after reaching high strengths without additional heat treatment, with the exception of the surface hardening of the bearing surfaces. The low alloy content also makes the material cheaper than high alloy steels. Carbon steels are also used, but these require additional heat treatment to reach the desired properties.
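Returning to the crank-throw spacing discussed at the start of this passage: in a four-stroke engine each cylinder fires once per 720° of crankshaft rotation, so evenly spaced power pulses require firing events 720°/n apart for n cylinders. The sketch below only does that arithmetic; the cylinder counts are illustrative and say nothing about any particular engine's actual crank layout.

```python
def even_firing_interval(cylinders, four_stroke=True):
    """Crank angle between evenly spaced firing events."""
    cycle = 720.0 if four_stroke else 360.0   # degrees of crank rotation per full engine cycle
    return cycle / cylinders

for n in (4, 6, 8, 12):
    print(f"{n} cylinders: one firing event every {even_firing_interval(n):.0f} degrees of crank rotation")
# 4 -> 180, 6 -> 120 (matching the 120-degree phasing noted above), 8 -> 90, 12 -> 60
```

This is why a V6 needs its pistons phased 120° apart for even firing, and why a cross-plane V8's 90° throw spacing lines up with a 90-degree firing interval.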
Cast iron crankshafts are today mostly found in cheaper production engines (such as those found in the Ford Focus diesel engines) where the loads are lower. Some engines also use cast iron crankshafts for low output versions while the more expensive high output version uses forged steel. Crankshafts can also be machined from billet, often a bar of high quality vacuum remelted steel. Though the fiber flow (local inhomogeneities of the material's chemical composition generated during casting) does not follow the shape of the crankshaft (which is undesirable), this is usually not a problem since higher quality steels, which normally are difficult to forge, can be used. Per unit, these crankshafts tend to be very expensive due to the large amount of material that must be removed with lathes and milling machines, the high material cost, and the additional heat treatment required. However, since no expensive tooling is needed, this production method allows small production runs without high up-front costs. In an effort to reduce costs, used crankshafts may also be machined. A good core may often be easily reconditioned by a crankshaft grinding process. Severely damaged crankshafts may also be repaired with a welding operation, prior to grinding, that utilizes a submerged arc welding machine. To accommodate the smaller journal diameters a ground crankshaft has, and possibly an oversized thrust dimension, undersize engine bearings are used to allow for precise clearances during operation. Machined or remanufactured crankshafts are finished to exact tolerances, with no odd-size crankshaft bearings or journals. Thrust surfaces are micro-polished to provide precise surface finishes for smooth engine operation and reduced thrust bearing wear. Every journal is inspected and measured with critical accuracy. After machining, oil holes are chamfered to improve lubrication and every journal is polished to a smooth finish for long bearing life. Remanufactured crankshafts are thoroughly cleaned with special emphasis on flushing and brushing out oil passages to remove any contaminants. Remanufacturing a crankshaft typically involves the following steps: Stress on crankshafts The shaft is subjected to various forces but generally needs to be analysed in two positions. Firstly, failure may occur at the position of maximum bending; this may be at the centre of the crank or at either end. In
Clinical nurse specialist
Coagulase-negative staphylococcus
Connectedness to nature scale
Conserved non-coding sequence of DNA
Crigler–Najjar syndrome
Crystallography and NMR system, a software library
Color Naming System
CNS (DNS server), Caching Name Server, a DNS server software product
Military
CNS (chemical weapon), a mixture of chloroacetophenone, chloropicrin and chloroform
Chief of the Naval Staff (disambiguation), in several countries
Former Taiwanese navy ship prefix
Education
Cicero-North Syracuse High School, New York, US
City of Norwich School, England
Computation and Neural Systems, a Caltech program
Organisations
Canadian Nuclear Society
Congress of Neurological Surgeons
US Corporation for National Service, later Corporation for National and Community Service
Council for National Security, 2006 military of Thailand
Szekler National Council, Romania
Media
Catholic News Service
China News Service
CNSNews.com, formerly Cybercast News Service
Other
Cairns International
allowing for administration of certain pharmaceuticals and drugs. Brain At the anterior end of the spinal cord lies the brain. The brain makes up the largest portion of the CNS. It is often the main structure referred to when speaking of the nervous system in general. The brain is the major functional unit of the CNS. While the spinal cord has certain processing ability such as that of spinal locomotion and can process reflexes, the brain is the major processing unit of the nervous system. Brainstem The brainstem consists of the medulla, the pons and the midbrain. The medulla can be referred to as an extension of the spinal cord, which both have similar organization and functional properties. The tracts passing from the spinal cord to the brain pass through here. Regulatory functions of the medulla nuclei include control of blood pressure and breathing. Other nuclei are involved in balance, taste, hearing, and control of muscles of the face and neck. The next structure rostral to the medulla is the pons, which lies on the ventral anterior side of the brainstem. Nuclei in the pons include pontine nuclei which work with the cerebellum and transmit information between the cerebellum and the cerebral cortex. In the dorsal posterior pons lie nuclei that are involved in the functions of breathing, sleep, and taste. The midbrain, or mesencephalon, is situated above and rostral to the pons. It includes nuclei linking distinct parts of the motor system, including the cerebellum, the basal ganglia and both cerebral hemispheres, among others. Additionally, parts of the visual and auditory systems are located in the midbrain, including control of automatic eye movements. The brainstem at large provides entry and exit to the brain for a number of pathways for motor and autonomic control of the face and neck through cranial nerves, Autonomic control of the organs is mediated by the tenth cranial nerve. A large portion of the brainstem is involved in such autonomic control of the body. Such functions may engage the heart, blood vessels, and pupils, among others. The brainstem also holds the reticular formation, a group of nuclei involved in both arousal and alertness. Cerebellum The cerebellum lies behind the pons. The cerebellum is composed of several dividing fissures and lobes. Its function includes the control of posture and the coordination of movements of parts of the body, including the eyes and head, as well as the limbs. Further, it is involved in motion that has been learned and perfected through practice, and it will adapt to new learned movements. Despite its previous classification as a motor structure, the cerebellum also displays connections to areas of the cerebral cortex involved in language and cognition. These connections have been shown by the use of medical imaging techniques, such as functional MRI and Positron emission tomography. The body of the cerebellum holds more neurons than any other structure of the brain, including that of the larger cerebrum, but is also more extensively understood than other structures of the brain, as it includes fewer types of different neurons. It handles and processes sensory stimuli, motor information, as well as balance information from the vestibular organ. Diencephalon The two structures of the diencephalon worth noting are the thalamus and the hypothalamus. 
The thalamus acts as a linkage between incoming pathways from the peripheral nervous system, as well as the optic nerve (though it does not receive input from the olfactory nerve), and the cerebral hemispheres. Previously it was considered only a "relay station", but it is engaged in the sorting of information that will reach the cerebral hemispheres (neocortex). Apart from its function of sorting information from the periphery, the thalamus also connects the cerebellum and basal ganglia with the cerebrum. In common with the aforementioned reticular system, the thalamus is involved in wakefulness and consciousness, such as through the SCN.
The hypothalamus engages in functions of a number of primitive emotions or feelings such as hunger, thirst and maternal bonding. This is regulated partly through control of secretion of hormones from the pituitary gland. Additionally the hypothalamus plays a role in motivation and many other behaviors of the individual.
Cerebrum
The cerebrum, or cerebral hemispheres, makes up the largest visible portion of the human brain. Various structures combine to form the cerebral hemispheres, among others: the cortex, basal ganglia, amygdala and hippocampus. The hemispheres together control a large portion of the functions of the human brain such as emotion, memory, perception and motor functions. Apart from this, the cerebral hemispheres account for the cognitive capabilities of the brain. Connecting the two hemispheres is the corpus callosum, as well as several additional commissures. One of the most important parts of the cerebral hemispheres is the cortex, made up of gray matter covering the surface of the brain. Functionally, the cerebral cortex is involved in planning and carrying out everyday tasks. The hippocampus is involved in the storage of memories, the amygdala plays a role in the perception and communication of emotion, while the basal ganglia play a major role in the coordination of voluntary movement.
Difference from the peripheral nervous system
The CNS differs from the PNS, which consists of neurons, axons, and Schwann cells. Oligodendrocytes and Schwann cells have similar functions in the CNS and PNS, respectively. Both act to add myelin sheaths to the axons, which acts as a form of insulation allowing for better and faster propagation of electrical signals along the nerves. Axons in the CNS are often very short, barely a few millimeters, and do not need the same degree of insulation as peripheral nerves. Some peripheral nerves can be over 1 meter in length, such as the nerves to the big toe. To ensure signals move at sufficient speed, myelination is needed. The way in which Schwann cells and oligodendrocytes myelinate nerves differs. A Schwann cell usually myelinates a single axon, completely surrounding it. Sometimes they may myelinate many axons, especially in areas of short axons. Oligodendrocytes usually myelinate several axons. They do this by sending out thin projections of their cell membrane, which envelop and enclose the axon.
Development
During early development of the vertebrate embryo, a longitudinal groove on the neural plate gradually deepens and the ridges on either side of the groove (the neural folds) become elevated, and ultimately meet, transforming the groove into a closed tube called the neural tube. The formation of the neural tube is called neurulation. At this stage, the walls of the neural tube contain proliferating neural stem cells in a region called the ventricular zone.
The neural stem cells, principally radial glial cells, multiply and generate neurons through the process of neurogenesis, forming the rudiment of the CNS. The neural tube gives rise to both brain and spinal cord. The anterior (or 'rostral') portion of the neural tube initially differentiates into three brain vesicles (pockets): the prosencephalon at the front, the mesencephalon, and, between the mesencephalon and the spinal cord, the rhombencephalon. By six weeks in the human embryo, the prosencephalon then divides further into the telencephalon and diencephalon, and the rhombencephalon divides into the metencephalon and myelencephalon. The spinal cord is derived from the posterior or 'caudal' portion of the neural tube.
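The derivation just described is a simple branching hierarchy, and it can be restated as a small data structure. The Python sketch below is purely illustrative: the dictionary name and helper function are invented for this example, and its only content is the vesicle relationships given above (plus the standard observation that the mesencephalon remains undivided at the five-vesicle stage).

# Illustrative only: secondary structures derived from each primary vesicle
# (or region) of the neural tube, as described in the text above.
neural_tube_derivatives = {
    "prosencephalon": ["telencephalon", "diencephalon"],
    "mesencephalon": ["mesencephalon"],  # remains undivided
    "rhombencephalon": ["metencephalon", "myelencephalon"],
    "caudal neural tube": ["spinal cord"],
}

def derived_structures(primary_vesicle):
    """Return the structures that arise from a given primary vesicle or region."""
    return neural_tube_derivatives.get(primary_vesicle, [])

print(derived_structures("prosencephalon"))  # ['telencephalon', 'diencephalon']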
also be seen macroscopically on brain tissue. The white matter consists of axons and oligodendrocytes, while the gray matter consists of neurons and unmyelinated fibers. Both tissues include a number of glial cells (although the white matter contains more), which are often referred to as supporting cells of the CNS. Different forms of glial cells have different functions, some acting almost as scaffolding for neuroblasts to climb during neurogenesis, such as Bergmann glia, while others such as microglia are a specialized form of macrophage, involved in the immune system of the brain as well as the clearance of various metabolites from the brain tissue. Astrocytes may be involved with both clearance of metabolites and transport of fuel and various beneficial substances to neurons from the capillaries of the brain. Upon CNS injury astrocytes will proliferate, causing gliosis, a form of neuronal scar tissue lacking in functional neurons. The brain (cerebrum as well as midbrain and hindbrain) consists of a cortex, composed of neuronal cell bodies constituting gray matter, while internally there is more white matter that forms tracts and commissures. Apart from cortical gray matter there is also subcortical gray matter making up a large number of different nuclei.
Spinal cord
From and to the spinal cord are projections of the peripheral nervous system in the form of spinal nerves (sometimes segmental nerves). The nerves connect the spinal cord to skin, joints, muscles etc. and allow for the transmission of efferent motor as well as afferent sensory signals and stimuli. This allows for voluntary and involuntary motions of muscles, as well as the perception of senses. In all, 31 spinal nerves project from the spinal cord, some forming plexuses as they branch out, such as the brachial plexus, the sacral plexus, etc. Each spinal nerve will carry both sensory and motor signals, but the nerves synapse at different regions of the spinal cord, either from the periphery to sensory relay neurons that relay the information to the CNS, or from the CNS to motor neurons, which relay the information out. The spinal cord relays information up to the brain through spinal tracts through the final common pathway to the thalamus and ultimately to the cortex.
Cranial nerves
Apart from the spinal cord, there are also peripheral nerves of the PNS that synapse through intermediaries or ganglia directly on the CNS. These 12 nerves exist in the head and neck region and are called cranial nerves. Cranial nerves bring information to the CNS to and from the face, as well as to certain muscles (such as the trapezius muscle, which is innervated by accessory nerves as well as certain cervical spinal nerves). Two pairs of cranial nerves, the olfactory nerves and the optic nerves, are often considered structures of the CNS. This is because they do not synapse first on peripheral ganglia, but directly on CNS neurons. The olfactory epithelium is significant in that it consists of CNS tissue exposed in direct contact with the environment, allowing for administration of certain pharmaceuticals and drugs.
number of organelles (such as mitochondria, ribosomes), and grows in size. In G1 phase, a cell has three options:
continue the cell cycle and enter S phase;
stop the cell cycle and enter G0 phase to undergo differentiation;
become arrested in G1 phase, from which it may enter G0 phase or re-enter the cell cycle.
The deciding point is a checkpoint known as the restriction point or START, and it is regulated by G1/S cyclins, which cause the transition from G1 to S phase. Passage through the G1 checkpoint commits the cell to division.
S phase (DNA replication)
The ensuing S phase starts when DNA synthesis commences; when it is complete, all of the chromosomes have been replicated, i.e., each chromosome consists of two sister chromatids. Thus, during this phase, the amount of DNA in the cell has doubled, though the ploidy and number of chromosomes are unchanged. Rates of RNA transcription and protein synthesis are very low during this phase. An exception to this is histone production, most of which occurs during the S phase.
G2 phase (growth)
G2 phase occurs after DNA replication and is a period of protein synthesis and rapid cell growth to prepare the cell for mitosis. During this phase microtubules begin to reorganize to form a spindle (preprophase). Before proceeding to the mitotic phase, cells must be checked at the G2 checkpoint for any DNA damage within the chromosomes. The G2 checkpoint is mainly regulated by the tumor protein p53. If the DNA is damaged, p53 will either repair the DNA or trigger the apoptosis of the cell. If p53 is dysfunctional or mutated, cells with damaged DNA may continue through the cell cycle, leading to the development of cancer.
Mitotic phase (chromosome separation)
The relatively brief M phase consists of nuclear division (karyokinesis). It is a relatively short period of the cell cycle. M phase is complex and highly regulated. The sequence of events is divided into phases, corresponding to the completion of one set of activities and the start of the next. These phases are sequentially known as: prophase, prometaphase, metaphase, anaphase, telophase. Mitosis is the process by which a eukaryotic cell separates the chromosomes in its cell nucleus into two identical sets in two nuclei. During the process of mitosis the pairs of chromosomes condense and attach to microtubules that pull the sister chromatids to opposite sides of the cell. Mitosis occurs exclusively in eukaryotic cells, but occurs in different ways in different species. For example, animal cells undergo an "open" mitosis, where the nuclear envelope breaks down before the chromosomes separate, while fungi such as Aspergillus nidulans and Saccharomyces cerevisiae (yeast) undergo a "closed" mitosis, where chromosomes divide within an intact cell nucleus.
Cytokinesis phase (separation of all cell components)
Mitosis is immediately followed by cytokinesis, which divides the nuclei, cytoplasm, organelles and cell membrane into two cells containing roughly equal shares of these cellular components. Mitosis and cytokinesis together define the division of the mother cell into two daughter cells, genetically identical to each other and to their parent cell. This accounts for approximately 10% of the cell cycle. Because cytokinesis usually occurs in conjunction with mitosis, "mitosis" is often used interchangeably with "M phase". However, there are many cells where mitosis and cytokinesis occur separately, forming single cells with multiple nuclei in a process called endoreplication.
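The ordered, checkpoint-gated progression described above can be caricatured with a short sketch. The following Python function is a toy illustration only, not a model of the underlying biochemistry: the phase names follow the text, while the boolean flags for passing the restriction point, detecting DNA damage, and having functional p53 are simplifying assumptions introduced for this example.

# Toy sketch of checkpoint-gated cell cycle progression (illustrative only).
def cell_cycle_step(phase, passes_restriction_point, dna_damaged, p53_functional):
    """Return the next phase under the simplified rules described in the text."""
    if phase == "G1":
        # At the restriction point (START) the cell either commits to division or exits to G0.
        return "S" if passes_restriction_point else "G0"
    if phase == "S":
        return "G2"  # DNA replicated: each chromosome now consists of two sister chromatids
    if phase == "G2":
        if dna_damaged:
            # G2 checkpoint: functional p53 repairs the DNA or triggers apoptosis;
            # dysfunctional p53 lets a damaged cell continue (a route toward cancer).
            return "repair_or_apoptosis" if p53_functional else "M"
        return "M"
    if phase == "M":
        return "cytokinesis"
    if phase == "cytokinesis":
        return "G1"  # two daughter cells, each starting again in G1
    return phase  # G0 and terminal states remain where they are

# Example: one undamaged cell that passes the restriction point.
phase = "G1"
for _ in range(5):
    phase = cell_cycle_step(phase, passes_restriction_point=True,
                            dna_damaged=False, p53_functional=True)
    print(phase)  # S, G2, M, cytokinesis, G1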
This occurs most notably among the fungi and slime molds, but is found in various groups. Even in animals, cytokinesis and mitosis may occur independently, for instance during certain stages of fruit fly embryonic development. Errors in mitosis can result in cell death through apoptosis or cause mutations that may lead to cancer. Regulation of eukaryotic cell cycle Regulation of the cell cycle involves processes crucial to the survival of a cell, including the detection and repair of genetic damage as well as the prevention of uncontrolled cell division. The molecular events that control the cell cycle are ordered and directional; that is, each process occurs in a sequential fashion and it is impossible to "reverse" the cycle. Role of cyclins and CDKs Two key classes of regulatory molecules, cyclins and cyclin-dependent kinases (CDKs), determine a cell's progress through the cell cycle. Leland H. Hartwell, R. Timothy Hunt, and Paul M. Nurse won the 2001 Nobel Prize in Physiology or Medicine for their discovery of these central molecules. Many of the genes encoding cyclins and CDKs are conserved among all eukaryotes, but in general, more complex organisms have more elaborate cell cycle control systems that incorporate more individual components. Many of the relevant genes were first identified by studying yeast, especially Saccharomyces cerevisiae; genetic nomenclature in yeast dubs many of these genes cdc (for "cell division cycle") followed by an identifying number, e.g. cdc25 or cdc20. Cyclins form the regulatory subunits and CDKs the catalytic subunits of an activated heterodimer; cyclins have no catalytic activity and CDKs are inactive in the absence of a partner cyclin. When activated by a bound cyclin, CDKs perform a common biochemical reaction called phosphorylation that activates or inactivates target proteins to orchestrate coordinated entry into the next phase of the cell cycle. Different cyclin-CDK combinations determine the downstream proteins targeted. CDKs are constitutively expressed in cells whereas cyclins are synthesised at specific stages of the cell cycle, in response to various molecular signals. General mechanism of cyclin-CDK interaction Upon receiving a pro-mitotic extracellular signal, G1 cyclin-CDK complexes become active to prepare the cell for S phase, promoting the expression of transcription factors that in turn promote the expression of S cyclins and of enzymes required for DNA replication. The G1 cyclin-CDK complexes also promote the degradation of molecules that function as S phase inhibitors by targeting them for ubiquitination. Once a protein has been ubiquitinated, it is targeted for proteolytic degradation by the proteasome. However, results from a recent study of E2F transcriptional dynamics at the single-cell level argue that the role of G1 cyclin-CDK activities, in particular cyclin D-CDK4/6, is to tune the timing rather than the commitment of cell cycle entry. Active S cyclin-CDK complexes phosphorylate proteins that make up the pre-replication complexes assembled during G1 phase on DNA replication origins. The phosphorylation serves two purposes: to activate each already-assembled pre-replication complex, and to prevent new complexes from forming. This ensures that every portion of the cell's genome will be replicated once and only once. The reason for prevention of gaps in replication is fairly clear, because daughter cells that are missing all or part of crucial genes will die. 
However, for reasons related to gene copy number effects, possession of extra copies of certain genes is also deleterious to the daughter cells. Mitotic cyclin-CDK complexes, which are synthesized but inactivated during S and G2 phases, promote the initiation of mitosis by stimulating downstream proteins involved in chromosome condensation and mitotic spindle assembly. A critical complex activated during this process is a ubiquitin ligase known as the anaphase-promoting complex (APC), which promotes degradation of structural proteins associated with the chromosomal kinetochore. APC also targets the mitotic cyclins for degradation, ensuring that telophase and cytokinesis can proceed.
Specific action of cyclin-CDK complexes
Cyclin D is the first cyclin produced in the cells that enter the cell cycle, in response to extracellular signals (e.g. growth factors). Cyclin D levels stay low in resting cells that are not proliferating. Additionally, CDK4/6 and CDK2 are also inactive because CDK4/6 are bound by INK4 family members (e.g., p16), limiting kinase activity. Meanwhile, CDK2 complexes are inhibited by the CIP/KIP proteins such as p21 and p27. When it is time for a cell to enter the cell cycle, which is triggered by a mitogenic stimulus, levels of cyclin D increase. In response to this trigger, cyclin D binds to existing CDK4/6, forming the active cyclin D-CDK4/6 complex. Cyclin D-CDK4/6 complexes in turn mono-phosphorylate the retinoblastoma susceptibility protein (Rb) to pRb. The un-phosphorylated Rb tumour suppressor functions in inducing cell cycle exit and maintaining G0 arrest (senescence). In the last few decades, a model has been widely accepted whereby pRB proteins are inactivated by cyclin D-Cdk4/6-mediated phosphorylation. Rb has 14+ potential phosphorylation sites. Cyclin D-Cdk4/6 progressively phosphorylates Rb to a hyperphosphorylated state, which triggers dissociation of pRB–E2F complexes, thereby inducing G1/S cell cycle gene expression and progression into S phase. However, scientific observations from a recent study show that Rb is present in three types of isoforms: (1) un-phosphorylated Rb in the G0 state; (2) mono-phosphorylated Rb, also referred to as 'hypo-phosphorylated' or 'partially phosphorylated' Rb, in the early G1 state; and (3) inactive hyper-phosphorylated Rb in the late G1 state. In early G1 cells, mono-phosphorylated Rb exists as 14 different isoforms, each of which has a distinct E2F binding affinity. Rb has been found to associate with hundreds of different proteins, and the idea that different mono-phosphorylated Rb isoforms have different protein partners was very appealing. A recent report confirmed that mono-phosphorylation controls Rb's association with other proteins and generates functionally distinct forms of Rb. All of the different mono-phosphorylated Rb isoforms inhibit the E2F transcriptional program and are able to arrest cells in G1 phase. Importantly, different mono-phosphorylated forms of Rb have distinct transcriptional outputs that extend beyond E2F regulation.
In general, the binding of pRb to E2F inhibits the expression of certain G1/S and S transition genes that are E2F targets, including the E-type cyclins. The partial phosphorylation of Rb de-represses this Rb-mediated suppression of E2F target gene expression and begins the expression of cyclin E. The molecular mechanism that causes the cell to switch to cyclin E activation is currently not known, but as cyclin E levels rise, the active cyclin E-CDK2 complex is formed, and Rb is then inactivated by hyper-phosphorylation. Hyperphosphorylated Rb is completely dissociated from E2F, enabling the expression of a wide range of E2F target genes required for driving cells to proceed into S phase [1]. Recently, it has been identified that cyclin D-Cdk4/6 binds to a C-terminal alpha-helix region of Rb that is recognised only by cyclin D and not by the other cyclins (cyclin E, A and B). This observation, based on structural analysis of Rb phosphorylation, supports the view that Rb is phosphorylated to different levels through multiple cyclin-Cdk complexes. This also makes feasible the current model of a simultaneous, switch-like inactivation of all mono-phosphorylated Rb isoforms through one type of Rb hyper-phosphorylation mechanism. In addition, mutational analysis of the cyclin D-Cdk4/6-specific Rb C-terminal helix shows that disrupting cyclin D-Cdk4/6 binding to Rb prevents Rb phosphorylation, arrests cells in G1, and bolsters Rb's function as a tumor suppressor. This cyclin-Cdk-driven cell cycle transition mechanism governs a cell's commitment to the cell cycle and thereby allows cell proliferation. Cancerous cell growth is often accompanied by deregulation of cyclin D-Cdk4/6 activity. The hyperphosphorylated Rb dissociates from the E2F/DP1/Rb complex (which was bound to the E2F responsive genes, effectively "blocking" them from transcription), activating E2F. Activation of E2F results in transcription of various genes like cyclin E, cyclin A, DNA polymerase, thymidine kinase, etc. Cyclin E thus produced binds to CDK2, forming the cyclin E-CDK2 complex, which pushes the cell from G1 to S phase (G1/S, which initiates the G2/M transition). Cyclin B-cdk1 complex activation causes breakdown of the nuclear envelope and initiation of prophase, and subsequently its deactivation causes the cell to exit mitosis. A quantitative study of E2F transcriptional dynamics at the single-cell level using engineered fluorescent reporter cells provided a quantitative framework for understanding the control logic of cell cycle entry, challenging the canonical textbook model. Genes that regulate the amplitude of E2F accumulation, such as Myc, determine the commitment to the cell cycle and S phase entry. G1 cyclin-CDK activities are not the driver of cell cycle entry. Instead, they primarily tune the timing of the E2F increase, thereby modulating the pace of cell cycle progression.
Inhibitors
Endogenous
Two families of genes, the cip/kip (CDK interacting protein/Kinase inhibitory protein) family and the INK4a/ARF (Inhibitor of Kinase 4/Alternative Reading Frame) family, prevent the progression of the cell cycle. Because these genes are instrumental in the prevention of tumor formation, they are known as tumor suppressors. The cip/kip family includes the genes p21, p27 and p57. They halt the cell cycle in G1 phase by binding to and inactivating cyclin-CDK complexes. p21 is activated by p53 (which, in turn, is triggered by DNA damage, e.g. due to radiation). p27 is activated by Transforming Growth Factor β (TGF β), a growth inhibitor.
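The commitment logic described above (mitogens raise cyclin D; cyclin D-CDK4/6 partially phosphorylates Rb; partial de-repression of E2F begins cyclin E expression; cyclin E-CDK2, opposed by CIP/KIP proteins such as p21 and p27, completes Rb hyper-phosphorylation) can be caricatured in a few lines of code. The Python sketch below is a toy, qualitative illustration only: the function name, rate constants, thresholds and the commitment criterion are arbitrary assumptions made for this example, not values taken from the studies cited above.

# Toy, illustrative-only sketch of the G1 commitment cascade and a CIP/KIP-like brake.
# All rates and thresholds are arbitrary assumptions chosen to show the qualitative switch.
def simulate_g1_commitment(mitogen=True, cip_kip_level=0.0, steps=80):
    cyclin_d = rb_phos = cyclin_e = 0.0
    for _ in range(steps):
        # Mitogenic signalling raises cyclin D; turnover keeps it bounded.
        cyclin_d = (cyclin_d + (0.1 if mitogen else 0.0)) * 0.95
        # Cyclin D-CDK4/6 takes Rb only to a partially phosphorylated level (here: 0.5).
        rb_phos = min(0.5, rb_phos + 0.05 * cyclin_d) if rb_phos < 0.5 else rb_phos
        # Partially phosphorylated Rb de-represses E2F targets, which drives cyclin E expression.
        free_e2f = rb_phos
        cyclin_e += 0.1 * free_e2f
        # Cyclin E-CDK2 (opposed by CIP/KIP proteins such as p21/p27)
        # is what pushes Rb to the fully hyper-phosphorylated, inactive state.
        cdk2_activity = max(0.0, cyclin_e - cip_kip_level)
        rb_phos = min(1.0, rb_phos + 0.05 * cdk2_activity)
    return rb_phos > 0.9  # arbitrary criterion for "Rb hyper-phosphorylated, cell committed"

print(simulate_g1_commitment(mitogen=True))                      # True: the cascade fires
print(simulate_g1_commitment(mitogen=False))                     # False: no cyclin D, Rb stays unphosphorylated
print(simulate_g1_commitment(mitogen=True, cip_kip_level=10.0))  # False: the CIP/KIP brake keeps cyclin E-CDK2 inactive

Varying cip_kip_level in this sketch merely restates, qualitatively, the point made above that CIP/KIP proteins halt the cycle in G1 by keeping cyclin-CDK complexes inactive.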
The INK4a/ARF family includes p16INK4a, which binds to CDK4 and arrests the cell cycle in G1 phase, and p14ARF which prevents p53 degradation. Synthetic Synthetic inhibitors of Cdc25 could also be useful for the arrest of cell cycle and therefore be useful as antineoplastic and anticancer agents. Many human cancers possess the hyper-activated Cdk 4/6 activities. Given the observations of cyclin D-Cdk 4/6 functions, inhibition of Cdk 4/6 should result in preventing a malignant tumor from proliferating. Consequently, scientists have tried to invent the synthetic Cdk4/6 inhibitor as Cdk4/6 has been characterized to be a therapeutic target for anti-tumor effectiveness. Three Cdk4/6 inhibitors - palbociclib, ribociclib, and abemaciclib - currently received FDA approval for clinical use to treat advanced-stage or metastatic, hormone-receptor-positive (HR-positive, HR+), HER2-negative (HER2-) breast cancer. For example, palbociclib is an orally active CDK4/6 inhibitor which has demonstrated improved outcomes for ER-positive/HER2-negative advanced breast cancer. The main side effect is neutropenia which can be managed by dose reduction. Cdk4/6 targeted therapy will only treat cancer types where Rb is expressed. Cancer cells with loss of Rb have primary resistance to Cdk4/6 inhibitors. Transcriptional regulatory network Current evidence suggests that a semi-autonomous transcriptional network acts in concert with the CDK-cyclin machinery to regulate the cell cycle. Several gene expression studies in Saccharomyces cerevisiae have identified 800–1200 genes that change expression over the course of the cell cycle. They are transcribed at high levels at specific points in the cell cycle, and remain at lower levels throughout the rest of the cycle. While the set of identified genes differs between studies due to the computational methods and criteria used to identify them, each study indicates that a large portion of yeast genes are temporally regulated. Many periodically expressed genes are driven by transcription factors that are also periodically expressed. One screen of single-gene knockouts identified 48 transcription factors (about 20% of all non-essential transcription factors) that show cell cycle progression defects. Genome-wide studies using high throughput technologies have identified the transcription factors that bind to the promoters of yeast genes, and correlating these findings with temporal expression patterns have allowed the identification of transcription factors that drive phase-specific gene expression. The expression profiles of these transcription factors are driven by the transcription factors that peak in the prior phase, and computational models have shown that a CDK-autonomous network of these transcription factors is sufficient to produce steady-state oscillations in gene expression). Experimental evidence also suggests that gene expression can oscillate with the period seen in dividing wild-type cells independently of the CDK machinery. Orlando et al. used microarrays to measure the expression of a set of 1,271 genes that they identified as periodic in both wild type cells and cells lacking all S-phase and mitotic cyclins (clb1,2,3,4,5,6). Of the 1,271 genes assayed, 882 continued to be expressed in the cyclin-deficient cells at the same time as in the wild type cells, despite the fact that the cyclin-deficient cells arrest at the border between G1 and S phase. 
However, 833 of the genes assayed changed behavior between the wild type and mutant cells, indicating that these genes are likely directly or indirectly regulated by the CDK-cyclin machinery. Some genes that continued to be expressed on time in the mutant cells were also expressed at different levels in the mutant and wild type cells. These findings suggest that while the transcriptional network may oscillate independently of the CDK-cyclin oscillator, they are coupled in a manner that requires both to ensure the proper timing of cell cycle events. Other work indicates that phosphorylation, a post-translational modification, of cell cycle transcription factors by Cdk1 may alter the localization or activity of the transcription factors in order to tightly control timing of target genes. While oscillatory transcription plays a key role in the progression of the yeast cell cycle, the CDK-cyclin machinery operates independently in the early embryonic cell cycle. Before the midblastula transition, zygotic transcription does not occur and all needed proteins, such as the B-type cyclins, are translated from maternally loaded mRNA. DNA replication and DNA replication origin activity Analyses of synchronized cultures of Saccharomyces cerevisiae under conditions that prevent DNA replication initiation without delaying cell cycle progression showed that origin licensing decreases the expression of genes with origins near their 3' ends, revealing that downstream origins can regulate the expression of upstream genes. This confirms previous predictions from mathematical modeling of a global causal coordination between DNA replication origin activity and mRNA expression, and shows that mathematical modeling of
of graphs, a binary operation on graphs
Cartesian tree, a binary tree in computer science
Philosophy
Cartesian anxiety, a hope that studying the world will give us unchangeable knowledge of ourselves and the world
Cartesian circle, a potential mistake in reasoning
Cartesian doubt, a form of methodical skepticism as a basis for philosophical rigor
Cartesian dualism, the philosophy of the distinction between mind and body
Cartesianism, the philosophy of René Descartes
Cartesianists, followers of Cartesianism
Cartesian Meditations, a work by Edmund Husserl
Cartesian linguistics, a work by Noam Chomsky
Cartesian theatre, a derisive view
momentum. During compression connection, the dancers are pushing towards each other. In a neutral position, the hands do not impart any force other than the touch of the follower's hands in the leader's. In swing dances, tension and compression may be maintained for a significant period of time. In other dances, such as Latin, tension and compression may be used as indications of upcoming movement. However, in both styles, tension and compression do not signal immediate movement: the follow must be careful not to move prior to actual movement by the lead. Until then, the dancers must match pressures without moving their hands. In some styles of Lindy Hop, the tension may become quite high without initiating movement. The general rule for open connections is that moves of the leader's hands back, forth, left or right are originated through moves of the entire body. Accordingly, for the follower, a move of the connected hand is immediately transformed into the corresponding move of the body. Tensing the muscles and locking the arm achieves this effect but is neither comfortable nor correct. Such tension eliminates the subtler communication in the connection, and eliminates free movement up and down, such as is required to initiate many turns. Instead of just tensing the arms, connection is achieved by engaging the shoulder, upper body and torso muscles. Movement originates in the body's core. A leader leads by moving himself and maintaining frame and connection. Different forms of dance and different movements within each dance may call for differences in the connection. In some dances the separation distance between the partners remains fairly constant. In others, e.g. Modern Jive, moving closer together and further apart is fundamental to the dance, requiring flexion and extension of the arms, alternating compression and tension. The connection between two partners has a different feel in every dance and with every partner. Good social dancers adapt to the conventions of the dance and the responses of their partners.
See also
Frame
Dance move
Lead and
ethnographers into two: Siwa and Buda. The Siwa caste was subdivided into five: Kemenuh, Keniten, Mas, Manuba and Petapan. This classification was to accommodate the observed marriage between higher-caste Brahmana men with lower-caste women. The other castes were similarly further sub-classified by 19th-century and early-20th-century ethnographers based on numerous criteria ranging from profession, endogamy or exogamy or polygamy, and a host of other factors in a manner similar to castas in Spanish colonies such as Mexico, and caste system studies in British colonies such as India. Philippines In the Philippines, pre-colonial societies do not have a single social structure. The class structures can be roughly categorised into four types: Classless societies - egalitarian societies with no class structure. Examples include the Mangyan and the Kalanguya peoples. Warrior societies - societies where a distinct warrior class exists, and whose membership depends on martial prowess. Examples include the Mandaya, Bagobo, Tagakaulo, and B'laan peoples who had warriors called the bagani or magani. Similarly, in the Cordillera highlands of Luzon, the Isneg and Kalinga peoples refer to their warriors as mengal or maingal. This society is typical for head-hunting ethnic groups or ethnic groups which had seasonal raids (mangayaw) into enemy territory. Petty plutocracies - societies which have a wealthy class based on property and the hosting of periodic prestige feasts. In some groups, it was an actual caste whose members had specialised leadership roles, married only within the same caste, and wore specialised clothing. These include the kadangyan of the Ifugao, Bontoc, and Kankanaey peoples, as well as the baknang of the Ibaloi people. In others, though wealth may give one prestige and leadership qualifications, it was not a caste per se. Principalities - societies with an actual ruling class and caste systems determined by birthright. Most of these societies are either Indianized or Islamized to a degree. They include the larger coastal ethnic groups like the Tagalog, Kapampangan, Visayan, and Moro societies. Most of them were usually divided into four to five caste systems with different names under different ethnic groups that roughly correspond to each other. The system was more or less feudalistic, with the datu ultimately having control of all the lands of the community. The land is subdivided among the enfranchised classes, the sakop or sa-op (vassals, lit. "those under the power of another"). The castes were hereditary, though they were not rigid. They were more accurately a reflection of the interpersonal political relationships, a person is always the follower of another. People can move up the caste system by marriage, by wealth, or by doing something extraordinary; and conversely they can be demoted, usually as criminal punishment or as a result of debt. Shamans are the exception, as they are either volunteers, chosen by the ranking shamans, or born into the role by innate propensity for it. They are enumerated below from the highest rank to the lowest: Royalty - (Visayan: kadatoan) the datu and immediate descendants. They are often further categorised according to purity of lineage. The power of the datu is dependent on the willingness of their followers to render him respect and obedience. Most roles of the datu were judicial and military. In case of an unfit datu, support may be withdrawn by his followers. 
Datu were almost always male, though in some ethnic groups like the Banwaon people, the female shaman (babaiyon) co-rules as the female counterpart of the datu. Nobility - (Visayan: tumao; Tagalog: maginoo; Kapampangan ginu; Tausug: bangsa mataas) the ruling class, either inclusive of or exclusive of the royal family. Most are descendants of the royal line or gained their status through wealth or bravery in battle. They owned lands and subjects, from whom they collected taxes. Shamans - (Visayan: babaylan; Tagalog: katalonan) the spirit mediums, usually female or feminised men. While they weren't technically a caste, they commanded the same respect and status as nobility. Warriors - (Visayan: timawa; Tagalog: maharlika) the martial class. They could own land and subjects like the higher ranks, but were required to fight for the datu in times of war. In some Filipino ethnic groups, they were often tattooed extensively to record feats in battle and as protection against harm. They were sometimes further subdivided into different classes, depending on their relationship with the datu. They traditionally went on seasonal raids on enemy settlements. Commoners and slaves - (Visayan, Maguindanao: ulipon; Tagalog: alipin; Tausug: kiapangdilihan; Maranao: kakatamokan) - the lowest class composed of the rest of the community who were not part of the enfranchised classes. They were further subdivided into the commoner class who had their own houses, the servants who lived in the houses of others, and the slaves who were usually captives from raids, criminals, or debtors. Most members of this class were equivalent to the European serf class, who paid taxes and can be conscripted to communal tasks, but were more or less free to do as they please. East Asia China and Mongolia During the period of Yuan Dynasty, ruler Kublai Khan enforced a Four Class System, which was a legal caste system. The order of four classes of people in descending order were: Mongolian Semu people Han people (in the northern areas of China) Southerners (people of the former Southern Song dynasty) Today, the Hukou system is argued by various Western sources to be the current caste system of China. Tibet There is significant controversy over the social classes of Tibet, especially with regards to the serfdom in Tibet controversy. has put forth the argument that pre-1950s Tibetan society was functionally a caste system, in contrast to previous scholars who defined the Tibetan social class system as similar to European feudal serfdom, as well as non-scholarly western accounts which seek to romanticise a supposedly 'egalitarian' ancient Tibetan society. Japan In Japan's history, social strata based on inherited position rather than personal merit, were rigid and highly formalised in a system called mibunsei (身分制). At the top were the Emperor and Court nobles (kuge), together with the Shōgun and daimyō. Below them, the population was divided into four classes: samurai, peasants, craftsmen and merchants. Only samurai were allowed to bear arms. A samurai had a right to kill any peasants, craftsman or merchant who he felt were disrespectful. Merchants were the lowest caste because they did not produce any products. The castes were further sub-divided; for example, peasants were labelled as furiuri, tanagari, mizunomi-byakusho among others. As in Europe, the castes and sub-classes were of the same race, religion and culture. 
Howell, in his review of Japanese society notes that if a Western power had colonised Japan in the 19th century, they would have discovered and imposed a rigid four-caste hierarchy in Japan. De Vos and Wagatsuma observe that Japanese society had a systematic and extensive caste system. They discuss how alleged caste impurity and alleged racial inferiority, concepts often assumed to be different, are superficial terms, and are due to identical inner psychological processes, which expressed themselves in Japan and elsewhere. Endogamy was common because marriage across caste lines was socially unacceptable. Japan had its own untouchable caste, shunned and ostracised, historically referred to by the insulting term eta, now called burakumin. While modern law has officially abolished the class hierarchy, there are reports of discrimination against the buraku or burakumin underclasses. The burakumin are regarded as "ostracised". The burakumin are one of the main minority groups in Japan, along with the Ainu of Hokkaidō and those of Korean or Chinese descent. Korea The baekjeong (백정) were an "untouchable" outcaste of Korea. The meaning today is that of butcher. It originates in the Khitan invasion of Korea in the 11th century. The defeated Khitans who surrendered were settled in isolated communities throughout Goryeo to forestall rebellion. They were valued for their skills in hunting, herding, butchering, and making of leather, common skill sets among nomads. Over time, their ethnic origin was forgotten, and they formed the bottom layer of Korean society. In 1392, with the foundation of the Confucian Joseon dynasty, Korea systemised its own native class system. At the top were the two official classes, the Yangban, which literally means "two classes". It was composed of scholars (munban) and warriors (muban). Scholars had a significant social advantage over the warriors. Below were the jung-in (중인-中人: literally "middle people". This was a small class of specialised professions such as medicine, accounting, translators, regional bureaucrats, etc. Below that were the sangmin (상민-常民: literally 'commoner'), farmers working their own fields. Korea also had a serf population known as the nobi. The nobi population could fluctuate up to about one third of the population, but on average the nobi made up about 10% of the total population. In 1801, the vast majority of government nobi were emancipated, and by 1858 the nobi population stood at about 1.5% of the total population of Korea. The hereditary nobi system was officially abolished around 1886–87 and the rest of the nobi system was abolished with the Gabo Reform of 1894, but traces remained until 1930. The opening of Korea to foreign Christian missionary activity in the late 19th century saw some improvement in the status of the baekjeong. However, everyone was not equal under the Christian congregation, and even so protests erupted when missionaries tried to integrate baekjeong into worship, with non-baekjeong finding this attempt insensitive to traditional notions of hierarchical advantage. Around the same time, the baekjeong began to resist open social discrimination. They focused on social and economic injustices affecting them, hoping to create an egalitarian Korean society. Their efforts included attacking social discrimination by upper class, authorities, and "commoners", and the use of degrading language against children in public schools. With the Gabo reform of 1896, the class system of Korea was officially abolished. 
Following the collapse of the Gabo government, the new cabinet, which became the Gwangmu government after the establishment of the Korean Empire, introduced systematic measures for abolishing the traditional class system. One measure was the new household registration system, reflecting the goals of formal social equality, which was implemented by the loyalists' cabinet. Whereas the old registration system signified household members according to their hierarchical social status, the new system called for an occupation. While most Koreans by then had surnames and even bongwan, a still substantial number of cheonmin, mostly consisting of serfs, slaves, and untouchables, did not. According to the new system, they were then required to fill in the blanks for surname in order to be registered as constituting separate households. Instead of creating their own family name, some cheonmins appropriated their masters' surname, while others simply took the most common surname and its bongwan in the local area. Along with this example, activists within and outside the Korean government had based their visions of a new relationship between the government and people through the concept of citizenship, employing the term inmin ("people") and later, kungmin ("citizen"). North Korea The Committee for Human Rights in North Korea reported that "Every North Korean citizen is assigned a heredity-based class and socio-political rank over which the individual exercises no control but which determines all aspects of his or her life." Called Songbun, Barbara Demick describes this "class structure" as an updating of the hereditary "caste system", a combination of Confucianism and Stalinism. She claims that a bad family background is called "tainted blood", and that by law this "tainted blood" lasts three generations. West Asia Yezidi society is
witnessed caste-related violence. In 2005, government recorded approximately 110,000 cases of reported violent acts, including rape and murder, against Dalits. For 2012, the government recorded 651 murders, 3,855 injuries, 1,576 rapes, 490 kidnappings, and 214 cases of arson. The socio-economic limitations of the caste system are reduced due to urbanisation and affirmative action. Nevertheless, the caste system still exists in endogamy and patrimony, and thrives in the politics of democracy, where caste provides ready made constituencies to politicians. The globalisation and economic opportunities from foreign businesses has influenced the growth of India's middle-class population. Some members of the Chhattisgarh Potter Caste Community (CPCC) are middle-class urban professionals and no longer potters unlike the remaining majority of traditional rural potter members. There is persistence of caste in Indian politics. Caste associations have evolved into caste-based political parties. Political parties and the state perceive caste as an important factor for mobilisation of people and policy development. Studies by Bhatt and Beteille have shown changes in status, openness, mobility in the social aspects of Indian society. As a result of modern socio-economic changes in the country, India is experiencing significant changes in the dynamics and the economics of its social sphere. While arranged marriages are still the most common practice in India, the internet has provided a network for younger Indians to take control of their relationships through the use of dating apps. This remains isolated to informal terms, as marriage is not often achieved through the use of these apps. Hypergamy is still a common practice in India and Hindu culture. Men are expected to marry within their caste, or one below, with no social repercussions. If a woman marries into a higher caste, then her children will take the status of their father. If she marries down, her family is reduced to the social status of their son in law. In this case, the women are bearers of the egalitarian principle of the marriage. There would be no benefit in marrying a higher caste if the terms of the marriage did not imply equality. However, men are systematically shielded from the negative implications of the agreement. Geographical factors also determine adherence to the caste system. Many Northern villages are more likely to participate in exogamous marriage, due to a lack of eligible suitors within the same caste. Women in North India have been found to be less likely to leave or divorce their husbands since they are of a relatively lower caste system, and have higher restrictions on their freedoms. On the other hand, Pahari women, of the northern mountains, have much more freedom to leave their husbands without stigma. This often leads to better husbandry as his actions are not protected by social expectations. Chiefly among the factors influencing the rise of exogamy is the rapid urbanisation in India experienced over the last century. It is well known that urban centers tend to be less reliant on agriculture and are more progressive as a whole. As India's cities boomed in population, the job market grew to keep pace. Prosperity and stability were now more easily attained by an individual, and the anxiety to marry quickly and effectively was reduced. Thus, younger, more progressive generations of urban Indians are less likely than ever to participate in the antiquated system of arranged endogamy. 
India has also implemented a form of Affirmative Action, locally known as "reservation groups". Quota system jobs, as well as placements in publicly funded colleges, hold spots for the 8% of India's minority, and underprivileged groups. As a result, in states such as Tamil Nadu or those in the north-east, where underprivileged populations predominate, over 80% of government jobs are set aside in quotas. In education, colleges lower the marks necessary for the Dalits to enter. Nepal The Nepali caste system resembles in some respects the Indian jāti system, with numerous jāti divisions with a varna system superimposed. Inscriptions attest the beginnings of a caste system during the Licchavi period. Jayasthiti Malla (1382–1395) categorised Newars into 64 castes (Gellner 2001). A similar exercise was made during the reign of Mahindra Malla (1506–1575). The Hindu social code was later set up in Gorkha by Ram Shah (1603–1636). Pakistan McKim Marriott claims a social stratification that is hierarchical, closed, endogamous and hereditary is widely prevalent, particularly in western parts of Pakistan. Frederik Barth in his review of this system of social stratification in Pakistan suggested that these are castes. Sri Lanka The caste system in Sri Lanka is a division of society into strata, influenced by the textbook varnas and jāti system found in India. Ancient Sri Lankan texts such as the Pujavaliya, Sadharmaratnavaliya and Yogaratnakaraya and inscriptional evidence show that the above hierarchy prevailed throughout the feudal period. The repetition of the same caste hierarchy even as recently as the 18th century, in the Kandyan-period Kadayimpoth – Boundary books as well indicates the continuation of the tradition right up to the end of Sri Lanka's monarchy. Outside South Asia Southeast Asia Indonesia Balinese caste structure has been described as being based either on three categories—the noble triwangsa (thrice born), the middle class of dwijāti (twice born), and the lower class of ekajāti (once born)—or on four castes: Brahminas – priest, Satrias – knighthood, Wesias – commerce, Sudras – servitude. The Brahmana caste was further subdivided by Dutch ethnographers into two: Siwa and Buda.
Petty plutocracies - societies which had a wealthy class based on property and the hosting of periodic prestige feasts. In some groups, it was an actual caste whose members had specialised leadership roles, married only within the same caste, and wore specialised clothing. These include the kadangyan of the Ifugao, Bontoc, and Kankanaey peoples, as well as the baknang of the Ibaloi people. In others, though wealth might give one prestige and leadership qualifications, it was not a caste per se. Principalities - societies with an actual ruling class and caste systems determined by birthright. Most of these societies were either Indianized or Islamized to a degree. They include the larger coastal ethnic groups like the Tagalog, Kapampangan, Visayan, and Moro societies. Most of them were usually divided into four to five castes, with different names under different ethnic groups that roughly correspond to each other. The system was more or less feudalistic, with the datu ultimately having control of all the lands of the community. The land was subdivided among the enfranchised classes, the sakop or sa-op (vassals, lit. "those under the power of another"). The castes were hereditary, though they were not rigid; they were more accurately a reflection of interpersonal political relationships, in which a person was always the follower of another. People could move up the caste system by marriage, by wealth, or by doing something extraordinary; conversely, they could be demoted, usually as criminal punishment or as a result of debt. Shamans were the exception, as they were either volunteers, chosen by the ranking shamans, or born into the role through an innate propensity for it. The castes are enumerated below from the highest rank to the lowest: Royalty - (Visayan: kadatoan) the datu and immediate descendants. They were often further categorised according to purity of lineage. The power of the datu was dependent on the willingness of his followers to render him respect and obedience. Most roles of the datu were judicial and military. In the case of an unfit datu, support might be withdrawn by his followers. Datu were almost always male, though in some ethnic groups, like the Banwaon people, the female shaman (babaiyon) co-ruled as the female counterpart of the datu. Nobility - (Visayan: tumao; Tagalog: maginoo; Kapampangan: ginu; Tausug: bangsa mataas) the ruling class, either inclusive or exclusive of the royal family. Most were descendants of the royal line or gained their status through wealth or bravery in battle. They owned lands and subjects, from whom they collected taxes. Shamans - (Visayan: babaylan; Tagalog: katalonan) the spirit mediums, usually female or feminised men. While they were not technically a caste, they commanded the same respect and status as the nobility. Warriors - (Visayan: timawa; Tagalog: maharlika) the martial class. They could own land and subjects like the higher ranks, but were required to fight for the datu in times of war. In some Filipino ethnic groups, they were often tattooed extensively to record feats in battle and as protection against harm. They were sometimes further subdivided into different classes, depending on their relationship with the datu. They traditionally went on seasonal raids on enemy settlements. Commoners and slaves - (Visayan, Maguindanao: ulipon; Tagalog: alipin; Tausug: kiapangdilihan; Maranao: kakatamokan) the lowest class, composed of the rest of the community who were not part of the enfranchised classes. 
They were further subdivided into the commoner class, who had their own houses; the servants, who lived in the houses of others; and the slaves, who were usually captives from raids, criminals, or debtors. Most members of this class were equivalent to the European serf class, who paid taxes and could be conscripted for communal tasks, but were more or less free to do as they pleased. East Asia China and Mongolia During the Yuan dynasty, Kublai Khan enforced a Four Class System, which was a legal caste system. The four classes, in descending order, were: Mongols; Semu people; Han people (in the northern areas of China); and Southerners (people of the former Southern Song dynasty). Today, the Hukou system is argued by various Western sources to be the current caste system of China. Tibet There is significant controversy over the social classes of Tibet, especially with regard to the serfdom-in-Tibet controversy. It has been argued that pre-1950s Tibetan society was functionally a caste system, in contrast to previous scholars who defined the Tibetan social class system as similar to European feudal serfdom, as well as non-scholarly Western accounts which seek to romanticise a supposedly 'egalitarian' ancient Tibetan society. Japan In Japan's history, social strata based on inherited position rather than personal merit were rigid and highly formalised in a system called mibunsei (身分制). At the top were the Emperor and Court nobles (kuge), together with the Shōgun and daimyō. Below them, the population was divided into four classes: samurai, peasants, craftsmen and merchants. Only samurai were allowed to bear arms. A samurai had the right to kill any peasant, craftsman or merchant whom he felt was disrespectful. Merchants were the lowest caste because they did not produce any products. The castes were further sub-divided; for example, peasants were labelled as furiuri, tanagari, mizunomi-byakusho, among others. As in Europe, the castes and sub-classes were of the same race, religion and culture. Howell, in his review of Japanese society, notes that if a Western power had colonised Japan in the 19th century, it would have discovered and imposed a rigid four-caste hierarchy in Japan. De Vos and Wagatsuma observe that Japanese society had a systematic and extensive caste system. They discuss how alleged caste impurity and alleged racial inferiority, concepts often assumed to be different, are superficial terms, and are due to identical inner psychological processes, which expressed themselves in Japan and elsewhere. Endogamy was common because marriage across caste lines was socially unacceptable. Japan had its own untouchable caste, shunned and ostracised, historically referred to by the insulting term eta, now called burakumin. While modern law has officially abolished the class hierarchy, there are reports of discrimination against the buraku or burakumin underclasses. The burakumin are regarded as "ostracised". The burakumin are one of the main minority groups in Japan, along with the Ainu of Hokkaidō and those of Korean or Chinese descent. Korea The baekjeong (백정) were an "untouchable" outcaste of Korea; the word today means "butcher". The group originates in the Khitan invasion of Korea in the 11th century, when defeated Khitans who surrendered were settled in isolated communities throughout Goryeo to forestall rebellion. They were valued for their skills in hunting, herding, butchering, and the making of leather, common skill sets among nomads. 
Over time, their ethnic origin was forgotten, and they formed the bottom layer of Korean society. In 1392, with the foundation of the Confucian Joseon dynasty, Korea systemised its own native class system. At the top were the two official classes, the Yangban, which literally means "two classes". It was composed of scholars (munban) and warriors (muban). Scholars had a significant social advantage over the warriors. Below were the jung-in (중인-中人: literally "middle people"), a small class of specialised professions such as medicine, accounting, translation and regional bureaucracy. Below that were the sangmin (상민-常民: literally 'commoner'), farmers working their own fields. Korea also had a serf population known as the nobi. The nobi population could fluctuate up to about one third of the population, but on average the nobi made up about 10% of the total population. In 1801, the vast majority of government nobi were emancipated, and by 1858 the nobi population stood at about 1.5% of the total population of Korea. The hereditary nobi system was officially abolished around 1886–87, and the rest of the nobi system was abolished with the Gabo Reform of 1894, but traces remained until 1930. The opening of Korea to foreign Christian missionary activity in the late 19th century saw some improvement in the status of the baekjeong. However, not everyone was equal under the Christian congregation, and protests erupted when missionaries tried to integrate baekjeong into worship, with non-baekjeong finding this attempt insensitive to traditional notions of hierarchical advantage. Around the same time, the baekjeong began to resist open social discrimination. They focused on social and economic injustices affecting them, hoping to create an egalitarian Korean society. Their efforts included attacking social discrimination by the upper class, the authorities and "commoners", and the use of degrading language against children in public schools. With the Gabo Reform of 1894, the class system of Korea was officially abolished. Following the collapse of the Gabo government, the new cabinet, which became the Gwangmu government after the establishment of the Korean Empire, introduced systematic measures for abolishing the traditional class system. One measure was the new household registration system, reflecting the goals of formal social equality, which was implemented by the loyalists' cabinet. Whereas the old registration system signified household members according to their hierarchical social status, the new system called for an occupation. While most Koreans by then had surnames and even bongwan, a substantial number of cheonmin, mostly consisting of serfs, slaves and untouchables, did not. According to the new system, they were then required to fill in the blanks for surname in order to be registered as constituting separate households. Instead of creating their own family name, some cheonmin appropriated their masters' surname, while others simply took the most common surname and its bongwan in the local area. Along with this example, activists within and outside the Korean government had based their
and how people first came to inhabit it Creationism, the belief that the universe was created in specific divine acts and the social movement affiliated with it Genesis creation narrative, the biblical account of creation Creation Ministries International, a Christian apologetics organization Creation Festival, two annual four-day Christian music festivals held in the United States Entertainment Music Albums Creation (EP), 2016 EP by Seven Lions Creation (John Coltrane album), 1965 Creation (Branford Marsalis album), 2001 Creation (Keith Jarrett album), 2015 Creation (Archie Roach album), 2013 Creation (The Pierces album), 2014 Creation, album by Creation Creation, album by Leslie Satcher 2005 Songs "Creation" (William Billings), a hymn tune composed by William Billings The Creation, 1954, an orchestral song by Wolfgang Fortner "Creation",
(1945) Creation Records, a record label created in 1983 by Alan McGee Other uses in entertainment Creation (2009 film), a film by Jon Amiel about the life of Charles Darwin Creation (unfinished film), a 1931 film that inspired King Kong Creation (novel), a 1981 novel by Gore Vidal Creation (Dragonlance), the creation of Krynn, a fictional world of Dragonlance Création, a 1940 ballet by Shirō Fukai "The Creation of Adam" (c. 1511–1512), a section of Michelangelo's fresco on the Sistine Chapel ceiling The Creation: An Appeal to Save Life on Earth (2006), a book by biologist Edward O. Wilson "The Creation" (1927), a poem by James Weldon Johnson, published in God's Trombones: Seven Negro Sermons in Verse La création du monde, a 1923 ballet by
compilers met the specifications. This process was later adopted by the US Department of Defense while defining Ada. Overview Coral 66 is a general-purpose programming language based on ALGOL 60, with some features from Coral 64, JOVIAL, and Fortran. It includes structured record types (as in Pascal) and supports the packing of data into limited storage (also as in Pascal). Like Edinburgh IMP it allows inline (embedded) assembly language, and also offers good runtime checking and diagnostics. It is designed for real-time computing and embedded system applications, and for use on computers with limited processing power, including those limited to fixed-point arithmetic and those without support for dynamic storage allocation. The language was an inter-service standard for British military programming, and was also widely adopted for civil purposes in the British control and automation industry. It was used to write software for both the Ferranti and General Electric Company (GEC) computers from 1971 onwards. Implementations also exist for the Interdata 8/32, PDP-11, VAX and Alpha platforms and HPE Integrity Servers; for the Honeywell, and for the Computer Technology Limited (CTL, later ITL) Modular-1; and for SPARC running Solaris, and Intel running Linux. Queen Elizabeth II sent the first email from a head of state from the Royal Signals and Radar Establishment over the ARPANET on March 26, 1976. The message read "This message to all ARPANET users announces the availability on ARPANET of the Coral 66 compiler provided by the GEC 4080 computer at the Royal Signals and Radar Establishment, Malvern, England, ... Coral 66 is the standard real-time high level language adopted by the Ministry of Defence." As Coral was aimed at a variety of real-time work, rather than general office data processing, there was no standardised equivalent to a stdio library. IECCA recommended a primitive input/output (I/O) package to accompany any compiler (in a document titled Input/Output of Character data in Coral 66 Utility Programs). Most implementers avoided this by producing Coral interfaces to extant Fortran
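The mention above of machines limited to fixed-point arithmetic may be unfamiliar to modern readers. The short sketch below is written in C rather than Coral 66 and is only an illustration of the general technique that a language-level fixed-point type automates: fractional values are represented as integers scaled by a fixed power of two. The helper names and the choice of 8 fractional bits are illustrative assumptions, not taken from any Coral 66 implementation.

/* Minimal, hypothetical sketch of fixed-point arithmetic with plain integers. */
#include <stdio.h>
#include <stdint.h>

#define FRAC_BITS 8                      /* 8 bits after the binary point     */
#define ONE (1 << FRAC_BITS)             /* 1.0 in fixed-point form (= 256)   */

typedef int32_t fixed;                   /* a scaled value stored in an int32 */

static fixed fx_from_int(int n) { return n << FRAC_BITS; }

static fixed fx_mul(fixed a, fixed b)
{
    /* widen to 64 bits so the intermediate product cannot overflow,
       then shift back down to restore the scale */
    return (fixed)(((int64_t)a * b) >> FRAC_BITS);
}

static double fx_to_double(fixed a) { return (double)a / ONE; }

int main(void)
{
    fixed half  = ONE / 2;               /* 0.5 */
    fixed three = fx_from_int(3);        /* 3.0 */
    fixed r     = fx_mul(three, half);   /* 1.5 */
    printf("3.0 * 0.5 = %f\n", fx_to_double(r));
    return 0;
}

The same scaled-integer idea underlies Coral 66's support for targets without floating-point hardware; the language simply tracks the scale factors for the programmer instead of leaving them to manual bookkeeping, as sketched here.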
Conversely, usages have lapsed, or been usurped ("Hounslow Heath" for teeth was replaced by "Hampsteads", from the heath of the same name). In some cases, false etymologies exist. For example, the term "barney" has been used to mean an altercation or fight since the late nineteenth century, although without a clear derivation. In the 2001 feature film Ocean's Eleven, the explanation given for the term is that it derives from Barney Rubble, the name of a cartoon character from The Flintstones, a television programme of many decades later. Regional and international variations Rhyming slang is used mainly in London in England but can to some degree be understood across the country. Some constructions, however, rely on particular regional accents for the rhymes to work. For instance, the term "Charing Cross" (a place in London), used to mean "horse" since the mid-nineteenth century, does not work for a speaker without the lot–cloth split, common in London at that time but not nowadays. A similar example is "Joanna" meaning "piano", which is based on the pronunciation of "piano" as "pianna". Unique formations also exist in other parts of the United Kingdom, such as in the East Midlands, where the local accent has formed "Derby Road", which rhymes with "cold". Outside England, rhyming slang is used in many English-speaking countries in the Commonwealth of Nations, with local variations. For example, in Australian slang, the term for an English person is "pommy", which has been proposed as a rhyme on "pomegranate", pronounced "Pummy Grant", which rhymed with "immigrant". Rhyming slang is continually evolving, and new phrases are introduced all the time; new personalities replace old ones—pop culture introduces new words—as in "I haven't a Scooby" (from Scooby Doo, the eponymous cartoon dog of the cartoon series) meaning "I haven't a clue". Taboo terms Rhyming slang is often used as a substitute for words regarded as taboo, often to the extent that the association with the taboo word becomes unknown over time. "Berk" (often used to mean "foolish person") originates from the most famous of all fox hunts, the "Berkeley Hunt", meaning "cunt"; "cobblers" (often used in the context "what you said is rubbish") originates from "cobbler's awls", meaning "balls" (as in testicles); and "hampton" (usually "'ampton"), meaning "prick" (as in penis), originates from "Hampton Wick" (a place in London) – the second part, "wick", also entered common usage as "he gets on my wick" (he is an annoying person). Lesser taboo terms include "pony and trap" for "crap" (as in defecate, but often used to denote nonsense or low quality); to blow a raspberry (rude sound of derision) from raspberry tart for "fart"; "D'Oyly Carte" (an opera company) for "fart"; "Jimmy Riddle" (an American country musician) for "piddle" (as in urinate); "J. Arthur Rank" (a film mogul), "Sherman tank", "Jodrell Bank" or "ham shank" for "wank"; "Bristol Cities" (contracted to 'Bristols') for "titties", etc. "Taking the Mick" or "taking the Mickey" is thought to be a rhyming slang form of "taking the piss", where "Mick" came from "Mickey Bliss". In December 2004 Joe Pasquale, winner of the fourth series of ITV's I'm a Celebrity... Get Me Out of Here!, became well known for his frequent use of the term "Jacobs", for Jacob's Crackers, a rhyming slang term for knackers, i.e. testicles. In popular culture Rhyming slang has been widely used in popular culture including film, television, music, literature, sport and degree classification. 
In university degree classification In the British undergraduate degree classification system a first class honours degree is known as a Geoff Hurst (First) after the English 1966 World Cup footballer. An upper second class degree is called an Attila the Hun (two-one), a lower second class degree a Desmond Tutu (two-two), while a third class degree is known as a Thora Hird. In film Cary Grant's character teaches rhyming slang to his female companion in Mr. Lucky (1943), describing it as 'Australian rhyming slang'. Rhyming slang is also used and described in a scene of the 1967 film To Sir, with Love starring Sidney Poitier, where the English students tell their foreign teacher that the slang is a drag and something for old people. The closing song of the 1969 crime caper The Italian Job ("Getta Bloomin' Move On", a.k.a. "The Self Preservation Society") contains many slang terms. Rhyming slang has been used to lend authenticity to an East End setting. Examples include Lock, Stock and Two Smoking Barrels (1998) (wherein the slang is translated via subtitles in one scene); The Limey (1999); Sexy Beast (2000); Snatch (2000); Ocean's Eleven (2001); Austin Powers in Goldmember (2002); It's All Gone Pete Tong (2004), after BBC radio disc jockey Pete Tong, whose name is used in this context as rhyming slang for "wrong"; and Green Street Hooligans (2005). In Margin Call (2011), Will Emerson, played by London-born actor Paul Bettany, asks a friend on the telephone, "How's the trouble and strife?" ("wife"). Cockneys vs Zombies (2012) mocked the genesis of rhyming slang terms when a Cockney character calls zombies "Trafalgars", even to his Cockney fellows' puzzlement; he then explains it thus: "Trafalgar square – fox and hare – hairy Greek – five day week – weak and feeble – pins and needles – needle and stitch – Abercrombie and Fitch – Abercrombie: zombie". The song "Trip a Little Light Fantastic" from the live-action Disney film Mary Poppins Returns incorporates Cockney rhyming slang in part of its lyrics, primarily spoken by the London lamplighters. Television One early US show to regularly feature rhyming slang was the Saturday morning children's show The Bugaloos (1970–72), with the character of Harmony (Wayne Laryea) often incorporating it in his dialogue. In Britain, rhyming slang had a resurgence of popular interest beginning in the 1970s, resulting from its use in a number of London-based television programmes such as Steptoe and Son (1970–74). Not On Your Nellie (1974–75), starring Hylda Baker as Nellie Pickersgill, alludes to the phrase "not on your Nellie Duff", rhyming slang for "not on your puff", i.e. not on your life. Similarly, The Sweeney (1975–78) alludes to the phrase "Sweeney Todd" for "Flying Squad", a rapid response unit of London's Metropolitan Police. In The Fall and Rise of Reginald Perrin (1976–79), a comic twist was added to rhyming slang by way of spurious and fabricated examples which a young man had laboriously attempted to explain to his father (e.g. 'dustbins' meaning 'children', as in 'dustbin lids'='kids'; 'Teds' being 'Ted Heath' and thus 'teeth'; and even 'Chitty Chitty' being 'Chitty Chitty Bang Bang', and thus 'rhyming slang'...). It was also featured in an episode of The Good Life in the first season (1975) where Tom and Barbara purchase a wood-burning range from a junk trader called Sam, who litters his language with phony slang in hopes of getting higher payment. 
He comes up with a fake story as to the origin of Cockney rhyming slang and is caught out rather quickly. In The Jeffersons season 2 (1976) episode "The Breakup: Part 2", Mr. Bentley explains Cockney rhyming slang to George Jefferson, noting that "whistle and flute" means "suit", "apples and pears" means "stairs", and "plates of meat" means "feet". The use of rhyming slang was also prominent in Mind Your Language (1977–79), Citizen Smith (1977–80), Minder (1979–94), Only Fools and Horses (1981–91), and EastEnders (1985–). Minder could be quite uncompromising in its use of obscure forms without any clarification. Thus the non-Cockney viewer was obliged
in the mid-19th century in the East End of London, with several sources suggesting some time in the 1840s. The Flash Dictionary of unknown authorship, published in 1821 by Smeeton (48mo), contains a few rhymes. John Camden Hotten's 1859 Dictionary of Modern Slang, Cant, and Vulgar Words likewise states that it originated in the 1840s ("about twelve or fifteen years ago"), but with "chaunters" and "patterers" in the Seven Dials area of London. The reference is to travelling salesmen of certain kinds, chaunters selling sheet music and patterers offering cheap, tawdry goods at fairs and markets up and down the country. Hotten's Dictionary included the first known "Glossary of the Rhyming Slang", which included later mainstays such as "frog and toad" (the main road) and "apples and pears" (stairs), as well as many more obscure examples, e.g. "Battle of the Nile" (a tile, a vulgar term for a hat), "Duke of York" (take a walk), and "Top of Rome" (home). It remains a matter of speculation exactly how rhyming slang originated, for example, as a linguistic game among friends or as a cryptolect developed intentionally to confuse non-locals. If deliberate, it may also have been used to maintain a sense of community, or to allow traders to talk amongst themselves in marketplaces to facilitate collusion, without customers knowing what they were saying, or by criminals to confuse the police (see thieves' cant). The academic, lexicographer and radio personality Terence Dolan has suggested that rhyming slang was invented by Irish immigrants to London "so the actual English wouldn't understand what they were talking about." Development Many examples of rhyming slang are based on locations in London, such as "Peckham Rye", meaning "tie", which dates from the late nineteenth century; "Hampstead Heath", meaning "teeth" (usually as "Hampsteads"), which was first recorded in 1887; and "barnet" (Barnet Fair), meaning "hair", which dates from the 1850s. In the 20th century, rhyming slang began to be based on the names of celebrities — Gregory Peck (neck; cheque), Ruby Murray [as Ruby] (curry), Alan Whicker [as "Alan Whickers"] (knickers), Puff Daddy (caddy), Max Miller (pillow), Meryl Streep (cheap), Nat King Cole ("dole"), Britney Spears (beers, tears), Henry Halls (balls) — and after pop culture references — Captain Kirk (work), Pop Goes the Weasel (diesel), Mona Lisa (pizza), Mickey Mouse (Scouse), Wallace and Gromit (vomit), Brady Bunch (lunch), Bugs Bunny (money), Scooby-Doo (clue), Winnie the Pooh (shoe), and Schindler's List (pissed). Some words have numerous definitions, such as dead (Father Ted, "gone to bed", brown bread), door (Roger Moore, Andrea Corr, George Bernard Shaw, Rory O'Moore), cocaine (Kurt Cobain; [as "Charlie"] Bob Marley, Boutros Boutros-Ghali, Gianluca Vialli, oats and barley; [as "line"] Patsy Cline; [as "powder"] Niki Lauda), flares ("Lionel Blairs", "Tony Blairs", "Rupert Bears", "Dan Dares"), etc. Many examples have passed into common usage. Some substitutions have become relatively widespread in England in their contracted form. "To have a butcher's", meaning to have a look, originates from "butcher's hook", an S-shaped hook used by butchers to hang up meat, and dates from the late nineteenth century but has existed independently in general use from around the 1930s simply as "butchers". Similarly, "use your loaf", meaning "use your head", derives from "loaf of bread" and also dates from the late nineteenth century but came into independent use in the 1930s. 
last century, were extensively crossbred with herds of native cattle. The Indian breed, well known for its ability to survive in the tropics, adapted quickly to Brazil, and soon populated large areas, considerably improving Brazilian beef cattle breeding. Zebu cattle were, however, found to be inferior to the European breeds in growth rate and yield of meat, and it became clear that the beef cattle population required genetic improvement. Simply placing European beef cattle (Bos taurus), highly productive in temperate climates, in Central Brazil would not produce good results, owing to their inability to adapt to a tropical environment. Besides the climate, other factors such as the high occurrence of parasites and diseases and the very low nutritional value of the native forage were problems. Formation of the breed The European breed used in the formation of Canchim cattle was the Charolais. In 1922 the Ministry of Agriculture imported Charolais cattle to the State of Goiás, where they remained until 1936, when they were transferred to São Carlos in the State of São Paulo, to the Canchim Farm of the government research station, EMBRAPA. From this herd originated the dams and sires utilised in the crossbreeding programme. The main Zebu breed which contributed to the formation of the Canchim was the Indubrasil, although Guzerá and Nelore cattle were also used. Preference was given to the Indubrasil breed because large herds could be obtained at reasonable prices, which would have been difficult with Gir, Nelore or Guzerá. The alternative crossbreeding programmes initiated in 1940 by Dr. Antonio Teixeira Viana had the objective of obtaining, first, crossbreds of 5/8 Charolais and 3/8 Zebu and, second, of 3/8 Charolais and 5/8 Zebu, to evaluate which of the two was the more successful. The total number of Zebu cows utilised to produce the half-breds was 368, of which 292 were Indubrasil, 44 Guzerá and 32 Nelore. All the animals produced were reared exclusively on the range. Control of parasites was carried out every 15 days, and the animals were weighed at birth and monthly thereafter; females were weighed up to 30 months and males up to 40 months. The data collected during various years of work permitted an evaluation of the various degrees of crossbreeding. The conclusion was that the 5/8 Charolais and 3/8 Zebu cross was the most suitable, presenting an excellent frame for meat, precocity, resistance to heat and parasites, and a uniform coat. The first crossbred animals of 5/8 Charolais and 3/8 Zebu were born in 1953. Thus was born a new type of beef cattle for Central Brazil, with the name CANCHIM, derived from the name of a tree very
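For readers following the blood fractions quoted above, an offspring's breed fraction is simply the average of its parents' fractions. The cross shown below is an illustrative example of how a 5/8 fraction can arise, not necessarily the exact mating scheme used at the Canchim Farm; it assumes a 3/4-Charolais parent mated to a 1/2-Charolais (first-cross) parent.

\[
  \tfrac{1}{2}\left(\tfrac{3}{4} + \tfrac{1}{2}\right) = \tfrac{5}{8}\ \text{Charolais},
  \qquad
  1 - \tfrac{5}{8} = \tfrac{3}{8}\ \text{Zebu}.
\]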
post-Stalin years, when Malenkov chaired Politburo meetings, Khrushchev as First Secretary signed all Central Committee documents into force. From 1954 until 1958, Khrushchev chaired the Politburo as First Secretary, but in 1958 he dismissed and succeeded Nikolai Bulganin as Chairman of the Council of Ministers. During this period, the informal position of Second Secretary (later formalized as Deputy General Secretary) was established. The Second Secretary became responsible for chairing the Secretariat in place of the General Secretary. When the General Secretary could not chair the meetings of the Politburo, the Second Secretary would take his place. This system survived until the dissolution of the CPSU in 1991. To be elected to the Politburo, a member had to serve in the Central Committee. The Central Committee elected the Politburo in the aftermath of a party Congress. Members of the Central Committee were given a predetermined list of candidates for the Politburo, with only one candidate for each seat; for this reason, the election of the Politburo was usually passed unanimously. The greater the power held by the sitting CPSU General Secretary, the higher the chance that the Politburo membership would be approved. Secretariat The Secretariat headed the CPSU's central apparatus and was solely responsible for the development and implementation of party policies. It was legally empowered to take over the duties and functions of the Central Committee when it was not in plenum (that is, when it did not hold a meeting). Many members of the Secretariat concurrently held a seat in the Politburo. According to a Soviet textbook on party procedures, the Secretariat's role was that of "leadership of current work, chiefly in the realm of personnel selection and in the organization of the verification of fulfillment of party-state decisions". "Selections of personnel" in this instance meant the maintenance of general standards and the criteria for selecting various personnel. "Verification of fulfillment" of party and state decisions meant that the Secretariat instructed other bodies. The powers of the Secretariat were weakened under Mikhail Gorbachev, and the Central Committee Commissions took over the functions of the Secretariat in 1988. Yegor Ligachev, a Secretariat member, said that the changes completely destroyed the Secretariat's hold on power and made the body almost superfluous. Because of this, the Secretariat rarely met during the next two years. It was revitalized at the 28th Party Congress in 1990, and the Deputy General Secretary became the official head of the Secretariat. Orgburo The Organizational Bureau, or Orgburo, existed from 1919 to 1952 and was one of three leading bodies of the party when the Central Committee was not in session. It was responsible for "organizational questions, the recruitment, and allocation of personnel, the coordination of activities of the party, government and social organizations (e.g., trade unions and youth organizations), improvement to the party's structure, the distribution of information and reports within the party". The 19th Congress abolished the Orgburo, and its duties and responsibilities were taken over by the Secretariat. At the beginning, the Orgburo held three meetings a week and reported to the Central Committee every second week. Lenin described the relation between the Politburo and the Orgburo as "the Orgburo allocates forces, while the Politburo decides policy". A decision of the Orgburo was implemented by the Secretariat. 
However, the Secretariat could make decisions in the Orgburo's name without consulting its members, but if one Orgburo member objected to a Secretariat resolution, the resolution would not be implemented. In the 1920s, if the Central Committee could not convene, the Politburo and the Orgburo would hold a joint session in its place. Control Commission The Central Control Commission (CCC) functioned as the party's supreme court. The CCC was established at the 9th All-Russian Conference in September 1920, but rules organizing its procedure were not enacted before the 10th Congress. The 10th Congress formally established the CCC on all party levels and stated that it could only be elected at a party congress or a party conference. The CCC and the CCs were formally independent but had to make decisions through the party committees at their level, which led them in practice to lose their administrative independence. At first, the primary responsibility of the CCs was to respond to party complaints, focusing mostly on party complaints of factionalism and bureaucratism. At the 11th Congress, the brief of the CCs was expanded; it became responsible for overseeing party discipline. In a bid to further centralize the powers of the CCC, a Presidium of the CCC, which functioned in a similar manner to the Politburo in relation to the Central Committee, was established in 1923. At the 18th Congress, party rules regarding the CCC were changed; it was now elected by the Central Committee and was subordinate to the Central Committee. CCC members could not concurrently be members of the Central Committee. To create an organizational link between the CCC and other central-level organs, the 9th All-Russian Conference created the joint CC–CCC plenums. The CCC was a powerful organ; the 10th Congress allowed it to expel full and candidate Central Committee members and members of their subordinate organs if two-thirds of attendants at a CC–CCC plenum voted to do so. At its first such session in 1921, Lenin tried to persuade the joint plenum to expel Alexander Shliapnikov from the party; instead of being expelled, Shliapnikov was given a severe reprimand. Departments The leader of a department was usually given the title "head". In practice, the Secretariat had a major say in the running of the departments; for example, five of eleven secretaries headed their own departments in 1978. Normally, specific secretaries were given supervising duties over one or more departments. Each department established its own cells, called sections, which specialized in one or more fields. During the Gorbachev era, a variety of departments made up the Central Committee apparatus. The Party Building and Cadre Work Department assigned party personnel in the nomenklatura system. The State and Legal Department supervised the armed forces, KGB, the Ministry of Internal Affairs, the trade unions, and the Procuracy. Before 1989, the Central Committee had several departments, but some were abolished that year. Among these departments were the Economics Department, which was responsible for the economy as a whole, one for machine building, one for the chemical industry, and so on. The party abolished these departments to remove itself from the day-to-day management of the economy in favor of government bodies and a greater role for the market, as a part of the perestroika process. In their place, Gorbachev called for the creation of commissions with the same responsibilities as departments, but with more independence from the state apparatus. 
This change was approved at the 19th Conference, which was held in 1988. Six commissions were established by late 1988. Pravda Pravda (The Truth) was the leading newspaper in the Soviet Union. The Organizational Department of the Central Committee was the only organ empowered to dismiss Pravda editors. In 1905, Pravda began as a project by members of the Ukrainian Social Democratic Labour Party. Leon Trotsky was approached about the possibility of running the new paper because of his previous work on the Ukrainian newspaper Kyivan Thought. The first issue of Pravda was published on 3 October 1908 in Lvov, where it continued until the publication of the sixth issue in November 1909, when the operation was moved to Vienna, Austria-Hungary. During the Russian Civil War, sales of Pravda were curtailed by Izvestia, the government-run newspaper. At the time, the average reading figure for Pravda was 130,000. This Vienna-based newspaper published its last issue in 1912 and was succeeded the same year by a new newspaper dominated by the Bolsheviks, also called Pravda, which was headquartered in St. Petersburg. The paper's main goal was to promote Marxist–Leninist philosophy and expose the lies of the bourgeoisie. In 1975, the paper reached a circulation of 10.6 million. It is currently owned by the Communist Party of the Russian Federation. Higher Party School The Higher Party School (HPS) was the organ responsible for teaching cadres in the Soviet Union. It was the successor of the Communist Academy, which was established in 1918. The HPS was established in 1939 as the Moscow Higher Party School, and it offered its students a two-year training course for becoming a CPSU official. It was reorganized in 1956 so that it could offer more specialized ideological training. In 1956, the school in Moscow was opened to students from socialist countries outside the Soviet Union. The Moscow Higher Party School was the party school with the highest standing. The school itself had eleven faculties until a 1972 Central Committee resolution demanded a reorganization of the curriculum. The first regional HPS outside Moscow was established in 1946, and by the early 1950s there were 70 Higher Party Schools. During the reorganization drive of 1956, Khrushchev closed 13 of them and reclassified 29 as inter-republican and inter-oblast schools. Lower-level organization Republican and local organization The lowest organ above the primary party organization (PPO) was the district level. Every two years, the local PPO would elect delegates to the district-level party conference, which was overseen by a secretary from a higher party level. The conference elected a Party Committee and First Secretary and re-declared the district's commitment to the CPSU's program. In between conferences, the raion party committee, commonly referred to as the raikom, was vested with ultimate authority. It convened at least six times a year to discuss party directives, to oversee the implementation of party policies in its district and of party directives at the PPO level, and to issue directives to PPOs. 75–80 percent of raikom members were full members, while the remaining 20–25 percent were non-voting candidate members. Raikom members were commonly from the state sector, party sector, Komsomol or the trade unions. Day-to-day responsibility of the raikom was handed over to a Politburo, which was usually composed of 12 members. 
The district-level First Secretary chaired the meetings of the local Politburo and the raikom, and was the direct link between the district and the higher party echelons. The First Secretary was responsible for the smooth running of operations. The raikom was headed by the local apparat, the local agitation department or industry department. A raikom usually had no more than 4 or 5 departments, each of which was responsible for overseeing the work of the state sector but would not interfere in its work. This system remained identical at all other levels of the CPSU hierarchy. The other levels were cities, oblasts (regions) and republics. The district level elected delegates to a conference, held at least every three years, to elect the party committee. The only difference between the oblast and the district level was that the oblast had its own Secretariat and had more departments at its disposal. The oblast's party committee in turn elected delegates to the republican-level Congress, which was held every five years. The Congress then elected the Central Committee of the republic, which in turn elected a First Secretary and a Politburo. Until 1990, the Russian Soviet Federative Socialist Republic was the only republic that did not have its own republican branch, being instead represented by the CPSU Central Committee. Primary party organizations The primary party organization (PPO) was the lowest level in the CPSU hierarchy. PPOs were organized cells consisting of three or more members. A PPO could exist anywhere; for example, in a factory or a student dormitory. They functioned as the party's "eyes and ears" at the lowest level and were used to mobilize support for party policies. All CPSU members had to be a member of a local PPO. The size of a PPO varied from three people to several hundred, depending upon its setting. In a large enterprise, a PPO usually had several hundred members and was divided into bureaus based upon production units. Each PPO was led by an executive committee and an executive committee secretary, to which these bureaus were responsible. In small PPOs, members met periodically to mainly discuss party policies, ideology, or practical matters. In such a case, the PPO secretary was responsible for collecting party dues, reporting to higher organs, and maintaining the party records. A secretary could be elected democratically through a secret ballot, but that was not often the case; in 1979, only 88 of the more than 400,000 PPOs elected their secretary in this fashion. The remainder were chosen by a higher party organ and ratified by the general meetings of the PPO. The PPO general meeting was responsible for electing delegates to the party conference at either the district or town level, depending on where the PPO was located. Membership Membership of the party was not open. To become a party member, one had to be approved by various committees, and one's past was closely scrutinized. As generations grew up having known nothing before the Soviet Union, party membership became something one generally achieved after passing a series of stages. Children would join the Young Pioneers and, at the age of 14, might graduate to the Komsomol (Young Communist League). Ultimately, as an adult, if one had shown the proper adherence to party discipline, or had the right connections, one would become a member of the Communist Party itself. 
Membership of the party carried obligations, as Komsomol and CPSU members were expected to pay dues and to carry out appropriate assignments and "social tasks" (общественная работа). In 1918, party membership was approximately 200,000. In the late 1920s under Stalin, the party engaged in an intensive recruitment campaign, the "Lenin Levy", resulting in new members referred to as the Lenin Enrolment, from both the working class and rural areas. This represented an attempt to "proletarianize" the party and an attempt by Stalin to strengthen his base by outnumbering the Old Bolsheviks and reducing their influence in the Party. In 1925, the party had 1,025,000 members in a Soviet population of 147 million. In 1927, membership had risen to 1,200,000. During the collectivization campaign and industrialization campaigns of the first five-year plan from 1929 to 1933, party membership grew rapidly to approximately 3.5 million members. However, party leaders suspected that the mass intake of new members had allowed "social-alien elements" to penetrate the party's ranks, and document verifications of membership ensued in 1933 and 1935, removing supposedly unreliable members. Meanwhile, the party closed its ranks to new members from 1933 to November 1936. Even after the reopening of party recruiting, membership fell to 1.9 million by 1939. Nicholas DeWitt gives 2.307 million members in 1939, including candidate members, compared with 1.535 million in 1929 and 6.3 million in 1947. In 1986, the CPSU had over 19 million members, approximately 10% of the Soviet Union's adult population. Over 44% of party members were classified as industrial workers and 12% as collective farmers. The CPSU had party organizations in 14 of the Soviet Union's 15 republics. The Russian Soviet Federative Socialist Republic itself had no separate Communist Party until 1990 because the CPSU controlled affairs there directly. Komsomol The All-Union Leninist Communist Youth League, commonly referred to as Komsomol, was the party's youth wing. The Komsomol acted under the direction of the CPSU Central Committee. It was responsible for indoctrinating youths in communist ideology and organizing social events. It was closely modeled on the CPSU; nominally, the highest body was the Congress, followed by the Central Committee, the Secretariat and the Politburo. The Komsomol participated in nationwide policy-making by appointing members to the collegiums of the Ministry of Culture, the Ministry of Higher and Specialized Secondary Education, the Ministry of Education and the State Committee for Physical Culture and Sports. The organization's newspaper was the Komsomolskaya Pravda. The First Secretary and the Second Secretary were commonly members of the Central Committee but were never elected to the Politburo. However, at the republican level, several Komsomol first secretaries were appointed to the Politburo. Ideology Marxism–Leninism Marxism–Leninism was the cornerstone of Soviet ideology. It explained and legitimized the CPSU's right to rule while explaining its role as a vanguard party. For instance, the ideology explained that the CPSU's policies, even if they were unpopular, were correct because the party was enlightened. It was represented as the only truth in Soviet society; the Party rejected the notion of multiple truths. Marxism–Leninism was used to justify CPSU rule and Soviet policy, but it was not used as a means to an end. 
The relationship between ideology and decision-making was at best ambivalent; most policy decisions were made in the light of the continued, permanent development of Marxism–Leninism. Marxism–Leninism, as the only truth, could not, by its very nature, become outdated. Despite having evolved over the years, Marxism–Leninism had several central tenets. The main tenet was the party's status as the sole ruling party. The 1977 Constitution referred to the party as "The leading and guiding force of Soviet society, and the nucleus of its political system, of all state and public organizations, is the Communist Party of the Soviet Union". State socialism was essential, and from Stalin until Gorbachev official discourse held that private social and economic activity retarded the development of collective consciousness and the economy. Gorbachev supported privatization to a degree, but based his policies on Lenin's and Bukharin's opinions of the New Economic Policy of the 1920s, and supported complete state ownership over the commanding heights of the economy. Unlike liberalism, Marxism–Leninism stressed the role of the individual as a member of a collective rather than the importance of the individual. Individuals only had the right to freedom of expression if it safeguarded the interests of a collective. For instance, the 1977 Constitution stated that every person had the right to express his or her opinion, but the opinion could only be expressed if it was in accordance with the "general interests of Soviet society". The number of rights granted to an individual was decided by the state, and the state could remove these rights if it saw fit. Soviet Marxism–Leninism justified nationalism; the Soviet media portrayed every victory of the state as a victory for the communist movement as a whole. Largely, Soviet nationalism was based upon ethnic Russian nationalism. Marxism–Leninism stressed the importance of the worldwide conflict between capitalism and socialism; the Soviet press wrote about progressive and reactionary forces while claiming that socialism was on the verge of victory and that the "correlations of forces" were in the Soviet Union's favor. The ideology professed state atheism; Party members were not allowed to be religious. Marxism–Leninism believed in the feasibility of a communist mode of production. All policies were justifiable if they contributed to the Soviet Union's achievement of that stage. Leninism In Marxist philosophy, Leninism is the body of political theory, developed by Lenin, for the democratic organization of a revolutionary vanguard party and the achievement of a dictatorship of the proletariat as a political prelude to the establishment of the socialist mode of production. Since Karl Marx rarely, if ever, wrote about how the socialist mode of production would function, these tasks were left for Lenin to solve. Lenin's main contribution to Marxist thought is the concept of the vanguard party of the working class. He conceived the vanguard party as a tightly knit, centralized organization that was led by intellectuals rather than by the working class itself. The CPSU was open only to a small number of workers because the workers in Russia still had not developed class consciousness and needed to be educated to reach such a state. Lenin believed that the vanguard party could initiate policies in the name of the working class even if the working class did not support them. 
The vanguard party would know what was best for the workers because the party functionaries had attained consciousness. Lenin, in light of Marx's theory of the state (which views the state as an oppressive organ of the ruling class), had no qualms about forcing change upon the country. He viewed the dictatorship of the proletariat, rather than the dictatorship of the bourgeoisie, to be the dictatorship of the majority. The repressive powers of the state were to be used to transform the country, and to strip the former ruling class of its wealth. Lenin believed that the transition from the capitalist mode of production to the socialist mode of production would last for a long period. According to some authors, Leninism was by definition authoritarian. In contrast to Marx, who believed that the socialist revolution would comprise and be led by the working class alone, Lenin argued that a socialist revolution did not necessarily need to be led by or to comprise the working class alone. Instead, he said that a revolution needed to be led by the oppressed classes of society, which in the case of Russia was the peasant class. Stalinism Stalinism, while not an ideology per se, refers to Stalin's thoughts and policies. Stalin's introduction of the concept "Socialism in One Country" in 1924 was an important moment in Soviet ideological discourse. According to Stalin, the Soviet Union did not need a socialist world revolution to construct a socialist society. Four years later, Stalin initiated his "Second Revolution" with the introduction of state socialism and central planning. In the early 1930s, he initiated the collectivization of Soviet agriculture by de-privatizing agriculture and creating peasant cooperatives rather than making it the responsibility of the state. With the initiation of his "Second Revolution", Stalin launched the "Cult of Lenin", a cult of personality centered upon himself. The name of the city of Petrograd was changed to Leningrad, the town of Lenin's birth was renamed Ulyanovsk (after Lenin's birth name, Ulyanov), the Order of Lenin became the highest state award, and portraits of Lenin were hung in public squares, workplaces and elsewhere. The increasing bureaucracy which followed the introduction of a state socialist economy was at complete odds with the Marxist notion of "the withering away of the state". Stalin explained the reasoning behind it at the 16th Congress held in 1930: We stand for the strengthening of the dictatorship of the proletariat, which represents the mightiest and most powerful authority of all forms of State that have ever existed. The highest development of the State power for the withering away of State power—this is the Marxian formula. Is this contradictory? Yes, it is contradictory. But this contradiction springs from life itself and reflects completely Marxist dialectic. At the 1939 18th Congress, Stalin abandoned the idea that the state would wither away. In its place, he expressed confidence that the state would exist, even if the Soviet Union reached communism, as long as it was encircled by capitalism. Two key concepts were created in the latter half of his rule: the "two camps" theory and the "capitalist encirclement" theory. The threat of capitalism was used to strengthen Stalin's personal powers, and Soviet propaganda began making a direct link between Stalin and stability in society, saying that the country would crumble without the leader. 
Stalin deviated greatly from classical Marxism on the subject of "subjective factors"; he said that Party members of all ranks had to profess fanatic adherence to the Party's line and ideology, and that if they did not, those policies would fail. Concepts Dictatorship of the proletariat Lenin, supporting Marx's theory of the state, believed democracy to be unattainable anywhere in the world before the proletariat seized power. According to Marxist theory, the state is a vehicle for oppression and is headed by a ruling class. He believed that, by his time, the only viable solution was dictatorship, since the world was heading into a final conflict between the "progressive forces of socialism and the degenerate forces of capitalism". The Russian Revolution of 1917 had already failed in its original aim, which was to act as an inspiration for a world revolution. The initial anti-statist posture and the active campaigning for direct democracy were replaced, because of Russia's level of development and by the Bolsheviks' own assessment, with dictatorship. The reasoning was Russia's lack of development, its status as the sole socialist state in the world, its encirclement by imperialist powers, and its internal encirclement by the peasantry. Marx and Lenin did not care if a bourgeois state was ruled in accordance with a republican, parliamentary or constitutional monarchical system, since this did not change the overall situation. These systems, even if they were ruled by a small clique or ruled through mass participation, were all dictatorships of the bourgeoisie, which implemented policies in defense of capitalism. However, there was a difference: after the failure of the world revolutions, Lenin argued that this did not necessarily have to change under the dictatorship of the proletariat. The reasoning came from practical considerations: the majority of the country's inhabitants were not communists, nor could the Party reintroduce parliamentary democracy, because that was not in keeping with its ideology and would lead to the Party losing power. He therefore concluded that the form of government had nothing to do with the nature of the dictatorship of the proletariat. Bukharin and Trotsky agreed with Lenin; both said that the revolution had destroyed the old but had failed to create anything new. Lenin had now concluded that the dictatorship of the proletariat would not alter the relationship of power between men, but would rather "transform their productive relations so that, in the long run, the realm of necessity could be overcome and, with that, genuine social freedom realized". From 1920 to 1921, Soviet leaders and ideologists began differentiating between socialism and communism; hitherto the two terms had been used interchangeably to describe the same things. From then on, the two terms had different meanings: Russia was in transition from capitalism to socialism (referred to interchangeably under Lenin as the dictatorship of the proletariat), socialism was the intermediate stage to communism, and communism was considered the last stage of social development. By now, the party leaders believed that because of Russia's backward state, universal mass participation and true democracy could only take form in the last stage. In early Bolshevik discourse, the term "dictatorship of the proletariat" was of little significance, and the few times it was mentioned it was likened to the form of government which had existed in the Paris Commune. 
However, with the ensuing Russian Civil War and the social and material devastation that followed, its meaning altered from commune-type democracy to rule by iron discipline. By now, Lenin had concluded that only a proletarian regime as oppressive as its opponents could survive in this world. The powers previously bestowed upon the Soviets were now given to the Council of People's Commissars, the central government, which was, in turn, to be governed by "an army of steeled revolutionary Communists [by Communists he referred to the Party]". In a letter to Gavril Myasnikov in late 1920, Lenin explained his new interpretation of the term "dictatorship of the proletariat": Dictatorship means nothing more nor less than authority untrammeled by any laws, absolutely unrestricted by any rules whatever, and based directly on force. The term 'dictatorship' has no other meaning but this. Lenin justified these policies by claiming that all states were class states by nature and that these states were maintained through class struggle. This meant that the dictatorship of the proletariat in the Soviet Union could only be "won and maintained by the use of violence against the bourgeoisie". The main problem with this analysis was that the Party came to view anyone opposing the Party or holding views different from its own as bourgeois. Its worst enemy remained the moderates, who were considered to be "the real agents of the bourgeoisie in the working-class movement, the labor lieutenants of the capitalist class". The term "bourgeoisie" became synonymous with "opponent" and with people who disagreed with the Party in general. These oppressive measures led to another reinterpretation of the dictatorship of the proletariat and socialism in general; it was now defined as a purely economic system. Slogans and theoretical works about democratic mass participation and collective decision-making were now replaced with texts which supported authoritarian management. Considering the situation, the Party believed it had to use the same powers as the bourgeoisie to transform Russia; there was no alternative. Lenin began arguing that the proletariat, like the bourgeoisie, did not have a single preference for a form of government, and because of that, the dictatorship was acceptable to both the Party and the proletariat. In a meeting with Party officials, Lenin stated, in line with his economist view of socialism, that "Industry is indispensable, democracy is not", further arguing that "we [the Party] do not promise any democracy or any freedom". Anti-imperialism The Marxist theory on imperialism was conceived by Lenin in his book, Imperialism: the Highest Stage of Capitalism (published in 1917). It was written in response to the theoretical crisis within Marxist thought, which occurred due to capitalism's recovery in the late 19th century. According to Lenin, imperialism was a specific stage of development of capitalism, a stage he referred to as state monopoly capitalism. The Marxist movement was split on how to solve capitalism's resurgence after the great depression of the late 19th century. Eduard Bernstein of the Social Democratic Party of Germany (SPD) considered capitalism's revitalization proof that it was evolving into a more humane system, adding that the basic aims of socialists were not to overthrow the state but to take power through elections. Karl Kautsky, also of the SPD, held a highly dogmatic view; he said that there was no crisis within Marxist theory. 
Both of them denied or belittled the role of class contradictions in society after the crisis. In contrast, Lenin believed that the resurgence was the beginning of a new phase of capitalism; this stage was created because of a strengthening of class contradiction, not because of its reduction. Lenin did not know when the imperialist stage of capitalism began; he said it would be foolish to look for a specific year, but held that it began at the beginning of the 20th century (at least in Europe). Lenin believed that the economic crisis of 1900 accelerated and intensified the concentration of industry and banking, which led to the transformation of the connection between finance capital and industry into the monopoly of large banks. In Imperialism: the Highest Stage of Capitalism, Lenin wrote: "the twentieth century marks the turning point from the old capitalism to the new, from the domination of capital in general to the domination of finance capital". Lenin defined imperialism as the monopoly stage of capitalism. The 1986 Party Program claimed the tsarist regime collapsed because the contradictions of imperialism, which Lenin held to be the gap "between the social nature of production and the private capitalist form of appropriation" manifesting itself in wars, economic recessions, and exploitation of the working class, were strongest in Russia. Imperialism was held to have caused the Russo-Japanese War and the First World War, with the 1905 Russian Revolution presented as "the first people's revolution of the imperialist epoch" and the October Revolution said to have been rooted in "the nationwide movement against imperialist war and for peace." Peaceful coexistence "Peaceful coexistence" was an ideological concept introduced under Khrushchev's rule. While the concept has been interpreted by fellow communists as proposing an end to the conflict between the systems of capitalism and socialism, Khrushchev saw it as a continuation of the conflict in every area except the military field. The concept said that the two systems were developed "by way of diametrically opposed laws", which led to "opposite principles in foreign policy". Peaceful coexistence was steeped in Leninist and Stalinist thought. Lenin believed that international politics was dominated by class struggle; in the 1940s Stalin stressed the growing polarization which was occurring between the capitalist and socialist systems. Khrushchev's peaceful coexistence was based on practical changes which had occurred; he accused the old "two camp" theory of neglecting the non-aligned movement and the national liberation movements. Khrushchev considered these "grey areas" in which the conflict between capitalism and socialism would be fought. He still stressed that the main contradiction in international relations was that between capitalism and socialism. The Soviet Government under Khrushchev stressed the importance of peaceful coexistence, saying that it had to form the basis of Soviet foreign policy. Failure to do so, they believed, would lead to nuclear conflict. Despite this, Soviet theorists still considered peaceful coexistence to be a continuation of the class struggle between the capitalist and socialist worlds, but not one based on armed conflict. Khrushchev believed that the conflict, in its current phase, was mainly economic. The emphasis on peaceful coexistence did not mean that the Soviet Union accepted a static world with clear lines. 
It continued to uphold the creed that socialism was inevitable, and Soviet leaders sincerely believed that the world had reached a stage in which the "correlations of forces" were moving towards socialism. With the establishment of socialist regimes in Eastern Europe and Asia, Soviet foreign policy planners believed that capitalism had lost its dominance as an economic system. Socialism in One Country The concept of "Socialism in One Country" was conceived by Stalin in his struggle against Leon Trotsky and his concept of permanent revolution. In 1924, Trotsky published his pamphlet Lessons of October, in which he stated that socialism in the Soviet Union would fail because of the backward state of economic development unless a world revolution began. Stalin responded to Trotsky's pamphlet with his article, "October and Comrade Trotsky's Theory of Permanent Revolution". In it, Stalin stated that he did not believe an inevitable conflict between the working class and the peasants would take place, and that "socialism in one country is completely possible and probable". Stalin held the view common among most Bolsheviks at the time: there was a possibility of real success for socialism in the Soviet Union despite the country's backwardness and international isolation. While Grigoriy Zinoviev, Lev Kamenev and Nikolai Bukharin, together with Stalin, opposed Trotsky's theory of permanent revolution, their views on the way socialism could be built diverged. According to Bukharin, Zinoviev and Kamenev supported the resolution of the 14th Conference held in 1925, which stated that "we cannot complete the building of socialism due to our technological backwardness". Despite this cynical attitude, Zinoviev and Kamenev believed that a defective form of socialism could be constructed. At the 14th Conference, Stalin reiterated his position that socialism in one country was feasible despite the capitalist blockade of the Soviet Union. After the conference, Stalin wrote "Concerning the Results of the XIV Conference of the RCP(b)", in which he stated that the peasantry would not turn against the socialist system because it had a self-interest in preserving it. Stalin said the contradictions which arose within the peasantry during the socialist transition could "be overcome by our own efforts". He concluded that the only viable threat to socialism in the Soviet Union was a military intervention. In late 1925, Stalin received a letter from a Party official which stated that his position of "Socialism in One Country" was in contradiction with Friedrich Engels' writings on the subject. Stalin countered that Engels' writings reflected "the era of pre-monopoly capitalism, the pre-imperialist era when there were not yet the conditions of an uneven, abrupt development of the capitalist countries". From 1925, Bukharin began writing extensively on the subject, and in 1926 Stalin wrote On Questions of Leninism, which contains his best-known writings on the subject. With the publication of On Questions of Leninism, Trotsky began countering Bukharin's and Stalin's arguments, writing that socialism in one country was possible only in the short term, and that without a world revolution it would be impossible to safeguard the Soviet Union from the "restoration of bourgeois relations". Zinoviev disagreed with Trotsky, Bukharin and Stalin; he maintained Lenin's position from 1917 to 1922 and continued to say that only a defective form of socialism could be constructed in the Soviet Union without a world revolution. 
Bukharin began arguing for the creation of an autarkic economic model, while Trotsky said that the Soviet Union had to participate in the international division of labor to develop. In contrast to Trotsky and Bukharin, in 1938, Stalin said that a world revolution was impossible and that Engels was wrong on the matter. At the 18th Congress, Stalin took the theory to its inevitable conclusion, saying that the communist mode of production could be conceived in one country. He rationalized this by saying that the state could exist in a communist society as long as the Soviet Union was encircled by capitalism. However, with the establishment of socialist regimes in Eastern Europe, Stalin said that socialism in one country was only possible in a large country like the Soviet Union and that to survive, the other states had to follow the Soviet line. Reasons for demise Western view There were few, if any, who believed that the Soviet Union was on the verge of collapse by 1985. The economy was stagnating, but stable enough for the Soviet Union to continue into the 21st century. The political situation was calm because of twenty years of systematic repression against any threat to the country and one-party rule, and the Soviet Union was at the peak of its influence in world affairs. The immediate causes for the Soviet Union's dissolution were the policies and thoughts of Mikhail Gorbachev, the CPSU General Secretary. His policies of perestroika and glasnost tried to revitalize the Soviet economy and the social and political culture of the country. Throughout his rule, he put more emphasis on democratizing the Soviet Union because he believed it had lost its moral legitimacy to rule. These policies led to the collapse of the communist regimes in Eastern Europe and indirectly destabilized Gorbachev's and the CPSU's control over the Soviet Union. Archie Brown said: The expectations of, again most notably, Lithuanians, Estonians, and Latvians were enormously enhanced by what they saw happening in the 'outer empire' [Eastern Europe], and they began to believe that they could remove themselves from the 'inner empire'. In truth, a democratized Soviet Union was incompatible with denial of the Baltic states' independence for, to the extent that those Soviet republics became democratic, their opposition to remaining in a political entity whose center was Moscow would become increasingly evident. Yet, it was not preordained that the entire Soviet Union would break up. However, Brown said that the system did not need to collapse or to do so in the way it did. The democratization from above weakened the Party's control over the country and put it on the defensive. Brown added that a leader other than Gorbachev would probably have suppressed the opposition and continued with economic reform. Nonetheless, Gorbachev accepted that the people sought a different road and consented to the Soviet Union's dissolution in 1991. He said that because of its peaceful collapse, the fall of Soviet communism is "one of the great success stories of 20th-century politics". According to Lars T. Lih, the Soviet Union collapsed because people stopped believing in its ideology. He wrote: When in 1991 the Soviet Union collapsed not with a bang but a whimper, this unexpected outcome was partly the result of the previous disenchantments of the narrative of class leadership. The Soviet Union had always been based on the fervent belief in this narrative in its various permutations. 
When the binding power of the narrative dissolved, the Soviet Union itself dissolved. According to the Communist Party of China The first research into the collapse of the Soviet Union and the Eastern Bloc was very simple and did not take several factors into account. However, these examinations became more advanced by the 1990s, and unlike most Western scholarship, which focuses on the role of Gorbachev and his reform efforts, the Communist Party of China (CPC) examined "core (political) life and death issues" so that it could learn from them and not make the same mistakes. Following the CPSU's demise and the Soviet Union's collapse, the CPC's analysis began examining systemic causes. Several leading CPC officials began hailing Khrushchev's rule, saying that he was the first reformer and that if he had continued after 1964, the Soviet Union would not have witnessed the Era of Stagnation that began under Brezhnev and continued under Yuri Andropov and Konstantin Chernenko. The main economic failure was that the political leadership did not pursue any reforms to tackle the economic malaise that had taken hold, dismissing certain techniques as capitalist, and never disentangling the planned economy from socialism. Xu Zhixin from the CASS Institute of Eastern Europe, Russia, and Central Asia argued that Soviet planners laid too much emphasis on heavy industry, which led to shortages of consumer goods. Unlike his counterparts, Xu argued that the shortages of consumer goods were not an error but rather "a consciously planned feature of the system". Other CPSU failures were the pursuit of state socialism, the high spending on the military-industrial complex, a low tax base, and the subsidizing of the economy. The CPC argued that when Gorbachev came to power and introduced his economic reforms, they were "too little, too late, and too fast". While most CPC researchers criticize the CPSU's economic policies, many have criticized what they see as "Soviet totalitarianism". They accuse Joseph Stalin of creating a system of mass terror and intimidation, of annulling the democratic component of democratic centralism and emphasizing centralism, which led to the creation of an inner-party dictatorship. Other points were Russian nationalism, a lack of separation between the Party and state bureaucracies, suppression of non-Russian ethnicities, distortion of the economy through the introduction of over-centralization, and the collectivization of agriculture. According to CPC researcher Xiao Guisen, Stalin's policies led to "stunted economic growth, tight surveillance of society, a lack of democracy in decision-making, an absence of the rule of law, the burden of bureaucracy, the CPSU's alienation from people's concerns, and an accumulation of ethnic tensions". Stalin's effect on ideology was also criticized; several researchers accused his policies of being "leftist", "dogmatist" and a deviation "from true Marxism–Leninism." He is criticized for initiating the "bastardization of Leninism", for deviating from true democratic centralism by establishing one-man rule and destroying all inner-party consultation, for misinterpreting Lenin's theory of imperialism and for supporting foreign revolutionary movements only when the Soviet Union could get something out of it. Yu Sui, a CPC theoretician, said that "the collapse of the Soviet Union and CPSU is a punishment for its past wrongs!" 
Similarly, Brezhnev, Mikhail Suslov, Alexei Kosygin and Konstantin Chernenko have been criticized for being "dogmatic, ossified, inflexible, [for having a] bureaucratic ideology and thinking", while Yuri Andropov is depicted by some as having had the potential to become a new Khrushchev had he not died so early. While the CPC concurs with Gorbachev's assessment that the CPSU needed internal reform, it does not agree with how he implemented it, criticizing his idea of "humanistic and democratic socialism" and accusing him of negating the leading role of the CPSU, of negating Marxism, of negating the analysis of class contradictions and class struggle, and of negating the "ultimate socialist goal of realizing communism". Unlike the other Soviet leaders,
be called Marxism–Leninism. Stalin's position as General Secretary became the top executive position within the party, giving Stalin significant authority over party and state policy. By the end of the 1920s, diplomatic relations with western countries were deteriorating to the point that there was a growing fear of another Allied attack on the Soviet Union. Within the country, the conditions of the NEP had enabled growing inequalities between increasingly wealthy strata and the remaining poor. The combination of these tensions led the party leadership to conclude that it was necessary for the government's survival to pursue a new policy that would centralize economic activity and accelerate industrialization. To do this, the first five-year plan was implemented in 1928. The plan doubled the industrial workforce, proletarianizing many of the peasants by removing them from their land and assembling them into urban centers. Peasants who remained in agricultural work were also made to have a similarly proletarian relationship to their labor through the policies of collectivization, which turned feudal-style farms into collective farms of a cooperative nature under the direction of the state. These two shifts changed the base of Soviet society towards a more working-class alignment. The plan was fulfilled ahead of schedule in 1932. The success of industrialization in the Soviet Union led western countries, such as the United States, to open diplomatic relations with the Soviet government. In 1933, after years of unsuccessful workers' revolutions (including a short-lived Bavarian Soviet Republic) and spiraling economic calamity, Adolf Hitler came to power in Germany, violently suppressing the revolutionary organizers and posing a direct threat to the Soviet Union that ideologically supported them. The threat of fascist sabotage and imminent attack greatly exacerbated the already existing tensions within the Soviet Union and the Communist Party. A wave of paranoia overtook Stalin and the party leadership and spread through Soviet society. Seeing potential enemies everywhere, leaders of the government security apparatuses began severe crackdowns known as the Great Purge. In total, hundreds of thousands of people, many of whom were posthumously recognized as innocent, were arrested and either sent to prison camps or executed. Also during this time, a campaign against religion was waged in which the Russian Orthodox Church, which had long been a political arm of tsarism before the revolution, was targeted for repression and organized religion was generally removed from public life and made into a completely private matter, with many churches, mosques and other shrines being repurposed or demolished. The Soviet Union was the first to warn the international community of the impending danger of invasion from Nazi Germany. The western powers, however, remained committed to maintaining peace and avoiding another war breaking out, many considering the Soviet Union's warnings to be an unwanted provocation. After many unsuccessful attempts to create an anti-fascist alliance among the western countries, including trying to rally international support for the Spanish Republic in its struggle against a fascist military coup supported by Germany and Italy, in 1939 the Soviet Union signed a non-aggression pact with Germany, which would be broken in June 1941 when the German military invaded the Soviet Union in the largest land invasion in history, beginning the Great Patriotic War. 
The Communist International was dissolved in 1943 after it was concluded that such an organization had failed to prevent the rise of fascism and the global war necessary to defeat it. After the 1945 Allied victory in World War II, the Party held to a doctrine of establishing socialist governments in the post-war occupied territories that would be administered by Communists loyal to Stalin's administration. The party also sought to expand its sphere of influence beyond the occupied territories, using proxy wars and espionage and providing training and funding to promote Communist elements abroad, leading to the establishment of the Cominform in 1947. In 1949, the Communists emerged victorious in the Chinese Civil War, causing an extreme shift in the global balance of forces and greatly escalating tensions between the Communists and the western powers, fueling the Cold War. In Europe, Yugoslavia, under the leadership of Josip Broz Tito, acquired the territory of Trieste, causing conflict both with the western powers and with the Stalin administration, which opposed such a provocative move. Furthermore, the Yugoslav Communists actively supported the Greek Communists during their civil war, further frustrating the Soviet government. These tensions led to the Tito–Stalin split, which marked the beginning of international sectarian division within the world communist movement. Post-Stalin years (1953–85) After Stalin's death, Khrushchev rose to the top post by overcoming political adversaries, including Lavrentiy Beria and Georgy Malenkov, in a power struggle. In 1955, Khrushchev achieved the demotion of Malenkov and secured his own position as Soviet leader. Early in his rule and with the support of several members of the Presidium, Khrushchev initiated the Thaw, which effectively ended the Stalinist mass terror of the prior decades and reduced socio-economic oppression considerably. At the 20th Congress held in 1956, Khrushchev denounced Stalin's crimes, being careful to omit any reference to complicity by any sitting Presidium members. His economic policies, while bringing about improvements, were not enough to fix the fundamental problems of the Soviet economy. The standard of living for ordinary citizens did increase; 108 million people moved into new housing between 1956 and 1965. Khrushchev's foreign policies led to the Sino-Soviet split, in part a consequence of his public denunciation of Stalin. Khrushchev improved relations with Josip Broz Tito's League of Communists of Yugoslavia but failed to establish the close, party-to-party relations that he wanted. While the Thaw reduced political oppression at home, it led to unintended consequences abroad, such as the Hungarian Revolution of 1956 and unrest in Poland, where the local citizenry now felt confident enough to rebel against Soviet control. Khrushchev also failed to improve Soviet relations with the West, partially because of a hawkish military stance. In the aftermath of the Cuban Missile Crisis, Khrushchev's position within the party was substantially weakened. Shortly before his eventual ousting, he tried to introduce economic reforms championed by Evsei Liberman, a Soviet economist, which sought to implement market mechanisms into the planned economy. 
Khrushchev was ousted on 14 October 1964 in a Central Committee plenum that officially cited his inability to listen to others, his failure to consult with the members of the Presidium, his establishment of a cult of personality, his economic mismanagement, and his anti-party reforms as the reasons he was no longer fit to remain as head of the party. He was succeeded in office by Leonid Brezhnev as First Secretary and Alexei Kosygin as Chairman of the Council of Ministers. The Brezhnev era began with a rejection of Khrushchevism in virtually every arena except one: continued opposition to Stalinist methods of terror and political violence. Khrushchev's policies were criticized as voluntarism, and the Brezhnev period saw the rise of neo-Stalinism. While Stalin was never rehabilitated during this period, the most conservative journals in the country were allowed to highlight positive features of his rule. At the 23rd Congress held in 1966, the office of First Secretary and the body of the Presidium reverted to their original names: General Secretary and Politburo, respectively. At the start of his premiership, Kosygin experimented with economic reforms similar to those championed by Malenkov, including prioritizing light industry over heavy industry to increase the production of consumer goods. Similar reforms were introduced in Hungary under the name New Economic Mechanism; however, with the rise to power of Alexander Dubček in Czechoslovakia, who called for the establishment of "socialism with a human face", all non-conformist reform attempts in the Soviet Union were stopped. During his rule, Brezhnev supported détente, a passive weakening of animosity with the West with the goal of improving political and economic relations. However, by the 25th Congress held in 1976, political, economic and social problems within the Soviet Union began to mount, and the Brezhnev administration found itself in an increasingly difficult position. The previous year, Brezhnev's health had begun to deteriorate. He became addicted to painkillers and needed to take increasingly more potent medications to attend official meetings. Because of the "trust in cadres" policy implemented by his administration, the CPSU leadership evolved into a gerontocracy. At the end of Brezhnev's rule, problems continued to mount; in 1979 he consented to the Soviet intervention in Afghanistan to save the embattled communist regime there, and he supported the suppression of the Solidarity movement in Poland. As problems grew at home and abroad, Brezhnev was increasingly ineffective in responding to the growing criticism of the Soviet Union by Western leaders, most prominently by US Presidents Jimmy Carter and Ronald Reagan, and UK Prime Minister Margaret Thatcher. The CPSU, which had wishfully interpreted the financial crisis of the 1970s as the beginning of the end of capitalism, found its country falling far behind the West in its economic development. Brezhnev died on 10 November 1982, and was succeeded by Yuri Andropov on 12 November. Andropov, a staunch anti-Stalinist, chaired the KGB during most of Brezhnev's reign. He had appointed several reformers to leadership positions in the KGB, many of whom later became leading officials under Gorbachev. Andropov supported increased openness in the press, particularly regarding the challenges facing the Soviet Union. Andropov was in office briefly, but he appointed a number of reformers, including Yegor Ligachev, Nikolay Ryzhkov and Mikhail Gorbachev, to important positions. 
He also supported a crackdown on absenteeism and corruption. Andropov had intended to let Gorbachev succeed him in office, but Konstantin Chernenko and his supporters suppressed the paragraph in the letter which called for Gorbachev's elevation. Andropov died on 9 February 1984 and was succeeded by Chernenko. Throughout his short leadership, Chernenko was unable to consolidate power, and effective control of the party organization remained in Gorbachev's hands. Chernenko died on 10 March 1985 and was succeeded in office by Gorbachev on 11 March 1985. Gorbachev and the party's demise (1985–91) The Politburo elected Gorbachev as CPSU General Secretary on 11 March 1985, one day after Chernenko's death. When Gorbachev acceded to power, the Soviet Union was stagnating but was stable and might have continued largely unchanged into the 21st century if not for Gorbachev's reforms. Gorbachev conducted a significant personnel reshuffling of the CPSU leadership, forcing old party conservatives out of office. In 1985 and early 1986 the new leadership of the party called for uskoreniye (acceleration). Gorbachev reinvigorated the party ideology, adding new concepts and updating older ones. Positive consequences of this included the allowance of "pluralism of thought" and a call for the establishment of "socialist pluralism" (literally, socialist democracy). Gorbachev introduced a policy of glasnost (openness or transparency) in 1986, which led to a wave of unintended democratization. According to the British researcher of Russian affairs Archie Brown, the democratization of the Soviet Union brought mixed blessings to Gorbachev; it helped him to weaken his conservative opponents within the party but brought out accumulated grievances which had been suppressed during the previous decades. A conservative movement gained momentum in 1987 in reaction to Boris Yeltsin's dismissal as First Secretary of the CPSU Moscow City Committee. On 13 March 1988, Nina Andreyeva, a university lecturer, wrote an article titled "I Cannot Forsake My Principles". Its publication was planned to occur when both Gorbachev and his protégé Alexander Yakovlev were visiting foreign countries. In their place, Yegor Ligachev led the party organization and told journalists that the article was "a benchmark for what we need in our ideology today". Upon Gorbachev's return, the article was discussed at length during a Politburo meeting; it was revealed that nearly half of its members were sympathetic to the letter and opposed further reforms which could weaken the party. The meeting lasted for two days, but on 5 April a Politburo resolution responded with a point-by-point rebuttal to Andreyeva's article. Gorbachev convened the 19th Party Conference in June 1988. He criticized leading party conservatives – Ligachev, Andrei Gromyko and Mikhail Solomentsev. In turn, conservative delegates attacked Gorbachev and the reformers. According to Brown, there had not been as much open discussion and dissent at a party meeting since the early 1920s. Despite the deep-seated opposition to further reform, the CPSU remained hierarchical; the conservatives acceded to Gorbachev's demands in deference to his position as the CPSU General Secretary. The 19th Conference approved the establishment of the Congress of People's Deputies (CPD) and allowed for contested elections between the CPSU and independent candidates. Other organized parties were not allowed. 
The CPD was elected in 1989; one-third of the seats were appointed by the CPSU and other public organizations to sustain the Soviet one-party state. The elections were democratic, but most elected CPD members opposed any more radical reform. The elections featured the highest electoral turnout in Russian history; no election before or since had a higher participation rate. An organized opposition was established within the legislature under the name Inter-Regional Group of Deputies by dissident Andrei Sakharov. An unintended consequence of these reforms was the increased anti-CPSU pressure; in March 1990, at a session of the Supreme Soviet of the Soviet Union, the party was forced to relinquish its political monopoly of power, in effect turning the Soviet Union into a liberal democracy. The CPSU's demise began in March 1990, when state bodies eclipsed party elements in power. From then until the Soviet Union's disestablishment, Gorbachev ruled the country through the newly created post of President of the Soviet Union. Following this, the central party apparatus did not play a practical role in Soviet affairs. Gorbachev had become independent from the Politburo and faced few constraints from party leaders. In the summer of 1990 the party convened the 28th Congress. A new Politburo was elected; previous incumbents (except Gorbachev and Vladimir Ivashko, the CPSU Deputy General Secretary) were removed. Later that year, the party began work on a new program with a working title, "Towards a Humane, Democratic Socialism". According to Brown, the program reflected Gorbachev's journey from an orthodox communist to a European social democrat. The freedoms of thought and organization which Gorbachev allowed led to a rise in nationalism in the Soviet republics, indirectly weakening the central authorities. In response to this, a referendum took place in 1991, in which most of the union republics voted to preserve the union in a different form. Conservative elements within the CPSU then launched the August 1991 coup, which briefly removed Gorbachev from power but failed to preserve the Soviet Union. When Gorbachev resumed control (21 August 1991) after the coup's collapse, he resigned as CPSU General Secretary on 24 August 1991, and operations were handed over to Ivashko. On 29 August 1991 the activity of the CPSU was suspended throughout the country; on 6 November Yeltsin banned the activities of the party in Russia; and Gorbachev resigned from the presidency on 25 December. The following day, the Soviet of Republics dissolved the Soviet Union. On 30 November 1992, the Constitutional Court of the Russian Federation recognized the ban on the activities of the primary organizations of the Communist Party, formed on a territorial basis, as inconsistent with the Constitution of Russia, but upheld the dissolution of the governing structures of the CPSU and the governing structures of its republican organization – the Communist Party of the RSFSR. After the dissolution of the Soviet Union in 1991, Russian adherents to the CPSU tradition, particularly as it existed before Gorbachev, reorganized themselves within the Communist Party of the Russian Federation (CPRF). Today a wide range of parties in Russia present themselves as successors of the CPSU. Several of them have used the name "CPSU". However, the CPRF is generally seen (due to its massive size) as the heir of the CPSU in Russia. 
Additionally, the CPRF was initially founded as the Communist Party of the Russian SFSR in 1990 (some time before the abolition of the CPSU) and was seen by critics as a "Russian-nationalist" counterpart to the CPSU. Governing style The style of governance in the party alternated between collective leadership and a cult of personality. Collective leadership split power between the Politburo, the Central Committee, and the Council of Ministers to hinder any attempts to create one-man dominance over the Soviet political system. By contrast, Stalin's period as the leader was characterized by an extensive cult of personality. Regardless of leadership style, all political power in the Soviet Union was concentrated in the organization of the CPSU. Democratic centralism Democratic centralism is an organizational principle conceived by Lenin. According to Soviet pronouncements, democratic centralism was distinguished from "bureaucratic centralism", which referred to formulas imposed high-handedly from above without knowledge or discussion. In democratic centralism, decisions are taken after discussions, but once the general party line has been formed, discussion on the subject must cease. No member or organizational institution may dissent on a policy after it has been agreed upon by the party's governing body; to do so would lead to expulsion from the party (a rule formalized at the 10th Congress). Because of this stance, Lenin initiated a ban on factions, which was approved at the 10th Congress. Lenin believed that democratic centralism safeguarded both party unity and ideological correctness. He conceived of the system after the events of 1917, when several socialist parties "deformed" themselves and actively began supporting nationalist sentiments. Lenin intended that the devotion to policy required by centralism would protect the parties from such revisionist ills and the bourgeois deformation of socialism. Lenin supported the notion of a highly centralized vanguard party, in which ordinary party members elected the local party committee, the local party committee elected the regional committee, the regional committee elected the Central Committee, and the Central Committee elected the Politburo, Orgburo, and the Secretariat. Lenin believed that the party needed to be ruled from the center and to have at its disposal the power to mobilize party members at will. This system was later introduced in communist parties abroad through the Communist International (Comintern). Vanguardism A central tenet of Leninism was that of the vanguard party. In a capitalist society, the party was to represent the interests of the working class and all of those who were exploited by capitalism in general; however, it was not to become a part of that class. Lenin decided that the party's sole responsibility was to articulate and plan the long-term interests of the oppressed classes. It was not responsible for the daily grievances of those classes; that was the responsibility of the trade unions. According to Lenin, the Party and the oppressed classes could never become one because the Party was responsible for leading the oppressed classes to victory. The basic idea was that a small group of organized people could, through superior organizational skills, wield power disproportionate to its size. Despite this, until the end of his life, Lenin warned of the danger that the party could be taken over by bureaucrats, by a small clique, or by an individual. 
Toward the end of his life, he criticized the bureaucratic inertia of certain officials and admitted to problems with some of the party's control structures, which were to supervise organizational life. Organization
Communist Party of the Soviet Union (Central Committee)
Communist Party of Armenia (Central Committee)
Communist Party of Azerbaijan (Central Committee)
Communist Party of Bukhara
Communist Party of Byelorussia (Central Committee)
Communist Party of Estonia (Central Committee)
Communist Party of Georgia (Central Committee)
Communist Party of the Karelia-Finland SSR (Central Committee)
Communist Party of Kazakhstan (Central Committee)
Communist Party of Kirgizia (Central Committee)
Communist Party of Khorezm
Communist Party of Latvia (Central Committee)
Communist Party of Lithuania (Central Committee)
Communist Party of Lithuania and Byelorussia (Central Committee)
Communist Party of Moldavia–Moldova (Central Committee)
Communist Party of the Russian SFSR (Central Committee)
Communist Party of Tajikistan (Central Committee)
Communist Party of Turkestan (Central Committee)
Communist Party of Turkmenistan (Central Committee)
Communist Party of Ukraine (Central Committee)
Communist Party of Uzbekistan (Central Committee)
Congress The Congress, nominally the highest organ of the party, was convened every five years. Leading up to the October Revolution and until Stalin's consolidation of power, the Congress was the party's main decision-making body. However, after Stalin's ascension, the Congresses became largely symbolic. CPSU leaders used Congresses as a propaganda and control tool. The most noteworthy Congress since the 1930s was the 20th Congress, in which Khrushchev denounced Stalin in a speech titled "The Personality Cult and its Consequences". Despite delegates to Congresses losing their powers to criticize or remove party leadership, the Congresses functioned as a form of elite-mass communication. They were occasions for the party leadership to express the party line over the next five years to ordinary CPSU members and the general public. The information provided was general, ensuring that party leadership retained the ability to make specific policy changes as they saw fit. The Congresses also provided the party leadership with formal legitimacy by providing a mechanism for the election of new members and the retirement of old members who had lost favor. The elections at Congresses were all predetermined, and the candidates who stood for seats to the Central Committee and the Central Auditing Commission were approved beforehand by the Politburo and the Secretariat. A Congress could also provide a platform for the announcement of new ideological concepts. For instance, at the 22nd Congress, Khrushchev announced that the Soviet Union would see "communism in twenty years", a position later retracted. A Conference, officially referred to as an All-Union Conference, was convened between Congresses by the Central Committee to discuss party policy and to make personnel changes within the Central Committee. Nineteen conferences were convened during the CPSU's existence. The 19th Congress held in 1952 removed the clause in the party's statute which stipulated that a party Conference could be convened. The clause was reinstated at the 23rd Congress, which was held in 1966. Central Committee The Central Committee was a collective body elected at the party congress. 
It was mandated to meet at least twice a year to act as the party's supreme governing body. Membership of the Central Committee increased from 71 full members in 1934 to 287 in 1976. Central Committee members were elected to the seats because of the offices they held, not on their personal merit. Because of this, the Central Committee was commonly used by Sovietologists as an indicator of the strength of the different institutions. The Politburo was elected by and reported to the Central Committee. Besides the Politburo, the Central Committee also elected the Secretariat and the General Secretary, the de facto leader of the Soviet Union. From 1919 to 1952, the Orgburo was also elected in the same manner as the Politburo and the Secretariat by the plenums of the Central Committee. In between Central Committee plenums, the Politburo and the Secretariat were legally empowered to make decisions on its behalf. The Central Committee, or the Politburo and/or Secretariat on its behalf, could issue nationwide decisions; decisions on behalf of the party were transmitted from the top to the bottom. Under Lenin, the Central Committee functioned much as the Politburo did during the post-Stalin era, serving as the party's governing body. However, as the membership in the Central Committee increased, its role was eclipsed by the Politburo. Between Congresses, the Central Committee functioned as the Soviet leadership's source of legitimacy. The decline in the Central Committee's standing began in the 1920s; it was reduced to a compliant body of the Party leadership during the Great Purge. According to party rules, the Central Committee was to convene at least twice a year to discuss political matters, but not matters relating to military policy. The body remained largely symbolic after Stalin's consolidation of power; leading party officials rarely attended meetings of the Central Committee. Central Auditing Commission The Central Auditing Commission (CAC) was elected by the party Congresses and reported only to the party Congress. It had about as many members as the Central Committee. It was responsible for supervising the expeditious and proper handling of affairs by the central bodies of the Party; it audited the accounts of the Treasury and the enterprises of the Central Committee. It was also responsible for supervising the Central Committee apparatus, making sure that its directives were implemented and that Central Committee directives complied with the party Statute. Statute The Statute (also referred to as the Rules, Charter and Constitution) was the party's by-laws and controlled life within the CPSU. The 1st Statute was adopted at the 2nd Congress of the Russian Social Democratic Labour Party, the forerunner of the CPSU. How the Statute was to be structured and organized led to a schism within the party and the establishment of two competing factions: the Bolsheviks (literally, majority) and the Mensheviks (literally, minority). The 1st Statute was based upon Lenin's idea of a centralized vanguard party. The 4th Congress, despite a majority of Menshevik delegates, added the concept of democratic centralism to Article 2 of the Statute. The 1st Statute lasted until 1919, when the 8th Congress adopted the 2nd Statute. It was nearly five times as long as the 1st Statute and contained 66 articles. It was amended at the 9th Congress. At the 11th Congress, the 3rd Statute was adopted with only minor amendments being made. New statutes were approved at the 17th and 18th Congresses respectively. 
The last party statute, which existed until the dissolution of the CPSU, was adopted at the 22nd Congress. Central Committee apparatus General Secretary General Secretary of the Central Committee was the title given to the overall leader of the party. The office was synonymous with the leader of the Soviet Union after Joseph Stalin's consolidation of power in the 1920s. Stalin used the office of General Secretary to create a strong power base for himself. The office was formally titled First Secretary between 1952 and 1966. Politburo The Political Bureau (Politburo), known as the Presidium from 1952 to 1966, was the highest party organ when the Congress and the Central Committee were not in session. Until the 19th Conference in 1988, the Politburo alongside the Secretariat controlled appointments and dismissals nationwide. In the post-Stalin period, the Politburo controlled the Central Committee apparatus through two channels: the General Department, which distributed the Politburo's orders to the Central Committee departments, and the personnel overlap which existed between the Politburo and the Secretariat. This personnel overlap gave the CPSU General Secretary a way of strengthening his position within the Politburo through the Secretariat. Kirill Mazurov, Politburo member from 1965 to 1978, accused Brezhnev of turning the Politburo into a "second echelon" of power. He accomplished this by discussing policies before Politburo meetings with Mikhail Suslov, Andrei Kirilenko, Fyodor Kulakov and Dmitriy Ustinov among others, who held seats both in the Politburo and the Secretariat. Mazurov's claim was later verified by Nikolai Ryzhkov, the Chairman of the Council of Ministers under Gorbachev. Ryzhkov said that Politburo meetings lasted only 15 minutes because the people close to Brezhnev had already decided what was to be approved. The Politburo was abolished and replaced by a Presidium in 1952 at the 19th Congress. In the aftermath of the 19th Congress and the 1st Plenum of the 19th Central Committee, Stalin ordered the creation of the Bureau of the Presidium,
Yesus Lutheran Church, however, has taken a stand that marriage is inherently between a man and a woman, and has formally broken fellowship with the ELCA, a doctrinal stand that has cost the Ethiopian church ELCA financial support. Conservative position Some mainline Protestant denominations, such as the African Methodist churches, the Reformed Church in America, and the Presbyterian Church in America, have a conservative position on the subject. The Seventh-day Adventist Church "recognizes that every human being is valuable in the sight of God, and seeks to minister to all men and women [including homosexuals] in the spirit of Jesus," while maintaining that homosexual sex itself is forbidden in the Bible. "Jesus affirmed the dignity of all human beings and reached out compassionately to persons and families suffering the consequences of sin. He offered caring ministry and words of solace to struggling people, while differentiating His love for sinners from His clear teaching about sinful practices." Conservative Quakers, those within Friends United Meeting and the Evangelical Friends International, believe that sexual relations are condoned only within marriage, which they define as being between a man and a woman. Confessional Lutheran churches teach that it is sinful to have homosexual desires, even if they do not lead to homosexual activity. The doctrinal statement issued by the Wisconsin Evangelical Lutheran Synod states that making a distinction between homosexual orientation and the act of homosexuality is confusing. However, confessional Lutherans also warn against a selective morality which harshly condemns homosexuality while treating other sins more lightly. Evangelical churches The positions of the evangelical churches are varied. They range from liberal to fundamentalist, with moderate conservative and neutral positions in between. Some evangelical denominations have adopted neutral positions, leaving the choice to local churches to decide on same-sex marriage. Evangelical Conservative position Conservative Evangelical Christians regard homosexual acts as sinful and think they should not be accepted by society. They tend to interpret biblical verses on homosexual acts to mean that the heterosexual family was created by God to be the bedrock of civilization and that same-sex relationships contradict God's design for marriage and are not his will. Christians who oppose homosexual relationships sometimes argue that same-gender sexual activity is a sin. In opposing interpretations of the Bible that are supportive of homosexual relationships, conservative Christians have argued for the reliability of the Bible and the meaning of texts related to homosexual acts, while often seeing what they call the diminishing of the authority of the Bible by many homosexual authors as being ideologically driven. As an alternative to a school-sponsored Day of Silence opposing bullying of LGBT students, conservative Christians organized a Golden Rule Initiative, where they passed out cards saying "As a follower of Christ, I believe that all people are created in the image of God and therefore deserve love and respect." Others created a Day of Dialogue to oppose what they believe is the silencing of Christian students who make public their opposition to homosexuality. On 29 August 2017, the Council on Biblical Manhood and Womanhood released a manifesto on human sexuality known as the "Nashville Statement". The statement was signed by 150 evangelical leaders, and includes 14 points of belief. 
Fundamentalist position It is within the fundamentalist conservative camp that anti-gay activists appear on TV or radio claiming that homosexuality is the cause of many social problems, such as terrorism. Some evangelical churches in Uganda strongly oppose homosexuality and homosexuals. They have campaigned for laws criminalizing homosexuality. Generalizations and prejudice are frequently used to spread hatred of homosexual people. Sex scandals Some evangelical pastors known for anti-gay speeches have been outed. One was Pastor Ted Haggard, founder of the nondenominational charismatic megachurch New Life Church in Colorado Springs, USA. Married with five children, Haggard was an anti-gay activist and said he wanted to ban homosexuality from the church. In 2006, he was dismissed from his position as senior pastor after a male prostitute claimed to have had sex with him for three years. After initially denying the relationship, the pastor admitted that the allegations were accurate. Another was Baptist pastor George Alan Rekers of the Southern Baptist Convention in the United States, a psychologist and member of the National Association for Research & Therapy of Homosexuality. Married and a father, the anti-gay activist was spotted in 2010 with a gay escort hired for a trip to Europe. According to him, he had hired the escort to carry his luggage. Moderate position Some churches have a moderate conservative position. Although they do not approve of homosexual practices, they show sympathy and respect for homosexuals. Baptist Reflecting this position, some pastors have shown moderation in public statements. In 2008, for example, Baptist pastor Rick Warren of Saddleback Church in Lake Forest, California, said that he had developed good relationships with several gay people without having to compromise his belief in the biblical definition of marriage as between a man and a woman. Charismatic movement Philip Igbinijesu, a pastor of the Lagos Word Assembly, an Evangelical church, said in a message to his church that the Nigerian law on homosexuality, which encouraged denunciation, was hateful. He recalled that homosexuals are creatures of God and that they should be treated with respect. Brian Houston of Hillsong Church said that gays are welcome in the church, but they cannot take up leadership positions. Non-denominational Christianity Pastor Joel Osteen of Lakewood Church in Houston said in 2013 that he found it unfortunate that several Christian ministers focus on homosexuality while forgetting the other sins described in the Bible. He said that Jesus did not come to condemn people, but to save them. Other pastors also share this view. Pastor Andy Stanley of North Point Community Church in Alpharetta mentioned in 2015 that the church should be the safest place on the planet for students to talk about anything, including same-sex attraction. Organizations The French Evangelical Alliance, a member of the European Evangelical Alliance and the World Evangelical Alliance, adopted on 12 October 2002, through its National Council, a document entitled Foi, espérance et homosexualité ("Faith, Hope and Homosexuality"), in which homophobia, hatred and rejection of homosexuals are condemned, but which rejects homosexual practices and denies full church membership to unrepentant homosexuals and to those who approve of these practices. 
In 2015, the Conseil national des évangéliques de France (French National Council of Evangelicals) reaffirmed its position on the issue by opposing marriage of same-sex couples, while not rejecting homosexuals and wishing to offer them more than a blessing: accompaniment and welcome. The French evangelical pastor Philippe Auzenet, a chaplain of the association Oser en parler, regularly speaks on the subject in the media. He promotes dialogue and respect, as well as awareness-raising in order to better understand homosexuals. He also said in 2012 that Jesus would go to a gay bar, because he went to all people with love. Liberal position International There are some international evangelical denominations that are gay-friendly, such as the Alliance of Baptists and the Affirming Pentecostal Church International. U.S. A 2014 survey reported that 43% of white evangelical American Christians between the ages of 18 and 33 supported same-sex marriage. Some evangelical churches accept homosexuality and celebrate gay weddings. Pastors have also been involved in changing the traditional position of their church. In 2014, the New Heart Community Church of La Mirada, a Baptist church in the suburbs of Los Angeles, was expelled from the Southern Baptist Convention for this reason. In 2015, GracePointe Church in Franklin, in the suburbs of Nashville, made the same decision. It lost over half of its weekly attendance (from 1,000 to 482). Neutral positions Some evangelical denominations have adopted neutral positions, leaving the decision on same-sex marriage to local churches. Restorationist churches Restorationist churches, such as the Seventh-day Adventists, generally teach that homosexuals are 'broken' and can be 'fixed'. Jehovah's Witnesses believe that "The Bible condemns sexual activity that is not between a husband and wife, whether it is homosexual or heterosexual conduct. (1 Corinthians 6:18) . . . While the Bible disapproves of homosexual acts, it does not condone hatred of homosexuals or homophobia. Instead, Christians are directed to “respect everyone.”—1 Peter 2:17, Good News Translation." The Church of Jesus Christ of Latter-day Saints said in 2015 that it officially welcomes its gay and lesbian members if they choose sexual abstinence. The Community of Christ, a branch of Mormonism, fully accepts LGBT persons, performs weddings for gay and lesbian couples, and ordains LGBT members. Within the Stone-Campbell-aligned Restorationist churches, views are divergent. The Churches of Christ (a cappella) and the Independent Christian Churches/Churches of Christ mostly adhere to a very conservative ideology socially, politically, and religiously; they are generally not accepting of openly LGBT members and will not perform weddings for gay and lesbian couples. The Disciples of Christ is fully accepting of LGBT persons, often performs weddings for gay and lesbian couples, and ordains LGBT members. The United Church of Christ is an officially "open and affirming" church. Other Restorationist churches, such as Millerite churches, have taken mixed positions but are increasingly accepting, with some of their congregations fully accepting LGBT persons in all aspects of religious and political life.
Views supportive of homosexuality In the 20th century, theologians like Jürgen Moltmann, Hans Küng, John Robinson, Bishop David Jenkins, Don Cupitt, and Bishop Jack Spong challenged traditional theological positions and understandings of the Bible; following these developments some have suggested that passages have been mistranslated or that they do not refer to what we understand as "homosexuality." Clay Witt, a minister in the Metropolitan Community Church, explains how theologians and commentators like John Shelby Spong, George Edwards and Michael England interpret injunctions against certain sexual acts as being originally intended as a means of distinguishing religious worship between Abrahamic and the surrounding pagan faiths, within which homosexual acts featured as part of idolatrous religious practices: "England argues that these prohibitions should be seen as being directed against sexual practices of fertility cult worship. As with the earlier reference from Strong’s, he notes that the word 'abomination' used here is directly related to idolatry and idolatrous practices throughout the Hebrew Testament. Edwards makes a similar suggestion, observing that 'the context of the two prohibitions in Leviticus 18:22 and Leviticus 20:13 suggest that what is opposed is not same-sex activity outside the cult, as in the modern secular sense, but within the cult identified as Canaanite'". In 1986, the Evangelical and Ecumenical Women’s Caucus (EEWC), then known as the Evangelical Women's Caucus International, passed a resolution stating: "Whereas homosexual people are children of God, and because of the biblical mandate of Jesus Christ that we are all created equal in God's sight, and in recognition of the presence of the lesbian minority in EWCI, EWCI takes a firm stand in favor of civil rights protection for homosexual persons." Some Christians believe that Biblical passages have been mistranslated or that these passages do not refer to LGBT orientation as currently understood. Liberal Christian scholars, like conservative Christian scholars, accept earlier versions of the texts that make up the Bible in Hebrew or Greek. However, within these early texts there are many terms that modern scholars have interpreted differently from previous generations of scholars. There are concerns with copying errors, forgery, and biases among the translators of later Bibles. They consider some verses such as those they say support slavery or the inferior treatment of women as not being valid today, and against the will of God present in the context of the Bible. They cite these issues when arguing for a change in theological views on sexual relationships to what they say is an earlier view. They differentiate among various sexual practices, treating rape, prostitution, or temple sex rituals as immoral and those within committed relationships as positive regardless of sexual orientation. They view certain verses, which they believe refer only to homosexual rape, as not relevant to consensual homosexual relationships. Yale professor John Boswell has argued that a number of Early Christians entered into homosexual relationships, and that certain Biblical figures had homosexual relationships, such as Ruth and her mother-in-law Naomi, Daniel and the court official Ashpenaz, and David and King Saul's son Jonathan. Boswell has also argued that adelphopoiesis, a rite bonding two men, was akin to a religiously sanctioned same-sex union. 
Having partaken in such a rite, a person was prohibited from entering into marriage or taking monastic vows, and the choreography of the service itself closely parallelled that of the marriage rite. His views have not found wide acceptance, and opponents have argued that this rite sanctified a Platonic brotherly bond, not a homosexual union. He also argued that condemnation of homosexuality began only in the 12th century. Boswell's critics point out that many earlier doctrinal sources condemn homosexuality as a sin even if they do not prescribe a specific punishment, and that Boswell's arguments are based on sources which reflected a general trend towards harsher penalties, rather than a change in doctrine, from the 12th century onwards. Desmond Tutu, the former Anglican Archbishop of Cape Town and a Nobel Peace Prize winner, has described homophobia as a "crime against humanity" and "every bit as unjust" as apartheid: "We struggled against apartheid in South Africa, supported by people the world over, because black people were being blamed and made to suffer for something we could do nothing about; our very skins. It is the same with sexual orientation. It is a given. ... We treat them [gays and lesbians] as pariahs and push them outside our communities. We make them doubt that they too are children of God – and this must be nearly the ultimate blasphemy. We blame them for what they are." Modern gay Christian leader Justin R. Cannon promotes what he calls "Inclusive Orthodoxy" ('orthodoxy' in this sense is not to be confused with the Eastern Orthodox Church). He explains on his ministry website: "Inclusive Orthodoxy is the belief that the Church can and must be inclusive of LGBT individuals without sacrificing the Gospel and the Apostolic teachings of the Christian faith." Cannon's ministry takes a unique and distinct approach from modern liberal Christians while still supporting homosexual relations. His ministry affirms the divine inspiration of the Bible, the authority of Tradition, and says "...that there is a place within the full life and ministry of the Christian Church for lesbian, gay, bisexual, and transgender Christians, both those who are called to lifelong celibacy and those who are partnered." Today, many religious people are becoming more affirming of same-sex relationships, even in denominations with official stances against homosexuality. In the United States, people in denominations who are against same-sex relationships are liberalizing quickly, though not as quickly as those in more affirming groups. This social change is creating tension within many denominations, and even schisms and mass walk-outs among Mormons and other conservative groups. Pope Francis voiced support for same-sex
it the oldest LGBT-affirming Apostolic Pentecostal denomination in existence. Another such organization is the Affirming Pentecostal Church International, currently the largest affirming Pentecostal organization, with churches in the US, UK, Central and South America, Europe and Africa. LGBT-affirming denominations regard homosexuality as a natural occurrence. The United Church of Christ celebrates gay marriage, and some parts of the Anglican and Lutheran churches allow for the blessing of gay unions. The United Church of Canada also allows same-sex marriage, and views sexual orientation as a gift from God. Within the Anglican Communion, there are openly gay clergy; for example, Gene Robinson is an openly gay Bishop in the US Episcopal Church. Within the Lutheran communion, there are openly gay clergy, too; for example, bishop Eva Brunne is an openly lesbian Bishop in the Church of Sweden. Such religious groups and denominations interpret scripture and doctrine in a way that leads them to accept that homosexuality is morally acceptable, and a natural occurrence. For example, in 1988 the United Church of Canada, that country's largest Protestant denomination, affirmed that "a) All persons, regardless of their sexual orientation, who profess Jesus Christ and obedience to Him, are welcome to be or become full members of the Church; and b) All members of the Church are eligible to be considered for the Ordered Ministry." In 2000, the Church's General Assembly further affirmed that "human sexual orientations, whether heterosexual or homosexual, are a gift from God and part of the marvelous diversity of creation." In addition, some Christian denominations such as the Moravian Church, believe that the Bible speaks negatively of homosexual acts but, as research on the matter continues, the Moravian Church seeks to establish a policy on homosexuality and the ordination of homosexuals. In 2014, Moravian Church in Europe allowed blessings of same-sex unions. Liberal Quakers, those in membership of Britain Yearly Meeting and Friends General Conference in the US approve of same-sex marriage and union. Quakers were the first Christian group in the United Kingdom to advocate for equal marriage and Quakers in Britain formally recognised same-sex relationships in 1963. The United Methodist Church elected a lesbian bishop in 2016, and on 7 May 2018, the Council of Bishops proposed the One Church Plan, which would allow individual pastors and regional church bodies to decide whether to ordain LGBT clergy and perform same-sex weddings. On 26 February 2019, a special session of the General Conference rejected the One Church Plan and voted to strengthen its official opposition to same-sex marriages and ordaining openly LGBT clergy. Various positions The Anglican Church reassures people with same sex attraction they are loved by God and are welcomed as full members of the Body of Christ. The Church leadership has a variety of views in regard to homosexual expression and ordination. Some expressions of sexuality are considered sinful including "promiscuity, prostitution, incest, pornography, paedophilia, predatory sexual behaviour, and sadomasochism (all of which may be heterosexual and homosexual)". The Church is concerned with pressures on young people to engage sexually and encourages abstinence. Churches within Lutheranism hold stances on the issue ranging from labeling homosexual acts as sinful, to acceptance of homosexual relationships. 
For example, the Lutheran Church–Missouri Synod, the Lutheran Church of Australia, and the Wisconsin Evangelical Lutheran Synod recognize homosexual behavior as intrinsically sinful and seek to minister to those who are struggling with homosexual inclinations. However, the Church of Sweden, the Church of Denmark, the Church of Norway, and Lutheran churches of the Evangelical Church in Germany conduct same-sex marriages, while the Evangelical Lutheran Church in America and the Evangelical Lutheran Church in Canada open the ministry of the church to gay pastors and other professional workers living in committed relationships. The Ethiopian Mekane Yesus Lutheran Church, however, has taken a stand that marriage is inherently between a man and a woman, and has formally broken fellowship with the ELCA, a doctrinal stand that has cost the Ethiopian church ELCA financial support.
the Sahel. They include 150 languages spoken across northern Nigeria, southern Niger, southern Chad, the Central African Republic, and northern Cameroon. The most widely spoken Chadic language is Hausa, a lingua franca of much of inland Eastern West Africa. Composition Newman (1977) classified the languages into the four groups which have been accepted in all subsequent literature. Further subbranching, however, has not been as robust; Blench (2006), for example, only accepts the A/B bifurcation of East Chadic. Kujargé has been added from Blench (2008), who suggests Kujargé may have split off before the breakup of Proto-Chadic and then subsequently became influenced by East Chadic. Subsequent work by Lovestrand argues strongly that Kujarge is a valid member of East Chadic. The placing of Luri as a primary split of West Chadic is erroneous. Caron (2004) shows that this language is South
Bauchi and part of the Polci cluster.
West Chadic. Two branches, which include (A) the Hausa, Ron, Bole, and Angas languages; and (B) the Bade, Warji, and Zaar languages.
Biu–Mandara (Central Chadic). Three branches, which include (A) the Bura, Kamwe, and Bata languages, among other groups; (B) the Buduma and Musgu languages; and (C) Gidar.
East Chadic. Two branches, which include (A) the Tumak, Nancere, and Kera languages; and (B) the Dangaléat, Mukulu, and Sokoro languages.
Masa
? Kujargé
Origin Modern genetic studies of Northwestern Cameroonian Chadic-speaking populations have observed high frequencies of the Y-Chromosome Haplogroup R1b in these populations (the R1b-V88 variant). This paternal marker is common in parts of West Eurasia, but otherwise rare in Africa. Cruciani et al. (2010) thus proposed that the Proto-Chadic speakers during the mid-Holocene (~7,000 years ago) migrated from the Levant to the Central Sahara, and from there settled in the Lake Chad Basin. However, a more recent study in 2018 found that haplogroup R1b-V88 entered Chad much more recently during "Baggarization" (the migration of Baggara Arabs to the Sahel in the 17th century AD), finding no evidence of ancient Eurasian gene flow. Loanwords Chadic languages contain many Nilo-Saharan loanwords from either the Songhay or Maban branches, pointing to early contact between Chadic and Nilo-Saharan
languages. A number of extinct populations have been proposed to have spoken Afroasiatic languages of the Cushitic branch. Marianne Bechhaus-Gerst (2000) proposed that the peoples of the Kerma Culture – which inhabited the Nile Valley in present-day Sudan immediately before the arrival of the first Nubian speakers – spoke Cushitic languages. She argues that the Nilo-Saharan Nobiin language today contains a number of key pastoralism-related loanwords that are of proto-Highland East Cushitic origin, including the terms for sheep/goatskin, hen/cock, livestock enclosure, butter and milk. However, more recent linguistic research indicates that the people of the Kerma culture (who were based in southern Nubia) instead spoke Nilo-Saharan languages of the Eastern Sudanic branch, and that the peoples of the C-Group culture to their north (in northern Nubia) and other groups in northern Nubia (such as the Medjay and Blemmyes) spoke Cushitic languages, with the latter being related to the modern Beja language. The linguistic affinity of the ancient A-Group culture of northern Nubia—the predecessor of the C-Group culture—is unknown, but Rilly (2019) suggests that it is unlikely to have spoken a language of the Northern East Sudanic branch of Nilo-Saharan, and may have spoken a Cushitic language, another Afro-Asiatic language, or a language belonging to another (non-Northern East Sudanic) branch of the Nilo-Saharan family. Rilly also criticizes proposals (by Behrens and Bechhaus-Gerst) of significant early Afro-Asiatic influence on Nobiin, and considers evidence of substratal influence on Nobiin from an earlier, now extinct Eastern Sudanic language to be stronger. Linguistic evidence indicates that Cushitic languages were spoken in Lower Nubia, an ancient region which straddles present-day southern Egypt and northern Sudan, before the arrival of North Eastern Sudanic languages from Upper Nubia. Julien Cooper (2017) states that in antiquity, Cushitic languages were spoken in Lower Nubia (the northernmost part of modern-day Sudan). He also states that Eastern Sudanic-speaking populations from southern and west Nubia gradually replaced the earlier Cushitic-speaking populations of this region. In Handbook of Ancient Nubia, Claude Rilly (2019) states that Cushitic languages once dominated Lower Nubia along with the Ancient Egyptian language. He mentions historical records of the Blemmyes, a Cushitic-speaking tribe which controlled Lower Nubia and some cities in Upper Egypt. He mentions the linguistic relationship between the modern Beja language and the ancient Blemmyan language, and that the Blemmyes can be regarded as a particular tribe of the Medjay. Additionally, historiolinguistics indicate that the makers of the Savanna Pastoral Neolithic (Stone Bowl Culture) in the Great Lakes area likely spoke South Cushitic languages. Christopher Ehret (1998) proposed on the basis of loanwords that South Cushitic languages (called "Tale" and "Bisha" by Ehret) were spoken in an area closer to Lake Victoria than they are found today. Also, historically, the Southern Nilotic languages have undergone extensive contact with a "missing" branch of East Cushitic that Heine (1979) refers to as Baz. Reconstruction Christopher Ehret proposed a reconstruction of Proto-Cushitic in 1987, but did not base this on individual branch reconstructions.
Grover Hudson (1989) has done some preliminary work on Highland East Cushitic, David Appleyard (2006) has proposed a reconstruction of Proto-Agaw, and Roland Kießling and Maarten Mous (2003) have jointly proposed a reconstruction of West Rift Southern Cushitic. No reconstruction has been published for Lowland East Cushitic, though Paul D. Black wrote his (unpublished) dissertation on the topic in 1974. No comparative work has yet brought these branch reconstructions together. Comparative vocabulary Basic vocabulary Sample basic vocabulary of Cushitic languages from Vossen & Dimmendaal (2020:318) (with PSC denoting Proto-Southern Cushitic). Numerals Comparison of numerals in individual Cushitic languages.
See also
List of Proto-Cushitic reconstructions (Wiktionary)
Meroitic language
Notes
References
Ethnologue on the Cushitic branch
Bender, Marvin Lionel. 1975. Omotic: A New Afroasiatic Language Family. Southern Illinois University Museum series, number 3.
Bender, M. Lionel. 1986. A possible Cushomotic isomorph. Afrikanistische Arbeitspapiere 6:149–155.
Fleming, Harold C. 1974. Omotic as an Afroasiatic family. In: Proceedings of the 5th Annual Conference on African Linguistics (ed. by William Leben), pp. 81–94. African Studies Center & Department of Linguistics, UCLA.
Kießling, Roland & Maarten Mous. 2003. The Lexical Reconstruction of West-Rift Southern Cushitic. Cushitic Language Studies, Volume 21.
Lamberti, Marcello. 1991. Cushitic and its classification. Anthropos 86(4/6):552–561.
Newman, Paul. 1980. The Classification of Chadic within Afroasiatic. Universitaire Pers.
the north in Egypt and the Sudan, and to the south in Kenya and Tanzania. As of 2012, the Cushitic languages with over one million speakers were Oromo, Somali, Beja, Afar, Hadiyya, Kambaata, Saho, and Sidama. Official status The Cushitic languages with the greatest number of total speakers are Oromo (37 million), Somali (22 million), Beja (3.2 million), Sidamo (3 million), and Afar (2 million). Oromo serves as one of the official working languages of Ethiopia and is also the working language of several of the states within the Ethiopian federal system, including the Oromia, Harari and Dire Dawa regional states, and of the Oromia Zone in the Amhara Region. Somali is the first of the two official languages of Somalia and of the three official languages of the self-declared republic of Somaliland. It also serves as a language of instruction in Djibouti, and as the working language of the Somali Region in Ethiopia. Beja, Afar, Blin and Saho, the languages of the Cushitic branch of Afroasiatic that are spoken in Eritrea, are languages of instruction in the Eritrean elementary school curriculum. The constitution of Eritrea also recognizes the equality of all natively spoken languages. Additionally, Afar is a language of instruction in Djibouti, as well as the working language of the Afar Region in Ethiopia. Origin and prehistory Christopher Ehret argues for a unified Proto-Cushitic language in the Red Sea Hills as far back as the Early Holocene. Based on onomastic evidence, the Medjay and the Blemmyes of northern Nubia are believed to have spoken Cushitic languages related to the modern Beja language. Less certain are hypotheses which propose that Cushitic languages were spoken by the people of the C-Group culture in northern Nubia, or the people of the Kerma culture in southern Nubia. Typological characteristics Phonology Most Cushitic languages have a simple five-vowel system with phonemic length; a notable exception is the Agaw languages, which do not contrast vowel length but have one or two additional central vowels. The consonant inventory of many Cushitic languages includes glottalic consonants, e.g. in Oromo, which has ejective and implosive consonants. Less common are pharyngeal consonants, which appear e.g. in Somali or the Saho–Afar languages. Pitch accent is found in most Cushitic languages, and plays a prominent role in morphology and syntax. Grammar Nouns Nouns are inflected for case and number. All nouns are further grouped into two gender categories, masculine gender and feminine gender. In many languages, gender is overtly marked directly on the noun (e.g. in Awngi, where all female nouns carry the suffix -a). The case system of many Cushitic languages is characterized by marked nominative alignment, which is typologically quite rare and predominantly found in languages of Africa. In marked nominative languages, the noun appears in unmarked "absolutive" case when cited in isolation, or when used as a predicative noun or as the object of a transitive verb; on the other hand, it is explicitly marked for nominative case when it functions as subject in a transitive or intransitive sentence. Possession is usually expressed by genitive case marking of the possessor. South Cushitic—which has no case marking for subject and object—follows the opposite strategy: here, the possessed noun is marked for construct case, e.g. Iraqw afé-r mar'i "doors" (lit. "mouths of houses"), where afee "mouth" is marked for construct case.
Most nouns are by default unmarked for number, but can be explicitly marked for singular ("singulative") and plural number. E.g. in Bilin, dəmmu "cat(s)" is number-neutral, from which singular dəmmura "a single cat" and plural dəmmut "several cats" can be formed. Plural formation is very diverse, and employs ablaut (i.e. changes of root vowels or consonants), suffixes and reduplication. Verbs Verbs are inflected for person/number and tense/aspect. Many languages also have a special form of the verb in negative clauses. Most languages distinguish seven person/number categories: first, second, third person, singular and plural number, with a masculine/feminine gender distinction in third person singular. The most common conjugation type employs suffixes. Some languages also have a prefix conjugation: in Beja and the Saho–Afar languages, the prefix conjugation is still a productive part of the verb paradigm, whereas in most other languages, e.g. Somali, it is restricted to only a few verbs. It is generally assumed that historically, the suffix conjugation developed from the older prefix conjugation, by combining the verb stem with a suffixed auxiliary verb. The following table gives an example for the suffix and prefix conjugations in affirmative present tense in Somali. Syntax Basic word order is verb final, the most common order being subject–object–verb (SOV). The subject or object can also follow the verb to indicate focus. Classification Overview The phylum was first designated as Cushitic in 1858. The Omotic languages, once included in Cushitic, have almost universally been removed. The most influential recent classification, Tosco (2003), has informed later approaches. It and two more recent classifications are as follows: Tosco (2000, East Cushitic revised 2020) North Cushitic (Beja) Central Cushitic (Agaw) South Cushitic Maa (Bantu hybrid & partially a planned language, difficult to classify) Dahalo (divergent; possibly not Southern Cushitic) Rift East Cushitic Highland Lowland Saho–Afar Southern (nuclear Southern) Omo–Tana Oromoid Peripheral (?) Yaaku Dullay Appleyard (2012) North Cushitic (Beja) Central Cushitic (Agaw) South Cushitic East Cushitic Lowland East Cushitic Highland East Cushitic Yaaku–Dullay Dahalo Bender (2020 [2008]) Geographic labels are given for comparison; Bender's labels are added in parentheses. Dahalo is made a primary branch, as also suggested by Kiessling and Mous (2003). Yaaku is not listed, being placed within Arboroid. Afar–Saho is removed from Lowland East Cushitic; since they are the most 'lowland' of the Cushitic languages, Bender calls the remnant 'core' East Cushitic. North Cushitic (Beja) Central Cushitic (Agew) Dahalo South Cushitic East Cushitic Afar–Saho Highland East Cushitic Lowland East Cushitic ('core' East Cushitic) Dullay SAOK Eastern Omo–Tana (Somaloid) Western Omo–Tana (Arboroid) Oromoid (Oromo–Konsoid) These classifications have not been without contention. For example, it has been argued that Southern Cushitic belongs in the Eastern branch, with its divergence explained by contact with Hadza- and
several years, depending on the size and complexity of the bankruptcy. The Bankruptcy Code accomplishes this objective through the use of a bankruptcy plan. The debtor in possession typically has the first opportunity to propose a plan during the period of exclusivity. This period allows the debtor 120 days from the date of filing for chapter 11 to propose a plan of reorganization before any other party in interest may propose a plan. If the debtor proposes a plan within the 120-day exclusivity period, a 180-day exclusivity period from the date of filing for chapter 11 is granted in order to allow the debtor to gain confirmation of the proposed plan. With some exceptions, the plan may be proposed by any party in interest. Interested creditors then vote for a plan. Confirmation If the judge approves the reorganization plan and the creditors all agree, then the plan can be confirmed. If at least one class of creditors objects and votes against the plan, it may nonetheless be confirmed if the requirements of cramdown are met. In order to be confirmed over the creditors' objection, the plan must not discriminate against that class of creditors, and the plan must be found fair and equitable to that class. Upon confirmation, the plan becomes binding and identifies the treatment of debts and operations of the business for the duration of the plan. If a plan cannot be confirmed, the court may either convert the case to a liquidation under chapter 7, or, if in the best interests of the creditors and the estate, the case may be dismissed resulting in a return to the status quo before bankruptcy. If the case is dismissed, creditors will look to non-bankruptcy law in order to satisfy their claims. In order to proceed to the confirmation hearing, a disclosure statement must be approved by the bankruptcy court. Once the disclosure statement is approved, the plan proponent will solicit votes from the classes of creditors. Solicitation is the process by which creditors vote on the proposed confirmation plan. This process can be complicated if creditors fail or refuse to vote. In which case, the plan proponent might tailor his or her efforts in obtaining votes, or the plan itself. The plan may be modified before confirmation, so long as the modified plan meets all the requirements of Chapter 11. A chapter 11 case typically results in one of three outcomes: a reorganization; a conversion into chapter 7 liquidation, or it is dismissed. In order for a chapter 11 debtor to reorganize, they must file (and the court must confirm) a plan of reorganization. Simply put, the plan is a compromise between the major stakeholders in the case, including, but not limited to the debtor and its creditors. Most chapter 11 cases aim to confirm a plan, but that may not always be possible. Section 1121(b) of the Bankruptcy Code provides for an exclusivity period in which only the debtor may file a plan of reorganization. This period lasts 120 days after the date of the order for relief, and if the debtor does file a plan within the first 120 days, the exclusivity period is extended to 180 days after the order for relief for the debtor to seek acceptance of the plan by holders of claims and interests. If the judge approves the reorganization plan and the creditors all “agree,” then the plan can be confirmed. §1129 of the Bankruptcy Code requires the bankruptcy court reach certain conclusions prior to “confirming” or “approving” the plan and making it binding on all parties in the case. 
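The 120- and 180-day exclusivity windows described above reduce to simple date arithmetic. The following is a minimal illustrative sketch only: the function name and example date are invented, and in practice courts may extend, shorten, or terminate exclusivity on motion of a party in interest.

```python
from datetime import date, timedelta

# Hypothetical helper; the 120-day and 180-day figures come from the
# exclusivity discussion above (section 1121(b) of the Bankruptcy Code).
def exclusivity_deadlines(order_for_relief: date) -> tuple[date, date]:
    """Return (last day for the debtor alone to file a plan,
    last day for the debtor to obtain plan acceptances), ignoring any
    court-ordered extension or early termination of exclusivity."""
    plan_filing_deadline = order_for_relief + timedelta(days=120)
    acceptance_deadline = order_for_relief + timedelta(days=180)
    return plan_filing_deadline, acceptance_deadline

# Example with an invented order-for-relief date of 1 March 2024:
file_by, accept_by = exclusivity_deadlines(date(2024, 3, 1))
print(file_by)    # 2024-06-29 (120 days after the order for relief)
print(accept_by)  # 2024-08-28 (180 days after the order for relief)
```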
Most importantly, the bankruptcy court must find that the plan (a) complies with applicable law, and (b) has been proposed in good faith. Furthermore, the court must determine whether the plan is "feasible"; in other words, the court must be satisfied that confirming the plan will not lead to liquidation down the road. The plan must ensure that the debtor will be able to pay most administrative and priority claims (which rank above unsecured claims) on the effective date. Automatic stay Like other forms of bankruptcy, petitions filed under chapter 11 invoke the automatic stay of § 362. The automatic stay requires all creditors to cease collection attempts, and makes many post-petition debt collection efforts void or voidable. Under some circumstances, some creditors, or the United States Trustee, can request the court convert the case into a liquidation under chapter 7, or appoint a trustee to manage the debtor's business. The court will grant a motion to convert to chapter 7 or appoint a trustee if either of these actions is in the best interest of all creditors. Sometimes a company will liquidate under chapter 11 (perhaps in a 363 sale), in which the pre-existing management may be able to help get a higher price for divisions or other assets than a chapter 7 liquidation would be likely to achieve. Section 362(d) of the Bankruptcy Code allows the court to terminate, annul, or modify the continuation of the automatic stay as may be necessary or appropriate to balance the competing interests of the debtor, its estate, creditors, and other parties in interest, and grants the bankruptcy court considerable flexibility to tailor relief to the exigencies of the circumstances. Relief from the automatic stay is generally sought by motion and, if opposed, is treated as a contested matter under Bankruptcy Rule 9014.
A party seeking relief from the automatic stay must also pay the filing fee required by 28 U.S.C.A. § 1930(b). Executory contracts In the new millennium, airlines have fallen under intense scrutiny for what many see as abusing Chapter 11 bankruptcy as a tool for escaping labor contracts, usually 30-35% of an airline's operating cost. Every major US airline has filed for Chapter 11 since 2002. In the space of 2 years (2002–2004) US Airways filed for bankruptcy twice leaving the AFL-CIO, pilot unions and other airline employees claiming the rules of Chapter 11 have helped turn the United States into a corporatocracy. The trustee or debtor-in-possession is given the right, under § 365 of the Bankruptcy Code, subject to court approval, to assume or reject executory contracts and unexpired leases. The trustee or debtor-in-possession must assume or reject an executory contract in its entirety, unless some portion of it is severable. The trustee or debtor-in-possession normally assumes a contract or lease if it is needed to operate the reorganized business or if it can be assigned or sold at a profit. The trustee or debtor-in-possession normally rejects a contract or lease to transform damage claims arising from the nonperformance of those obligations into a prepetition claim. In some situations, rejection can also limit the damages that a contract counterparty can claim against the debtor. Priority Chapter 11 follows the same priority scheme as other bankruptcy chapters. The priority structure is defined primarily by § 507 of the Bankruptcy Code (). As a general rule, administrative expenses (the actual, necessary expenses of preserving the bankruptcy estate, including expenses such as employee wages, and the cost of litigating the chapter 11 case) are paid first. Secured creditors—creditors who have a security interest, or collateral, in the debtor's property—will be paid before unsecured creditors. Unsecured creditors' claims are prioritized by § 507. For instance the claims of suppliers of products or employees of a company may be paid before other unsecured creditors are paid. Each priority level must be paid in full before the next lower priority level may receive payment. Section 1110 Section 1110 () generally provides a secured party with an interest in an aircraft the ability to take possession of the equipment within 60 days after a bankruptcy filing unless the airline cures all defaults. More specifically, the right of the lender to take possession of the secured equipment is not hampered by the automatic stay provisions of the Bankruptcy Code. Subchapter V In August 2019, the Small Business Reorganization Act of 2019 (“SBRA”) added Subchapter V to Chapter 11 of the Bankruptcy Code. Subchapter V, which took effect in February 2020, is reserved exclusively for the small business debtor with the purpose of expediting bankruptcy procedure and economically resolving small business bankruptcy cases. Subchapter V retains many of the advantages of a traditional Chapter 11 case without the unnecessary procedural burdens and costs. It seeks to increase the debtor's ability to negotiate a successful reorganization and retain control of the business and increase oversight and ensure a quick reorganization. A Subchapter V case contrasts from a traditional Chapter 11 in several key aspects: It's earmarked only for the “small business debtor” (as defined by the Bankruptcy Code), so, only a debtor can file a plan of reorganization. The SBRA requires the U.S. 
Trustee appoint a “subchapter V trustee” to every Subchapter V case to supervise and control estate funds, and facilitate the development of a consensual plan. It also eliminates automatic appointment of an official committee of unsecured creditors and abolishes quarterly fees usually paid to the U.S. Trustee throughout the case. Most notably, Subchapter V allows the small business owner to retain their equity in the business so long as the reorganization plan does not discriminate unfairly and is fair and equitable with respect to each class of claims or interests. Considerations The reorganization and court process may take an inordinate amount of time, limiting the chances of a successful outcome and sufficient debtor-in-possession financing may be unavailable during an economic recession. A preplanned, pre-agreed approach between the debtor and its creditors (sometimes called a pre-packaged bankruptcy) may facilitate the desired result. A company undergoing Chapter 11 reorganization is effectively operating under the "protection" of the court until it emerges. An example is the airline industry in the United States; in 2006 over half the industry's seating capacity was on airlines that were in Chapter 11. These airlines were able to stop making debt payments, break their previously agreed upon labor union contracts, freeing up cash to expand routes or weather a price war against competitors — all with the bankruptcy court's approval. Studies on the impact
Conjugate element (field theory), a generalization of the preceding conjugations to roots of a polynomial of any degree
Conjugate transpose, the complex conjugate of the transpose of a matrix
Harmonic conjugate in complex analysis
Conjugate (graph theory), an alternative term for a line graph, i.e. a graph representing the edge adjacencies of another graph
In group theory, various notions are called conjugation:
Inner automorphism, a type of conjugation homomorphism
Conjugation in group theory, related to matrix similarity in linear algebra
Conjugation (group theory), the image of an element under the conjugation homomorphisms
Conjugate closure, the image of a subgroup under the conjugation homomorphisms
Conjugate words in combinatorics; this operation on strings resembles conjugation in groups
Isogonal conjugate, in geometry
Conjugate gradient method, an algorithm for the numerical solution of particular systems of linear equations
Conjugate points, in differential geometry
Topological conjugation, which identifies equivalent dynamical systems
Convex conjugate, the ("dual") lower-semicontinuous convex function resulting from the Legendre–Fenchel transformation of a "primal" function
Probability and statistics
Conjugate prior, in Bayesian statistics, a family of probability distributions that contains
Thus, for example, controversies in physics would be limited to subject areas where experiments cannot be carried out yet, whereas controversies would be inherent to politics, where communities must frequently decide on courses of action based on insufficient information. Psychological bases Controversies are frequently thought to be a result of a lack of confidence on the part of the disputants – as implied by Benford's law of controversy, which only talks about lack of information ("passion is inversely proportional to the amount of real information available"). For example, in analyses of the political controversy over anthropogenic climate change, which is exceptionally virulent in the United States, it has been proposed that those who are opposed to the scientific consensus do so because they don't have enough information about the topic. A study of 1540 US adults found instead that levels of scientific literacy correlated with the strength of opinion on climate change, but not with which side of the debate they stood on. The puzzling phenomenon of two individuals being able to reach different conclusions after being exposed to the same facts has been frequently explained (particularly by Daniel Kahneman) by reference to a 'bounded rationality' – in other words, that most judgments are made using fast-acting heuristics that work well in everyday situations, but are not amenable to decision-making about complex subjects such as climate change. Anchoring has been particularly identified as relevant in climate change controversies, as individuals are found to be more positively inclined to believe in climate change if the outside temperature is higher, if they have been primed to think about heat, and if they are primed with higher temperatures when thinking about the future temperature increases from climate change. In other controversies – such as that around the HPV vaccine – the same evidence seemed to license inference to radically different conclusions. Kahan et al. explained this by the cognitive biases of biased assimilation and a credibility heuristic. Similar effects on reasoning are also seen in non-scientific controversies, for example in the gun control debate in the United States. As with other controversies, it has been suggested that exposure to empirical facts would be sufficient to resolve the debate once and for all. In computer simulations of cultural communities, beliefs were found to polarize within isolated sub-groups, based on the mistaken belief of the community's unhindered access to ground truth. Such confidence in the group to find the ground truth is explicable through the success of wisdom-of-the-crowd inferences. However, if
Centromere position | Arm length ratio | Sign | Description
Medial sensu stricto | 1.0 – 1.6 | M | Metacentric
Medial region | 1.7 | m | Metacentric
Submedial | 3.0 | sm | Submetacentric
Subterminal | 3.1 – 6.9 | st | Subtelocentric
Terminal region | 7.0 | t | Acrocentric
Terminal sensu stricto | ∞ | T | Telocentric
Notes: Metacentric = M + m; Atelocentric = M + m + sm + st + t
Metacentric Metacentric means that the centromere is positioned midway between the chromosome ends, resulting in the arms being approximately equal in length. When the centromeres are metacentric, the chromosomes appear to be "x-shaped." Submetacentric Submetacentric means that the centromere is positioned below the middle, with one chromosome arm shorter than the other, often resulting in an L shape. Acrocentric An acrocentric chromosome's centromere is situated so that one of the chromosome arms is much shorter than the other. The "acro-" in acrocentric refers to the Greek word for "peak." The human genome includes six acrocentric chromosomes: five autosomal acrocentric chromosomes (13, 14, 15, 21, and 22) and the Y chromosome. Short acrocentric p-arms contain little genetic material and can be translocated without significant harm, as in a balanced Robertsonian translocation. In addition to some protein-coding genes, human acrocentric p-arms also contain nucleolus organizer regions (NORs), from which ribosomal RNA is transcribed. However, a proportion of acrocentric p-arms in cell lines and tissues from normal human donors do not contain detectable NORs. The domestic horse genome includes one metacentric chromosome that is homologous to two acrocentric chromosomes in the conspecific but undomesticated Przewalski's horse. This may reflect either fixation of a balanced Robertsonian translocation in domestic horses or, conversely, fixation of the fission of one metacentric chromosome into two acrocentric chromosomes in Przewalski's horses. A similar situation exists between the human and great ape genomes, with a reduction of two acrocentric chromosomes in the great apes to one metacentric chromosome in humans (see aneuploidy and the human chromosome 2). Diseases resulting from unbalanced translocations more frequently involve acrocentric chromosomes than non-acrocentric chromosomes. Acrocentric chromosomes are usually located in and around the nucleolus. As a result, these chromosomes tend to be less densely packed than chromosomes in the nuclear periphery. Consistent with this, chromosomal regions that are less densely packed are also more prone to chromosomal translocations in cancers. Telocentric Telocentric chromosomes' centromeres are located at one end of the chromosome. Telocentric centromeres often result in the p arms being barely visible or not visible at all. If the telocentric chromosome's centromere is located at the terminal end of the chromosome, then the chromosome only has one arm. Naturally occurring telocentric chromosomes with a terminal centromere are rare, but do exist. Telocentric chromosomes are not present in healthy humans. Misdivision of centromeres in normal chromosomes leads to the development of telosomes.
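As a rough illustration (not part of the original text) of how the arm-ratio table above translates into a classification rule, the short Python sketch below assigns the sign from measured arm lengths; the function name is invented here, and the band edges between the single values listed in the table (1.7, 3.0, 7.0) are assumptions.

def classify_centromere(long_arm: float, short_arm: float) -> str:
    """Classify a chromosome by centromere position from its two arm lengths.

    r is the long-arm/short-arm ratio. Bands follow the table above; where the
    table lists only a single value, the band is assumed to run up to the next
    listed value.
    """
    if short_arm == 0:                        # no visible short arm: ratio is effectively infinite
        return "T (telocentric, terminal sensu stricto)"
    r = long_arm / short_arm
    if r <= 1.6:
        return "M (metacentric, medial sensu stricto)"
    elif r < 3.0:
        return "m (metacentric, medial region)"
    elif r < 3.1:
        return "sm (submetacentric, submedial)"
    elif r < 7.0:
        return "st (subtelocentric, subterminal)"
    else:
        return "t (acrocentric, terminal region)"

# Example: a 5.0 µm long arm and a 1.0 µm short arm give r = 5.0
print(classify_centromere(5.0, 1.0))          # -> st (subtelocentric, subterminal)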
The structure of the telosomes' kinetochores determines their cytological stability. The standard house mouse karyotype has only telocentric chromosomes. Subtelocentric Subtelocentric chromosomes' centromeres are located between the middle and the end of the chromosomes, but reside closer to the end of the chromosomes. Centromere number Acentric An acentric chromosome is a fragment of a chromosome that lacks a centromere. Since centromeres are the attachment points for spindle fibers in cell division, acentric fragments are not evenly distributed to daughter cells during cell division. As a result, a daughter cell will lack the acentric fragment and deleterious consequences can occur. Chromosome-breaking events can also generate acentric chromosomes or acentric fragments. Dicentric A dicentric chromosome is an abnormal chromosome with two centromeres. It is formed through the fusion of two chromosome segments, each with a centromere, resulting in the loss of acentric fragments (lacking a centromere) and the formation of dicentric fragments. The formation of dicentric chromosomes has been attributed to genetic processes, such as Robertsonian translocation and paracentric inversion. Dicentric chromosomes have important roles in the mitotic stability of chromosomes and the formation of pseudodicentric chromosomes. Monocentric A monocentric chromosome has only one centromere, which forms a narrow constriction. Monocentric centromeres are the most common structure on highly repetitive DNA in plants and animals. Holocentric Unlike monocentric chromosomes, in holocentric chromosomes the entire length of the chromosome acts as the centromere. In holocentric chromosomes there is no single primary constriction; instead the centromere has many CenH3 loci spread over the whole chromosome. Examples of this type of centromere can be found scattered throughout the plant and animal kingdoms, with the most well-known example being the nematode Caenorhabditis elegans. Polycentric Human chromosomes Sequence There are two types of centromeres. In regional centromeres, DNA sequences contribute to but do not define function. Regional centromeres contain large amounts of DNA and are often packaged into heterochromatin.
In most eukaryotes, the centromere's DNA sequence consists of large arrays of repetitive DNA (e.g. satellite DNA) where the sequence within individual repeat elements is similar but not identical. In humans, the primary centromeric repeat unit is called α-satellite (or alphoid), although a number of other sequence types are found in this region. Centromere satellites evolve rapidly between species, and analyses in wild mice show that satellite copy number and heterogeneity relate to population origins and subspecies. Additionally, satellite sequences may be affected by inbreeding. Point centromeres are smaller and more compact. DNA sequences are both necessary and sufficient to specify centromere identity and function in organisms with point centromeres. In budding yeasts, the centromere region is relatively small (about 125 bp DNA) and contains two highly conserved DNA sequences that serve as binding sites for essential kinetochore proteins. Inheritance Since centromeric DNA sequence is not
(Gozo), a citadel in Gozo, Malta Short name of Castellón de la Plana, a city in the Valencian Community, Spain Other Roman Catholic Diocese of Castello, a former diocese based in Venice Castello (surname) Castello cheeses See also Città di Castello, a town in Umbria, Italy Castell (disambiguation) Castella (disambiguation) Castelli
Florence Castello, Hong Kong, a private housing estate in Hong Kong A locality in the town of Monteggio in Switzerland Cittadella (Gozo), a citadel in Gozo, Malta Short name of Castellón de la Plana,
an everyone wins situation in a number
a number of places: Zero-sum
protocol) client applications distributed and supported since 1996 by GlobalSCAPE, which later bought the rights to the software. Both Windows-based and Mac-based interfaces were made for home and professional use. CuteFTP is used to transfer files between computers and File Transfer Protocol (FTP) servers to
publish web pages, download digital images, music, multi-media files and software, and transfer files of any size or type between home and office. Since 1999, CuteFTP Pro and CuteFTP Mac Pro have also been available alongside CuteFTP Home with free trial periods. It was originally developed by Alex
Services of Vienna, Virginia, which in October 1991 changed its name to America Online and continued to operate its AOL service for the IBM PC compatible and Apple Macintosh. Q-Link was a modified version of the PlayNET system, which Control Video Corporation (CVC, later renamed Quantum Computer Services) licensed. Online gaming The first graphical character-based interactive environment was Club Caribe. First released as Habitat in 1988, Club Caribe was introduced by LucasArts for Q-Link customers on their Commodore 64 computers. Users could interact with one another, chat and exchange items. Although the game's open world was very basic, its use of online avatars and the combination of chat and graphics was revolutionary. Online graphics in the late 1980s were severely restricted by the need to support modem data transfer rates as low as 300 bits per second. Habitat's graphics were stored locally on floppy disk, eliminating the need for network transfer. Hardware CPU and memory The C64 uses an 8-bit MOS Technology 6510 microprocessor. It is almost identical to the 6502 but with three-state buses, a different pinout, slightly different clock signals and other minor changes for this specific application. It also has six I/O lines on otherwise unused legs on the 40-pin IC package. These are used for two purposes in the C64: to bank-switch the machine's read-only memory (ROM) in and out of the processor's address space, and to operate the datasette tape recorder. The C64 has 64 KB of 8-bit-wide dynamic RAM, 0.5 KB (1024 nybbles) of 4-bit-wide static color RAM for text mode, and 38 KB are available to built-in Commodore BASIC 2.0 on startup. There is 20 KB of ROM, made up of the BASIC interpreter, the KERNAL, and the character ROM. As the processor could only address 64 KB at a time, the ROM was mapped into memory, and only 38 KB of RAM (plus 4 KB in between the ROMs) were available at startup. Most "breadbin" Commodore 64s used 4164 DRAM, with eight chips to total up 64 KB of system RAM. Later models, featuring Assy 250466 and Assy 250469 motherboards, used 41464 DRAM (64K×4) chips which stored 32 KB per chip, so only two were required. Since 4164 DRAMs are 64K×1, eight chips are needed to make an entire byte, and the computer will not function without all of them present. Thus, the first chip contains Bit 0 for the entire memory space, the second chip contains Bit 1, and so forth. This also makes detecting faulty RAM easy, as a bad chip will display random characters on the screen and the character displayed can be used to determine the faulty RAM. The C64 performs a RAM test on power up and if a RAM error is detected, the amount of free BASIC memory will be lower than the normal 38911 figure. If the faulty chip is in lower memory, then an ?OUT OF MEMORY IN 0 error is displayed rather than the usual BASIC startup banner. The color RAM at $D800 uses a separate 2114 SRAM chip and is gated directly to the VIC-II. The C64 uses a somewhat complicated memory banking scheme; the normal power-on default is to have the BASIC ROM mapped in at $A000-$BFFF and the screen editor/KERNAL ROM at $E000–$FFFF. RAM underneath the system ROMs can be written to, but not read back without swapping out the ROMs. Memory location $01 contains a register with control bits for enabling/disabling the system ROMs as well as the I/O area at $D000.
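As a simplified sketch (not part of the original text) of how those control bits select what the CPU sees, the Python function below maps the three low bits of location $01 – conventionally called LORAM, HIRAM and CHAREN – onto the switchable regions; the cartridge EXROM/GAME lines and other details are ignored.

def c64_banks(port01: int) -> dict:
    """Roughly what the 6510 sees in the switchable regions for a given value
    of memory location $01 (cartridge lines ignored)."""
    loram  = bool(port01 & 0x01)              # bit 0
    hiram  = bool(port01 & 0x02)              # bit 1
    charen = bool(port01 & 0x04)              # bit 2
    banks = {}
    # BASIC needs both bits set, since BASIC calls into the KERNAL.
    banks["$A000-$BFFF"] = "BASIC ROM" if (loram and hiram) else "RAM"
    banks["$E000-$FFFF"] = "KERNAL ROM" if hiram else "RAM"
    if not (loram or hiram):
        banks["$D000-$DFFF"] = "RAM"          # everything banked out
    else:
        banks["$D000-$DFFF"] = "I/O and color RAM" if charen else "character ROM"
    return banks

print(c64_banks(0x37))    # power-on default: BASIC, KERNAL and I/O all visible
print(c64_banks(0x35))    # common trick: both ROMs out, I/O still visible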
If the KERNAL ROM is swapped out, BASIC will be removed at the same time, and it is not possible to have BASIC active without the KERNAL (as BASIC often calls KERNAL routines and part of the ROM code for BASIC is in fact located in the KERNAL ROM). The character ROM is normally not visible to the CPU. It has two mirrors at $1000 and $9000, but only the VIC-II can see them; the CPU will see RAM in those locations. The character ROM may be mapped into $D000–$DFFF where it is then visible to the CPU. Since doing so necessitates swapping out the I/O registers, interrupts must be disabled first. Graphics memory and data cannot be placed at $1000 or $9000 as the VIC-II will see the character ROM there instead. By removing I/O from the memory map, $D000–$DFFF becomes free RAM. The color RAM at $D800 is swapped out along with the I/O registers and this area can be used for static graphics data such as character sets since the VIC-II cannot see the I/O registers (or color RAM via the CPU mapping). If all ROMs and the I/O area are swapped out, the entire 64 KB RAM space is available aside from locations $0/$1. $C000–$CFFF is free RAM and not used by BASIC or KERNAL routines; because of this, it is an ideal location to store short machine language programs that can be accessed from BASIC. The cassette buffer at $0334–$03FF can also be used to store short machine language routines provided that a Datasette is not used, which will overwrite the buffer. C64 cartridges map into assigned ranges in the CPU's address space, and the most common cartridge auto-start scheme requires a special header at $8000 containing the address where program execution begins, followed by the signature string "CBM80". A few early C64 cartridges released in 1982 use Ultimax mode (or MAX mode), a leftover feature of the failed MAX Machine. These cartridges map into $F000 and displace the KERNAL ROM. If Ultimax mode is used, the programmer will have to provide code for handling system interrupts. The cartridge port has 16 address lines, which grants access to the entire address space of the computer if needed. Disk and tape software normally loads at the start of BASIC memory ($0801) and uses a small BASIC stub (e.g., 10 SYS(2064)) to jump to the start of the program. Although no Commodore 8-bit machine except the C128 can automatically boot from a floppy disk, some software intentionally overwrites certain BASIC vectors in the process of loading so that execution begins automatically rather than requiring the user to type RUN at the BASIC prompt following loading. Around 300 cartridges were released for the C64, mostly in the machine's first years on the market, after which most software outgrew the 16 KB cartridge limit. In the final years of the C64, larger software companies such as Ocean Software began releasing games on bank-switched cartridges to overcome this 16 KB cartridge limit. Commodore did not include a reset button on any of their computers until the CBM-II line, but there were third-party cartridges with a reset button on them. It is possible to trigger a soft reset by jumping to the CPU reset routine at $FCE2 (64738). A few programs use this as an "exit" feature, although it does not clear memory. The KERNAL ROM went through three separate revisions, mostly designed to fix bugs. The initial version is only found on 326298 motherboards, used in the first production models, and cannot detect whether an NTSC or PAL VIC-II is present. The second revision is found on all C64s made from late 1982 through 1985.
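To make the loading convention above concrete, here is a small illustrative Python sketch (not part of the original text; the file name is arbitrary and real tools pad or tokenize slightly differently) that builds a .PRG image containing a one-line BASIC stub that SYSes into machine code appended immediately after it.

def basic_stub_prg(code: bytes, line_number: int = 10) -> bytes:
    """Build a C64 .PRG image: a 'SYS <addr>' BASIC line followed by machine code."""
    load_addr = 0x0801                            # start of BASIC memory
    # The SYS target depends on the length of its own decimal digits,
    # so iterate until the value settles.
    sys_addr = load_addr
    for _ in range(3):
        digits = str(sys_addr).encode("ascii")
        # next-line link (2) + line number (2) + SYS token (1) + digits + end of line (1)
        line_len = 2 + 2 + 1 + len(digits) + 1
        sys_addr = load_addr + line_len + 2       # +2 for the end-of-program marker
    link = load_addr + line_len
    prg = bytearray()
    prg += load_addr.to_bytes(2, "little")        # PRG load address
    prg += link.to_bytes(2, "little")             # pointer to the next BASIC line
    prg += line_number.to_bytes(2, "little")      # line number, e.g. 10
    prg += b"\x9e" + digits + b"\x00"             # SYS token, target address, end of line
    prg += b"\x00\x00"                            # end of BASIC program
    prg += code                                   # machine code starts at sys_addr
    return bytes(prg)

with open("demo.prg", "wb") as f:                 # file name is just an example
    f.write(basic_stub_prg(b"\x60"))              # 0x60 = RTS, a do-nothing routine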
The third and last KERNAL ROM revision was introduced on the 250466 motherboard (late breadbin models with 41464 RAM) and is found in all C64Cs. The 6510 CPU is clocked at (NTSC) and (PAL), lower than some competing systems (for example, the Atari 800 is clocked at ). A small performance boost can be gained by disabling the VIC-II's video output via a register write. This feature is often used by tape and disk fastloaders as well as the KERNAL cassette routine to keep a standard CPU cycle timing not modified by the VIC-II's sharing of the bus. The Restore key is gated directly to the CPU's NMI line and will generate an NMI if pressed. The KERNAL handler for the NMI checks if Run/Stop is also pressed; if not, it ignores the NMI and simply exits back out. Run/Stop-Restore normally functions as a soft reset in BASIC that restores all I/O registers to their power on default state, but does not clear memory or reset pointers, so any BASIC programs in memory will be left untouched. Machine language software usually disables Run/Stop-Restore by remapping the NMI vector to a dummy RTI instruction. The NMI can be used for an extra interrupt thread by programs as well, but runs the risk of a system lockup or undesirable side effects if the Restore key is accidentally pressed, as this will trigger an inadvertent activation of the NMI thread. Joysticks, mice, and paddles The C64 retained the DE-9 joystick Atari joystick port from the VIC-20 and added another; any Atari-specification game controller can be used on a C64. The joysticks are read from the registers at $DC00 and $DC01, and most software is designed to use a joystick in port 2 for control rather than port 1, as the upper bits of $DC00 are used by the keyboard and an I/O conflict can result. Although it is possible to use Sega game pads on a C64, it is not recommended as the slightly different signal generated by them can damage the CIA chip. The SID chip's register $D419 is used to control paddles and is an analog input. Atari paddles are electrically compatible with the C64, but have different resistance values than Commodore's paddles, which means most software will not work properly with them. However, only a handful of games, mostly ones released early in the computer's life cycle, can use paddles. In 1986, Commodore released two mice for the C64 and C128, the 1350 and 1351. The 1350 is a digital device, read from the joystick registers (and can be used with any program supporting joystick input); while the 1351 is a true, analog potentiometer based, mouse, read with the SID's analog-to-digital converter. Graphics The graphics chip, VIC-II, features 16 colors, eight hardware sprites per scanline (enabling up to 112 sprites per PAL screen), scrolling capabilities, and two bitmap graphics modes. Text modes The standard text mode features 40 columns, like most Commodore PET models; the built-in character encoding is not standard ASCII but PETSCII, an extended form of ASCII-1963. The KERNAL ROM sets the VIC-II to a dark blue background on power up with a light blue text and border. Unlike the PET and VIC-20, the C64 uses "fat" double-width text as some early VIC-IIs had poor video quality that resulted in a fuzzy picture. Most screenshots show borders around the screen, which is a feature of the VIC-II chip. By utilizing interrupts to reset various hardware registers on precise timings it was possible to place graphics within the borders and thus use the full screen. 
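Returning briefly to the joystick registers mentioned above, the following minimal sketch (not part of the original text) decodes the conventional active-low switch bits a program reads from $DC00 or $DC01.

def decode_joystick(port_value: int) -> dict:
    """Decode the joystick bits of a CIA port register ($DC00 or $DC01).

    The switch bits are active-low (0 = closed). Standard bit layout:
    0 = up, 1 = down, 2 = left, 3 = right, 4 = fire.
    """
    return {
        "up":    not (port_value & 0x01),
        "down":  not (port_value & 0x02),
        "left":  not (port_value & 0x04),
        "right": not (port_value & 0x08),
        "fire":  not (port_value & 0x10),
    }

# Bit 4 cleared, all direction bits set: fire button held, stick centered
print(decode_joystick(0x6F))    # {'up': False, 'down': False, 'left': False, 'right': False, 'fire': True}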
The C64 has a resolution of 320×200 pixels, consisting of a 40×25 grid of 8×8 character blocks. The C64 has 255 predefined character blocks, called PETSCII. The character set can be copied into RAM and altered by a programmer. There are two colour modes, high resolution, with two colours available per character block (one foreground and one background) and multicolour with four colours per character block (three foreground and one background). In multicolour mode, attributes are shared between pixel pairs, so the effective visible resolution is 160×200 pixels. This is necessary since only 16 KB of memory is available for the VIC-II video processor. As the C64 has a bitmapped screen, it is possible to draw each pixel individually. This is, however, very slow. Most programmers used techniques developed for earlier non-bitmapped systems, like the Commodore PET and TRS-80. A programmer redraws the character set and the video processor fills the screen block by block from the top left corner to the bottom right corner. Two different types of animation are used: character block animation and hardware sprites. Character block animation The user draws a series of characters of a person walking, say, two in the middle of the block, and another two walking in and out of the block. Then the user sequences them so the character walks into the block and out again. Drawing a series of these and the user gets a person walking across the screen. By timing the redraw to occur when the television screen blanks out to restart drawing the screen there will be no flicker. For this to happen, the user programs the VIC-II that it generates a raster interrupt when the video flyback occurs. This is the technique used in the classic Space Invaders arcade game. Horizontal and vertical pixelwise scrolling of up to one character block is supported by two hardware scroll registers. Depending on timing, hardware scrolling affects the entire screen or just selected lines of character blocks. On a non-emulated C64, scrolling is glasslike and blur-free. Hardware sprites A sprite is a movable character which moves over an area of the screen, draws over the background and then redraws it after it moves. Note this is very different from character block animation, where the user is just flipping character blocks. On the C64, the VIC-II video processor handles most of the legwork in sprite emulation; the programmer simply defines the sprite and where they want it to go. The C64 has two types of sprites, respecting their colour mode limitations. Hi-res sprites have one colour (one background and one foreground) and multicolour sprites three (one background and three foreground). Colour modes can be split or windowed on a single screen. Sprites can be doubled in size vertically and horizontally up to four times their size, but the pixel attributes are the same – the pixels become "fatter". There can be 8 sprites in total and 8 in a horizontal line. Sprites can move with glassy smoothness in front of and behind screen characters and other sprites. Sprite-sprite and sprite-background collisions are detected in hardware and the VIC-II can be programmed to trigger an interrupt accordingly. Sound The SID chip has three channels, each with its own ADSR envelope generator and filter capabilities. Ring modulation makes use of channel no. 3, to work with the other two channels. Bob Yannes developed the SID chip and later co-founded synthesizer company Ensoniq. 
Yannes criticized other contemporary computer sound chips as "primitive, obviously ... designed by people who knew nothing about music". Often the game music has become a hit of its own among C64 users. Well-known composers and programmers of game music on the C64 are Rob Hubbard, Jeroen Tel, Tim Follin, David Whittaker, Chris Hülsbeck, Ben Daglish, Martin Galway, Kjell Nordbø and David Dunn among many others. Due to the chip's three channels, chords are often played as arpeggios, coining the C64's characteristic lively sound. It was also possible to continuously update the master volume with sampled data to enable the playback of 4-bit digitized audio. As of 2008, it became possible to play four channel 8-bit audio samples, 2 SID channels and still use filtering. There are two versions of the SID chip: the 6581 and the 8580. The MOS Technology 6581 was used in the original ("breadbin") C64s, the early versions of the 64C, and the Commodore 128. The 6581 was replaced with the MOS Technology 8580 in 1987. While the 6581 sound quality is a little crisper and many Commodore 64 fans say they prefer its sound, it lacks some versatility available in the 8580 – for example, the 8580 can mix all available waveforms on each channel, whereas the 6581 can only mix waveforms in a channel in a much more limited fashion. The main difference between the 6581 and the 8580 is the supply voltage. The 6581 uses a supply—the 8580, a supply. A modification can be made to use the 6581 in a newer 64C board (which uses the chip). The SID chip's distinctive sound has allowed it to retain a following long after its host computer was discontinued. A number of audio enthusiasts and companies have designed SID-based products as add-ons for the C64, x86 PCs, and standalone or Musical Instrument Digital Interface (MIDI) music devices such as the Elektron SidStation. These devices use chips taken from excess stock, or removed from used computers. In 2007, Timbaland's extensive use of the SidStation led to the plagiarism controversy for "Block Party" and "Do It" (written for Nelly Furtado). In 1986, the Sound Expander was released for the Commodore 64. It was a sound module that contained a Yamaha YM3526 sound chip capable of FM synthesis. It was primarily intended for professional music production. Hardware revisions Commodore made many changes to the C64's hardware during its lifetime, sometimes causing compatibility issues. The computer's rapid development, and Commodore and Tramiel's focus on cost cutting instead of product testing, resulted in several defects that caused developers like Epyx to complain and required many revisions to fix; Charpentier said that "not coming a little close to quality" was one of the company's mistakes. Cost reduction was the reason for most of the revisions. Reducing manufacturing costs was vitally important to Commodore's survival during the price war and leaner years of the 16-bit era. The C64's original (NMOS based) motherboard went through two major redesigns and numerous sub-revisions, exchanging positions of the VIC-II, SID and PLA chips. Initially, a large portion of the cost was eliminated by reducing the number of discrete components, such as diodes and resistors, which enabled the use of a smaller printed circuit board. There were 16 total C64 motherboard revisions, aimed at simplifying and reducing manufacturing costs. Some board revisions were exclusive to PAL regions. All C64 motherboards were manufactured in Hong Kong. 
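As a side note on the 4-bit digitized-audio trick mentioned in the Sound section above, the following illustrative sketch (not part of the original text) shows the sample conversion involved; on the machine itself a timer interrupt would write each value to the volume register at $D418 at a fixed rate.

def to_volume_nibbles(samples_8bit):
    """Convert unsigned 8-bit PCM samples to the 4-bit values written to $D418;
    only the low four bits of that register set the master volume."""
    return [s >> 4 for s in samples_8bit]     # keep the top 4 bits of each sample

print(to_volume_nibbles([0, 64, 128, 192, 255]))    # -> [0, 4, 8, 12, 15]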
IC locations changed frequently on each motherboard revision, as did the presence or lack thereof of the metal RF shield around the VIC-II. PAL boards often had aluminized cardboard instead of a metal shield. The SID and VIC-II are socketed on all boards; however, the other ICs may be either socketed or soldered. The first production C64s, made in 1982 to early 1983, are known as "silver label" models due to the case sporting a silver-colored "Commodore" logo. The power LED had a separate silver badge around it reading "64". These machines also have only a 5-pin video cable and cannot output S-video. In late 1982, Commodore introduced the familiar "rainbow badge" case, but many machines produced into early 1983 also used silver label cases until the existing stock of them was used up. In the spring of 1983, the original 326298 board was replaced by the 250407 motherboard which sported an 8-pin video connector and added S-video support for the first time. This case design was used until the C64C appeared in 1986. All ICs switched to using plastic shells while the silver label C64s had some ceramic ICs, notably the VIC-II. The case is made from ABS plastic which may become brown with time. This can be reversed by using a process known as "retrobright". ICs The VIC-II was manufactured with 5 micrometer NMOS technology and was clocked at either (PAL) or (NTSC). Internally, the clock was divided down to generate the dot clock (about 8 MHz) and the two-phase system clocks (about 1 MHz; the exact pixel and system clock speeds are slightly different between NTSC and PAL machines). At such high clock rates, the chip generated a lot of heat, forcing MOS Technology to use a ceramic dual in-line package called a "CERDIP". The ceramic package was more expensive, but it dissipated heat more effectively than plastic. After a redesign in 1983, the VIC-II was encased in a plastic dual in-line package, which reduced costs substantially, but it did not totally eliminate the heat problem. Without a ceramic package, the VIC-II required the use of a heat sink. To avoid extra cost, the metal RF shielding doubled as the heat sink for the VIC, although not all units shipped with this type of shielding. Most C64s in Europe shipped with a cardboard RF shield, coated with a layer of metal foil. The effectiveness of the cardboard was highly questionable and, worse still, it acted as an insulator, blocking airflow which trapped heat generated by the SID, VIC, and PLA chips. The SID was originally manufactured using NMOS at 7 micrometers and in some areas 6 micrometers. The prototype SID and some very early production models featured a ceramic dual in-line package, but unlike the VIC-II, these are extremely rare as the SID was encased in plastic when production started in early 1982. Motherboard In 1986, Commodore released the last revision to the classic C64 motherboard. It was otherwise identical to the 1984 design, except for the two 64 kilobit × 4 bit DRAM chips that replaced the original eight 64 kilobit × 1 bit ICs. After the release of the Commodore 64C, MOS Technology began to reconfigure the original C64's chipset to use HMOS production technology. The main benefit of using HMOS was that it required less voltage to drive the IC, which consequently generates less heat. This enhanced the overall reliability of the SID and VIC-II. The new chipset was renumbered to 85xx to reflect the change to HMOS. In 1987, Commodore released a 64C variant with a highly redesigned motherboard commonly known as a "short board". 
The new board used the new HMOS chipset, featuring a new 64-pin PLA chip. The new "SuperPLA", as it was dubbed, integrated many discrete components and transistor–transistor logic (TTL) chips. In the last revision of the 64C motherboard, the 2114 4-bit-wide color RAM was integrated into the SuperPLA. Power supply The C64 used an external power supply, a conventional transformer with multiple tappings (as opposed to switch mode, the type now used on PC power supplies). It was encased in an epoxy resin gel, which discouraged tampering but tended to increase the heat level during use. The design saved space within the computer's case and allowed international versions to be more easily manufactured. The 1541-II and 1581 disk drives, along with various third-party clones, also come with their own external power supply "bricks", as did most peripherals leading to a "spaghetti" of cables and the use of numerous double adapters by users. Commodore power supplies often failed sooner than expected. The computer reportedly had a 30% return rate in late 1983, compared to the 5–7% the industry considered acceptable. Creative Computing reported four working computers out of seven C64s. Malfunctioning power bricks were particularly notorious for damaging the RAM chips. Due to their higher density and single supply (+5V), they had less tolerance for an overvoltage condition. The usually failing voltage regulator could be replaced by piggy-backing a new regulator onto the board and fitting a heat sink on top. The original PSU included on early 1982–83 machines had a 5-pin connector that could accidentally be plugged into the video output of the computer. To prevent the user from making this damaging mistake, Commodore changed the plug design on 250407 motherboards to a 3-pin connector in 1984. Commodore later changed the design yet again, omitting the resin gel in order to reduce costs. The follow-on model, the Commodore 128, used a larger, improved power supply that included a fuse. The power supply that came with the Commodore REU was similar to that of the Commodore 128's unit, providing an upgrade for customers who purchased that accessory. 
Specifications Internal hardware Microprocessor CPU: MOS Technology 6510/8500 (the 6510/8500 is a modified 6502 with an integrated 6-bit I/O port) Clock speed: or Video: MOS Technology VIC-II 6567/8562 (NTSC), 6569/8565 (PAL) 16 colors Text mode: 40×25 characters; 256 user-defined chars (8×8 pixels, or 4×8 in multicolor mode); or extended background color; 64 user-defined chars with 4 background colors, 4-bit color RAM defines foreground color Bitmap modes: 320×200 (2 unique colors in each 8×8 pixel block), 160×200 (3 unique colors + 1 common color in each 4×8 block) 8 hardware sprites of 24×21 pixels (12×21 in multicolor mode) Smooth scrolling, raster interrupts Sound: MOS Technology 6581/8580 SID 3-channel synthesizer with programmable ADSR envelope 8 octaves 4 waveforms per audio channel: triangle, sawtooth, variable pulse, noise Oscillator synchronization, ring modulation Programmable filter: high pass, low pass, band pass, notch filter Input/Output: Two 6526 Complex Interface Adapters 16 bit parallel I/O 8 bit serial I/O 24-hours (AM/PM) Time of Day clock (TOD), with programmable alarm clock 16 bit interval timers RAM: 64 KB, of which 38 KB were available for BASIC programs 1024 nybbles color RAM (memory allocated for screen color data storage) Expandable to 320 KB with Commodore 1764 256 KB RAM Expansion Unit (REU); although only 64 KB directly accessible; REU used mostly for the GEOS. REUs of 128 KB and 512 KB, originally designed for the C128, were also available, but required the user to buy a stronger power supply from some third party supplier; with the 1764 this was included. Creative Micro Designs also produced a 2 MB REU for the C64 and C128, called the 1750 XL. The technology actually supported up to 16 MB, but 2 MB was the biggest one officially made. Expansions of up to 16 MB were also possible via the CMD SuperCPU. ROM: ( Commodore BASIC 2.0; KERNAL; character generator, providing two character sets) Input/output (I/O) ports and power supply I/O ports: ROM cartridge expansion slot (44-pin slot for edge connector with 6510 CPU address/data bus lines and control signals, as well as GND and voltage pins; used for program modules and memory expansions, among others) Integrated RF modulator television antenna output via an RCA connector. The used channel could be adjusted from number 36 with the potentiometer to the left. 8-pin DIN connector containing composite video output, separate Y/C outputs and sound input/output. This is a 262° horseshoe version of the plug, rather than the 270° circular version. Early C64 units (with motherboard Assy 326298) use a 5-pin DIN connector that carries composite video and luminance signals, but lacks a chroma signal. Serial bus (proprietary serial version of IEEE-488, 6-pin DIN plug) for CBM printers and disk drives PET-type Commodore Datassette 300 baud tape interface (edge connector with digital cassette motor/read/write/key-sense signals), Ground and +5V DC lines. The cassette motor is controlled by a +5V DC signal from the 6510 CPU. The 9V AC input is transformed into
may be physical, such as roads or land masses, or may be abstract, such as toponyms or political boundaries. Represent the terrain of the mapped object on flat media. This is the concern of map projections. Eliminate characteristics of the mapped object that are not relevant to the map's purpose. This is the concern of generalization. Reduce the complexity of the characteristics that will be mapped. This is also the concern of generalization. Orchestrate the elements of the map to best convey its message to its audience. This is the concern of map design. Modern cartography constitutes many of the theoretical and practical foundations of geographic information systems (GIS) and geographic information science (GISc). History Ancient times What is the earliest known map is a matter of some debate, both because the term "map" is not well-defined and because some artifacts that might be maps might actually be something else. A wall painting that might depict the ancient Anatolian city of Çatalhöyük (previously known as Catal Huyuk or Çatal Hüyük) has been dated to the late 7th millennium BCE. Among the prehistoric alpine rock carvings of Mount Bego (France) and Valcamonica (Italy), dated to the 4th millennium BCE, geometric patterns consisting of dotted rectangles and lines are widely interpreted in archaeological literature as a depiction of cultivated plots. Other known maps of the ancient world include the Minoan "House of the Admiral" wall painting from c. 1600 BCE, showing a seaside community in an oblique perspective, and an engraved map of the holy Babylonian city of Nippur, from the Kassite period (14th–12th centuries BCE). The oldest surviving world maps are from 9th century BCE Babylonia. One shows Babylon on the Euphrates, surrounded by Assyria, Urartu and several cities, all, in turn, surrounded by a "bitter river" (Oceanus). Another depicts Babylon as being north of the center of the world. The ancient Greeks and Romans created maps from the time of Anaximander in the 6th century BCE. In the 2nd century CE, Ptolemy wrote his treatise on cartography, Geographia. This contained Ptolemy's world map – the world then known to Western society (Ecumene). As early as the 8th century, Arab scholars were translating the works of the Greek geographers into Arabic. In ancient China, geographical literature dates to the 5th century BCE. The oldest extant Chinese maps come from the State of Qin, dating back to the 4th century BCE, during the Warring States period. In the book of the Xin Yi Xiang Fa Yao, published in 1092 by the Chinese scientist Su Song, there is a star map on the equidistant cylindrical projection. Although this method of charting seems to have existed in China even before this publication and scientist, the greatest significance of the star maps by Su Song is that they represent the oldest existent star maps in printed form. Early forms of cartography of India included depictions of the pole star and surrounding constellations. These charts may have been used for navigation. Middle Ages and Renaissance Mappae mundi ("maps of the world") are the medieval European maps of the world. About 1,100 of these are known to have survived: of these, some 900 are found illustrating manuscripts and the remainder exist as stand-alone documents. The Arab geographer Muhammad al-Idrisi produced his medieval atlas Tabula Rogeriana (Book of Roger) in 1154.
By combining the knowledge of Africa, the Indian Ocean, Europe, and the Far East (which he learned through contemporary accounts from Arab merchants and explorers) with the information he inherited from the classical geographers, he was able to write detailed descriptions of a multitude of countries. Along with the substantial text he had written, he created a world map influenced mostly by the Ptolemaic conception of the world, but with significant influence from multiple Arab geographers. It remained the most accurate world map for the next three centuries. The map was divided into seven climatic zones, with detailed descriptions of each zone. As part of this work, a smaller, circular map was made depicting the south on top and Arabia in the center. Al-Idrisi also made an estimate of the circumference of the world, accurate to within 10%.

In the Age of Exploration, from the 15th century to the 17th century, European cartographers both copied earlier maps (some of which had been passed down for centuries) and drew their own, based on explorers' observations and new surveying techniques. The invention of the magnetic compass, telescope and sextant enabled increasing accuracy. In 1492, Martin Behaim, a German cartographer, made the oldest extant globe of the Earth. In 1507, Martin Waldseemüller produced a globular world map and a large 12-panel world wall map (Universalis Cosmographia) bearing the first use of the name "America". Portuguese cartographer Diego Ribero was the author of the first known planisphere with a graduated Equator (1527). Italian cartographer Battista Agnese produced at least 71 manuscript atlases of sea charts.

Johannes Werner refined and promoted the Werner projection. This was an equal-area, heart-shaped world map projection (generally called a cordiform projection) which was used in the 16th and 17th centuries. Over time, other iterations of this map type arose; most notable are the sinusoidal projection and the Bonne projection. The Werner projection places its standard parallel at the North Pole; a sinusoidal projection places its standard parallel at the equator; and the Bonne projection is intermediate between the two.

In 1569, mapmaker Gerardus Mercator first published a map based on his Mercator projection, which uses equally-spaced parallel vertical lines of longitude and parallel latitude lines spaced farther apart as they get farther away from the equator. By this construction, courses of constant bearing are conveniently represented as straight lines for navigation. The same property limits its value as a general-purpose world map, because regions are shown as increasingly larger than they actually are the farther they are from the equator. Mercator is also credited as the first to use the word "atlas" to describe a collection of maps. In the later years of his life, Mercator resolved to create his Atlas, a book filled with many maps of different regions of the world, as well as a chronological history of the world from the Earth's creation by God until 1568. He was unable to complete it to his satisfaction before he died. Still, some additions were made to the Atlas after his death, and new editions were published subsequently.

In the Renaissance, maps were used to impress viewers and establish the owner's reputation as sophisticated, educated, and worldly. Because of this, towards the end of the Renaissance, maps were displayed with the same importance as paintings, sculptures, and other works of art.
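To make the projection geometry described above concrete, here is a minimal sketch of my own (not part of the original text) showing how the equidistant cylindrical, sinusoidal, and Mercator projections place a point of longitude lam and latitude phi (in radians) on the plane. Printing the Mercator y-coordinate for a few latitudes shows why its parallels spread apart toward the poles, which is also why areas are exaggerated away from the equator.

```python
import math

def equirectangular(lam: float, phi: float) -> tuple[float, float]:
    """Equidistant cylindrical projection: meridians and parallels evenly spaced."""
    return lam, phi

def sinusoidal(lam: float, phi: float) -> tuple[float, float]:
    """Equal-area projection with its standard parallel at the equator."""
    return lam * math.cos(phi), phi

def mercator(lam: float, phi: float) -> tuple[float, float]:
    """Conformal projection; rhumb lines (constant compass bearing) map to straight lines."""
    return lam, math.log(math.tan(math.pi / 4 + phi / 2))

# The Mercator spacing between parallels grows with latitude.
for deg in (0, 20, 40, 60, 80):
    y = mercator(0.0, math.radians(deg))[1]
    print(f"latitude {deg:2d} deg  ->  Mercator y = {y:.3f}")
```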
In the sixteenth century, maps were becoming increasingly available to consumers through the introduction of printmaking, with about 10% of Venetian homes having some sort of map by the late 1500s. There were three main functions of maps in the Renaissance: General descriptions of the world Navigation and wayfinding Land surveying and property management In medieval times, written directions of how to get somewhere were more common than the use of maps. With the Renaissance, cartography began to be seen as a metaphor for power. Political leaders could lay claim on territories through the use of maps and this was greatly aided by the religious and colonial expansion of Europe. The most commonly mapped places during the Renaissance were the Holy Land and other religious places. In the late 1400s to the late 1500s, Rome, Florence, and Venice dominated map making and trade. It started in Florence in the mid to late 1400s. Map trade quickly shifted to Rome and Venice but then was overtaken by atlas makers in the late 16th century. Map publishing in Venice was completed with humanities and book publishing in mind, rather than just informational use. Printing technology There were two main printmaking technologies in the Renaissance: woodcut and copper-plate intaglio, referring to the medium used to transfer the image onto paper. In woodcut, the map image is created as a relief chiseled from medium-grain hardwood. The areas intended to be printed are inked and pressed against the sheet. Being raised from the rest of the block, the map lines cause indentations in the paper that can often be felt on the back of the map. There are advantages to using relief to make maps. For one, a printmaker doesn't need a press because the maps could be developed as rubbings. Woodblock is durable enough to be used many times before defects appear. Existing printing presses can be used to create the prints rather than having to create a new one. On the other hand, it is hard to achieve fine detail with the relief technique. Inconsistencies in linework are more apparent in woodcut than in intaglio. To improve quality in the late fifteenth century, a style of relief craftsmanship developed using fine chisels to carve the wood, rather than the more commonly used knife. In intaglio, lines are engraved into workable metals, typically copper but sometimes brass. The engraver spreads a thin sheet of wax over the metal plate and uses ink to draw the details. Then, the engraver traces the lines with a stylus to etch them into the plate beneath. The engraver can also use styli to prick holes along the drawn lines, trace along them with colored chalk, and then engrave the map. Lines going in the same direction are carved at the same time, and then the plate is turned to carve lines going in a different direction. To print from the finished plate, ink is spread over the metal surface and scraped off such that it remains only in the etched channels. Then the plate is pressed forcibly against the paper so that the ink in the channels is transferred to the paper. The pressing is so forceful that it leaves a "plate mark" around the border of the map at the edge of the plate, within which the paper is depressed compared to the margins. Copper and other metals were expensive at the time, so the plate was often reused for new maps or melted down for other purposes. Whether woodcut or intaglio, the printed map is hung out to dry. Once dry, it is usually placed in another press to flatten the paper. 
Any type of paper that was available at the time could be used to print the map on, but thicker paper was more durable. Both relief and intaglio were used about equally by the end of the fifteenth century. Lettering Lettering in mapmaking is important for denoting information. Fine lettering is difficult in woodcut, where it often turned out square and blocky, contrary to the stylized, rounded writing style popular in Italy at the time. To improve quality, mapmakers developed fine chisels to carve the relief. Intaglio lettering did not suffer the troubles of a coarse medium and so was able to express the looping cursive that came to be known as cancellaresca. There were custom-made reverse punches that were also used in metal engraving alongside freehand lettering. Color The first use of color in map-making cannot be narrowed down to one reason. There
are arguments that color started as a way to indicate information on the map, with aesthetics coming second. There are also arguments that color was first used on maps for aesthetics but then evolved into conveying information.
Either way, many maps of the Renaissance left the publisher without being colored, a practice that continued all the way into the 1800s. However, most publishers accepted orders from their patrons to have their maps or atlases colored if they wished. Because all coloring was done by hand, the patron could request simple, cheap color, or more expensive, elaborate color, even going so far as silver or gold gilding. The simplest coloring was merely outlines, such as of borders and along rivers. Wash color meant painting regions with inks or watercolors. Limning meant adding silver and gold leaf to the map to illuminate lettering, heraldic arms, or other decorative elements.

Early-Modern Period

The Early Modern Period saw the convergence of cartographical techniques across Eurasia and the exchange of mercantile mapping techniques via the Indian Ocean. In the early seventeenth century, the Selden map was created by a Chinese cartographer. Historians have put its date of creation around 1620, but there is debate in this regard. This map's significance draws from historical misconceptions of East Asian cartography, the main one being that East Asians didn't do cartography until Europeans arrived. The map's depiction of trading routes, a compass rose, and a scale bar points to the culmination of many map-making techniques incorporated into Chinese mercantile cartography.

In 1689, representatives of the Russian tsar and the Qing Dynasty met near the border town of Nerchinsk, which was near the disputed border of the two powers, in eastern Siberia. The two parties, with the Qing negotiation party bringing Jesuits as intermediaries, managed to work out a treaty that established the Amur River as the border between the Eurasian powers and opened up trading relations between the two. This treaty's significance draws from the interaction between the two sides and the intermediaries, who were drawn from a wide variety of nationalities.

The Enlightenment

Maps of the Enlightenment period almost universally used copper-plate intaglio, having abandoned the fragile, coarse woodcut technology. Use of map projections evolved, with the double hemisphere being very common and Mercator's prestigious navigational projection gradually making more appearances. Due to the paucity of information and the immense difficulty of surveying during the period, mapmakers frequently plagiarized material without giving credit to the original cartographer. For example, a famous map of North America known as the "Beaver Map" was published in 1715 by Herman Moll. This map is a close reproduction of a 1698 work by Nicolas de Fer. De Fer, in turn, had copied images that were first printed in books by Louis Hennepin, published in 1697, and François Du Creux, in 1664. By the late 18th century, mapmakers often credited the original publisher with something along the lines of "After [the original cartographer]" in the map's title or cartouche.

Modern period

In cartography, technology has continually changed in order to meet the demands of new generations of mapmakers and map users. The first maps were produced manually, with brushes and parchment, so they varied in quality and were limited in distribution. The advent of magnetic devices, such as the compass and, much later, magnetic storage devices, allowed for the creation of far more accurate maps and the ability to store and manipulate them digitally.
Advances in mechanical devices such as the printing press, quadrant, and vernier allowed the mass production of maps and the creation of accurate reproductions from more accurate data. Hartmann Schedel was one of the first cartographers to use the printing press to make maps more widely available. Optical technology, such as the telescope, sextant, and other devices that use telescopes, allowed accurate land surveys and enabled mapmakers and navigators to find their latitude by measuring angles to the North Star
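The latitude-finding idea mentioned above can be stated in a few lines. The sketch below is my own simplified illustration (assuming an ideal observation, and ignoring atmospheric refraction and Polaris's small offset from the celestial pole): the altitude of the North Star above the horizon approximates the observer's latitude, and one degree of latitude corresponds to roughly 60 nautical miles of north-south distance.

```python
def latitude_from_polaris(altitude_deg: float) -> float:
    """Approximate latitude (degrees north) from the observed altitude of Polaris."""
    if not 0.0 <= altitude_deg <= 90.0:
        raise ValueError("altitude must lie between 0 and 90 degrees")
    return altitude_deg  # to first order, altitude of the celestial pole equals latitude

def north_south_separation_nm(altitude1_deg: float, altitude2_deg: float) -> float:
    """North-south distance implied by two Polaris sights, in nautical miles."""
    dlat = abs(latitude_from_polaris(altitude1_deg) - latitude_from_polaris(altitude2_deg))
    return dlat * 60.0  # about 60 nautical miles per degree of latitude

print(latitude_from_polaris(48.9))            # an observer near latitude 49 degrees north
print(north_south_separation_nm(48.9, 41.0))  # roughly 474 nautical miles apart
```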
consuming other organisms
Consumption (economics), the purchasing of newly produced goods for current use, also defined as the consuming of products
Consumption function, an economic formula
Consumption (sociology) of resources, associated with social class, identity,
be derived.

Plant cardenolides
Convallaria majalis (lily of the valley): convallotoxin
Antiaris toxicaria (upas tree): antiarin
Strophanthus kombe (Strophanthus vine): ouabain (g-strophanthin) and other strophanthins
Digitalis lanata and Digitalis purpurea (woolly and purple foxglove): digoxin, digitoxin
Nerium oleander (oleander tree): oleandrin
Asclepias sp. (milkweed): oleandrin
Adonis vernalis (spring pheasant's eye): adonitoxin
Kalanchoe daigremontiana and other Kalanchoe species: daigremontianin
Erysimum cheiranthoides (wormseed wallflower) and other Erysimum species
Cerbera manghas (suicide tree): cerberin

Other cardenolides
Some species of Chrysolina beetles, including Chrysolina coerulans, have cardiac glycosides (including xylose) in their defensive glands.

Bufadienolides
Leonurus cardiaca (motherwort): scillarenin
Drimia maritima (squill): proscillaridine A
Bufo marinus (cane toad): various bufadienolides
Kalanchoe daigremontiana and other Kalanchoe species: daigremontianin and others
Helleborus spp. (hellebore)

Mechanism of action

Cardiac glycosides affect the sodium-potassium ATPase pump in cardiac muscle cells to alter their function. Normally, these sodium-potassium pumps move potassium ions in and sodium ions out. Cardiac glycosides, however, inhibit this pump by stabilizing it in the E2-P transition state, so that sodium cannot be extruded: intracellular sodium concentration therefore increases. With regard to potassium ion movement, because both cardiac glycosides and potassium compete for binding to the ATPase pump, changes in extracellular potassium concentration can potentially lead to altered drug efficacy. Nevertheless, by carefully controlling the dosage, such adverse effects can be avoided. Continuing with the mechanism, raised intracellular sodium levels inhibit the function of a second membrane ion exchanger, NCX, which is responsible for pumping calcium ions out of the cell and sodium ions in, at a ratio of three sodium ions to one calcium ion. Thus, calcium ions are also not extruded and will begin to build up inside the cell as well. The disrupted calcium homeostasis and increased cytoplasmic calcium concentrations cause increased calcium uptake into the sarcoplasmic reticulum (SR) via the SERCA2 transporter. Raised calcium stores in the SR allow for greater calcium release on stimulation, so the myocyte can achieve faster and more powerful contraction by cross-bridge cycling. The refractory period of the AV node is increased, so cardiac glycosides also function to decrease heart rate. For example, the ingestion of digoxin leads to increased cardiac output and decreased heart rate without significant changes in blood pressure; this quality allows it to be widely used medicinally in the treatment of cardiac arrhythmias. Cardiac glycosides were identified as senolytics: they can selectively eliminate senescent cells, which are more sensitive to the ATPase-inhibiting action due to cell membrane changes.

Clinical significance

Cardiac glycosides have long served as the main medical treatment for congestive heart failure and cardiac arrhythmia, due to their effects of increasing the force of muscle contraction while reducing heart rate. Heart failure is characterized by an inability to pump enough blood to support the body, possibly due to a decrease in the volume of the blood or its contractile force.
Treatments for the condition thus focus on lowering blood pressure, so that the heart does not have to exert as much force to pump the blood, or on directly increasing the heart's contractile force, so that the heart can overcome the higher blood pressure. Cardiac glycosides, such as the commonly used digoxin and digitoxin, deal with the latter, due to their positive inotropic activity. Cardiac arrhythmias, on the other hand, are changes in heart rate, whether faster (tachycardia) or slower (bradycardia). Medicinal treatments for this condition work primarily to counteract tachycardia or atrial fibrillation by slowing down the heart rate, as cardiac glycosides do. Nevertheless, due to questions of toxicity and dosage, cardiac glycosides have been replaced with synthetic drugs such as ACE inhibitors and beta blockers and are no longer used as the primary medical treatment for such conditions. Depending on the severity of the condition, though, they may still be used in conjunction with other treatments.

Toxicity

From
have a diverse range of biochemical effects regarding cardiac cell function and have also been suggested for use in cancer treatment.

Classification

General structure

The general structure of a cardiac glycoside consists of a steroid molecule attached to a sugar (glycoside) and an R group. The steroid nucleus consists of four fused rings to which other functional groups such as methyl, hydroxyl, and aldehyde groups can be attached to influence the overall molecule's biological activity. Cardiac glycosides also vary in the groups attached at either end of the steroid. Specifically, different sugar groups attached at the sugar end of the steroid can alter the molecule's solubility and kinetics; however, the lactone moiety at the R group end only serves a structural function. In particular, the structure of the ring attached at the R end of the molecule allows it to be classified as either a cardenolide or a bufadienolide. Cardenolides differ from bufadienolides due to the presence of an "enolide," a five-membered ring with a single double bond, at the lactone end. Bufadienolides, on the other hand, contain a "dienolide," a six-membered ring with two double bonds, at the lactone end. While compounds of both groups can be used to influence the cardiac output of the heart, cardenolides are more commonly used medicinally, primarily due to the widespread availability of the plants from which they are derived. Cardiac glycosides can be more specifically categorized based on the organism they are derived from: for example, cardenolides have been primarily derived from the foxglove plants Digitalis purpurea and Digitalis lanata, while bufadienolides have been derived from the venom of the cane toad Bufo marinus, from which they receive the "bufo" portion of their name.
Mariana Islands Palau Palmyra Atoll Panama (Hay–Bunau-Varilla Treaty turned Panama into a protectorate, protectorate until post-WW2) Panama Canal Zone (1903–1979) Philippines (1898–1946) Puerto Rico Quita Sueño Bank (1869–1981) Roncador Bank (1856–1981) Ryukyu Islands (1945–1972) Shanghai International Settlement (1863–1945) Sultanate of Sulu (1903–1915) Swan Islands, Honduras (1914–1972) Treaty Ports of China, Korea and Japan United States Virgin Islands Wake Island Wilkes Land

Russian colonies and protectorates
Emirate of Bukhara (1873–1917) Grand Duchy of Finland (1809–1917) Khiva Khanate (1873–1917) Kauai (Hawaii) (1816–1817) Russian America (Alaska) (1733–1867) Fort Ross (California)

German colonies
Bismarck Archipelago Kamerun Caroline Islands German New Guinea German Samoa German Solomon Islands German East Africa German South-West Africa Gilbert Islands Jiaozhou Bay Mariana Islands Marshall Islands Nauru Palau Togoland Tianjin

Italian colonies and protectorates
Italian Aegean Islands Italian Albania (1918–1920) Italian Albania (1939–1943) Italian concessions in China Italian concession of Tientsin Italian governorate of Dalmatia Italian governorate of Montenegro Hellenic State Italian Eritrea Italian Somaliland Italian Trans-Juba (briefly; annexed) Libya Italian Tripolitania Italian Cyrenaica Italian Libya Italian East Africa

Dutch colonies and Overseas Territories
Dutch Brazil Dutch Ceylon Dutch Formosa Dutch Cape Colony Aruba Bonaire Curaçao Saba Sint Eustatius Sint Maarten Surinam (Dutch colony) Dutch East Indies Dutch New Guinea

Portuguese colonies
Portuguese Africa: Cabinda Ceuta Madeira Portuguese Angola Portuguese Cape Verde Portuguese Guinea Portuguese Mozambique Portuguese São Tomé and Príncipe Fort of São João Baptista de Ajudá
Portuguese Asia: Portuguese India Goa Daman Diu Portuguese Macau
Portuguese Oceania: Flores Portuguese Timor Solor
Portuguese South America: Colonial Brazil Cisplatina Misiones Orientales
Portuguese North America: Azores Newfoundland and Labrador

Spanish colonies
Canary Islands Cape Juby Captaincy General of Cuba Spanish Florida Spanish Louisiana Captaincy General of the Philippines Caroline Islands Mariana Islands Marshall Islands Palau Islands Ifni Río de Oro Saguia el-Hamra Spanish Morocco Spanish Netherlands Spanish Sahara Spanish Sardinia Spanish Sicily Viceroyalty of Peru Captaincy General of Chile Viceroyalty of the Río de la Plata Spanish Guinea Annobón Fernando Po Río Muni Viceroyalty of New Granada Captaincy General of Venezuela Viceroyalty of New Spain Captaincy General of Guatemala Captaincy General of Yucatán Captaincy General of Santo Domingo Captaincy General of Puerto Rico Spanish Formosa

Austrian and Austro-Hungarian colonies
Bosnia and Herzegovina 1878–1918. Tianjin, China, 1902–1917.
Austrian Netherlands, 1714–1797 Nicobar Islands, 1778–1783 North Borneo, 1876–1879

Danish colonies and dominions
Andaman and Nicobar Islands Danish West Indies (now United States Virgin Islands) Danish Norway Faroe Islands Greenland Iceland Serampore Danish Gold Coast Danish India

Belgian colonies
Belgian Congo Ruanda-Urundi Tianjin

Swedish colonies and dominions
Guadeloupe New Sweden Saint Barthélemy Swedish Gold Coast Dominions of Sweden in continental Europe

Norwegian Overseas Territories
Svalbard Jan Mayen Bouvet Island Queen Maud Land Peter I Island

Ottoman colonies and Vassal and tributary states of the Ottoman Empire
Rumelia Ottoman North Africa Ottoman Arabia

Other non-European colonialist countries

Australian Overseas Territories
Papua New Guinea Christmas Island Cocos Islands Coral Sea Islands Heard Island and McDonald Islands Norfolk Island Nauru Australian Antarctic Territory

New Zealand dependencies
Cook Islands Nauru Niue Ross Dependency Balleny Islands Ross Island Scott Island Roosevelt Island

Japanese colonies and protectorates
Bonin Islands Karafuto Korea Kuril Islands Kwantung Leased Territory Nanyo Caroline Islands Marshall Islands Northern Mariana Islands Palau Islands Penghu Islands Ryukyu Domain Taiwan Volcano Islands

Chinese colonies and protectorates
East Turkistan (Xinjiang) from 1884–1933, 1934–1944, 1949–present Guangxi (Tusi) Hainan Nansha Islands Xisha Islands Manchuria Inner Mongolia Outer Mongolia during the Qing dynasty Taiwan Tibet (Kashag) Tuva during the Qing dynasty Yunnan (Tusi) Vietnam during the Han, Sui, and Tang dynasties Ryukyu from the 15th to the 19th century

Omani colonies
Omani Empire Swahili coast Zanzibar Qatar Bahrain Somalia Socotra

Mexican colonies
The Californias Texas Central America Clipperton Island Revillagigedo Islands Chiapas

Ecuadorian colonies
Galápagos Islands

Colombian colonies
Panama Ecuador Venezuela Archipelago of San Andrés, Providencia and Santa Catalina

Argentine colonies and protectorates
Protectorate of Peru (1820–1822) Gobierno del Cerrito (1843–1851) Chile (1817–1818) Paraguay (1810–1811, 1873) Uruguay (1810–1813) Bolivia (1810–1822) Tierra del Fuego Patagonia Falkland Islands and Dependencies (1829–1831, 1832–1833, 1982) Argentine Antarctica Misiones Formosa Puna de Atacama (1839– ) Argentina expedition to California (1818) Equatorial Guinea (1810–1815)

Paraguayan colonies
Mato Grosso do Sul Formosa

Bolivian colonies
Puna de Atacama (1825–1839 ceded to Argentina) (1825–1879 ceded to Chile) Acre

Ethiopian colonies
Eritrea

Moroccan colonies
Western Sahara

Indian colonies and protectorates
Gilgit Baltistan

Thai colonies (Siam)
Kingdom of Vientiane (1778–1828) Kingdom of Luang Prabang (1778–1893) Kingdom of Champasak (1778–1893) Kingdom of Cambodia (1771–1867) Kedah (1821–1826) Perlis (1821–1836)

(Ancient) Egyptian colonies
Canaan Nubia

(Khedivate) Egyptian colonies
Anglo-Egyptian Sudan Habesh Eyalet Sidon Eyalet Damascus Eyalet

Impact of colonialism and colonisation

The impacts of colonisation are immense and pervasive. Various effects, both immediate and protracted, include the spread of virulent diseases, unequal social relations, detribalization, exploitation, enslavement, medical advances, the creation of new institutions, abolitionism, improved infrastructure, and technological progress. Colonial practices also spur the spread of colonist languages, literature and cultural institutions, while endangering or obliterating those of native peoples.
The native cultures of the colonised peoples can also have a powerful influence on the imperial country. Economy, trade and commerce Economic expansion, sometimes described as the colonial surplus, has accompanied imperial expansion since ancient times. Greek trade networks spread throughout the Mediterranean region while Roman trade expanded with the primary goal of directing tribute from the colonised areas towards the Roman metropole. According to Strabo, by the time of emperor Augustus, up to 120 Roman ships would set sail every year from Myos Hormos in Roman Egypt to India. With the development of trade routes under the Ottoman Empire, Aztec civilisation developed into an extensive empire that, much like the Roman Empire, had the goal of exacting tribute from the conquered colonial areas. For the Aztecs, a significant tribute was the acquisition of sacrificial victims for their religious rituals. On the other hand, European colonial empires sometimes attempted to channel, restrict and impede trade involving their colonies, funneling activity through the metropole and taxing accordingly. Despite the general trend of economic expansion, the economic performance of former European colonies varies significantly. In "Institutions as a Fundamental Cause of Long-run Growth", economists Daron Acemoglu, Simon Johnson and James A. Robinson compare the economic influences of the European colonists on different colonies and study what could explain the huge discrepancies in previous European colonies, for example, between West African colonies like Sierra Leone and Hong Kong and Singapore. According to the paper, economic institutions are the determinant of the colonial success because they determine their financial performance and order for the distribution of resources. At the same time, these institutions are also consequences of political institutions – especially how de facto and de jure political power is allocated. To explain the different colonial cases, we thus need to look first into the political institutions that shaped the economic institutions. For example, one interesting observation is "the Reversal of Fortune" – the less developed civilisations in 1500, like North America, Australia, and New Zealand, are now much richer than those countries who used to be in the prosperous civilisations in 1500 before the colonists came, like the Mughals in India and the Incas in the Americas. One explanation offered by the paper focuses on the political institutions of the various colonies: it was less likely for European colonists to introduce economic institutions where they could benefit quickly from the extraction of resources in the area. Therefore, given a more developed civilisation and denser population, European colonists would rather keep the existing economic systems than introduce an entirely new system; while in places with little to extract, European colonists would rather establish new economic institutions to protect their interests. Political institutions thus gave rise to different types of economic systems, which determined the colonial economic performance. European colonisation and development also changed gendered systems of power already in place around the world. In many pre-colonialist areas, women maintained power, prestige, or authority through reproductive or agricultural control. For example, in certain parts of sub-Saharan Africa women maintained farmland in which they had usage rights. 
While men would make political and communal decisions for a community, the women would control the village's food supply or their individual family's land. This allowed women to achieve power and autonomy, even in patrilineal and patriarchal societies. Through the rise of European colonialism came a large push for development and industrialisation of most economic systems. However, when working to improve productivity, Europeans focused mostly on male workers. Foreign aid arrived in the form of loans, land, credit, and tools to speed up development, but was allocated only to men. In a more European fashion, women were expected to serve on a more domestic level. The result was a technological, economic, and class-based gender gap that widened over time. Within a colony, the presence of extractive colonial institutions in a given area has been found to have effects on the modern-day economic development, institutions and infrastructure of these areas.

Slavery and indentured servitude

European nations entered their imperial projects with the goal of enriching the European metropoles. Exploitation of non-Europeans and of other Europeans to support imperial goals was acceptable to the colonisers. Two outgrowths of this imperial agenda were the extension of slavery and indentured servitude. In the 17th century, nearly two-thirds of English settlers came to North America as indentured servants. European slave traders brought large numbers of African slaves to the Americas by sail. Spain and Portugal had brought African slaves to work in African colonies such as Cape Verde and São Tomé and Príncipe, and then in Latin America, by the 16th century. The British, French and Dutch joined in the slave trade in subsequent centuries. The European colonial system took approximately 11 million Africans to the Caribbean and to North and South America as slaves. Abolitionists in Europe and the Americas protested the inhumane treatment of African slaves, which led to the elimination of the slave trade (and later, of most forms of slavery) by the late 19th century. One (disputed) school of thought points to the role of abolitionism in the American Revolution: while the British colonial metropole started to move towards outlawing slavery, slave-owning elites in the Thirteen Colonies saw this as one of the reasons to fight for their post-colonial independence and for the right to develop and continue a largely slave-based economy. British colonising activity in New Zealand from the early 19th century played a part in ending slave-taking and slave-keeping among the indigenous Māori. On the other hand, British colonial administration in Southern Africa, when it officially abolished slavery in the 1830s, caused rifts in society which arguably perpetuated slavery in the Boer Republics and fed into the philosophy of apartheid. The labour shortages that resulted from abolition inspired European colonisers in Queensland, British Guiana and Fiji (for example) to develop new sources of labour, re-adopting a system of indentured servitude. Indentured servants consented to a contract with the European colonisers. Under their contract, the servant would work for an employer for a term of at least a year, while the employer agreed to pay for the servant's voyage to the colony, possibly pay for the return to the country of origin, and pay the employee a wage as well.
The employees became "indentured" to the employer because they owed a debt back to the employer for their travel expense to the colony, which they were expected to pay through their wages. In practice, indentured servants were exploited through terrible working conditions and burdensome debts imposed by the employers, with whom the servants had no means of negotiating the debt once they arrived in the colony. India and China were the largest source of indentured servants during the colonial era. Indentured servants from India travelled to British colonies in Asia, Africa and the Caribbean, and also to French and Portuguese colonies, while Chinese servants travelled to British and Dutch colonies. Between 1830 and 1930, around 30 million indentured servants migrated from India, and 24 million returned to India. China sent more indentured servants to European colonies, and around the same proportion returned to China. Following the Scramble for Africa, an early but secondary focus for most colonial regimes was the suppression of slavery and the slave trade. By the end of the colonial period they were mostly successful in this aim, though slavery persists in Africa and in the world at large with much the same practices of de facto servility despite legislative prohibition. Military innovation Conquering forces have throughout history applied innovation in order to gain an advantage over the armies of the people they aim to conquer. Greeks developed the phalanx system, which enabled their military units to present themselves to their enemies as a wall, with foot soldiers using shields to cover one another during their advance on the battlefield. Under Philip II of Macedon, they were able to organise thousands of soldiers into a formidable battle force, bringing together carefully trained infantry and cavalry regiments. Alexander the Great exploited this military foundation further during his conquests. The Spanish Empire held a major advantage over Mesoamerican warriors through the use of weapons made of stronger metal, predominantly iron, which was able to shatter the blades of axes used by the Aztec civilisation and others. The use of gunpowder weapons cemented the European military advantage over the peoples they sought to subjugate in the Americas and elsewhere. The end of empire The populations of some colonial territories, such as Canada, enjoyed relative peace and prosperity as part of a European power, at least among the majority; however, minority populations such as First Nations peoples and French-Canadians experienced marginalisation and resented colonial practices. Francophone residents of Quebec, for example, were vocal in opposing conscription into the armed services to fight on behalf of Britain during World War I, resulting in the Conscription crisis of 1917. Other European colonies had much more pronounced conflict between European settlers and the local population. Rebellions broke out in the later decades of the imperial era, such as India's Sepoy Rebellion of 1857. The territorial boundaries imposed by European colonisers, notably in central Africa and South Asia, defied the existing boundaries of native populations that had previously interacted little with one another. European colonisers disregarded native political and cultural animosities, imposing peace upon people under their military control. Native populations were often relocated at the will of the colonial administrators. 
The Partition of British India in August 1947 led to the Independence of India and the creation of Pakistan. These events also caused much bloodshed at the time of the migration of immigrants from the two countries. Muslims from India and Hindus and Sikhs from Pakistan migrated to the respective countries they sought independence for. Post-independence population movement In a reversal of the migration patterns experienced during the modern colonial era, post-independence era migration followed a route back towards the imperial country. In some cases, this was a movement of settlers of European origin returning to the land of their birth, or to an ancestral birthplace. 900,000 French colonists (known as the Pied-Noirs) resettled in France following Algeria's independence in 1962. A significant number of these migrants were also of Algerian descent. 800,000 people of Portuguese origin migrated to Portugal after the independence of former colonies in Africa between 1974 and 1979; 300,000 settlers of Dutch origin migrated to the Netherlands from the Dutch West Indies after Dutch military control of the colony ended. After WWII 300,000 Dutchmen from the Dutch East Indies, of which the majority were people of Eurasian descent called Indo Europeans, repatriated to the Netherlands. A significant number later migrated to the US, Canada, Australia and New Zealand. Global travel and migration in general developed at an increasingly brisk pace throughout the era of European colonial expansion. Citizens of the former colonies of European countries may have a privileged status in some respects with regard to immigration rights when settling in the former European imperial nation. For example, rights to dual citizenship may be generous, or larger immigrant quotas may be extended to former colonies. In some cases, the former European imperial nations continue to foster close political and economic ties with former colonies. The Commonwealth of Nations is an organisation that promotes cooperation between and among Britain and its former colonies, the Commonwealth members. A similar organisation exists for former colonies of France, the Francophonie; the Community of Portuguese Language Countries plays a similar role for former Portuguese colonies, and the Dutch Language Union is the equivalent for former colonies of the Netherlands. Migration from former colonies has proven to be problematic for European countries, where the majority population may express hostility to ethnic minorities who have immigrated from former colonies. Cultural and religious conflict have often erupted in France in recent decades, between immigrants from the Maghreb countries of north Africa and the majority population of France. Nonetheless, immigration has changed the ethnic composition of France; by the 1980s, 25% of the total population of "inner Paris" and 14% of the metropolitan region were of foreign origin, mainly Algerian. Introduced diseases Encounters between explorers and populations in the rest of the world often introduced new diseases, which sometimes caused local epidemics of extraordinary virulence. For example, smallpox, measles, malaria, yellow fever, and others were unknown in pre-Columbian America. Half the native population of Hispaniola in 1518 was killed by smallpox. Smallpox also ravaged Mexico in the 1520s, killing 150,000 in Tenochtitlan alone, including the emperor, and Peru in the 1530s, aiding the European conquerors. Measles killed a further two million Mexican natives in the 17th century. 
In 1618–1619, smallpox wiped out 90% of the Massachusetts Bay Native Americans. Smallpox epidemics in 1780–1782 and 1837–1838 brought devastation and drastic depopulation among the Plains Indians. Some believe that the death of up to 95% of the Native American population of the New World was caused by Old World diseases. Over the centuries, the Europeans had developed high degrees of immunity to these diseases, while the indigenous peoples had no time to build such immunity. Smallpox decimated the native population of Australia, killing around 50% of indigenous Australians in the early years of British colonisation. It also killed many New Zealand Māori. As late as 1848–49, as many as 40,000 out of 150,000 Hawaiians are estimated to have died of measles, whooping cough and influenza. Introduced diseases, notably smallpox, nearly wiped out the native population of Easter Island. In 1875, measles killed over 40,000 Fijians, approximately one-third of the population. The Ainu population decreased drastically in the 19th century, due in large part to infectious diseases brought by Japanese settlers pouring into Hokkaido.

Conversely, researchers have hypothesised that a precursor to syphilis may have been carried from the New World to Europe after Columbus's voyages. The findings suggested Europeans could have carried the nonvenereal tropical bacteria home, where the organisms may have mutated into a more deadly form in the different conditions of Europe. The disease was more frequently fatal than it is today; syphilis was a major killer in Europe during the Renaissance. The first cholera pandemic began in Bengal, then spread across India by 1820. Ten thousand British troops and countless Indians died during this pandemic. Between 1736 and 1834, only some 10% of the East India Company's officers survived to take the final voyage home. Waldemar Haffkine, who mainly worked in India and developed and used vaccines against cholera and bubonic plague in the 1890s, is considered the first microbiologist.

According to a 2021 study by Jörg Baten and Laura Maravall on the anthropometric influence of colonialism on Africans, the average height of Africans decreased by 1.1 centimetres upon colonisation and later recovered and increased overall during colonial rule. The authors attributed the decrease to diseases, such as malaria and sleeping sickness, forced labor during the early decades of colonial rule, conflicts, land grabbing, and widespread cattle deaths from the rinderpest viral disease.

Countering disease

As early as 1803, the Spanish Crown organised a mission (the Balmis expedition) to transport the smallpox vaccine to the Spanish colonies and establish mass vaccination programs there. By 1832, the federal government of the United States had established a smallpox vaccination program for Native Americans. Under the direction of Mountstuart Elphinstone, a program was launched to propagate smallpox vaccination in India. From the beginning of the 20th century onwards, the elimination or control of disease in tropical countries became a driving force for all colonial powers. The sleeping sickness epidemic in Africa was arrested due to mobile teams systematically screening millions of people at risk. In the 20th century, the world saw the biggest increase in its population in human history, due to a lessening of the mortality rate in many countries brought about by medical advances. The world population has grown from 1.6 billion in 1900 to over seven billion today.
Colonialism and the history of thought

Colonial botany

Colonial botany refers to the body of works concerning the study, cultivation, marketing and naming of the new plants that were acquired or traded during the age of European colonialism. Notable examples of these plants included sugar, nutmeg, tobacco, cloves, cinnamon, Peruvian bark, peppers and tea. This work was a large part of securing financing for colonial ambitions, supporting European expansion and ensuring the profitability of such endeavors. Vasco da Gama and Christopher Columbus were seeking to establish routes to trade spices, dyes and silk from the Moluccas, India and China by sea that would be independent of the established routes controlled by Venetian and Middle Eastern merchants. Naturalists like Hendrik van Rheede, Georg Eberhard Rumphius, and Jacobus Bontius compiled data about eastern plants on behalf of the Europeans. Though Sweden did not possess an extensive colonial network, botanical research based on the work of Carl Linnaeus identified and developed techniques to grow cinnamon, tea and rice locally as an alternative to costly imports.

Universalism

The conquest of vast territories brings multitudes of diverse cultures under the central control of the imperial authorities. From the time of Ancient Greece and Ancient Rome, this fact has been addressed by empires adopting the concept of universalism and applying it to their imperial policies towards their subjects far from the imperial capital. The capital, the metropole, was the source of ostensibly enlightened policies imposed throughout the distant colonies. The empire that grew from Greek conquest, particularly by Alexander the Great, spurred the spread of Greek language, religion, science and philosophy throughout the colonies. While most Greeks considered their own culture superior to all others (the word barbarian is derived from mutterings that sounded to Greek ears like "bar-bar"), Alexander was unique in promoting a campaign to win the hearts and minds of the Persians. He adopted Persian customs of clothing and otherwise encouraged his men to go native by adopting local wives and learning their mannerisms. Of note is that he radically departed from earlier Greek attempts at colonisation, characterised by the murder and enslavement of the local inhabitants and the settling of Greek citizens from the polis.

Roman universalism was characterised by cultural and religious tolerance and a focus on civil efficiency and the rule of law. Roman law was imposed on both Roman citizens and colonial subjects. Although Imperial Rome had no public education, Latin spread through its use in government and trade. Roman law prohibited local leaders from waging war among themselves, which was responsible for the 200-year-long Pax Romana, at the time the longest period of peace in history. The Roman Empire was tolerant of diverse cultures and religious practices, even allowing them on a few occasions to threaten Roman authority.

Colonialism and geography

Settlers acted as the link between indigenous populations and the imperial hegemony, thus bridging the geographical, ideological and commercial gap between the colonisers and colonised. While the extent to which geography as an academic study is implicated in colonialism is contentious, geographical tools such as cartography, shipbuilding, navigation, mining and agricultural productivity were instrumental in European colonial expansion.
Colonisers' awareness of the Earth's surface and abundance of practical skills provided them with knowledge that, in turn, created power. Anne Godlewska and Neil Smith argue that "empire was 'quintessentially a geographical project'". Historical geographical theories such as environmental determinism legitimised colonialism by positing the view that some parts of the world were underdeveloped, which created notions of skewed evolution. Geographers such as Ellen Churchill Semple and Ellsworth Huntington put forward the notion that northern climates bred vigour and intelligence, as opposed to those indigenous to tropical climates (see The Tropics), combining environmental determinism and Social Darwinism in their approach. Political geographers also maintain that colonial behaviour was reinforced by the physical mapping of the world, therefore creating a visual separation between "them" and "us". Geographers are primarily focused on the spaces of colonialism and imperialism; more specifically, the material and symbolic appropriation of space enabling colonialism.

Maps played an extensive role in colonialism; as Bassett put it, "by providing geographical information in a convenient and standardised format, cartographers helped open West Africa to European conquest, commerce, and colonisation". However, because the relationship between colonialism and geography was not scientifically objective, cartography was often manipulated during the colonial era. Social norms and values had an effect on the construction of maps. During colonialism, map-makers used rhetoric in their formation of boundaries and in their art. The rhetoric favoured the view of the conquering Europeans; this is evident in the fact that any map created by a non-European was instantly regarded as inaccurate. Furthermore, European cartographers were required to follow a set of rules which led to ethnocentrism: portraying one's own ethnicity in the centre of the map. As J.B. Harley put it, "The steps in making a map – selection, omission, simplification, classification, the creation of hierarchies, and 'symbolisation' – are all inherently rhetorical."

A common practice by the European cartographers of the time was to map unexplored areas as "blank spaces". This influenced the colonial powers, as it sparked competition amongst them to explore and colonise these regions. Imperialists aggressively and passionately looked forward to filling these spaces for the glory of their respective countries. The Dictionary of Human Geography notes that cartography was used to empty 'undiscovered' lands of their Indigenous meaning and bring them into spatial existence via the imposition of "Western place-names and borders, [therefore] priming 'virgin' (putatively empty land, 'wilderness') for colonisation (thus sexualising colonial landscapes as domains of male penetration), reconfiguring alien space as absolute, quantifiable and separable (as property)." David Livingstone stresses "that geography has meant different things at different times and in different places" and that we should keep an open mind in regard to the relationship between geography and colonialism instead of identifying boundaries. Geography as a discipline was not and is not an objective science, Painter and Jeffrey argue; rather, it is based on assumptions about the physical world.
Comparison of exogeographical representations of ostensibly tropical environments in science fiction art support this conjecture, finding the notion of the tropics to be an artificial collection of ideas and beliefs that are independent of geography. Colonialism and imperialism A colony is a part of an empire and so colonialism is closely related to imperialism. Assumptions are that colonialism and imperialism are interchangeable, however Robert J. C. Young suggests that imperialism is the concept while colonialism is the practice. Colonialism is based on an imperial outlook, thereby creating a consequential relationship. Through an empire, colonialism is established and capitalism is expanded, on the other hand a capitalist economy naturally enforces an empire. Marxist view of colonialism Marxism views colonialism as a form of capitalism, enforcing exploitation and social change. Marx thought that working within the global capitalist system, colonialism is closely associated with uneven development. It is an "instrument of wholesale destruction, dependency and systematic exploitation producing distorted economies, socio-psychological disorientation, massive poverty and neocolonial dependency". Colonies are constructed into modes of production. The search for raw materials and the current search for new investment opportunities is a result of inter-capitalist rivalry for capital accumulation. Lenin regarded colonialism as the root cause of imperialism, as imperialism was distinguished by monopoly capitalism via colonialism and as Lyal S. Sunga explains: "Vladimir Lenin advocated forcefully the principle of self-determination of peoples in his "Theses on the Socialist Revolution and the Right of Nations to Self-Determination" as an integral plank in the programme of socialist internationalism" and he quotes Lenin who contended that "The right of nations to self-determination implies exclusively the right to independence in the political sense, the right to free political separation from the oppressor nation. Specifically, this demand for political democracy implies complete freedom to agitate for secession and for a referendum on secession by the seceding nation." Non Russian marxists within the RSFSR and later the USSR, like Sultan Galiev and Vasyl Shakhrai, meanwhile, between 1918 and 1923 and then after 1929, considered the Soviet Regime a renewed version of the Russian imperialism and colonialism. In his critique of colonialism in Africa, the Guyanese historian and political activist Walter Rodney states: "The decisiveness of the short period of colonialism and its negative consequences for Africa spring mainly from the fact that Africa lost power. Power is the ultimate determinant in human society, being basic to the relations within any group and between groups. It implies the ability
to serve on a more domestic level. The result was a technological, economic, and class-based gender gap that widened over time. Within a colony, the presence of extractive colonial institutions in a given area has been found to have effects on the modern-day economic development, institutions and infrastructure of these areas. Slavery and indentured servitude European nations entered their imperial projects with the goal of enriching the European metropoles. Exploitation of non-Europeans and of other Europeans to support imperial goals was acceptable to the colonisers. Two outgrowths of this imperial agenda were the extension of slavery and indentured servitude. In the 17th century, nearly two-thirds of English settlers came to North America as indentured servants. European slave traders brought large numbers of African slaves to the Americas by sail. Spain and Portugal had brought African slaves to work in African colonies such as Cape Verde and São Tomé and Príncipe, and then in Latin America, by the 16th century. The British, French and Dutch joined in the slave trade in subsequent centuries. The European colonial system took approximately 11 million Africans to the Caribbean and to North and South America as slaves. Abolitionists in Europe and the Americas protested the inhumane treatment of African slaves, which led to the elimination of the slave trade (and later, of most forms of slavery) by the late 19th century. One (disputed) school of thought points to the role of abolitionism in the American Revolution: while the British colonial metropole started to move towards outlawing slavery, slave-owning elites in the Thirteen Colonies saw this as one of the reasons to fight for their post-colonial independence and for the right to develop and continue a largely slave-based economy. British colonising activity in New Zealand from the early 19th century played a part in ending slave-taking and slave-keeping among the indigenous Māori. On the other hand, British colonial administration in Southern Africa, when it officially abolished slavery in the 1830s, caused rifts in society which arguably perpetuated slavery in the Boer Republics and fed into the philosophy of apartheid. The labour shortages that resulted from abolition inspired European colonisers in Queensland, British Guiana and Fiji (for example) to develop new sources of labour, re-adopting a system of indentured servitude. Indentured servants consented to a contract with the European colonisers. Under their contract, the servant would work for an employer for a term of at least a year, while the employer agreed to pay for the servant's voyage to the colony, possibly pay for the return to the country of origin, and pay the employee a wage as well. The employees became "indentured" to the employer because they owed a debt back to the employer for their travel expense to the colony, which they were expected to pay through their wages. In practice, indentured servants were exploited through terrible working conditions and burdensome debts imposed by the employers, with whom the servants had no means of negotiating the debt once they arrived in the colony. India and China were the largest sources of indentured servants during the colonial era. Indentured servants from India travelled to British colonies in Asia, Africa and the Caribbean, and also to French and Portuguese colonies, while Chinese servants travelled to British and Dutch colonies. 
Between 1830 and 1930, around 30 million indentured servants migrated from India, and 24 million returned to India. China sent more indentured servants to European colonies, and around the same proportion returned to China. Following the Scramble for Africa, an early but secondary focus for most colonial regimes was the suppression of slavery and the slave trade. By the end of the colonial period they were mostly successful in this aim, though slavery persists in Africa and in the world at large with much the same practices of de facto servility despite legislative prohibition. Military innovation Conquering forces have throughout history applied innovation in order to gain an advantage over the armies of the people they aim to conquer. The Greeks developed the phalanx system, which enabled their military units to present themselves to their enemies as a wall, with foot soldiers using shields to cover one another during their advance on the battlefield. Under Philip II of Macedon, they were able to organise thousands of soldiers into a formidable battle force, bringing together carefully trained infantry and cavalry regiments. Alexander the Great exploited this military foundation further during his conquests. The Spanish Empire held a major advantage over Mesoamerican warriors through the use of weapons made of stronger metal, predominantly iron, which was able to shatter the blades of axes used by the Aztec civilisation and others. The use of gunpowder weapons cemented the European military advantage over the peoples they sought to subjugate in the Americas and elsewhere. The end of empire The populations of some colonial territories, such as Canada, enjoyed relative peace and prosperity as part of a European power, at least among the majority; however, minority populations such as First Nations peoples and French-Canadians experienced marginalisation and resented colonial practices. Francophone residents of Quebec, for example, were vocal in opposing conscription into the armed services to fight on behalf of Britain during World War I, resulting in the Conscription crisis of 1917. Other European colonies had much more pronounced conflict between European settlers and the local population. Rebellions broke out in the later decades of the imperial era, such as India's Sepoy Rebellion of 1857. The territorial boundaries imposed by European colonisers, notably in central Africa and South Asia, defied the existing boundaries of native populations that had previously interacted little with one another. European colonisers disregarded native political and cultural animosities, imposing peace upon people under their military control. Native populations were often relocated at the will of the colonial administrators. The Partition of British India in August 1947 led to the independence of India and the creation of Pakistan. These events also caused much bloodshed at the time of the migration between the two countries: Muslims from India migrated to Pakistan, while Hindus and Sikhs from Pakistan migrated to India. Post-independence population movement In a reversal of the migration patterns experienced during the modern colonial era, post-independence era migration followed a route back towards the imperial country. In some cases, this was a movement of settlers of European origin returning to the land of their birth, or to an ancestral birthplace. 900,000 French colonists (known as the Pied-Noirs) resettled in France following Algeria's independence in 1962. 
A significant number of these migrants were also of Algerian descent. 800,000 people of Portuguese origin migrated to Portugal after the independence of former colonies in Africa between 1974 and 1979; 300,000 settlers of Dutch origin migrated to the Netherlands from the Dutch West Indies after Dutch military control of the colony ended. After WWII, 300,000 Dutch nationals from the Dutch East Indies, the majority of whom were people of Eurasian descent known as Indo-Europeans, repatriated to the Netherlands. A significant number later migrated to the US, Canada, Australia and New Zealand. Global travel and migration in general developed at an increasingly brisk pace throughout the era of European colonial expansion. Citizens of the former colonies of European countries may have a privileged status in some respects with regard to immigration rights when settling in the former European imperial nation. For example, rights to dual citizenship may be generous, or larger immigrant quotas may be extended to former colonies. In some cases, the former European imperial nations continue to foster close political and economic ties with former colonies. The Commonwealth of Nations is an organisation that promotes cooperation between and among Britain and its former colonies, the Commonwealth members. A similar organisation exists for former colonies of France, the Francophonie; the Community of Portuguese Language Countries plays a similar role for former Portuguese colonies, and the Dutch Language Union is the equivalent for former colonies of the Netherlands. Migration from former colonies has proven to be problematic for European countries, where the majority population may express hostility to ethnic minorities who have immigrated from former colonies. Cultural and religious conflicts have often erupted in France in recent decades, between immigrants from the Maghreb countries of North Africa and the majority population of France. Nonetheless, immigration has changed the ethnic composition of France; by the 1980s, 25% of the total population of "inner Paris" and 14% of the metropolitan region were of foreign origin, mainly Algerian. Introduced diseases Encounters between explorers and populations in the rest of the world often introduced new diseases, which sometimes caused local epidemics of extraordinary virulence. For example, smallpox, measles, malaria, yellow fever, and others were unknown in pre-Columbian America. Half the native population of Hispaniola in 1518 was killed by smallpox. Smallpox also ravaged Mexico in the 1520s, killing 150,000 in Tenochtitlan alone, including the emperor, and Peru in the 1530s, aiding the European conquerors. Measles killed a further two million Mexican natives in the 17th century. In 1618–1619, smallpox wiped out 90% of the Massachusetts Bay Native Americans. Smallpox epidemics in 1780–1782 and 1837–1838 brought devastation and drastic depopulation among the Plains Indians. Some believe that the death of up to 95% of the Native American population of the New World was caused by Old World diseases. Over the centuries, the Europeans had developed high degrees of immunity to these diseases, while the indigenous peoples had no time to build such immunity. Smallpox decimated the native population of Australia, killing around 50% of indigenous Australians in the early years of British colonisation. It also killed many New Zealand Māori. As late as 1848–49, as many as 40,000 out of 150,000 Hawaiians are estimated to have died of measles, whooping cough and influenza. 
Introduced diseases, notably smallpox, nearly wiped out the native population of Easter Island. In 1875, measles killed over 40,000 Fijians, approximately one-third of the population. The Ainu population decreased drastically in the 19th century, due in large part to infectious diseases brought by Japanese settlers pouring into Hokkaido. Conversely, researchers have hypothesised that a precursor to syphilis may have been carried from the New World to Europe after Columbus's voyages. The findings suggested Europeans could have carried the nonvenereal tropical bacteria home, where the organisms may have mutated into a more deadly form in the different conditions of Europe. The disease was more frequently fatal than it is today; syphilis was a major killer in Europe during the Renaissance. The first cholera pandemic began in Bengal, then spread across India by 1820. Ten thousand British troops and countless Indians died during this pandemic. Between 1736 and 1834, only some 10% of the East India Company's officers survived to take the final voyage home. Waldemar Haffkine, who mainly worked in India, is considered the first microbiologist to develop and use vaccines against cholera and bubonic plague, in the 1890s. According to a 2021 study by Jörg Baten and Laura Maravall on the anthropometric influence of colonialism on Africans, the average height of Africans decreased by 1.1 centimetres upon colonization and later recovered and increased overall during colonial rule. The authors attributed the decrease to diseases, such as malaria and sleeping sickness, forced labor during the early decades of colonial rule, conflicts, land grabbing, and widespread cattle deaths from the rinderpest viral disease. Countering disease As early as 1803, the Spanish Crown organised a mission (the Balmis expedition) to transport the smallpox vaccine to the Spanish colonies, and establish mass vaccination programs there. By 1832, the federal government of the United States established a smallpox vaccination program for Native Americans. Under the direction of Mountstuart Elphinstone, a program was launched to propagate smallpox vaccination in India. From the beginning of the 20th century onwards, the elimination or control of disease in tropical countries became a driving force for all colonial powers. The sleeping sickness epidemic in Africa was arrested due to mobile teams systematically screening millions of people at risk. In the 20th century, the world saw the biggest increase in its population in human history, as medical advances lessened the mortality rate in many countries. The world population has grown from 1.6 billion in 1900 to over seven billion today. Colonialism and the history of thought Colonial botany Colonial botany refers to the body of works concerning the study, cultivation, marketing and naming of the new plants that were acquired or traded during the age of European colonialism. Notable examples of these plants included sugar, nutmeg, tobacco, cloves, cinnamon, Peruvian bark, peppers and tea. This work was a large part of securing financing for colonial ambitions, supporting European expansion and ensuring the profitability of such endeavors. Vasco da Gama and Christopher Columbus were seeking to establish routes to trade spices, dyes and silk from the Moluccas, India and China by sea that would be independent of the established routes controlled by Venetian and Middle Eastern merchants. 
Naturalists like Hendrik van Rheede, Georg Eberhard Rumphius, and Jacobus Bontius compiled data about eastern plants on behalf of the Europeans. Though Sweden did not possess an extensive colonial network, botanical research based on the work of Carl Linnaeus identified and developed techniques to grow cinnamon, tea and rice locally as an alternative to costly imports. Universalism The conquest of vast territories brings multitudes of diverse cultures under the central control of the imperial authorities. From the time of Ancient Greece and Ancient Rome, this fact has been addressed by empires adopting the concept of universalism, and applying it to their imperial policies towards their subjects far from the imperial capital. The capital, the metropole, was the source of ostensibly enlightened policies imposed throughout the distant colonies. The empire that grew from Greek conquest, particularly by Alexander the Great, spurred the spread of Greek language, religion, science and philosophy throughout the colonies. While most Greeks considered their own culture superior to all others (the word barbarian is derived from mutterings that sounded to Greek ears like "bar-bar"), Alexander was unique in promoting a campaign to win the hearts and minds of the Persians. He adopted Persian customs of clothing and otherwise encouraged his men to go native by adopting local wives and learning their mannerisms. Of note is that he radically departed from earlier Greek attempts at colonisation, characterised by the murder and enslavement of the local inhabitants and the settling of Greek citizens from the polis. Roman universalism was characterised by cultural and religious tolerance and a focus on civil efficiency and the rule of law. Roman law was imposed on both Roman citizens and colonial subjects. Although Imperial Rome had no public education, Latin spread through its use in government and trade. Roman law prohibited local leaders from waging war among themselves, which was responsible for the 200-year-long Pax Romana, at the time the longest period of peace in history. The Roman Empire was tolerant of diverse cultures and religious practices, even allowing them on a few occasions to threaten Roman authority. Colonialism and geography Settlers acted as the link between indigenous populations and the imperial hegemony, thus bridging the geographical, ideological and commercial gap between the colonisers and colonised. While the extent to which geography as an academic study is implicated in colonialism is contentious, geographical tools such as cartography, shipbuilding, navigation, mining and agricultural productivity were instrumental in European colonial expansion. Colonisers' awareness of the Earth's surface and abundance of practical skills provided colonisers with knowledge that, in turn, created power. Anne Godlewska and Neil Smith argue that "empire was 'quintessentially a geographical project'". Historical geographical theories such as environmental determinism legitimised colonialism by positing the view that some parts of the world were underdeveloped, which created notions of skewed evolution. Geographers such as Ellen Churchill Semple and Ellsworth Huntington put forward the notion that northern climates bred vigour and intelligence as opposed to those indigenous to tropical climates (see The Tropics), vis-à-vis a combination of environmental determinism and Social Darwinism in their approach. 
Political geographers also maintain that colonial behaviour was reinforced by the physical mapping of the world, thereby creating a visual separation between "them" and "us". Geographers are primarily focused on the spaces of colonialism and imperialism; more specifically, the material and symbolic appropriation of space enabling colonialism. Maps played an extensive role in colonialism; as Bassett put it, "by providing geographical information in a convenient and standardised format, cartographers helped open West Africa to European conquest, commerce, and colonisation". However, because the relationship between colonialism and geography was not scientifically objective, cartography was often manipulated during the colonial era. Social norms and values had an effect on the construction of maps. During colonialism, map-makers used rhetoric in their formation of boundaries and in their art. The rhetoric favoured the view of the conquering Europeans; this is evident in the fact that any map created by a non-European was instantly regarded as inaccurate. Furthermore, European cartographers were required to follow a set of rules which led to ethnocentrism: portraying one's own ethnicity in the centre of the map. As J.B. Harley put it, "The steps in making a map – selection, omission, simplification, classification, the creation of hierarchies, and 'symbolisation' – are all inherently rhetorical." A common practice by the European cartographers of the time was to map unexplored areas as "blank spaces". This influenced the colonial powers as it sparked competition amongst them to explore and colonise these regions. Imperialists aggressively and passionately looked forward to filling these spaces for the glory of their respective countries. The Dictionary of Human Geography notes that cartography was used to empty 'undiscovered' lands of their Indigenous meaning and bring them into spatial existence via the imposition of "Western place-names and borders, [therefore] priming 'virgin' (putatively empty land, 'wilderness') for colonisation (thus sexualising colonial landscapes as domains of male penetration), reconfiguring alien space as absolute, quantifiable and separable (as property)." David Livingstone stresses "that geography has meant different things at different times and in different places" and that we should keep an open mind with regard to the relationship between geography and colonialism instead of identifying boundaries. Geography as a discipline was not and is not an objective science, Painter and Jeffrey argue; rather, it is based on assumptions about the physical world. Comparison of exogeographical representations of ostensibly tropical environments in science fiction art supports this conjecture, finding the notion of the tropics to be an artificial collection of ideas and beliefs that are independent of geography. Colonialism and imperialism A colony is a part of an empire and so colonialism is closely related to imperialism. Colonialism and imperialism are often assumed to be interchangeable; however, Robert J. C. Young suggests that imperialism is the concept while colonialism is the practice. Colonialism is based on an imperial outlook, thereby creating a consequential relationship. Through an empire, colonialism is established and capitalism is expanded; on the other hand, a capitalist economy naturally enforces an empire. Marxist view of colonialism Marxism views colonialism as a form of capitalism, enforcing exploitation and social change. 
Marx thought that, working within the global capitalist system, colonialism is closely associated with uneven development. It is an "instrument of wholesale destruction, dependency and systematic exploitation producing distorted economies, socio-psychological disorientation, massive poverty and neocolonial dependency". Colonies are constructed into modes of production. The search for raw materials and the current search for new investment opportunities is a result of inter-capitalist rivalry for capital accumulation. Lenin regarded colonialism as the root cause of imperialism, as imperialism was distinguished by monopoly capitalism via colonialism, and as Lyal S. Sunga explains: "Vladimir Lenin advocated forcefully the principle of self-determination of peoples in his "Theses on the Socialist Revolution and the Right of Nations to Self-Determination" as an integral plank in the programme of socialist internationalism" and he quotes Lenin who contended that "The right of nations to self-determination implies exclusively the right to independence in the political sense, the right to free political separation from the oppressor nation. Specifically, this demand for political democracy implies complete freedom to agitate for secession and for a referendum on secession by the seceding nation." Non-Russian Marxists within the RSFSR and later the USSR, like Sultan Galiev and Vasyl Shakhrai, meanwhile, between 1918 and 1923 and then after 1929, considered the Soviet regime a renewed version of Russian imperialism and colonialism. In his critique of colonialism in Africa, the Guyanese historian and political activist Walter Rodney states: "The decisiveness of the short period of colonialism and its negative consequences for Africa spring mainly from the fact that Africa lost power. Power is the ultimate determinant in human society, being basic to the relations within any group and between groups. It implies the ability to defend one's interests and
a Pennsylvania Railroad run between Washington, DC and New York City, last operated in 1973 by Amtrak Colonial (Amtrak train), an Amtrak train that ran between Newport News, Virginia and Boston from 1976 to 1992, and between Richmond, Virginia and New York City from 1997 to 1999 Other uses Inmobiliaria Colonial, a Spanish corporation, which includes companies in the domains of real estate See also Colonial history of the United States, the period of American history from the 17th century to 1776, under the rule of Great Britain, France and Spain Colonial Hotel (disambiguation) Colonial Revival architecture Colonial Theatre (disambiguation) Colonial troops, any of various military units recruited from, or used as garrison troops in, colonial territories Colonialism, the extension of political control to new areas Colonials (disambiguation) Colonist, a person who has migrated to an area and established a permanent residence there to colonize the area History of Australia Spanish colonization of
who supposedly came from Tunisia and settled in Casablanca with his wife Lalla al-Baiḍā' (White Lady). The villagers of Mediouna would reportedly provision themselves at "Dar al-Baiḍā" (House of the White). In fact, rising above the ruins of Anfa, it appears there was a tall white-washed structure, as the Portuguese cartographer Duarte Pacheco wrote in the early 16th century that the city could easily be identified by a large tower, and nautical guides from the late 19th century still mentioned a "white tower" as a point of reference. The Portuguese mariners came to call the city "Casa Branca" (White House) in place of Anfa. The present name, "Casablanca," which is the Spanish version, came when the Kingdom of Portugal came under Spanish control through the Iberian Union. Adam argues that it is unlikely that the Arabic name "Dar al-Baiḍā" is a translation of the European names; the presence of the two names indicates that they came about together, not one from the other. During the French protectorate in Morocco, the name remained Casablanca. The city is still nicknamed Casa by many locals and outsiders to the city. In many other cities with a different dialect, it is called Ad-dār al-Bayḍā instead. History Early history The area which is today Casablanca was founded and settled by Berbers by at least the seventh century BC. It was used as a port by the Phoenicians and later the Romans. In his book Description of Africa, Leo Africanus refers to ancient Casablanca as "Anfa", a great city founded in the Berber kingdom of Barghawata in 744 AD. He believed Anfa was "the most prosperous city on the Atlantic Coast because of its fertile land." Barghawata rose as an independent state around this time, and continued until it was conquered by the Almoravids in 1068. Following the defeat of the Barghawata in the 12th century, Arab tribes of Hilal and Sulaym descent settled in the region, mixing with the local Berbers, which led to widespread Arabization (S. Lévy, Pour une histoire linguistique du Maroc, in Peuplement et arabisation au Maghreb occidental: dialectologie et histoire, 1998, pp. 11–26). During the 14th century, under the Merinids, Anfa rose in importance as a port. The last of the Merinids were ousted by a popular revolt in 1465. Portuguese conquest and Spanish influence In the early 15th century, the town became an independent state once again, and emerged as a safe harbour for pirates and privateers, leading to it being targeted by the Portuguese, who bombarded the town, leading to its destruction in 1468. The Portuguese used the ruins of Anfa to build a military fortress in 1515. The town that grew up around it was called Casa Branca, meaning "white house" in Portuguese. Between 1580 and 1640, the Crown of Portugal was integrated into the Crown of Spain, so Casablanca and all other areas occupied by the Portuguese were under Spanish control, though maintaining an autonomous Portuguese administration. When Portugal broke ties with Spain in 1640, Casablanca came fully under Portuguese control once again. The Europeans eventually abandoned the area completely in 1755 following an earthquake which destroyed most of the town. The town was finally reconstructed by Sultan Mohammed ben Abdallah (1756–1790), the grandson of Moulay Ismail and an ally of George Washington, with the help of Spaniards from the nearby emporium. The town was called ad-Dār al-Bayḍāʼ (الدار البيضاء), the Arabic translation of the Portuguese Casa Branca. 
Colonial struggle In the 19th century, the area's population began to grow as it became a major supplier of wool to the booming textile industry in Britain and shipping traffic increased (the British, in return, began importing gunpowder tea, used in Morocco's national drink, mint tea). By the 1860s, around 5,000 residents were there, and the population grew to around 10,000 by the late 1880s. Casablanca remained a modestly sized port, with a population reaching around 12,000 within a few years of the French conquest and arrival of French colonialists in 1906. By 1921, this rose to 110,000, largely through the development of shanty towns. French rule and influence The Treaty of Algeciras of 1906 formalized French preeminence in Morocco and included three measures that directly impacted Casablanca: that French officers would control operations at the customs office and seize revenue as collateral for loans given by France, that the French holding company La Compagnie Marocaine would develop the port of Casablanca, and that a French-and-Spanish-trained police force would be assembled to patrol the port. To build the port's breakwater, narrow-gauge track was laid in June 1907 for a small Decauville locomotive to connect the port to a quarry in Roches Noires, passing through the sacred Sidi Belyout graveyard. In resistance to this and the measures of the 1906 Treaty of Algeciras, tribesmen of the Chaouia attacked the locomotive, killing 9 Compagnie Marocaine laborers: 3 French, 3 Italians, and 3 Spanish. In response, the French bombarded the city with multiple gunboats and landed troops inside the town, causing severe damage and 15,000 dead and wounded. In the immediate aftermath of the bombardment and the deployment of French troops, the European homes and the Mellah, or Jewish quarter, were sacked, and the latter was also set ablaze. As Oujda had already been occupied, the bombardment and military invasion of the city opened a western front to the French military conquest of Morocco. French control of Casablanca was formalized in March 1912, when the Treaty of Fes established the French protectorate. General Hubert Lyautey assigned the planning of the new colonial port city to Henri Prost. As he did in other Moroccan cities, Prost designed a European ville nouvelle outside the walls of the medina. In Casablanca, he also designed a new "ville indigène" to house Moroccans arriving from other cities. Europeans formed almost half the population of Casablanca. World War II After Philippe Pétain of France signed the armistice with the Nazis, he ordered French troops in France's colonial empire to defend French territory against any aggressors, Allied or otherwise, applying a policy of "asymmetrical neutrality" in favour of the Germans. French colonists in Morocco generally supported Pétain, while politically conscious Moroccans tended to favour de Gaulle and the Allies. Operation Torch, which started on 8 November 1942, was the British-American invasion of French North Africa during the North African campaign of World War II. The Western Task Force, composed of American units led by Major General George S. Patton and Rear Admiral Henry Kent Hewitt, carried out the invasions of Mehdia, Fedhala, and Asfi. American forces captured Casablanca from Vichy control when the local Vichy French forces surrendered on November 11, 1942, but the Naval Battle of Casablanca continued until American forces sank German submarine U-173 on November 16. 
Casablanca was the site of the Nouasseur Air Base, a large American air base used as the staging area for all American aircraft for the European Theatre of Operations during World War II. The airfield has since become Mohammed V International Airport. Anfa Conference Casablanca hosted the Anfa Conference (also called the Casablanca Conference) in January 1943. Prime Minister Winston Churchill and President Franklin D. Roosevelt discussed the progress of the war. Also in attendance were the Free France generals Charles de Gaulle and Henri Giraud, though they played minor roles and did not participate in the military planning. It was at this conference that the Allies adopted the doctrine of "unconditional surrender," meaning that the Axis powers would be fought until their defeat. Roosevelt also met privately with Sultan Muhammad V and expressed his support for Moroccan independence after the war. This became a turning point, as Moroccan nationalists were emboldened to openly seek complete independence. Toward independence During the 1940s and 1950s, Casablanca was a major centre of anti-French rioting. On April 7, 1947, a massacre of working-class Moroccans, carried out by Senegalese Tirailleurs in the service of the French colonial army, was instigated just as Sultan Muhammed V was due to make a speech in Tangier appealing for independence. Riots in Casablanca took place from December 7–8, 1952, in response to the assassination of the Tunisian labor unionist Farhat Hached by La Main Rouge, the clandestine militant wing of French intelligence. Then, on 25 December 1953 (Christmas Day), Muhammad Zarqtuni orchestrated a bombing of Casablanca's Central Market in response to the forced exile of Sultan Muhammad V and the royal family on August 20 (Eid al-Adha) of that year. Since independence Morocco gained independence from France in 1956. Casablanca Group From January 4 to 7, 1961, the city hosted an ensemble of progressive African leaders during the Casablanca Conference of 1961. Among those received by King Muhammad V were Gamal Abd An-Nasser, Kwame Nkrumah, Modibo Keïta, Ahmed Sékou Touré, and Ferhat Abbas. Jewish emigration Casablanca was a major departure point for Jews leaving Morocco through Operation Yachin, an operation conducted by Mossad to secretly move Moroccan Jews to Israel between November 1961 and spring 1964. 1965 riots The 1965 student protests organized by the National Union of Popular Forces-affiliated National Union of Moroccan Students, which spread to cities around the country and devolved into riots, started on March 22, 1965, in front of Lycée Mohammed V in Casablanca ("There were at least fifteen thousand lycée students. I had never seen such an impressive gathering of adolescents," as quoted in Brousky, 2005). The protests started as a peaceful march to demand the right to public higher education for Morocco, but expanded to include concerns of laborers, the unemployed, and other marginalized segments of society, and devolved into vandalism and rioting. The riots were violently repressed by security forces with tanks and armored vehicles; Moroccan authorities reported a dozen deaths while the UNFP reported more than 1,000. King Hassan II blamed the events on teachers and parents, and declared in a speech to the nation on March 30, 1965: "There is no greater danger to the State than a so-called intellectual. It would have been better if you were all illiterate." (Susan Ossman, Picturing Casablanca: Portraits of Power in a Modern City; University of California Press, 1994; p. 37.) 
1981 riots On June 6, 1981, the Casablanca Bread Riots took place. Hassan II charged the French-trained interior minister Driss Basri, a hardliner who would later become a symbol of the Years of Lead, with quelling the protests. The government stated that 66 people were killed and 100 were injured, while opposition leaders put the number of dead at 637, saying that many of these were killed by police and army gunfire. Mudawana In March 2000, more than 60 women's groups organized demonstrations in Casablanca proposing reforms to the legal status of women in the country. About 40,000 women attended, calling for a ban on polygamy and the introduction of divorce law (divorce being a purely religious procedure at that time). Although the counter-demonstration attracted half a million participants, the movement for change started in 2000 was influential on King Mohammed VI, and he enacted a new mudawana, or family law, in early 2004, meeting some of the demands of women's rights activists. On 16 May 2003, 33 civilians were killed and more than 100 people were injured when Casablanca was hit by a multiple suicide bomb attack carried out by Moroccans and claimed by some to have been linked to al-Qaeda. Twelve suicide bombers struck five locations in the city. Another series of suicide bombings struck the city in early 2007. These events illustrated some of the persistent challenges the city faces in addressing poverty and integrating disadvantaged neighborhoods and populations. One initiative to improve conditions in the city's disadvantaged neighborhoods was the creation of the Sidi Moumen Cultural Center. As calls for reform spread through the Arab world in 2011, Moroccans joined in, but concessions by the ruler led to acceptance. However, in December, thousands of people demonstrated in several parts of the city, especially the city center near la Fontaine, desiring more significant political reforms. Geography Casablanca is located on the Atlantic coast of the Chaouia Plains, which have historically been the breadbasket of Morocco. Apart from the Atlantic coast, the Bouskoura forest is the only natural attraction in the city. The forest was planted in the 20th century and consists mostly of eucalyptus, palm, and pine trees. It is located halfway to the city's international airport. The only watercourse in Casablanca is oued Bouskoura, a small seasonal creek that until 1912 reached the Atlantic Ocean near the present-day port. Most of oued Bouskoura's bed has been covered due to urbanization and only the part south of El Jadida road can now be seen. The closest permanent river to Casablanca is the Oum Rabia, to the south-east. Climate Casablanca has a hot-summer Mediterranean climate (Köppen climate classification Csa). The cool Canary Current off the Atlantic coast moderates temperature variation, which results in a climate remarkably similar to that of coastal Los Angeles, with similar temperature ranges. The city has an annual average of 72 days with significant precipitation. The highest amount of rainfall recorded in a single day fell on 30 November 2010. Economy The Grand Casablanca region is considered the locomotive of the development of the Moroccan economy. It attracts 32% of the country's production units and 56% of industrial labor. The region uses 30% of the national electricity production. 
With MAD 93 billion, the region contributes 44% of the industrial production of the kingdom. About 33% of national industrial exports, MAD 27 billion, come from the Grand Casablanca; 30% of the Moroccan banking network is concentrated in Casablanca. One of the most important Casablancan exports is phosphate. Other industries include fishing, fish canning, sawmills, furniture production, building materials, glass, textiles, electronics, leather work, processed food, spirits, soft drinks, and cigarettes. The activity of the Casablanca and Mohammedia seaports represents 50% of the international commercial flows of Morocco. 
Almost the entire Casablanca waterfront is under development, mainly the construction of huge entertainment centres between the port and Hassan II Mosque, the Anfa Resort project near the business, entertainment and living centre of Megarama, the shopping and entertainment complex of Morocco Mall, as well as a complete renovation of the coastal walkway. The Sindbad park is planned to be totally renewed with rides, games and entertainment services. Royal Air Maroc has its head office at the Casablanca-Anfa Airport. In 2004, it announced that it was moving its head office from Casablanca to a location in the Province of Nouaceur, close to Mohammed V International Airport. The agreement to build the head office in Nouaceur was signed in 2009. The largest central business district in both Casablanca and the Maghreb is in Sidi Maarouf, near the Hassan II Mosque. Administrative divisions Casablanca is a commune, part of the region of Casablanca-Settat. The commune is divided into eight districts or prefectures, which are themselves divided into 16 subdivisions or arrondissements and one municipality. The districts and their subdivisions are: Aïn Chock (عين الشق) – Aïn Chock (عين الشق). Aïn Sebaâ - Hay Mohammadi (عين السبع الحي المحمدي) – Aïn Sebaâ (عين السبع), Hay Mohammadi (الحي المحمدي), Roches Noires (روش نوار). Anfa (أنفا) – Anfa (أنفا), Maârif (المعاريف), Sidi Belyout (سيدي بليوط). Ben M'Sick (بن مسيك) – Ben M'Sick (بن مسيك), Sbata (سباته). Sidi Bernoussi (سيدي برنوصي) – Sidi Bernoussi (سيدي برنوصي), Sidi Moumen (سيدي مومن). Al Fida - Mers Sultan (الفداء – مرس السلطان) – Al Fida (الفداء); Mechouar (المشور) (municipality), Mers Sultan (مرس السلطان). Hay Hassani (الحي الحسني) – Hay Hassani (الحي الحسني). Moulay Rachid (مولاي رشيد) – Moulay Rachid (مولاي رشيد), Sidi Othmane (سيدي عثمان). Neighborhoods The list of neighborhoods is indicative and not complete: 2 Mars Ain Chock Ain Diab Ain Sebaa Belvédère Beausejour Bouchentouf Bouskoura Bourgogne Californie Centre Ville C.I.L. La Colline Derb Ghallef Derb Sultan Derb Tazi Gauthier Ghandi Habous El Hank Hay Dakhla Hay El Baraka Hay El Hanaa Hay El Hassani Hay El Mohammadi Hay Farah Hay Moulay Rachid Hay Salama Hubous Inara Laimoun (Hay Hassani) Lamkansa Lissasfa Maârif Mers Sultan Nassim Oasis Old Madina Oulfa Palmiers Polo Racine Riviera Roches Noires Salmia 2 Sbata Sidi Bernoussi Sidi Maârouf Sidi Moumen Sidi Othmane Demographics The commune of Casablanca recorded a population of 3,359,818 in the 2014 Moroccan census. About 98% live in urban areas. Around 25% of them are under 15 and 9% are over 60 years old. The population of the city is about 11% of the total population of Morocco. Grand Casablanca is also the largest urban area in the Maghreb. 99.9% of the population of Morocco are Arab and Berber Muslims. During the French protectorate in Morocco, European Christians formed almost half the population of Casablanca. Since independence in 1956, the European population has decreased substantially. The city is also still home to a small community of Moroccan Christians, as well as a small group of foreign Roman Catholic and Protestant residents. Judaism in Casablanca Jews have a long history in Casablanca. A Sephardic Jewish community was in Anfa up to the destruction of the city by the Portuguese in 1468. Jews were slow to return to the town, but by 1750, the Rabbi Elijah Synagogue was built as the first Jewish synagogue in Casablanca. It was destroyed along with much of the town in the 1755 Lisbon earthquake. 
than one. The latter shapes include not only the traditional †-shaped cross (the crux immissa), but also the T-shaped cross (the crux commissa or tau cross), which the descriptions in antiquity of the execution cross indicate as the normal form in use at that time, and the X-shaped cross (the crux decussata or saltire). The Greek equivalent of Latin crux "stake, gibbet" is stauros (σταυρός), found in texts of four centuries or more before the gospels and always in the plural number to indicate a stake or pole. From the first century BC, it is used to indicate an instrument used in executions. The Greek word is used in descriptions in antiquity of the execution cross, which indicate that its normal shape was similar to the Greek letter tau (Τ). History Pre-Christian Due to the simplicity of the design (two intersecting lines), cross-shaped incisions make their appearance from deep prehistory, as petroglyphs in European cult caves, dating back to the beginning of the Upper Paleolithic, and throughout prehistory to the Iron Age. Also of prehistoric age are numerous variants of the simple cross mark, including the crux gammata with curving or angular lines, and the Egyptian crux ansata with a loop. Speculation has associated the cross symbol – even in the prehistoric period – with astronomical or cosmological symbology involving "four elements" (Chevalier, 1997) or the cardinal points, or the unity of a vertical axis mundi or celestial pole with the horizontal world (Koch, 1955). Speculation of this kind became especially popular in the mid- to late-19th century in the context of comparative mythology seeking to tie Christian mythology to ancient cosmological myths. Influential works in this vein included G. de Mortillet (1866), L. Müller (1865), W. W. Blake (1888), Ansault (1891), etc. In the European Bronze Age, the cross symbol appeared to carry a religious meaning, perhaps as a symbol of consecration, especially pertaining to burial. The cross sign occurs trivially in tally marks, and develops into a number symbol independently in the Roman numerals (X "ten"), the Chinese rod numerals (十 "ten") and the Brahmi numerals ("four", whence the numeral 4). In the Phoenician alphabet and derived scripts, the cross symbol represented the phoneme /t/, i.e. the letter taw, which is the historical predecessor of Latin T. The letter name taw means "mark", presumably continuing the Egyptian hieroglyph "two crossed sticks" (Gardiner Z9). According to W. E. Vine's Expository Dictionary of New Testament Words, worshippers of Tammuz in Chaldea and thereabouts used the cross as a symbol of that god. Christian cross The shape of the cross (crux, stauros "stake, gibbet"), as represented by the letter T, came to be used as a "seal" or symbol of Early Christianity by the 2nd century. Clement of Alexandria in the early 3rd century calls it "the Lord's sign"; he repeats the idea, current as early as the Epistle of Barnabas, that the number 318 (in Greek numerals, ΤΙΗ) in Genesis 14:14 was a foreshadowing (a "type") of the cross (the letter Tau) and of Jesus (the letters Iota Eta). 
contemporary Tertullian rejects the accusation that Christians are crucis religiosi (i.e. "adorers of the gibbet"), and returns the accusation by likening the worship of pagan idols to the worship of poles or stakes. In his book De Corona, written in 204, Tertullian tells how it was already a tradition for Christians to trace repeatedly on their foreheads the sign of the cross. While early Christians used the T-shape to represent the cross in writing and gesture, the use of the Greek cross and Latin cross, i.e. crosses with intersecting beams, appears in Christian art towards the end of Late Antiquity. An early example of the cruciform halo, used to identify Christ in paintings, is found in the Miracles of the Loaves and Fishes mosaic of Sant'Apollinare Nuovo, Ravenna (6th century). The Patriarchal cross, a Latin cross with an additional horizontal bar, first appears in the 10th century. A wide variation of cross symbols is introduced for the purposes of heraldry beginning in the age of the Crusades. Cross-like marks and graphemes The cross mark is used to mark a position, or as a check mark, but also to mark deletion. Derived from Greek Chi are the Latin letter X, Cyrillic Kha and possibly runic Gyfu. Egyptian hieroglyphs involving cross shapes include ankh "life", ndj "protect" and nfr "good; pleasant, beautiful". Sumerian cuneiform had a simple cross-shaped character, consisting of a horizontal and a vertical wedge (𒈦), read as maš "tax, yield, interest"; the superposition of two diagonal wedges results in a decussate cross (𒉽), read as pap "first, pre-eminent" (the superposition of these two types of crosses results in the eight-pointed star used as the sign for "sky" or "deity" (𒀭), DINGIR). The cuneiform script has other, more complex, cruciform characters, consisting of an arrangement of boxes or the fourfold arrangement of other characters, including the archaic cuneiform characters LAK-210, LAK-276, LAK-278, LAK-617 and the classical sign EZEN (𒂡). Phoenician tāw is still cross-shaped in Paleo-Hebrew alphabet and in some Old Italic scripts (Raetic and Lepontic), and its descendant T becomes again cross-shaped in the Latin minuscule t. The plus sign (+) is derived from Latin t via a simplification of a ligature for et "and" (introduced by Johannes Widmann in the late 15th century). The letter Aleph is cross-shaped in Aramaic and paleo-Hebrew. Egyptian hieroglyphs with cross-shapes include Gardiner Z9 – Z11 ("crossed sticks", "crossed planks"). Other, unrelated cross-shaped letters include Brahmi ka (predecessor of the Devanagari letter क) and Old Turkic (Orkhon) d² and Old Hungarian b, and Katakana ナ na and メme. The multiplication sign (×), often attributed to William Oughtred (who first used it in an appendix to the 1618 edition of John Napier's Descriptio) apparently had been in occasional use since the mid 16th century. Other typographical symbols resembling crosses include the dagger or obelus (†), the Chinese (十, Kangxi radical 24) and Roman (X ten). Unicode has a variety of cross symbols in the "Dingbat" block (U+2700–U+27BF) : ✕ ✖ ✗ ✘ ✙ ✚ ✛ ✜ ✝ ✞ ✟ ✠ ✢ ✣ ✤ ✥ The Miscellaneous Symbols block (U+2626 to U+262F) adds three specific Christian cross variants, viz. the Patriarchal cross (☦), Cross of Lorraine (☨) and Cross potent (☩, mistakenly labeled a "Cross of Jerusalem"). 
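The cross characters cited above can be inspected programmatically. The following short Python sketch uses only the standard-library unicodedata module to print the three cross variants from the Miscellaneous Symbols block together with the run of cross-shaped dingbats quoted above; the code points chosen here are illustrative rather than an exhaustive inventory of cross characters in Unicode.

import unicodedata

# Patriarchal/Orthodox cross, Cross of Lorraine, and the character Unicode labels
# "Cross of Jerusalem" (described above as actually a cross potent).
for cp in (0x2626, 0x2628, 0x2629):
    ch = chr(cp)
    print(f"U+{cp:04X}  {ch}  {unicodedata.name(ch, '<unnamed>')}")

# The run of Dingbat code points containing the cross shapes listed above.
# U+2721 (Star of David) happens to fall inside this range and is not a cross.
for cp in range(0x2715, 0x2726):
    ch = chr(cp)
    print(f"U+{cp:04X}  {ch}  {unicodedata.name(ch, '<unnamed>')}")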
Cross-like emblems The following is a list of cross symbols, except for variants of the Christian cross and Heraldic crosses, for which see the dedicated lists at Christian cross variants and Crosses in heraldry, respectively. As a design element Physical gestures Cross shapes are made by a variety of physical gestures. Crossing the fingers of one hand is a common invocation of the symbol. The sign of the cross associated with Christian genuflection is made with one hand: in Eastern Orthodox tradition the sequence is head-heart-right shoulder-left shoulder, while in Oriental Orthodox, Catholic and Anglican tradition the sequence is head-heart-left-right. Crossing the index fingers of both hands represents and a charm against evil in European folklore. Other gestures involving more than one hand include the "cross my heart" movement associated with making a promise and the Tau shape of the referee's "time out" hand signal. In Chinese-speaking cultures, crossed index fingers represent the number 10. Other things known as "cross" Crux, or the Southern Cross, is a cross-shaped constellation in the Southern Hemisphere. It appears on the national flags of Australia, Brazil, New Zealand, Niue, Papua New Guinea and Samoa. Notable free-standing Christian crosses (or Summit crosses): The tallest cross, at 152.4 metres high, is part of Francisco Franco's monumental "Valley of the Fallen", the Monumento Nacional de Santa Cruz del Valle de los Caidos in Spain. A cross at the junction of Interstates 57 and 70 in Effingham, Illinois, is purportedly the tallest in the United States, at 198 feet (60.3 m) tall. The tallest freestanding cross in the United States is located in Saint Augustine, FL and stands 208 feet. The tombs at Naqsh-e Rustam, Iran, made in the 5th century BC, are carved into the cliffside in the shape of a cross. They are known as the "Persian crosses". Cross-ndj (hieroglyph) Cross cap, topological surface Crossroads (mythology) Crossbuck Cross (album) References Chevalier, Jean (1997).
attached , but sometimes even the counting can become ambiguous. Coordination numbers are normally between two and nine, but large numbers of ligands are not uncommon for the lanthanides and actinides. The number of bonds depends on the size, charge, and electron configuration of the metal ion and the ligands. Metal ions may have more than one coordination number. Typically the chemistry of transition metal complexes is dominated by interactions between s and p molecular orbitals of the donor-atoms in the ligands and the d orbitals of the metal ions. The s, p, and d orbitals of the metal can accommodate 18 electrons (see 18-Electron rule). The maximum coordination number for a certain metal is thus related to the electronic configuration of the metal ion (to be more specific, the number of empty orbitals) and to the ratio of the size of the ligands and the metal ion. Large metals and small ligands lead to high coordination numbers, e.g. [Mo(CN)8]4−. Small metals with large ligands lead to low coordination numbers, e.g. Pt[P(CMe3)]2. Due to their large size, lanthanides, actinides, and early transition metals tend to have high coordination numbers. Most structures follow the points-on-a-sphere pattern (or, as if the central atom were in the middle of a polyhedron where the corners of that shape are the locations of the ligands), where orbital overlap (between ligand and metal orbitals) and ligand-ligand repulsions tend to lead to certain regular geometries. The most observed geometries are listed below, but there are many cases that deviate from a regular geometry, e.g. due to the use of ligands of diverse types (which results in irregular bond lengths; the coordination atoms do not follow a points-on-a-sphere pattern), due to the size of ligands, or due to electronic effects (see, e.g., Jahn–Teller distortion): Linear for two-coordination Trigonal planar for three-coordination Tetrahedral or square planar for four-coordination Trigonal bipyramidal for five-coordination Octahedral for six-coordination Pentagonal bipyramidal for seven-coordination Square antiprismatic for eight-coordination Tricapped trigonal prismatic for nine-coordination The idealized descriptions of 5-, 7-, 8-, and 9- coordination are often indistinct geometrically from alternative structures with slightly differing L-M-L (ligand-metal-ligand) angles, e.g. the difference between square pyramidal and trigonal bipyramidal structures. Square pyramidal for five-coordination Capped octahedral or capped trigonal prismatic for seven-coordination Dodecahedral or bicapped trigonal prismatic for eight-coordination Capped square antiprismatic for nine-coordination To distinguish between the alternative coordinations for five-coordinated complexes, the τ geometry index was invented by Addison et al. This index depends on angles by the coordination center and changes between 0 for the square pyramidal to 1 for trigonal bipyramidal structures, allowing to classify the cases in between. This system was later extended to four-coordinated complexes by Houser et al. and also Okuniewski et al. In systems with low d electron count, due to special electronic effects such as (second-order) Jahn–Teller stabilization, certain geometries (in which the coordination atoms do not follow a points-on-a-sphere pattern) are stabilized relative to the other possibilities, e.g. for some compounds the trigonal prismatic geometry is stabilized relative to octahedral structures for six-coordination. 
Bent for two-coordination Trigonal pyramidal for three-coordination Trigonal prismatic for six-coordination Isomerism The arrangement of the ligands is fixed for a given complex, but in some cases it is mutable by a reaction that forms another stable isomer. There exist many kinds of isomerism in coordination complexes, just as in many other compounds. Stereoisomerism Stereoisomerism occurs with the same bonds in distinct orientations. Stereoisomerism can be further classified into: Cis–trans isomerism and facial–meridional isomerism Cis–trans isomerism occurs in octahedral and square planar complexes (but not tetrahedral). When two ligands are adjacent they are said to be cis, when opposite each other, trans. When three identical ligands occupy one face of an octahedron, the isomer is said to be facial, or fac. In a fac isomer, any two identical ligands are adjacent or cis to each other. If these three ligands and the metal ion are in one plane, the isomer is said to be meridional, or mer. A mer isomer can be considered as a combination of a trans and a cis, since it contains both trans and cis pairs of identical ligands. Optical isomerism Optical isomerism occurs when a complex is not superimposable with its mirror image. It is so called because the two isomers are each optically active, that is, they rotate the plane of polarized light in opposite directions. In the first molecule shown, the symbol Λ (lambda) is used as a prefix to describe the left-handed propeller twist formed by three bidentate ligands. The second molecule is the mirror image of the first, with the symbol Δ (delta) as a prefix for the right-handed propeller twist. The third and fourth molecules are a similar pair of Λ and Δ isomers, in this case with two bidentate ligands and two identical monodentate ligands. Structural isomerism Structural isomerism occurs when the bonds are themselves different. Four types of structural isomerism are recognized: ionisation isomerism, solvate or hydrate isomerism, linkage isomerism and coordination isomerism. Ionisation isomerism – the isomers give different ions in solution although they have the same composition. This type of isomerism occurs when the counter ion of the complex is also a potential ligand. For example, pentaamminebromocobalt(III) sulphate [Co(NH3)5Br]SO4 is red violet and in solution gives a precipitate with barium chloride, confirming the presence of sulphate ion, while pentaamminesulphatecobalt(III) bromide [Co(NH3)5SO4]Br is red and tests negative for sulphate ion in solution, but instead gives a precipitate of AgBr with silver nitrate. Solvate or hydrate isomerism – the isomers have the same composition but differ with respect to the number of molecules of solvent that serve as ligand vs simply occupying sites in the crystal. Examples: [Cr(H2O)6]Cl3 is violet colored, [CrCl(H2O)5]Cl2·H2O is blue-green, and [CrCl2(H2O)4]Cl·2H2O is dark green. See water of crystallization. Linkage isomerism occurs with ligands with more than one type of donor atom, known as ambidentate ligands. For example, nitrite can coordinate through O or N. One pair of nitrite linkage isomers have structures (NH3)5CoNO22+ (nitro isomer) and (NH3)5CoONO2+ (nitrito isomer). Coordination isomerism – this occurs when both positive and negative ions of a salt are complex ions and the two isomers differ in the distribution of ligands between the cation and the anion. For example, [Co(NH3)6][Cr(CN)6] and [Cr(NH3)6][Co(CN)6]. 
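As a worked illustration of the geometry indices mentioned in the coordination-number discussion above, the following Python sketch computes Addison's τ5 and the four-coordination index usually attributed to Houser et al., both from the two largest L-M-L angles of a complex. The function names and sample angles are illustrative only, and real structures rarely sit exactly at either ideal limit.

def tau5(beta, alpha):
    """Addison's index for five-coordination.
    beta and alpha are the two largest L-M-L angles in degrees (beta >= alpha).
    0 corresponds to an idealized square pyramid, 1 to a trigonal bipyramid."""
    return (beta - alpha) / 60.0

def tau4(beta, alpha):
    """Houser's index for four-coordination, as usually quoted.
    0 corresponds to an ideal square plane, 1 to an ideal tetrahedron."""
    return (360.0 - (alpha + beta)) / 141.0

# Sanity checks at the ideal limits:
print(tau5(180.0, 120.0))  # trigonal bipyramid -> 1.0
print(tau5(160.0, 160.0))  # square pyramid (equal trans-basal angles) -> 0.0
print(tau4(109.5, 109.5))  # tetrahedron -> ~1.0
print(tau4(180.0, 180.0))  # square plane -> 0.0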
Electronic properties Many of the properties of transition metal complexes are dictated by their electronic structures. The electronic structure can be described by a relatively ionic model that ascribes formal charges to the metals and ligands. This approach is the essence of crystal field theory (CFT). Crystal field theory, introduced by Hans Bethe in 1929, gives a quantum mechanically based attempt at understanding complexes. But crystal field theory treats all interactions in a complex as ionic and assumes that the ligands can be approximated by negative point charges. More sophisticated models embrace covalency, and this approach is described by ligand field theory (LFT) and Molecular orbital theory (MO). Ligand field theory, introduced in 1935 and built from molecular orbital theory, can handle a broader range of complexes and can explain complexes in which the interactions are covalent. The chemical applications of group theory can aid in the understanding of crystal or ligand field theory, by allowing simple, symmetry based solutions to the formal equations. Chemists tend to employ
aid of electronic spectroscopy; also known as UV-Vis. For simple compounds with high symmetry, the d–d transitions can be assigned using Tanabe–Sugano diagrams. These assignments are gaining increased support with computational chemistry. Colors of lanthanide complexes Superficially lanthanide complexes are similar to those of the transition metals in that some are colored. However, for the common Ln3+ ions (Ln = lanthanide) the colors are all pale, and hardly influenced by the nature of the ligand. The colors are due to 4f electron transitions. As the 4f orbitals in lanthanides are "buried" in the xenon core and shielded from the ligand by the 5s and 5p orbitals they are therefore not influenced by the ligands to any great extent leading to a much smaller crystal field splitting than in the transition metals. The absorption spectra of an Ln3+ ion approximates to that of the free ion where the electronic states are described by spin-orbit coupling. This contrasts to the transition metals where the ground state is split by the crystal field. Absorptions for Ln3+ are weak as electric dipole transitions are parity forbidden (Laporte forbidden) but can gain intensity due to the effect of a low-symmetry ligand field or mixing with higher electronic states (e.g. d orbitals). f-f absorption bands are extremely sharp which contrasts with those observed for transition metals which generally have broad bands. This can lead to extremely unusual effects, such as significant color changes under different forms of lighting. Magnetism Metal complexes that have unpaired electrons are magnetic. Considering only monometallic complexes, unpaired electrons arise because the complex has an odd number of electrons or because electron pairing is destabilized. Thus, monomeric Ti(III) species have one "d-electron" and must be (para)magnetic, regardless of the geometry or the nature of the ligands. Ti(II), with two d-electrons, forms some complexes that have two unpaired electrons and others with none. This effect is illustrated by the compounds TiX2[(CH3)2PCH2CH2P(CH3)2]2: when X = Cl, the complex is paramagnetic (high-spin configuration), whereas when X = CH3, it is diamagnetic (low-spin configuration). It is important to realize that ligands provide an important means of adjusting the ground state properties. In bi- and polymetallic complexes, in which the individual centres have an odd number of electrons or that are high-spin, the situation is more complicated. If there is interaction (either direct or through ligand) between the two (or more) metal centres, the electrons may couple (antiferromagnetic coupling, resulting in a diamagnetic compound), or they may enhance each other (ferromagnetic coupling). When there is no interaction, the two (or more) individual metal centers behave as if in two separate molecules. Reactivity Complexes show a variety of possible reactivities: Electron transfers Electron transfer (ET) between metal ions can occur via two distinct mechanisms, inner and outer sphere electron transfers. In an inner sphere reaction, a bridging ligand serves as a conduit for ET. (Degenerate) ligand exchange One important indicator of reactivity is the rate of degenerate exchange of ligands. For example, the rate of interchange of coordinate water in [M(H2O)6]n+ complexes varies over 20 orders of magnitude. Complexes where the ligands are released and rebound rapidly are classified as labile. Such labile complexes can be quite stable thermodynamically. 
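The link between unpaired electrons and paramagnetism described above is commonly quantified with the spin-only approximation, mu_eff = sqrt(n(n+2)) Bohr magnetons for n unpaired electrons, which neglects orbital contributions. A minimal Python sketch, with the electron counts chosen only to mirror the Ti(III) and Ti(II) examples mentioned above:

import math

def spin_only_moment(n_unpaired):
    """Spin-only effective magnetic moment in Bohr magnetons (BM)."""
    return math.sqrt(n_unpaired * (n_unpaired + 2))

# d1 Ti(III) is necessarily paramagnetic; d2 Ti(II) forms some complexes with
# two unpaired electrons and others with none, as noted in the text above.
for label, n in [("Ti(III), d1", 1), ("Ti(II), two unpaired", 2), ("Ti(II), spin-paired", 0)]:
    print(f"{label}: {spin_only_moment(n):.2f} BM")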
Typical labile metal complexes either have low-charge (Na+), electrons in d-orbitals that are antibonding with respect to the ligands (Zn2+), or lack covalency (Ln3+, where Ln is any lanthanide). The lability of a metal complex also depends on the high-spin vs. low-spin configurations when such is possible. Thus, high-spin Fe(II) and Co(III) form labile complexes, whereas low-spin analogues are inert. Cr(III) can exist only in the low-spin state (quartet), which is inert because of its high formal oxidation state, absence of electrons in orbitals that are M–L antibonding, plus some "ligand field stabilization" associated with the d3 configuration. Associative processes Complexes that have unfilled or half-filled orbitals often show the capability to react with substrates. Most substrates have a singlet ground-state; that is, they have lone electron pairs (e.g., water, amines, ethers), so these substrates need an empty orbital to be able to react with a metal centre. Some substrates (e.g., molecular oxygen) have a triplet ground state, which results that metals with half-filled orbitals have a tendency to react with such substrates (it must be said that the dioxygen molecule also has lone pairs, so it is also capable to react as a 'normal' Lewis base). If the ligands around the metal are carefully chosen, the metal can aid in (stoichiometric or catalytic) transformations of molecules or be used as a sensor. Classification Metal complexes, also known as coordination compounds, include virtually all metal compounds. The study of "coordination chemistry" is the study of "inorganic chemistry" of all alkali and alkaline earth metals, transition metals, lanthanides, actinides, and metalloids. Thus, coordination chemistry is the chemistry of the majority of the periodic table. Metals and metal ions exist, in the condensed phases at least, only surrounded by ligands. The areas of coordination chemistry can be classified according to the nature of the ligands, in broad terms: Classical (or "Werner Complexes"): Ligands in classical coordination chemistry bind to metals, almost exclusively, via their lone pairs of electrons residing on the main-group atoms of the ligand. Typical ligands are H2O, NH3, Cl−, CN−, en. Some of the simplest members of such complexes are described in metal aquo complexes, metal ammine complexes, Examples: [Co(EDTA)]−, [Co(NH3)6]3+, [Fe(C2O4)3]3- Organometallic Chemistry: Ligands are organic (alkenes, alkynes, alkyls) as well as "organic-like" ligands such as phosphines, hydride, and CO. Example: (C5H5)Fe(CO)2CH3 Bioinorganic Chemistry: Ligands are those provided by nature, especially including the side chains of amino acids, and many cofactors such as porphyrins. Example: hemoglobin contains heme, a porphyrin complex of iron Example: chlorophyll contains a porphyrin complex of magnesium Many natural ligands are "classical" especially including water. Cluster Chemistry: Ligands include all of the above as well as other metal ions or atoms as well. Example Ru3(CO)12 In some cases there are combinations of different fields: Example: [Fe4S4(Scysteinyl)4]2−, in which a cluster is embedded in a biologically active species. Mineralogy, materials science, and solid state chemistry – as they apply to metal ions – are subsets of coordination chemistry in the sense that the metals are surrounded by ligands. In many cases these ligands are oxides or sulfides, but the metals are coordinated nonetheless, and the principles and guidelines discussed below apply. 
In hydrates, at least some of the ligands are water molecules. It is true that the focus of mineralogy, materials science, and solid state chemistry differs from the usual focus of coordination or inorganic chemistry. The former are concerned primarily with polymeric structures, properties arising from a collective effects of many highly interconnected metals. In contrast, coordination chemistry focuses on reactivity and properties of complexes containing individual metal atoms or small ensembles of metal atoms. Nomenclature of coordination complexes The basic procedure for naming a complex is: When naming a complex ion, the ligands are named before the metal ion. The ligands' names are given in alphabetical order. Numerical prefixes do not affect the order. Multiple occurring monodentate ligands receive a prefix according to the number of occurrences: di-, tri-, tetra-, penta-, or hexa-. Multiple occurring polydentate ligands (e.g., ethylenediamine, oxalate) receive bis-, tris-, tetrakis-, etc. Anions end in o. This replaces the final 'e' when the anion ends with '-ide', '-ate' or '-ite', e.g. chloride becomes chlorido and sulfate becomes sulfato. Formerly, '-ide' was changed to '-o' (e.g. chloro and cyano), but this rule has been modified in the 2005 IUPAC recommendations and the correct forms for these ligands are now chlorido and cyanido. Neutral ligands are given their usual name, with some exceptions: NH3 becomes ammine; H2O becomes aqua or aquo; CO becomes carbonyl; NO becomes nitrosyl. Write the name of the central atom/ion. If the complex is an anion, the central atom's name will end in -ate, and its Latin name will be used if available (except for mercury). The oxidation state of the central atom is to be specified (when it is one of several possible, or zero), and should be written as a Roman numeral (or 0) enclosed in parentheses. Name of the cation should be preceded by the name of anion. (if applicable, as in last example) Examples: [Cd(CN)2(en)2] → dicyanidobis(ethylenediamine)cadmium(II) [CoCl(NH3)5]SO4 → pentaamminechloridocobalt(III) sulfate [Cu(H2O)6] 2+ → hexaaquacopper(II) ion [CuCl5NH3]3− → amminepentachloridocuprate(II) ion K4[Fe(CN)6] → potassium hexacyanidoferrate(II) [NiCl4]2− → tetrachloridonickelate(II) ion (The use of chloro- was removed from IUPAC naming convention) The coordination number of ligands attached to more than one metal (bridging ligands) is indicated by a subscript to the Greek symbol μ placed before the ligand name. Thus the dimer of aluminium trichloride is described by Al2Cl4(μ2-Cl)2. Any anionic group can be electronically stabilized by any cation. An anionic complex can be stabilised by a hydrogen cation, becoming an acidic complex which can dissociate to release the cationic hydrogen. This kind of complex compound has a name with "ic" added after the central metal. For example, H2[Pt(CN)4] has the name tetracyanoplatinic (II) acid. Stability constant The affinity of metal ions for ligands is described by a stability constant, also called the formation constant, and is represented by the symbol Kf. It is the equilibrium constant for
in 1976. Dozens of companies were introducing game systems that year after Atari's successful Pong console. Nearly all of these new games were based on General Instrument's "Pong-on-a-chip". General Instrument had underestimated demand, and there were severe shortages. Coleco had been one of the first to place an order, and was one of the few companies to receive an order in full. Though dedicated game consoles did not last long on the market, their early order enabled Coleco to break even. Coleco continued to do well in electronics. The company transitioned into handheld electronic games, a market popularized by Mattel. An early success was Electronic Quarterback. Coleco produced two popular lines of games, the "head to head" series of two player sports games, (Football, Baseball, Basketball, Soccer, Hockey) and the Mini-Arcade series of licensed video arcade titles such as Donkey Kong and Ms. Pac-Man. A third line of educational handhelds was also produced and included the Electronic Learning Machine, Lil Genius, Digits, and a trivia game called Quiz Wiz. Launched in 1982, their first four tabletop Mini-Arcades, for Pac-Man, Galaxian, Donkey Kong, and Frogger, sold approximately three million units within a year. Among these, 1.5 million units were sold for Pac-Man alone. In 1983, it released three more Mini-Arcades: for Ms. Pac-Man, Donkey Kong Junior, and Zaxxon. Coleco returned to the video game console market in 1982 with the launch of the ColecoVision. The system was quite popular, and came bundled with a copy of Donkey Kong. The console sold 560,000 units in 1982. Coleco also hedged its bet on video games by introducing a line of ROM cartridges for the Atari 2600 and Intellivision, selling six million cartridges for both systems, along with two million sold for the ColecoVision for a total of eight million cartridges sold in 1982. It also introduced the Coleco Gemini, a clone of the popular Atari 2600, which came bundled with a copy of Donkey Kong. When the video game business began to implode in 1983, it seemed clear that video game consoles were being supplanted by home computers. Bob Greenberg, son of Leonard Greenberg and nephew of Arnold Greenberg, left Microsoft where he had been working as a program developer at the time to assist in Coleco's entry into this market. Coleco's strategy was to introduce the Coleco Adam home computer, both as a stand-alone system and as an expansion module to the ColecoVision. The effort failed, in part because Adams were often unreliable due to being released with fatal bugs, and in part because the computer's release coincided with the home computer industry crashing. Coleco withdrew from electronics early in 1985. In 1983, Coleco released the Cabbage Patch Kids series of dolls which were wildly successful. In the same year, Dr. Seuss signed a deal with Coleco to design a line of toys, including home video games based on his characters. Flush with success, Coleco purchased Leisure Dynamics (manufacturer of Aggravation and Perfection) and beleaguered Selchow and Righter, manufacturers of Scrabble, Parcheesi, and Trivial Pursuit, in 1986. Sales of Selchow & Righter games had plummeted, leaving them with warehouses full of unsold games. The purchase price for Selchow & Righter was $75 million. That same year, Coleco introduced an ALF plush-based on the furry alien character who had his own television series at the time, as well as a talking version and a cassette-playing "Storytelling ALF" doll. The
the war the company was larger and had branched out into new and used shoe machinery, hat cleaning equipment and even marble shoeshine stands. By the early 1950s, and thanks to Maurice Greenberg's son, Leonard Greenberg, the company had diversified further and was making leather lacing and leathercraft kits. In 1954, at the New York Toy Fair, the leather moccasin kit was selected as a Child Guidance Prestige Toy, and Connecticut Leather Company decided to go wholeheartedly into the toy business. In 1956, Leonard read of an emerging technology, the vacuum forming of plastic, which led the company to become very successful, producing an enormous array of different plastic toys and wading pools. In 1961, the leather and shoe findings portion of the business was sold, and Connecticut Leather Company became Coleco Industries, Inc. On January 9, 1962 Coleco went public, offering stock at $5.00 a share. In 1963, the company acquired the Kestral Corporation of Springfield, Massachusetts, a manufacturer of inflatable vinyl pools and toys. This led to Coleco becoming the largest manufacturer of above-ground swimming pools in the world. By 1966, the company had grown so Leonard persuaded his brother Arnold Greenberg to join the company. Further acquisitions added to the company's growth, namely Playtime Products (1966) and Eagle Toys of Canada (1968). By the end of the 1960s, Coleco ran ten manufacturing facilities and had a new corporate headquarters in Hartford, Connecticut. The 1970s were difficult for Coleco. Despite this, sales exceeded $100 million. When Coleco became listed on the New York Stock Exchange in 1971 sales had grown to $48.6 million. In 1972 Coleco entered the snowmobile market through acquisition. Poor snowfall and market conditions led to disappointing sales and profits. Under CEO Arnold Greenberg, the company entered the video game console business with the Telstar in 1976.
system to play Atari 2600 cartridges. A later module converts ColecoVision into the Coleco Adam home computer. ColecoVision was discontinued in 1985 when Coleco withdrew from the video game market. History Coleco entered the video game market in 1976 during the dedicated-game home console period with their line of Telstar consoles. When that market became oversaturated over the next few years, the company nearly went bankrupt, but found a successful product through handheld electronic games, with products that beat out those of the current market leader, Mattel. The company also developed a line of miniaturized tabletop arcade video games with licensed rights from arcade game makers including Sega, Bally, Midway, and Nintendo. Coleco was able to survive on sales of their electronic games through to 1982, but that market itself began to wane, and Greenberg was still interested in producing a home video game console. According to Eric Bromley, who led the engineering for the ColecoVision, Coleco president Arnold Greenberg had wanted to get into the programmable home console market with arcade-quality games but the cost of components had been a limiting factor. As early as 1979, Bromley had drawn out specifications for a system using a Texas Instruments video and a General Instruments audio chip but could not get the go-ahead due to cost of RAM. Around 1981, Bromley saw an article in The Wall Street Journal that asserted the price of RAM had fallen and, after working the cost numbers, Bromley found the system cost fell within their cost margins. Within ten minutes of reporting this to Greenberg, they had established the working name "ColecoVision" for the console as they began a more thorough design, which the marketing department never was able to surpass. Coleco recognized that licensed conversion of arcade conversions had worked for Atari in selling the Atari VCS, so they had approached Nintendo around 1981 for potential access to their arcade titles. Bromley described a tense set of meetings with Nintendo's president Hiroshi Yamauchi under typical Japanese customs where he sought to negotiate for game rights, though Yamauchi only offered seemingly obscure titles. After a meal with Yamauchi during one day, Bromley excused himself to the restroom and happened upon one of the first Donkey Kong cabinets which had yet to be released to Western countries. Knowing this game would likely be a hit, Bromley arranged a meeting the following day with Yamauchi and requested the exclusive rights to Donkey Kong; Yamauchi offered them if only they could provide upfront by that day and gave them per unit sold. Greenberg agreed, though as in Japanese custom, Bromley did not have a formal contract from Nintendo on his return. By the time of that year's Consumer Electronics Show, which Yamauchi was attending, Bromley found out from Yamauchi's daughter and translator that he had apparently given the rights to Atari. With Yamauchi's daughter's help,
Bromley was able to commit Yamauchi to sign a formal contract to affirm the rights to Coleco. Coleco's announcement that they would bundle Donkey Kong with the console was initially met with surprise and skepticism, with journalists and retailers questioning why they would give away their most anticipated home video game with the console. The ColecoVision was released in August 1982. By Christmas 1982, Coleco had sold more than 500,000 units, in part on the strength of Donkey Kong as the bundled game. ColecoVision's main competitor was the less commercially successful Atari 5200. Sales quickly passed 1 million in early 1983. The ColecoVision was distributed by CBS Electronics outside of North America and was branded the CBS ColecoVision.
In Europe the console was released in July 1983, nearly one year after the North American release. By the beginning of 1984, quarterly sales of the ColecoVision had dramatically decreased. In January 1985, Coleco discontinued the Adam, which was a home computer expansion for ColecoVision. By mid-1985, Coleco planned to withdraw from the video game market, and the ColecoVision was officially discontinued by October. Total sales are uncertain but were ultimately in excess of 2 million consoles, with the console continuing to sell modestly up until its discontinuation. The video game crash of 1983 has been cited as the
Starting with Coleco Telstar Pong clone based video game console on General Instrument's AY-3-8500 chip in 1976, there were 14 consoles released in the Coleco Telstar series. About one million units of the first model called Coleco Telstar were sold. The large product lineup and the impending fading out of the
of conventional warfare is to weaken or destroy the opponent's military, which negates its ability to engage in conventional warfare. In forcing capitulation, however, one or both sides may eventually resort to unconventional warfare tactics. History Formation of the state The state was first advocated by Plato, then found more acceptance in the consolidation of power under the Roman Catholic Church. European monarchs then gained power as the Catholic Church was stripped of temporal power and was replaced by the divine right of kings. In 1648, the powers of Europe signed the Treaty of Westphalia which ended the religious violence for purely political governance and outlook, signifying the birth of the modern 'state'. Within this statist paradigm, only the state and its appointed representatives were allowed to bear arms and enter into war. In fact, war was only understood as a conflict between sovereign states. Kings strengthened this idea and gave it the force of law. Whereas previously any noble could start a war, the monarchs of Europe of necessity consolidated military power in response to the Napoleonic war. The Clausewitzian paradigm Prussia was one country attempting to amass military power. Carl von Clausewitz, one of Prussia's officers, wrote On War, a work rooted solely in the world of the state. All other forms of intrastate conflict, such as rebellion, are not accounted for because in theoretical terms, Clausewitz could not account for warfare before the state. However, near the end of his life, Clausewitz grew increasingly aware
criminal activities and stripped of legitimacy. This war paradigm reflected the view of most of the modernized world at the beginning of the 21st century, as verified by examination of the conventional armies of the time: large, high maintenance, technologically advanced armies designed to compete against similarly designed forces. Clausewitz also put forward the issue of casus belli. While previous wars were fought for social, religious, or even cultural reasons, Clausewitz taught that war is merely "a continuation of politics by other means." It is a rational calculation in which states fight for their interests (whether they are economic, security-related, or otherwise) once normal discourse has broken down. Prevalence The majority of modern wars have been conducted by conventional means. Confirmed use of biological warfare by a nation state has not occurred since 1945, and chemical warfare has been used only a few times (the latest known confrontation in which it was utilized being the Syrian Civil War). Nuclear warfare has occurred only once, with the United States bombing the Japanese cities of Hiroshima and Nagasaki in August 1945. Decline The state and Clausewitzian principles peaked in the World Wars of the 20th century, but also laid the groundwork for their dilapidation due to nuclear proliferation and the manifestation of culturally aligned conflict. The nuclear bomb was the result of the state perfecting its quest to overthrow its competitive duplicates. This development seems to have pushed conventional conflict waged by the state to the sidelines. Were two conventional armies to fight, the loser would have redress in its nuclear arsenal. Thus, no two nuclear powers have yet fought a conventional war directly, with the exception of two brief skirmishes: between China and the Soviet Union in the 1969 Sino-Soviet conflict, and between India and Pakistan in the 1999 Kargil War. Despite a general decline in combat deaths worldwide, some conventional wars have been fought since 1945 between countries without nuclear weapons, such as the Iran-Iraq War, or between a nuclear state and a weaker non-nuclear state, like the Gulf War and the Russo-Ukrainian War. Replacement With the invention of nuclear weapons, the concept of full-scale war carries the prospect of global annihilation, and as such conflicts since WWII have by definition been "low intensity" conflicts, typically in the form of proxy wars
to his cause, despite neglect by his faction and harassment by its enemies, started the use of the term. Chauvinism has extended from its original use to include fanatical devotion and undue partiality to any group or cause to which one belongs, especially when such partisanship includes prejudice against or hostility toward outsiders or rival groups and persists even in the face of overwhelming opposition. This French quality finds its parallel in the English-language term jingoism, which has retained the meaning of chauvinism strictly in its original sense; that is, an attitude of belligerent nationalism. In 1945, political theorist Hannah Arendt described the concept thus: In this sense, chauvinism is irrational, in that no nation or ethnic group can claim to be inherently superior to another. Male chauvinism Male chauvinism is the belief that men are superior to women. The first documented use of the phrase "male chauvinism" is in the 1935 Clifford Odets play Till the Day I Die. In the workplace The balance of the workforce changed during World War II. As men left their positions to enlist in the military and fight in the war, women started replacing them. After the war ended, men returned home to find jobs in the workplace now occupied by women, which "threatened the self-esteem many men derive from their dominance over women in the family, the economy, and society at large." Consequently, male chauvinism was on the rise, according to Cynthia B. Lloyd.
It is of interest in synthetic biology and is also a common subject in science fiction. The element silicon has been much discussed as a hypothetical alternative to carbon. Silicon is in the same group as carbon on the periodic table and, like carbon, it is tetravalent. Hypothetical alternatives to water include ammonia, which, like water, is a polar molecule, and cosmically abundant; and non-polar hydrocarbon solvents such as methane and ethane, which are known to exist in liquid form on the surface of Titan. Overview Shadow biosphere A shadow biosphere is a hypothetical microbial biosphere of Earth that uses radically different biochemical and molecular processes than currently known life. Although life on Earth is relatively well-studied, the shadow biosphere may still remain unnoticed because the exploration of the microbial world targets primarily the biochemistry of the macro-organisms. Alternative-chirality biomolecules Perhaps the least unusual alternative biochemistry would be one with differing chirality of its biomolecules. In known Earth-based life, amino acids are almost universally of the form and sugars are of the form. Molecules using amino acids or sugars may be possible; molecules of such a chirality, however, would be incompatible with organisms using the opposing chirality molecules. Amino acids whose chirality is opposite to the norm are found on Earth, and these substances are generally thought to result from decay of organisms of normal chirality. However, physicist Paul Davies speculates that some of them might be products of "anti-chiral" life. It is questionable, however, whether such a biochemistry would be truly alien. Although it would certainly be an alternative stereochemistry, molecules that are overwhelmingly found in one enantiomer throughout the vast majority of organisms can nonetheless often be found in another enantiomer in different (often basal) organisms such as in comparisons between members of Archaea and other domains, making it an open topic whether an alternative stereochemistry is truly novel. Non-carbon-based biochemistries On Earth, all known living things have a carbon-based structure and system. Scientists have speculated about the pros and cons of using atoms other than carbon to form the molecular structures necessary for life, but no one has proposed a theory employing such atoms to form all the necessary structures. However, as Carl Sagan argued, it is very difficult to be certain whether a statement that applies to all life on Earth will turn out to apply to all life throughout the universe. Sagan used the term "carbon chauvinism" for such an assumption. He regarded silicon and germanium as conceivable alternatives to carbon (other plausible elements include but are not limited to palladium and titanium); but, on the other hand, he noted that carbon does seem more chemically versatile and is more abundant in the cosmos). Norman Horowitz devised the experiments to determine whether life might exist on Mars that were carried out by the Viking Lander of 1976, the first U.S. mission to successfully land an unmanned probe on the surface of Mars. Horowitz argued that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival on other planets. He considered that there was only a remote possibility that non-carbon life forms could exist with genetic information systems capable of self-replication and the ability to evolve and adapt. 
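To make the alternative-chirality point above concrete, the short Python sketch below uses the open-source RDKit cheminformatics toolkit (an assumption; it is not part of the standard library) to show that two mirror-image forms of alanine carry opposite CIP stereodescriptors. The SMILES strings are intended as the L- and D-forms of alanine, but the only claim relied on here is that they are mirror images of one another.

from rdkit import Chem

# Mirror-image forms of alanine (intended as the L- and D-enantiomers).
l_ala = Chem.MolFromSmiles("C[C@@H](C(=O)O)N")
d_ala = Chem.MolFromSmiles("C[C@H](C(=O)O)N")

for name, mol in [("L-alanine", l_ala), ("D-alanine", d_ala)]:
    # Each list holds (atom index, 'R' or 'S'); the two outputs show opposite descriptors.
    print(name, Chem.FindMolChiralCenters(mol))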
Silicon biochemistry The silicon atom has been much discussed as the basis for an alternative biochemical system, because silicon has many chemical properties similar to those of carbon and is in the same group of the periodic table, the carbon group. Like carbon, silicon can create molecules that are sufficiently large to carry biological information. However, silicon has several drawbacks as an alternative to carbon. Silicon, unlike carbon, lacks the ability to form chemical bonds with diverse types of atoms as is necessary for the chemical versatility required for metabolism, and yet this precise inability is what makes silicon less susceptible to bond with all sorts of impurities from which carbon, in comparison, is not shielded. Elements creating organic functional groups with carbon include hydrogen, oxygen, nitrogen, phosphorus, sulfur, and metals such as iron, magnesium, and zinc. Silicon, on the other hand, interacts with very few other types of atoms. Moreover, where it does interact with other atoms, silicon creates molecules that have been described as "monotonous compared with the combinatorial universe of organic macromolecules". This is because silicon atoms are much bigger, having a larger mass and atomic radius, and so have difficulty forming double bonds (the double-bonded carbon is part of the carbonyl group, a fundamental motif of carbon-based bio-organic chemistry). Silanes, which are chemical compounds of hydrogen and silicon that are analogous to the alkane hydrocarbons, are highly reactive with water, and long-chain silanes spontaneously decompose. Molecules incorporating polymers of alternating silicon and oxygen atoms instead of direct bonds between silicon, known collectively as silicones, are much more stable. It has been suggested that silicone-based chemicals would be more stable than equivalent hydrocarbons in a sulfuric-acid-rich environment, as is found in some extraterrestrial locations. Of the varieties of molecules identified in the interstellar medium , 84 are based on carbon, while only 8 are based on silicon. Moreover, of those 8 compounds, 4 also include carbon within them. The cosmic abundance of carbon to silicon is roughly 10 to 1. This may suggest a greater variety of complex carbon compounds throughout the cosmos, providing less of a foundation on which to build silicon-based biologies, at least under the conditions prevalent on the surface of planets. Also, even though Earth and other terrestrial planets are exceptionally silicon-rich and carbon-poor (the relative abundance of silicon to carbon in Earth's crust is roughly 925:1), terrestrial life is carbon-based. The fact that carbon is used instead of silicon may be evidence that silicon is poorly suited for biochemistry on Earth-like planets. Reasons for this may be that silicon is less versatile than carbon in forming compounds, that the compounds formed by silicon are unstable, and that it blocks the flow of heat. Even so, biogenic silica is used by some Earth life, such as the silicate skeletal structure of diatoms. According to the clay hypothesis of A. G. Cairns-Smith, silicate minerals in water played a crucial role in abiogenesis: they replicated their crystal structures, interacted with carbon compounds, and were the precursors of carbon-based life. Although not observed in nature, carbon–silicon bonds have been added to biochemistry by using directed evolution (artificial selection). 
A heme containing cytochrome c protein from Rhodothermus marinus has been engineered using directed evolution to catalyze the formation of new carbon–silicon bonds between hydrosilanes and diazo compounds. Silicon compounds may possibly be biologically useful under temperatures or pressures different from the surface of a terrestrial planet, either in conjunction with or in a role less directly analogous to carbon. Polysilanols, the silicon compounds corresponding to sugars, are soluble in liquid nitrogen, suggesting that they could play a role in very-low-temperature biochemistry. In cinematic and literary science fiction, at a moment when man-made machines cross from nonliving to living, it is often posited, this new form would be the first example of non-carbon-based life. Since the advent of the microprocessor in the late 1960s, these machines are often classed as computers (or computer-guided robots) and filed under "silicon-based life", even though the silicon backing matrix of these processors is not nearly as fundamental to their operation as carbon is for "wet life". Other exotic element-based biochemistries Boranes are dangerously explosive in Earth's atmosphere, but would be more stable in a reducing atmosphere. However, boron's low cosmic abundance makes it less likely as a base for life than carbon. Various metals, together with oxygen, can form very complex and thermally stable structures rivaling those of organic compounds; the heteropoly acids are one such family. Some metal oxides are also similar to carbon in their ability to form both nanotube structures and diamond-like crystals (such as cubic zirconia). Titanium, aluminium, magnesium, and iron are all more abundant in the Earth's crust than carbon. Metal-oxide-based life could therefore be a possibility under certain conditions, including those (such as high temperatures) at which carbon-based life would be unlikely. The Cronin group at Glasgow University reported self-assembly of tungsten polyoxometalates into cell-like spheres. By modifying their metal oxide content, the spheres can acquire holes that act as porous membrane, selectively allowing chemicals in and out of the sphere according to size. Sulfur is also able to form long-chain molecules, but suffers from the same high-reactivity problems as phosphorus and silanes. The biological use of sulfur as an alternative to carbon is purely hypothetical, especially because sulfur usually forms only linear chains rather than branched ones. (The biological use of sulfur as an electron acceptor is widespread and can be traced back 3.5 billion years on Earth, thus predating the use of molecular oxygen. Sulfur-reducing bacteria can utilize elemental sulfur instead of oxygen, reducing sulfur to hydrogen sulfide.) Arsenic as an alternative to phosphorus Arsenic, which is chemically similar to phosphorus, while poisonous for most life forms on Earth, is incorporated into the biochemistry of some organisms. Some marine algae incorporate arsenic into complex organic molecules such as arsenosugars and arsenobetaines. Fungi and bacteria can produce volatile methylated arsenic compounds. Arsenate reduction and arsenite oxidation have been observed in microbes (Chrysiogenes arsenatis). Additionally, some prokaryotes can use arsenate as a terminal electron acceptor during anaerobic growth and some can utilize arsenite as an electron donor to generate energy. 
It has been speculated that the earliest life forms on Earth may have used arsenic biochemistry in place of phosphorus in the structure of their DNA. A common objection to this scenario is that arsenate esters are so much less stable to hydrolysis than corresponding phosphate esters that arsenic is poorly suited for this function. The authors of a 2010 geomicrobiology study, supported in part by NASA, have postulated that a bacterium, named GFAJ-1, collected in the sediments of Mono Lake in eastern California, can employ such 'arsenic DNA' when cultured without phosphorus. They proposed that the bacterium may employ high levels of poly-β-hydroxybutyrate or other means to reduce the effective concentration of water and stabilize its arsenate esters. This claim was heavily criticized almost immediately after publication for the perceived lack of appropriate controls. Science writer Carl Zimmer contacted several scientists for an assessment: "I reached out to a dozen experts ... Almost unanimously, they think the NASA scientists have failed to make their case". Other authors were unable to reproduce their results and showed that the study had issues with phosphate contamination, suggesting that the low amounts present could sustain extremophile lifeforms. Alternatively, it was suggested that GFAJ-1 cells grow by recycling phosphate from degraded ribosomes, rather than by replacing it with arsenate. Non-water solvents In addition to carbon compounds, all currently known terrestrial life also requires water as a solvent. This has led to discussions about whether water is the only liquid capable of filling that role. The idea that an extraterrestrial life-form might be based on a solvent other than water has been taken seriously in recent scientific literature by the biochemist Steven Benner, and by the astrobiological committee chaired by John A. Baross. Solvents discussed by the Baross committee include ammonia, sulfuric acid, formamide, hydrocarbons, and (at temperatures much lower than Earth's) liquid nitrogen, or hydrogen in the form of a supercritical fluid. Carl Sagan once described himself as both a carbon chauvinist and a water chauvinist; however, on another occasion he said that he was a carbon chauvinist but "not that much of a water chauvinist". He speculated on hydrocarbons, hydrofluoric acid, and ammonia as possible alternatives to water. Some of the properties of water that are important for life processes include: A complexity which leads to a large number of permutations of possible reaction paths including acid–base chemistry, H+ cations, OH− anions, hydrogen bonding, van der Waals bonding, dipole–dipole and other polar interactions, aqueous solvent cages, and hydrolysis. This complexity offers a large number of pathways for evolution to produce life, many other solvents have dramatically fewer possible reactions, which severely limits evolution. Thermodynamic stability: the free energy of formation of liquid water is low enough (−237.24 kJ/mol) that water undergoes few reactions. Other solvents are highly reactive, particularly with oxygen. Water does not combust in oxygen because it is already the combustion product of hydrogen with oxygen. Most alternative solvents are not stable in an oxygen-rich atmosphere, so it is highly unlikely that those liquids could support aerobic life. A large temperature range over which it is liquid. High solubility of oxygen and carbon dioxide at room temperature supporting the evolution of aerobic aquatic plant and animal life. 
A high heat capacity (leading to higher environmental temperature stability). Water is a room-temperature liquid, leading to a large population of quantum transition states required to overcome reaction barriers. Cryogenic liquids (such as liquid methane) have exponentially lower transition-state populations, which are needed for life based on chemical reactions; this leads to chemical reaction rates that may be so slow as to preclude the development of any life based on chemical reactions. Spectroscopic transparency allowing solar radiation to penetrate several meters into the liquid (or solid), greatly aiding the evolution of aquatic life. A large heat of vaporization leading to stable lakes and oceans. The ability to dissolve a wide variety of compounds. The solid (ice) has lower density than the liquid, so ice floats on the liquid. This is why bodies of water freeze over but do not freeze solid (from the bottom up). If ice were denser than liquid water (as is true for nearly all other compounds), then large bodies of liquid would slowly freeze solid, which would not be conducive to the formation of life. Water as a compound is cosmically abundant, although much of it is in the form of vapour or ice. Subsurface liquid water is considered likely or possible on several of the outer moons: Enceladus (where geysers have been observed), Europa, Titan, and Ganymede. Earth and Titan are the only worlds currently known to have stable bodies of liquid on their surfaces. Not all properties of water are necessarily advantageous for life, however. For instance, water ice has a high albedo, meaning that it reflects a significant quantity of light and heat from the Sun. During ice ages, as reflective ice builds up over the surface of the water, the effects of global cooling are increased. There are some properties that make certain compounds and elements much more favorable than others as solvents in a successful biosphere. The solvent must be able to exist in liquid equilibrium over a range of temperatures the planetary object would normally encounter. Because boiling points vary with pressure, the question tends not to be whether the prospective solvent remains liquid, but at what pressure. For example, hydrogen cyanide has a narrow liquid-phase temperature range at 1 atmosphere, but in an atmosphere at the much higher pressure of Venus it can exist in liquid form over a wide temperature range.
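As a rough illustration of the pressure point above, the Clausius–Clapeyron relation is the standard way to estimate how a solvent's boiling point shifts with ambient pressure; the sketch below uses generic symbols rather than measured values for any particular solvent.

```latex
% Clausius–Clapeyron estimate of how the boiling point shifts with pressure.
% (P_1, T_1): a known boiling point at a reference pressure; (P_2, T_2): the new pair.
% \Delta H_{vap}: molar enthalpy of vaporization (assumed constant); R: gas constant.
\ln\frac{P_2}{P_1} = -\frac{\Delta H_{\mathrm{vap}}}{R}\left(\frac{1}{T_2}-\frac{1}{T_1}\right)
\quad\Longrightarrow\quad
T_2 = \left(\frac{1}{T_1} - \frac{R\,\ln(P_2/P_1)}{\Delta H_{\mathrm{vap}}}\right)^{-1}
```

Raising P_2 above P_1 makes the logarithm positive and therefore raises T_2, which is why a dense atmosphere such as that of Venus widens the temperature range over which a volatile solvent like hydrogen cyanide can remain liquid.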
Ammonia The ammonia molecule (NH3), like the water molecule, is abundant in the universe, being a compound of hydrogen (the simplest and most common element) with another very common element, nitrogen. The possible role of liquid ammonia as an alternative solvent for life is an idea that goes back at least to 1954, when J. B. S. Haldane raised the topic at a symposium about life's origin. Numerous chemical reactions are possible in an ammonia solution, and liquid ammonia has chemical similarities with water. Ammonia can dissolve most organic molecules at least as well as water does and, in addition, it is capable of dissolving many elemental metals. Haldane made the point that various common water-related organic compounds have ammonia-related analogs; for instance the ammonia-related amine group (−NH2) is analogous to the water-related hydroxyl group (−OH). Ammonia, like water, can either accept or donate an H+ ion. When ammonia accepts an H+, it forms the ammonium cation (NH4+), analogous to hydronium (H3O+). When it donates an H+ ion, it forms the amide anion (NH2−), analogous to the hydroxide anion (OH−). Compared to water, however, ammonia is more inclined to accept an H+ ion, and less inclined to donate one; it is a stronger nucleophile. Ammonia added to water functions as Arrhenius base: it increases the concentration of the anion hydroxide. Conversely, using a solvent system definition of acidity and basicity, water added to liquid ammonia functions as an acid, because it increases the concentration of the cation ammonium. The carbonyl group (C=O), which is much used in terrestrial biochemistry, would not be stable in ammonia solution, but the analogous imine group (C=NH) could be used instead. However, ammonia has some problems as a basis for life. The hydrogen bonds between ammonia molecules are weaker than those in water, causing ammonia's heat of vaporization to be half that of water, its surface tension to be a third, and reducing its ability to concentrate non-polar molecules through a hydrophobic effect. Gerald Feinberg and Robert Shapiro have questioned whether ammonia could hold prebiotic molecules together well enough to allow the emergence of a self-reproducing system. Ammonia is also flammable in oxygen and could not exist sustainably in an environment suitable for aerobic metabolism. A biosphere based on ammonia would likely exist at temperatures or air pressures that are extremely unusual in relation to life on Earth. Life on Earth usually exists within the melting point and boiling point of water, at a pressure designated as normal pressure, and between 0 °C (273 K) and 100 °C (373 K). When also held to normal pressure, ammonia's melting and boiling points are −78 °C (195 K) and −33 °C (240 K) respectively. Because chemical reactions generally proceed more slowly at lower temperatures, ammonia-based life existing in this set of conditions might metabolize more slowly and evolve more slowly than life on Earth. On the other hand, lower temperatures could also enable living systems to use chemical species that would be too unstable at Earth temperatures to be useful. Another set of conditions where ammonia is liquid at Earth-like temperatures would involve it being at a much higher pressure. For example, at 60 atm ammonia melts at −77 °C (196 K) and boils at 98 °C (371 K). 
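The acid–base parallels described above can be restated compactly in symbols; nothing beyond the analogy already given in the text is claimed here.

```latex
% Self-ionization: water vs. ammonia as amphoteric solvents.
2\,\mathrm{H_2O} \;\rightleftharpoons\; \mathrm{H_3O^+} + \mathrm{OH^-}
\qquad
2\,\mathrm{NH_3} \;\rightleftharpoons\; \mathrm{NH_4^+} + \mathrm{NH_2^-}
% Functional-group analogues used in the text:
%   hydroxyl  -OH     <->  amine    -NH_2
%   carbonyl   C=O    <->  imine     C=NH
%   hydronium H_3O^+  <->  ammonium  NH_4^+
%   hydroxide OH^-    <->  amide     NH_2^-
```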
Ammonia and ammonia–water mixtures remain liquid at temperatures far below the freezing point of pure water, so such biochemistries might be well suited to planets and moons orbiting outside the water-based habitability zone. Such conditions could exist, for example, under the surface of Saturn's largest moon Titan. Methane and other hydrocarbons Methane (CH4) is a simple hydrocarbon, that is, a compound of two of the most common elements in the cosmos: hydrogen and carbon. It has a cosmic abundance comparable with ammonia. Hydrocarbons could act as a solvent over a wide range of temperatures, but would lack polarity. Isaac Asimov, the biochemist and science fiction writer, suggested in 1981 that poly-lipids could form a substitute for proteins in a non-polar solvent such as methane. Lakes composed of a mixture of hydrocarbons, including methane and ethane, have been detected on the surface of Titan by the Cassini spacecraft. There is debate about the effectiveness of methane and other hydrocarbons as a solvent for life compared to water or ammonia. Water is a stronger solvent than the hydrocarbons, enabling easier transport of substances in a cell. However, water is also more chemically reactive and can break down large organic molecules through hydrolysis. A life-form whose solvent was a hydrocarbon would not face the threat of its biomolecules being destroyed in this way. Also, the water molecule's tendency to form strong hydrogen bonds can interfere with internal hydrogen bonding in complex organic molecules. Life with a hydrocarbon solvent could make more use of hydrogen bonds within its biomolecules. Moreover, the strength of hydrogen bonds within biomolecules would be appropriate to a low-temperature biochemistry. Astrobiologist Chris McKay has argued, on thermodynamic grounds, that if life does exist on Titan's surface, using hydrocarbons as a solvent, it is likely also to use the more complex hydrocarbons as an energy source by reacting them with hydrogen, reducing ethane and acetylene to methane. Possible evidence for this form of life on Titan was identified in 2010 by Darrell Strobel of Johns Hopkins University: a greater abundance of molecular hydrogen in the upper atmospheric layers of Titan compared to the lower layers, arguing for a downward diffusion at a rate of roughly 10²⁵ molecules per second and a disappearance of hydrogen near Titan's surface. As Strobel noted, his findings were in line with the effects Chris McKay had predicted if methanogenic life-forms were present. The same year, another study showed low levels of acetylene on Titan's surface, which were interpreted by Chris McKay as consistent with the hypothesis of organisms reducing acetylene to methane. While restating the biological hypothesis, McKay cautioned that other explanations for the hydrogen and acetylene findings are to be considered more likely: the possibilities of yet unidentified physical or chemical processes (e.g. a non-living surface catalyst enabling acetylene to react with hydrogen), or flaws in the current models of material flow. He noted that even a non-biological catalyst effective at 95 K would in itself be a startling discovery. Azotosome A hypothetical cell membrane termed an azotosome, capable of functioning in liquid methane in Titan conditions, was computer-modeled in an article published in February 2015.
Composed of acrylonitrile, a small molecule containing carbon, hydrogen, and nitrogen, it is predicted to have stability and flexibility in liquid methane comparable to that of a phospholipid bilayer (the type of cell membrane possessed by all life on Earth) in liquid water. An analysis of data obtained using the Atacama Large Millimeter/submillimeter Array (ALMA) subsequently confirmed the presence of substantial quantities of acrylonitrile in Titan's atmosphere.
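Two pieces of the chemistry above can be written out explicitly: the structural formula of acrylonitrile, and the net hydrogenation reactions usually quoted for McKay's proposed acetylene- and ethane-consuming metabolism (stoichiometry only; no rate or free-energy values are claimed here).

```latex
% Acrylonitrile (vinyl cyanide), the membrane candidate modeled for the azotosome:
\mathrm{CH_2\!=\!CH\!-\!C\!\equiv\!N} \qquad (\mathrm{C_3H_3N})

% Net energy-yielding reactions proposed for hypothetical Titan methanogens,
% consuming atmospheric hydrogen together with acetylene or ethane:
\mathrm{C_2H_2} + 3\,\mathrm{H_2} \;\longrightarrow\; 2\,\mathrm{CH_4}
\qquad
\mathrm{C_2H_6} + \mathrm{H_2} \;\longrightarrow\; 2\,\mathrm{CH_4}
```

Both reactions consume hydrogen and either acetylene or ethane near the surface, which is the pattern against which the reported downward hydrogen flux and the low surface acetylene were read.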
for the basic patterns of life and culture." "Creation myths tell us how things began. All cultures have creation myths; they are our primary myths, the first stage in what might be called the psychic life of the species. As cultures, we identify ourselves through the collective dreams we call creation myths, or cosmogonies. … Creation myths explain in metaphorical terms our sense of who we are in the context of the world, and in so doing they reveal our real priorities, as well as our real prejudices. Our images of creation say a great deal about who we are." A "philosophical and theological elaboration of the primal myth of creation within a religious community. The term myth here refers to the imaginative expression in narrative form of what is experienced or apprehended as basic reality … The term creation refers to the beginning of things, whether by the will and act of a transcendent being, by emanation from some ultimate source, or in any other way." Religion professor Mircea Eliade defined the word myth in terms of creation: Myth narrates a sacred history; it relates an event that took place in primordial Time, the fabled time of the "beginnings." In other words, myth tells how, through the deeds of Supernatural Beings, a reality came into existence, be it the whole of reality, the Cosmos, or only a fragment of reality – an island, a species of plant, a particular kind of human behavior, an institution. Meaning and function All creation myths are in one sense etiological because they attempt to explain how the world formed and where humanity came from. Myths attempt to explain the unknown and sometimes teach a lesson. Ethnologists and anthropologists who study origin myths say that in the modern context theologians try to discern humanity's meaning from revealed truths and scientists investigate cosmology with the tools of empiricism and rationality, but creation myths define human reality in very different terms. In the past, historians of religion and other students of myth thought of such stories as forms of primitive or early-stage science or religion and analyzed them in a literal or logical sense. Today, however, they are seen as symbolic narratives which must be understood in terms of their own cultural context. Charles Long writes: "The beings referred to in the myth – gods, animals, plants – are forms of power grasped existentially. The myths should not be understood as attempts to work out a rational explanation of deity." While creation myths are not literal explications, they do serve to define an orientation of humanity in the world in terms of a birth story. They provide the basis of a worldview that reaffirms and guides how people relate to the natural world, to any assumed spiritual world, and to each other. A creation myth acts as a cornerstone for distinguishing primary reality from relative reality, the origin and nature of being from non-being. In this sense cosmogonic myths serve as a philosophy of life – but one expressed and conveyed through symbol rather than through systematic reason. And in this sense they go beyond etiological myths (which explain specific features in religious rites, natural phenomena or cultural life). Creation myths also help to orient human beings in the world, giving them a sense of their place in the world and the regard that they must have for humans and nature. Historian David Christian has summarised issues common to multiple creation myths: Each beginning seems to presuppose an earlier beginning. ... 
Instead of meeting a single starting point, we encounter an infinity of them, each of which poses the same problem. ... There are no entirely satisfactory solutions to this dilemma. What we have to find is not a solution but some way of dealing with the mystery .... And we have to do so using words. The words we reach for, from God to gravity, are inadequate to the task. So we have to use language poetically or symbolically; and such language, whether used by a scientist, a poet, or a shaman, can easily be misunderstood. Classification Mythologists have applied various schemes to classify creation myths found throughout human cultures. Eliade and his colleague Charles Long developed a classification based on some common motifs that reappear in stories the world over. The classification identifies five basic types: Creation ex nihilo in which the creation is through the thought, word, dream or bodily secretions of a divine being. Earth diver creation in which a diver, usually a bird or amphibian sent by a creator, plunges to the seabed through a primordial ocean to bring up sand or mud which develops into a terrestrial world. Emergence myths in which progenitors pass through a series of worlds and metamorphoses until reaching the present world. Creation by the dismemberment of a primordial being. Creation by the splitting or ordering of a primordial unity such as the cracking of a cosmic egg or a bringing order from chaos. Marta Weigle further developed and refined this typology to highlight nine themes, adding elements such as deus faber, a creation crafted by a deity, creation from the work of two creators working together or against each other, creation from sacrifice and creation from division/conjugation, accretion/conjunction, or secretion. An alternative system based on six recurring narrative themes was designed by Raymond Van Over: Primeval abyss, an infinite expanse of waters or space. Originator deity which is awakened or an eternal entity within the abyss. Originator deity poised above the abyss. Cosmic egg or embryo. Originator deity creating life through sound or word. Life generating from the corpse or dismembered parts of an originator deity. Ex nihilo The myth that God created the world out of nothing – ex nihilo – is central today to Judaism, Christianity and Islam, and the medieval Jewish philosopher Maimonides felt it was the only concept that the three religions shared. Nonetheless, the concept is not found in the entire Hebrew Bible. The authors of Genesis 1 were concerned not with the origins of matter (the material which God formed into the habitable cosmos), but with assigning roles so that the Cosmos should function. In the early 2nd century CE, early Christian scholars were beginning to see a tension between the idea of world-formation and the omnipotence of God, and by the beginning of the 3rd century creation ex nihilo had become a fundamental tenet of Christian theology. Ex nihilo creation is found in creation stories from ancient Egypt, the Rig Veda, and many animistic cultures in Africa, Asia, Oceania and North America. In most of these stories, the world is brought into being by the speech, dream, breath, or pure thought of a creator but creation ex nihilo may also take place through a creator's bodily secretions. The literal translation of the phrase ex nihilo is "from nothing" but in many creation myths the line is blurred whether the creative act would be better classified as a creation ex nihilo or creation from chaos. 
In ex nihilo creation myths, the potential and the substance of creation springs from within the creator. Such a creator may or may not be existing in physical surroundings such as darkness or water, but does not create the world from them, whereas in creation from chaos the substance used for creation is pre-existing within the unformed void. Creation from chaos In creation from chaos myths, initially there is nothing but a formless, shapeless expanse. In these stories the word "chaos" means "disorder", and this formless expanse, which is also sometimes called a void or an abyss, contains the material with which the created world will be made. Chaos may be described as having the consistency of vapor or water, dimensionless, and sometimes salty or muddy. These myths associate chaos with evil and oblivion, in contrast to "order" (cosmos) which is the good. The act of creation is the bringing of order from disorder, and in many of these cultures it is believed that at some point the forces preserving order and form will weaken and the world will once again be engulfed into the abyss. One example is the Genesis creation narrative from the first chapter of the Book of Genesis. World parent There are two types of world parent myths, both describing a separation or splitting of a primeval entity, the world parent or parents. One form describes the primeval state as an eternal union of two parents, and the creation takes place when the two are pulled apart. The two parents are commonly identified as Sky (usually male) and Earth (usually female), who in the primeval state were so tightly bound to each other that no offspring could emerge. These myths often depict creation as the result of a sexual union and serve as genealogical record of the deities born from it. In the second form of world parent myths, creation itself springs from dismembered parts of the body of the primeval being. Often, in these
the homes of the laity, spreading down from the top of society as these became cheap enough for the average person to afford. Most towns had a large crucifix erected as a monument, or some other shrine at the crossroads of the town. Building on the ancient custom, many Catholics, Lutherans and Anglicans hang a crucifix inside their homes and also use the crucifix as a focal point of a home altar. The wealthy erected proprietary chapels as they could afford to do this. Catholic (both Eastern and Western), Eastern Orthodox, Oriental Orthodox, Moravian, Anglican and Lutheran Christians generally use the crucifix in public religious services. They believe use of the crucifix is in keeping with the statement by Saint Paul in Scripture, "we preach Christ crucified, a stumbling block to Jews and folly to Gentiles, but to those who are called, both Jews and Greeks, Christ the power of God and the wisdom of God". In the West altar crosses and processional crosses began to be crucifixes in the 11th century, which became general around the 14th century, as they became cheaper. The Roman Rite requires that "either on the altar or near it, there is to be a cross, with the figure of Christ crucified upon it, a cross clearly visible to the assembled people. It is desirable that such a cross should remain near the altar even outside of liturgical celebrations, so as to call to mind for the faithful the saving Passion of the Lord." The requirement of the altar cross was also mentioned in pre-1970 editions of the Roman Missal, though not in the original 1570 Roman Missal of Pope Pius V. The Rite of Funerals says that the Gospel Book, the Bible, or a cross (which will generally be in crucifix form) may be placed on the coffin for a Requiem Mass, but a second standing cross is not to be placed near the coffin if the altar cross can be easily seen from the body of the church. Eastern Christian liturgical processions called crucessions include a cross or crucifix at their head. In the Eastern Orthodox Church, the crucifix is often placed above the iconostasis in the church. In the Russian Orthodox Church a large crucifix ("Golgotha") is placed behind the Holy Table (altar). During Matins of Good Friday, a large crucifix is taken in procession to the centre of the church, where it is venerated by the faithful. Sometimes the soma (corpus) is removable and is taken off the crucifix at Vespers that evening during the Gospel lesson describing the Descent from the Cross. The empty cross may then remain in the centre of the church until the Paschal vigil (local practices vary). The blessing cross which the priest uses to bless the faithful at the dismissal will often have the crucifix on one side and an icon of the Resurrection of Jesus on the other, the side with the Resurrection being used on Sundays and during Paschaltide, and the crucifix on other days. Exorcist Gabriele Amorth has stated that the crucifix is one of the most effective means of averting or opposing demons. In folklore, it is believed to ward off vampires, incubi, succubi, and other evils. Modern iconoclasts have used an inverted (upside-down) crucifix when showing disdain for Jesus Christ or the Catholic Church which believes in his divinity. According to Christian tradition, Saint Peter was martyred by being crucified upside-down. Controversies Protestant Reformation In the Moravian Church, Nicolaus Zinzendorf had an experience in which he believed he encountered Jesus. 
Seeing a painting of a crucifix, Zinzendorf fell to his knees, vowing to glorify Jesus after contemplating the wounds of Christ and an inscription that read, "This is what I have done for you, what will you do for me?" The Lutheran Churches retained the use of the crucifix, "justifying their continued use of medieval crucifixes with the same arguments employed since the Middle Ages, as is evident from the example of the altar of the Holy Cross in the Cistercian church of Doberan." Martin Luther did not object to them, and this was among his differences with Andreas Karlstadt as early as 1525. At the time of the Reformation, Luther retained the crucifix in the Lutheran Church and they remain
cross, as distinct from a bare cross. The representation of Jesus himself on the cross is referred to in English as the corpus (Latin for "body"). The crucifix is a principal symbol for many groups of Christians, and one of the most common forms of the Crucifixion in the arts. It is especially important in the Latin Rite of the Roman Catholic Church, but is also used in the Eastern Orthodox Church, most Oriental Orthodox Churches (except the Armenian & Syriac Church), and the Eastern Catholic Churches, as well as by the Lutheran, Moravian and Anglican Churches. The symbol is less common in churches of other Protestant denominations, and in the Assyrian Church of the East and Armenian Apostolic Church, which prefer to use a cross without the figure of Jesus (the corpus). The crucifix emphasizes Jesus' sacrifice—his death by crucifixion, which Christians believe brought about the redemption of humankind. Most crucifixes portray Jesus on a Latin cross, rather than any other shape, such as a Tau cross or a Coptic cross. Western crucifixes usually have a three-dimensional corpus, but in Eastern Orthodoxy Jesus' body is normally painted on the cross, or in low relief. Strictly speaking, to be a crucifix, the cross must be three-dimensional, but this distinction is not always observed. An entire painting of the Crucifixion of Jesus including a landscape background and other figures is not a crucifix either. Large crucifixes high across the central axis of a church are known by the Old English term rood. By the late Middle Ages these were a near-universal feature of Western churches, but are now very rare. Modern Roman Catholic churches and many Lutheran churches often have a crucifix above the altar on the wall; for the celebration of Mass, the Roman Rite of the Catholic Church requires that "on or close to the altar there is to be a cross with a figure of Christ crucified". Description The standard, four-pointed Latin crucifix consists of an upright post or stipes and a single crosspiece to which the sufferer's arms were nailed. There may also be a short projecting nameplate, showing the letters INRI (Greek: INBI). The Russian Orthodox crucifix usually has an additional third crossbar, to which the feet are nailed, and which is angled upward toward the penitent thief Saint Dismas (to the viewer's left) and downward toward the impenitent thief Gestas (to the viewer's right). The corpus of Eastern crucifixes is normally a two-dimensional or low relief icon that shows Jesus as already dead, his face peaceful and somber. They are rarely three-dimensional figures as in the Western tradition, although these may be found where Western influences are strong, but are more typically icons painted on a piece of wood shaped to include the double-barred cross and perhaps the edge of Christ's hips and halo, and no background. More sculptural small crucifixes in metal relief are also used in Orthodoxy (see gallery examples), including as pectoral crosses and blessing crosses. Western crucifixes may show Christ dead or alive, the presence of the spear wound in his ribs traditionally indicating that he is dead. In either case his face very often shows his suffering. In the Eastern Orthodox tradition he has normally been shown as dead since around the end of the period of Byzantine Iconoclasm. Eastern crucifixes have Jesus' two feet nailed side by side, rather than crossed one above the other, as Western crucifixes have shown them since around the 13th century. 
The crown of thorns is also generally absent in Eastern crucifixes, since the emphasis is not on Christ's suffering, but on his triumph over sin and death. The "S"-shaped position of Jesus' body on the cross is a Byzantine innovation of the late 10th century, though also found in the German Gero Cross of the same date. Probably more from Byzantine influence, it spread elsewhere in the West, especially to Italy, by the Romanesque period, though it was more usual in painting than in sculpted crucifixes. It was in Italy that the emphasis was placed on Jesus' suffering and realistic details, during a process of general humanization of Christ favored by the Franciscan order. During the 13th century the suffering Italian model (Christus patiens) triumphed over the traditional Byzantine one (Christus gloriosus) throughout Europe, due in part to the works of artists such as Giunta Pisano and Cimabue. Since the Renaissance the "S"-shape has generally been much less pronounced. Eastern
Earth or in the Solar System, are not privileged observers of the universe. Named for Copernican heliocentrism, it is a working assumption that arises from a modified cosmological extension of Copernicus's argument of a moving Earth. Origin and implications Hermann Bondi named the principle after Copernicus in the mid-20th century, although the principle itself dates back to the 16th-17th century paradigm shift away from the Ptolemaic system, which placed Earth at the center of the universe. Copernicus proposed that the motion of the planets could be explained by reference to an assumption that the Sun is centrally located and stationary in contrast to the then currently upheld belief that the Earth was central. He argued that the apparent retrograde motion of the planets is an illusion caused by Earth's movement around the Sun, which the Copernican model placed at the centre of the universe. Copernicus himself was mainly motivated by technical dissatisfaction with the earlier system and not by support for any mediocrity principle. In fact, although the Copernican heliocentric model is often described as "demoting" Earth from its central role it had in the Ptolemaic geocentric model, it was successors to Copernicus, notably the 16th century Giordano Bruno, who adopted this new perspective. The Earth's central position had been interpreted as being in the "lowest and filthiest parts". Instead, as Galileo said, the Earth is part of the "dance of the stars" rather than the "sump where the universe's filth and ephemera collect". In the late 20th Century, Carl Sagan asked, "Who are we? We find that we live on an insignificant planet of a humdrum star lost in a galaxy tucked away in some forgotten corner of a universe in which there are far more galaxies than people." In cosmology, if one assumes the Copernican principle and observes that the universe appears isotropic or the same in all directions from the vantage point of Earth, then one can infer that the universe is generally homogeneous or the same everywhere (at any given time) and is also isotropic about any given point. These two conditions make up the cosmological principle. In practice, astronomers observe that the universe has heterogeneous or non-uniform structures up to the scale of galactic superclusters, filaments and great voids. It becomes more and more homogeneous and isotropic when observed on larger and larger scales, with little detectable structure on scales of more than about 200 million parsecs. However, on scales comparable to the radius of the observable universe, we see systematic changes with distance from Earth. For instance, galaxies contain more young stars and are less clustered, and quasars appear more numerous. While this might suggest that Earth is at the center of the universe, the Copernican principle requires us to interpret it as evidence for the evolution of the universe with time: this distant light has taken most of the age of the universe to reach Earth and shows the universe when it was young. The most distant light of all, cosmic microwave background radiation, is isotropic to at least one part in a thousand. Modern mathematical cosmology is based on the assumption that the Cosmological principle is almost, but not exactly, true on the largest scales. The Copernican principle represents the irreducible philosophical assumption needed to justify this, when combined with the observations. Michael Rowan-Robinson emphasizes the Copernican principle as the threshold test for
modern thought, asserting that: "It is evident that in the post-Copernican era of human history, no well-informed and rational person can imagine that the Earth occupies a unique position in the universe." Bondi and Thomas Gold used the Copernican principle to argue for the perfect cosmological principle which maintains that the universe is also homogeneous in time, and is the basis for the steady-state cosmology. However, this strongly conflicts with the evidence for cosmological evolution mentioned earlier: the universe has progressed from extremely different conditions at the Big Bang, and will continue to progress toward extremely different conditions, particularly under the rising influence of dark energy, apparently toward the Big Freeze or Big Rip. Since the 1990s the term has been used (interchangeably with "the Copernicus method") for J. Richard Gott's Bayesian-inference-based prediction of duration of ongoing events, a generalized version of the Doomsday argument. Tests of the principle The Copernican principle has never been proven, and in the most general sense cannot be proven, but it is implicit in many modern theories of physics. Cosmological models are often derived with reference to the cosmological principle, slightly more general than the Copernican principle, and many tests of these models can be considered tests of the Copernican principle. Historical Before the term Copernican principle was even coined, Earth
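The "Copernicus method" attributed above to J. Richard Gott can be summarised in one line; the sketch below is the standard form of the delta-t argument, assuming only that the moment of observation falls at a uniformly random point within the event's total lifetime (a simplifying assumption, not a claim drawn from the text).

```latex
% Delta-t argument: if t_past is the elapsed duration of an event observed at a
% random moment of its total lifetime, then with 95% confidence the remaining
% duration t_future satisfies
\frac{t_{\mathrm{past}}}{39} \;<\; t_{\mathrm{future}} \;<\; 39\, t_{\mathrm{past}}
% because the observation lands in the middle 95% of the lifetime, i.e. between
% the 2.5th and 97.5th percentiles: 0.025 < t_past/(t_past + t_future) < 0.975.
```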
(Somalian cavefish) Placogobio Platypharodon Pogobrama Poropuntius Probarbus Procypris Prolabeo Prolabeops Pseudaspius Ptychobarbus Puntioplites Rasborichthys Rohtee (Vatani rohtee) Rohteichthys Sanagia Sawbwa (Sawbwa barb) Scaphiodonichthys Scaphognathops Scardinius (rudds) Schismatorhynchos Schizocypris (snowtrouts) Schizopygopsis (snowtrouts) Semiplotus Sikukia Spinibarbus Thryssocypris Thynnichthys Tor (mahseers) Troglocyclocheilus Tropidophoxinellus Typhlobarbus Typhlogarra (Iraq blind barb) Xenobarbus Xenocyprioides ZaccoWith such a large and diverse family the taxonomy and phylogenies are always being worked on so alternative classifications are being created as new information is discovered, foe example: Phylogeny Subfamily Probarbinae Catlocarpio ProbarbusSubfamily Labeoninae Tribe Parapsilorhynchini Diplocheilichthys Neorohita Parapsilorhynchus Longanalus Protolabeo Sinilabeo Tribe Labeonini Bangana Cirrhinus (mud carps) Decourus Gymnostomus Incisilabeo Labeo (labeos) Speolabeo Schismatorhynchos Tribe Garrini Garra Paracrossocheilus Tariqilabeo Osteochilus clade Barbichthys Crossocheilus Epalzeorhynchos Henicorhynchus Labiobarbus Lobocheilos Osteochilus Thynnichthys Semilabeo clade Ageneiogarra Altigena Cophecheilus Discogobio Hongshuia Linichthys Mekongina Paraqianlabeo Parasinilabeo Placocheilus Prolixicheilus Pseudocrossocheilus Pseudogyrinocheilus Ptychidio Qianlabeo Rectoris Semilabeo Sinigarra Sinocrossocheilus Stenorynchoacrum Subfamily Torinae Acapoeta Arabibarbus Barbopsis (Somalian blind barb) Carasobarbus Hypselobarbus Labeobarbus (yellowfish) Lepidopygopsis Mesopotamichthys Naziritor (Zhobi mahseers) Neolissochilus (mahseers) Osteochilichthys Pterocapoeta Sanagia Tor (mahseers) PterocapoetaSubfamily Smiliogastrinae Barbodes Barboides Caecobarbus (Congo blind barb) Chagunius Clypeobarbus Coptostomabarbus Dawkinsia Desmopuntius Eechathalakenda Enteromius Haludaria Hampala Oliotius Oreichthys Osteobrama Pethia Prolabeo Prolabeops Pseudobarbus (redfins) Puntigrus Puntius (spotted barbs) Rohtee (Vatani rohtee) Sahyadria Striuntius Systomus XenobarbusSubfamily Cyprininae [incl. 
Barbinae] Tribe Cyprinini Aaptosyax (giant salmon carp) Carassioides Carassius (Crucian carps and goldfish) Cyprinus (typical carps) Luciocyprinus Paraspinibarbus Parator Procypris Pseudosinocyclocheilus Sinibarbus Sinocyclocheilus (golden-line fish) Typhlobarbus Tribe Rohteichthyini Albulichthys Amblyrhynchichthys Anematichthys Balantiocheilos Barbonymus (tinfoil barbs) Cosmochilus Cyclocheilichthys Cyclocheilos Discherodontus Eirmotus Hypsibarbus Kalimantania Laocypris Mystacoleucus Parasikukia Poropuntius Puntioplites Rohteichthys Sawbwa (Sawbwa barb) Scaphognathops Sikukia Troglocyclocheilus Tribe Acrossocheilini Acrossocheilus Folifer Onychostoma Tribe Spinibarbini Spinibarbus Spinibarbichthys Tribe Schizothoracini Aspiorhynchus Percocypris Schizopyge (snowtrouts) Schizothorax (snowtrouts) Tribe Schizopygopsini Chuanchia Diptychus Gymnocypris Gymnodiptychus Oreinus Oxygymnocypris Platypharodon Ptychobarbus Schizopygopsis (snowtrouts) Tribe Barbini Aulopyge (Dalmatian barbelgudgeon) Barbus (typical barbels and barbs) Hsianwenia Caecocypris Capoeta (khramulyas) Cyprinion Kantaka Luciobarbus Scaphiodonichthys Schizocypris (snowtrouts) SemiplotusSubfamily Danioninae Tribe Paedocypridini Paedocypris Tribe Sundadanionini Fangfangia Sundadanio Tribe Rasborini Amblypharyngodon (carplets) Boraras (rasboras) Brevibora Horadandia Kottelatia Pectenocypris Rasbora Rasboroides Rasbosoma (dwarf scissortail rasbora) Trigonopoma Trigonostigma Tribe Danionini Betadevario Brachydanio Celestichthys Chela Danio (danios) Danionella Devario Inlecypris Laubuka Microdevario Microrasbora Tribe Chedrini Barilius Bengala Cabdio [Aspidoparia] Chelaethiops Engraulicypris Esomus (flying barbs) Leptocypris Luciosoma Malayochela Nematabramis Neobola Opsaridium Opsarius Raiamas Rastrineobola (silver cyprinid) Salmostoma (razorbelly minnows) Securicula ThryssocyprisSubfamily Leptobarbinae LeptobarbusSubfamily Xenocypridinae [incl. Cultrinae & Squaliobarbinae] Tribe Squaliobarbini Squaliobarbus Tribe Opsariichthyini Candidia Nipponocypris Opsariichthys Parazacco Xenocyprioides Tribe Oxygastrini Aphyocypris Araiocypris Gymnodanio Hemigrammocypris Macrochirichthys (long pectoral-fin minnow) Metzia Oxygaster Parachela Paralaubuca Rasborichthys Tribe Hypophthalmichthyini Atrilinea Ctenopharyngodon (grass carp) Elopichthys Hypophthalmichthys (bighead carps) Luciobrama Mylopharyngodon (black carp) Ochetobius Tribe Xenocypridini Subtribe Xenocypridina Distoechodon Plagiognathops Pseudobrama Xenocypris Subtribe Cultrina Anabarilius Chanodichthys Culter Ischikauia Longiculter Megalobrama Parabramis (white Amur bream) Pogobrama Sinibrama 'Hemiculter clade Hainania Hemiculter (sharpbellies) Pseudohemiculter Pseudolaubuca ToxabramisSubfamily Tincinae TincaSubfamily Acheilognathinae (bitterlings) ?Acanthorhodeus (Khanka spiny bitterling) Acheilognathus Paratanakia Pseudorhodeus Rhodeus TanakiaSubfamily Gobioninae Hemibarbus-Squalidus clade Belligobio Hemibarbus (steeds) Squalidus Tribe Gobionini Subtribe Gobiobotiina Gobiobotia Xenophysogobio Subtribe Gobionina Gobio (typical gudgeons) Mesogobio Romanogobio Acanthogobio Subtribe Armatogobionina Abbottina (false gudgeons) Biwia ?Huigobio Microphysogobio Platysmacheilus Pseudogobio Saurogobio Tribe Sarcocheilichthyini Coreius Coreoleuciscus (Korean splendid dace) Gnathopogon Gobiocypris Ladislavia Paracanthobrama Paraleucogobio ?Parasqualidus Pseudopungtungia Pseudorasbora Pungtungia Rhinogobio SarcocheilichthysSubfamily Tanichthyinae TanichthysSubfamily Leuciscinae [incl. 
Alburninae] Tribe Phoxinini Oreoleuciscus Phoxinus (Eurasian minnows and daces) Pseudaspius Tribe Laviniini Subtribe Chrosomina Chrosomus (typical daces) Subtribe Laviniina Eremichthys (desert dace) Gila (western chubs) Hesperoleucus (California roach) Klamathella Lavinia (hitch) Mylopharodon (hardhead) Orthodon (Sacramento blackfish) Ptychocheilus (pikeminnows) Relictus (relict dace) Siphateles Tribe Leuciscini Pachychilon clade Pachychilon Alburnoides clade Alburnoides Primitive Leuciscine clade Delminichthys Leucalburnus Notemigonus (golden shiner) Pelasgus Subtribe Leuciscina Aspiolucius (pike asp) Leuciscus (Eurasian daces) Pelecus (sabre carp) Subtribe Abramina Abramis (common bream) Acanthobrama (bleaks) Capoetobrama Mirogrex Vimba (Vimbas) Subtribe Chondrostomina Achondrostoma Alburnus (bleaks) Anaecypris Chondrostoma (typical nases) Iberochondrostoma Leucaspius (moderlieschen) Leucos Parachondrostoma Petroleuciscus (Ponto-Caspian chubs and daces) Phoxinellus Protochondrostoma (South European nase) Pseudochondrostoma Pseudophoxinus Rutilus (roaches) Sarmarutilus Scardinius (rudds) Squalius (European chubs) Telestes Tropidophoxinellus Tribe Plagiopterini Couesius (lake chub) Hemitremia (flame chub) Lepidomeda (spinedaces) Margariscus (pearl daces) Meda (pikedace) Plagopterus (woundfin) Rhynchocypris (Eurasian minnows) Semotilus (creek chubs) †Stypodon (stumptooth minnow) Tribe Pogonichthyini Subtribe Pogonichthyina Clinostomus (redside daces) Iotichthys (least chub) Mylocheilus (peamouth) Pogonichthys (splittails) Richardsonius (redside shiners) Subtribe Exoglossina Exoglossum (cutlips minnows) Oregonichthys (Oregon chubs) Pararhinichthys (cheat minnow) Rhinichthys (riffle daces, loach minnows) Tiaroga Subtribe Campostomina Campostoma (stonerollers) Nocomis (hornyhead chubs) Subtribe Hybognathina' Agosia (longfin dace) Alburnops Algansea (Mexican chubs) ?Aztecula (Aztec chub) ?Ballerus (breams) ?Blicca (silver bream) Codoma (ornate shiner) Cyprinella (satinfin shiners) Dionda (desert minnows) ?Ericymba (longjaw minnows) Erimonax Erimystax (slender chubs) †Evarra (Mexican daces) Graodus Hudsonius Hybognathus (silvery minnows) Hybopsis (bigeye chubs) ?Iberocypris ?Ladigesocypris Luxilus (highscale shiners) Lythrurus (finescale shiners) Macrhybopsis (blacktail chubs) Miniellus ?Moapa (moapa dace) Notropis (eastern shiners) Opsopoeodus (pugnose minnow) Phenacobius (suckermouth minnows) Pimephales (bluntnose minnows) Platygobio (flathead chub) Pteronotropis (flagfin shiners) ?Snyderichthys (spinedaces) Tampichthys ?Tribolodon ?YuririaIncertae sedis Acanthalburnus (bleaks) Acrocheilus (chiselmouth) Ancherythroculter Anchicyclocheilus Gibelion (catla) (some authorities consider this species to belong in the genus Catla) Cultrichthys Discocheilus Discolabeo Hemiculterella Herzensteinia Horalabiosa Megarasbora Neobarynotus Paracrossochilus Phreatichthys (Somalian cavefish) Placogobio
or minnow family. It includes the carps, the true minnows, and relatives like the barbs and barbels. Cyprinidae is the largest and most diverse fish family and the largest vertebrate animal family in general, with about 3,000 species, of which only 1,270 remain extant, divided into about 370 genera. Cyprinids range from about 12 mm in size to the 3-m giant barb (Catlocarpio siamensis). By genus and species count, the family makes up more than two-thirds of the ostariophysian order Cypriniformes. The family name is derived from the Greek kyprînos ('carp'). Biology and ecology Cyprinids are stomachless fish with toothless jaws. Even so, food can be effectively chewed by the gill rakers of the specialized last gill arch. These pharyngeal teeth allow the fish to make chewing motions against a chewing plate formed by a bony process of the skull. The pharyngeal teeth are unique to each species and are used by scientists to identify species. Strong pharyngeal teeth allow fish such as the common carp and ide to eat hard baits such as snails and bivalves. Hearing is a well-developed sense in the cyprinids since they have the Weberian organ, three specialized vertebral processes that transfer motion of the gas bladder to the inner ear. The vertebral processes of the Weberian organ also permit a cyprinid to detect changes in motion of the gas bladder due to atmospheric conditions or depth changes. The cyprinids are considered physostomes because the pneumatic duct is retained in adult stages and the fish are able to gulp air to fill the gas bladder, or they can dispose of excess gas to the gut. Cyprinids are native to North America, Africa, and Eurasia. The largest known cyprinid is the giant barb (Catlocarpio siamensis), which may grow to about 3 m in length. Other very large species include the golden mahseer (Tor putitora) and the mangar (Luciobarbus esocinus). The largest North American species is the Colorado pikeminnow (Ptychocheilus lucius). Conversely, many species are much smaller, and the smallest known fish is Paedocypris progenetica. All fish in this family are egg-layers and most do not guard their eggs; however, a few species build nests and/or guard the eggs. The bitterlings of subfamily Acheilognathinae are notable for depositing their eggs in bivalve molluscs, where the young develop until able to fend for themselves. Cyprinids contain the first and only known example of androgenesis in a vertebrate, in the Squalius alburnoides allopolyploid complex. Most cyprinids feed mainly on invertebrates and vegetation, probably due to the lack of teeth and a stomach; however, some species, like the asp, are predators that specialize in fish. Many species, such as the ide and the common rudd, prey on small fish when individuals become large enough. Even small species, such as the moderlieschen, are opportunistic predators that will eat larvae of the common frog in artificial circumstances. Some cyprinids, such as the grass carp, are specialized herbivores; others, such as the common nase, eat algae and biofilms, while others, such as the black carp, specialize in snails, and some, such as the silver carp, are specialized filter feeders. For this reason, cyprinids are often introduced as a management tool to control various factors in the aquatic environment, such as aquatic vegetation and diseases transmitted by snails. Unlike most fish species, cyprinids generally increase in abundance in eutrophic lakes.
Here, they contribute towards a positive feedback loop, as they are efficient at eating the zooplankton that would otherwise graze on the algae, reducing its abundance. Relationship with humans Cyprinids are highly important food fish; they are fished and farmed across Eurasia. In land-locked countries in particular, cyprinids are often the major species of fish eaten because they make up the largest part of the fish biomass in most water types except fast-flowing rivers. In Eastern Europe, they are often prepared with traditional methods such as drying and salting. The prevalence of inexpensive frozen fish products has made this less important than it was in earlier times. Nonetheless, in certain places, they remain popular for food, as well as for recreational fishing, and have been deliberately stocked in ponds and lakes for centuries for this reason. Cyprinids are popular for angling, especially in match fishing (due to their dominance in biomass and numbers) and in fishing for common carp because of its size and strength. Several cyprinids have been introduced to waters outside their natural ranges to provide food, sport, or biological control for some pest species. The common carp (Cyprinus carpio) and the grass carp (Ctenopharyngodon idella) are the most important of these, for example in Florida. In some cases, such as the Asian carp in the Mississippi Basin, they have become invasive species that compete with native fishes or disrupt the environment. Carp in particular can stir up sediment, reducing the clarity of the water and making plant growth difficult. Numerous cyprinids have become important in the aquarium and fishpond hobbies, most famously the goldfish, which was bred in China from the Prussian carp (Carassius (auratus) gibelio). Much fancied by Chinese nobility as early as 1150 AD, and in Japan after it arrived there in 1502, the goldfish was first imported into Europe around 1728. In the latter country, from the 18th century onwards, the common carp was bred into the ornamental variety known as koi – or more accurately nishikigoi, as koi simply means "common carp" in Japanese. Other popular aquarium cyprinids include danionins, rasborines, and true barbs. Larger species are bred by the thousands in outdoor ponds, particularly in Southeast Asia, and trade in these aquarium fishes is of considerable commercial importance. The small rasborines and danionines are perhaps only rivalled by characids and poecilid livebearers in their popularity for community aquaria. One particular species of these small and undemanding danionines is the zebrafish (Danio rerio). It has become the standard model species for studying the developmental genetics of vertebrates, in particular fish. Habitat destruction and other causes have reduced the wild stocks of several cyprinids to dangerously low levels; some are already entirely extinct. In particular, the cyprinids of the subfamily Leuciscinae from southwestern North America have been hit hard by pollution and unsustainable water use in the early to mid-20th century; most globally extinct cypriniform species are in fact leuciscinid cyprinids from the southwestern United States and northern Mexico. Systematics The massive diversity of cyprinids has so far made it difficult to resolve their phylogeny in sufficient detail to make assignment to subfamilies more than tentative in many cases.
Some distinct lineages obviously exist – for example, the Cultrinae and Leuciscinae, regardless of their exact delimitation, are rather close relatives and stand apart from Cyprininae – but the overall systematics and taxonomy of the Cyprinidae remain a subject of considerable debate. A large number of genera are incertae sedis, too equivocal in their traits and/or too little-studied to permit assignment to
Applications Complementary DNA is often used in gene cloning or as gene probes or in the creation of a cDNA library. When scientists transfer a gene from one cell into another cell in order to express the new genetic material as a protein in the recipient cell, the cDNA will be added to the recipient (rather than the entire gene), because the DNA for an entire gene may include DNA that does not code for the protein or that interrupts the coding sequence of the protein (e.g., introns). Partial sequences of cDNAs are often obtained as expressed sequence tags. With amplification of DNA sequences via
the recipient cell. In molecular biology, cDNA is also generated to analyze transcriptomic profiles in bulk tissue, single cells, or single nuclei in assays such as microarrays and RNA-seq. cDNA is also produced naturally by retroviruses (such as HIV-1, HIV-2, simian immunodeficiency virus, etc.) and then integrated into the host's genome, where it creates a provirus. The term cDNA is also used, typically in a bioinformatics context, to refer to an mRNA transcript's sequence, expressed as DNA bases (deoxy-GCAT) rather than RNA bases (GCAU). Synthesis RNA serves as a template for cDNA synthesis. In cellular life, cDNA is generated by viruses and retrotransposons for integration of RNA into target genomic DNA. In molecular biology, RNA is purified from source material after genomic DNA, proteins and other cellular components are removed. cDNA is then synthesized through in vitro reverse transcription. RNA Purification RNA is transcribed from genomic DNA in host cells and is extracted by first lysing cells then purifying RNA utilizing widely-used methods such as phenol-chloroform, silica column, and bead-based RNA extraction methods. Extraction methods vary depending on the source material. For example, extracting RNA from plant tissue requires additional reagents, such as polyvinylpyrrolidone (PVP), to remove phenolic compounds, carbohydrates, and other compounds that will otherwise render RNA unusable. To remove DNA and proteins, enzymes such as DNase and Proteinase K are used for degradation. Importantly, RNA integrity is maintained by inactivating RNases with chaotropic agents such as guanidinium isothiocyanate, sodium dodecyl sulphate (SDS), phenol or chloroform. Total RNA is then separated from other cellular components and precipitated with alcohol. Various commercial kits exist for simple and rapid RNA extractions for specific applications. Additional bead-based methods can be used to isolate specific sub-types of RNA (e.g. mRNA and microRNA) based on size or unique RNA regions. Reverse Transcription First-strand synthesis Using a reverse transcriptase enzyme and purified RNA templates, one strand of cDNA is produced (first-strand cDNA synthesis). The M-MLV reverse transcriptase from the Moloney murine leukemia virus is commonly used due to its reduced RNase H activity suited for transcription of longer RNAs. The AMV reverse transcriptase from the avian myeloblastosis virus may also be used for RNA templates with strong secondary structures (i.e. high melting temperature). cDNA is commonly generated from mRNA for gene expression analyses such as RT-qPCR and RNA-seq. mRNA is selectively reverse transcribed using oligo-dT primers that are the reverse complement of the poly-adenylated tail on the 3' end of all mRNA. An optimized mixture of oligo-dT and random hexamer primers increases the chance of obtaining full-length cDNA while reducing 5' or 3' bias. Ribosomal RNA may also be depleted to enrich both mRNA and non-poly-adenylated transcripts such as some non-coding RNA. Second-strand synthesis The result of first-strand syntheses, RNA-DNA hybrids, can be processed through multiple second-strand synthesis methods or processed directly in downstream assays. An early method known as hairpin-primed synthesis relied on hairpin formation on the 3' end of the first-strand cDNA to prime second-strand synthesis. However, priming is random and hairpin hydrolysis leads to loss of information. The Gubler and Hoffman Procedure uses E. Coli RNase H to nick mRNA that is replaced with E. 
coli DNA polymerase I and sealed with E. coli DNA ligase. An optimization of this procedure relies on the low RNase H activity of M-MLV to nick the mRNA, with the remaining RNA later removed by adding RNase H after DNA polymerase translation of the second-strand cDNA. This prevents lost sequence information at the 5' end of the mRNA.
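Conceptually, first-strand synthesis produces the DNA reverse complement of the mRNA template, primed from the poly(A) tail by an oligo-dT primer. The following Python sketch illustrates only that relationship; the mRNA sequence and primer length are invented examples, and real synthesis is enzymatic (e.g. by M-MLV reverse transcriptase), not string manipulation.

```python
# Illustrative sketch: in-silico "first-strand synthesis" of cDNA from an mRNA.
# The sequence below is a made-up example, not data from any experiment.

# Base-pairing rules for copying an RNA template into DNA (A-T, U-A, G-C, C-G).
RNA_TO_DNA_COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}

def first_strand_cdna(mrna: str) -> str:
    """Return the first-strand cDNA (read 5'->3') complementary to an mRNA (5'->3').

    Synthesis proceeds 3'->5' along the template, so the cDNA read 5'->3'
    is the reverse complement of the mRNA in DNA bases.
    """
    complement = "".join(RNA_TO_DNA_COMPLEMENT[base] for base in mrna.upper())
    return complement[::-1]

def oligo_dt_primer(length: int = 18) -> str:
    """An oligo-dT primer: a run of T's that base-pairs with the poly(A) tail."""
    return "T" * length

if __name__ == "__main__":
    # Hypothetical mRNA: a short coding stretch followed by a poly(A) tail.
    mrna = "AUGGCCAUUGUAAUGGGCCGC" + "A" * 20
    print("oligo-dT primer  :", oligo_dt_primer())
    print("first-strand cDNA:", first_strand_cdna(mrna))
```

Note that the 5' end of the resulting cDNA string is a run of T's, mirroring how oligo-dT priming anchors synthesis at the transcript's poly(A) tail.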
particularly popular as a first-generation wireless data solution for telemetry devices (machine-to-machine communications) and for public safety mobile data terminals. In 2004, major carriers in the United States announced plans to shut down CDPD service. In July 2005, the AT&T Wireless and Cingular Wireless CDPD networks were shut down. Equipment for this service now has little to no residual value. CDPD Network and system The primary elements of a CDPD network are:
1. End systems: physical and logical end systems that exchange information
2. Intermediate systems: CDPD infrastructure elements that store, forward and route the information
There are two kinds of end systems:
1. Mobile end system: a subscriber unit that accesses the CDPD network over a wireless interface
2. Fixed end system: a common host/server connected to the CDPD backbone, providing access to specific applications and data
There are two kinds of intermediate systems:
1. Generic intermediate system: a simple router with no knowledge of mobility issues
2. Mobile data intermediate system (MD-IS): a specialized intermediate system that routes data based on its knowledge of the current location of a mobile end system. It is a set of hardware and software functions that provide switching, accounting, registration, authentication, encryption, and so on (a toy model of these elements is sketched below).
The design of CDPD was based on several design objectives that are often repeated in designing overlay networks or new networks. A lot of emphasis was laid on open architectures and on reusing as much of the existing RF infrastructure as possible. The design goals of CDPD included location independence and independence from the service provider, so that coverage could be maximized; application transparency and multiprotocol support; and interoperability between products from multiple vendors. External links: CIO CDPD article; History and Development; Detailed Description; About CDPD
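As a rough illustration of the element taxonomy described above under "CDPD Network and system", the Python sketch below models fixed and mobile end systems, a generic intermediate system, and a mobile data intermediate system that routes traffic according to where a mobile end system last registered. All class and method names are invented for illustration and are not part of any CDPD specification or real protocol stack.

```python
# Toy model of CDPD network elements (illustrative only).

class FixedEndSystem:
    """F-ES: a host/server attached to the CDPD backbone (no mobility)."""
    def __init__(self, name: str):
        self.name = name

class MobileEndSystem:
    """M-ES: a subscriber unit reaching the network over a wireless interface."""
    def __init__(self, subscriber_id: str):
        self.subscriber_id = subscriber_id

class GenericIntermediateSystem:
    """Generic IS: a plain router with no knowledge of mobility."""
    def forward(self, packet: dict) -> dict:
        return packet  # simply passes traffic along fixed routes

class MobileDataIntermediateSystem:
    """MD-IS: routes to mobile end systems using their current registered location."""
    def __init__(self):
        self.location_table = {}  # subscriber_id -> current serving cell/area

    def register(self, mes: MobileEndSystem, serving_area: str) -> None:
        # Registration (and, in a real MD-IS, authentication/encryption) happens here.
        self.location_table[mes.subscriber_id] = serving_area

    def route(self, packet: dict) -> str:
        # Deliver based on where the destination M-ES last registered.
        dest = packet["destination"]
        return self.location_table.get(dest, "unknown: destination not registered")

if __name__ == "__main__":
    md_is = MobileDataIntermediateSystem()
    md_is.register(MobileEndSystem("telemetry-unit-7"), serving_area="cell-42")
    print(md_is.route({"destination": "telemetry-unit-7", "payload": "meter reading"}))
```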
as 1xRTT, EV-DO, and UMTS/HSPA. Developed in the early 1990s, CDPD loomed large on the horizon as a future technology. However, it had difficulty competing against existing slower but less expensive Mobitex and DataTac systems, and it never quite gained widespread acceptance before newer, faster standards such as GPRS became dominant. CDPD had very limited consumer products. AT&T Wireless first sold the technology in the United States under the PocketNet brand. It was one of the first products of wireless web service. Digital Ocean, Inc., an OEM licensee of the Apple Newton, sold the Seahorse product in 1996, which integrated the Newton handheld computer and an AMPS/CDPD handset/modem along with a web browser, winning the CTIA's hardware product of the year award as a smartphone, arguably the world's first. A company named OmniSky provided service for Palm V devices. OmniSky filed for bankruptcy in 2001 and was then picked up by EarthLink Wireless; the technician who developed the technical support for all of this wireless technology, Myron Feasel, was brought from company to company, ending up at Palm. Sierra Wireless sold PCMCIA devices and Airlink sold a serial modem. Both of these were used by police and fire departments for dispatch. AT&T Wireless later sold CDPD under the Wireless Internet brand (not to be confused with Wireless Internet Express, their brand for GPRS/EDGE data). PocketNet was generally considered a failure amid competition from 2G services such as Sprint's Wireless Web. AT&T Wireless sold four PocketNet Phone models to the public: the Samsung Duette and the Mitsubishi MobileAccess-120 were AMPS/CDPD PocketNet phones introduced in October 1997; and two IS-136/CDPD Digital PocketNet phones, the
by Gérard de Nerval
Chimaira, a 2001 novel by Valerio Massimo Manfredi
Chimera (Barth novel) (1972)
Chimera (CrossGen), a 2003 comic book miniseries
Chimaera (novel), a 2004 novel by Ian Irvine
Chimera (novel series), a Japanese novel series by Baku Yumemakura
Chimera (short story), a short story by Lee Youngdo
Chimera (2015), the third novel in Mira Grant's Parasitology trilogy
Music
Groups or artists
Chimaira, an American heavy metal band from Cleveland, Ohio
Chimera (Irish band), a musical group
Chimera (Russian band), an underground musical band
Mike Dred or Chimera (born 1967), techno musician
Albums
Chimaira (album), a 2005 album by Chimaira
Chimera (Andromeda album) (2006)
Chimera (Aria album) (2001)
Chimera (Delerium album), a 2003 album by Delerium
Chimera (EP), a 2014 EP by Marié Digby
Chimera (Erik Friedlander album) (1995)
Chimera (Mayhem album) (2004)
Chimeras (album), a 2003 album by John Zorn
Chimera, a 2002 album by The Cost
Chimera, a 1974 album by Duncan Mackay
Chimera, a 1983 album by Bill Nelson
鵺-chimera-, a 2016 EP by Girugamesh
Songs
"Chimeres I, II and III", 2007 compositions by Fred Momotenko
"Chimera", a song by Duncan Sheik from a version of Daylight
"Chimaera", a 1992 song by Bad Religion from Generator
"Chimera", a 1999 song by the Tea Party from Triptych
"Chimeras", a track by Tim Hecker from Harmony in Ultraviolet
"Chimera", a song by Bonham from Mad Hatter
"The Chimera", a 2012 song by the Smashing Pumpkins from Oceania
Television
Chimera (British TV series), a 1991 British science fiction serial
"Chimera" (NCIS), an episode of NCIS
"Chimera" (Star Trek: Deep Space Nine), a 1999 episode of Star Trek: Deep Space Nine
"Chimera" (Stargate SG-1), an episode of Stargate SG-1
"Chimera" (The X-Files), an episode of The X-Files
Chimera (South Korean TV series), a 2021 South Korean television series
People
Jason Chimera (born 1979), NHL ice hockey forward for the Washington Capitals
Ricardo Rodriguez (wrestler) or Chimaera (born 1986), professional wrestler
Computing
Chimera (software library), a peer-to-peer software research project
Camino (web browser) or Chimera,
The United Kingdom Atomic Energy Authority is an example. In a wider sense, most companies in the UK are created under statute since the Companies Act 1985 specifies how a company may be created by a member of the public, but these companies are not called 'statutory corporations'. Often, in American legal and business documents that speak of governing bodies (e.g., a board that governs small businesses in China) these bodies are described as "creatures of statute" to inform readers of their origins and format although the national governments that created them may not term them as creatures of statute. Australia also uses the term "creature of statute" to describe some governmental bodies. The importance of a corporate body, regardless of its exact function, when such a body is a creature of statute is that
its active functions can only be within the scope detailed by the statute which created that corporation. Thereby, the creature of statute is the tangible manifestation of the functions or work described by a given statute. The jurisdiction of a body that is a creature of statute is also therefore limited to the functional scope written into the laws that created that body. Unlike most (private) corporate bodies, creatures of statute cannot expand their business interests into other diverse areas. See also Competition regulator Statutory authority
Committees, are published by the BIPM. Mission The secretariat is based in Saint-Cloud, Hauts-de-Seine, France. In 1999 the CIPM established the CIPM Arrangement de reconnaissance mutuelle (Mutual Recognition Arrangement, MRA), which serves as the framework for the mutual acceptance of national measurement standards and for recognition of the validity of calibration and measurement certificates issued by national metrology institutes. A recent focus area of the CIPM has been the revision of the SI. Consultative committees The CIPM has set up a number of consultative committees (CCs) to assist it in its work. These committees are under the authority of the CIPM. The president of each committee, who is expected to take the chair at CC meetings, is usually a member of the CIPM. Apart from the CCU, membership of a CC is open to national metrology institutes (NMIs) of Member States that are recognized internationally as most expert in the field. NMIs from Member States that are active in the field, but lack the expertise to become members, are able to attend CC meetings as observers. These committees are:
CCAUV: Consultative Committee for Acoustics, Ultrasound and Vibration
CCEM: Consultative Committee for Electricity and Magnetism
CCL: Consultative Committee for Length
CCM: Consultative Committee for Mass and Related Quantities
CCPR: Consultative Committee for Photometry and Radiometry
CCQM: Consultative Committee for Amount of Substance – Metrology in Chemistry and Biology
CCRI: Consultative Committee for Ionizing Radiation
CCT: Consultative Committee for Thermometry
CCTF: Consultative Committee for Time and Frequency
CCU: Consultative Committee for Units
The CCU's role is to advise on matters related to the development of the SI and the preparation of the SI brochure. It liaises with other international bodies such as the International Organization for Standardization (ISO), the International Astronomical Union (IAU), the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics (IUPAP). Major reports Official reports of the CIPM include:
Reports of CIPM meetings (Procès-Verbaux) (CIPM Minutes)
Annual Report to Governments on the financial and administrative situation of the BIPM
Notification of the contributive parts of the Contracting States
Convocation to meetings of the CGPM
Report of the President of the CIPM to the CGPM
From time to time the CIPM has been charged by the CGPM to undertake major investigations related to activities affecting the CGPM or the BIPM. Reports produced include: The Blevin Report The Blevin Report, published in 1998, examined the state of worldwide metrology. The report originated from a resolution passed at the 20th CGPM (October 1995) which committed the CIPM to undertake such a study. The report identified, amongst other things, a need for closer cooperation between the BIPM and other organisations such as the International Organization of Legal Metrology (OIML) and the International Laboratory Accreditation Cooperation (ILAC), with clearly defined boundaries and interfaces between the organisations. Another major finding was the need for cooperation between accreditation laboratories and the need to involve developing countries in the world of metrology. The Kaarls Report The Kaarls Report, published in 2003, examined the role of the BIPM in the evolving needs for metrology in trade, industry and society. SI Brochure The
and the supreme authority for all actions; Comité international des poids et mesures (CIPM), consisting of selected scientists and metrologists, which prepares and executes the decisions of the CGPM and is responsible for the supervision of the International Bureau of Weights and Measures; a permanent laboratory and secretariat function, the activities of which include the establishment of the basic standards and scales of the principal physical quantities and maintenance of the international prototype standards. The CGPM acts on behalf of the governments of its members. In so doing, it appoints members to the CIPM, receives reports from the CIPM which it passes on to the governments and national laboratories on member states, examines and where appropriate approves proposals from the CIPM in respect of changes to the International System of Units (SI), approves the budget for the BIPM (over €13 million in 2018) and it decides all major issues concerning the organization and development of the BIPM. The structure is analogous to that of a stock corporation. The BIPM is the organisation, the CGPM is the general meeting of the shareholders, the CIPM is the board of directors appointed by the CGPM, and the staff at the site in Saint-Cloud perform the day-to-day work. Membership criteria The CGPM recognises two classes of membership – full membership for those states that wish to participate in the activities of the BIPM and associate membership for those countries or economies that only wish to participate in the CIPM MRA program. Associate members have observer status at the CGPM. Since all formal liaison between the convention organisations and national governments is handled by the member state's ambassador to France, it is implicit that member states must have diplomatic relations with France, though during both world wars, nations that were at war with France retained their membership of the CGPM. CGPM meetings are chaired by the Président de l'Académie des Sciences de Paris. Of the twenty countries that attended the Conference of the Metre in 1875, representatives of seventeen signed the convention on 20 May 1875. In April 1884 HJ Chaney, Warden of Standards in London unofficially contacted the BIPM inquiring whether the BIPM would calibrate some metre standards that had been manufactured in the United Kingdom. Broch, director of the BIPM replied that he was not authorised to perform any such calibrations for non-member states. On 17 September 1884, the British Government signed the convention on behalf of the United Kingdom. This number grew to 21 in 1900, 32 in 1950, and 49 in 2001. 
There are 63 Member States and 40 Associate States and Economies of the General Conference (with year of partnership in parentheses): Member States Argentina (1877) Australia (1947) Austria (1875) Belarus Belgium (1875) Brazil (1921) Bulgaria (1911) Canada (1907) Chile (1908) China (1977) Colombia (2012) Croatia (2008) Czech Republic (1922) Denmark (1875) Ecuador Egypt (1962) Estonia Finland (1913) France (1875) Germany (1875) Greece (2001) Hungary (1925) India (1880) Indonesia (1960) Iran (1975) Iraq (2013) Ireland (1925) Israel (1985) Italy (1875) Japan (1885) Kazakhstan (2008) Kenya (2010) Lithuania (2015) Malaysia (2001) Mexico (1890) Montenegro (2018) Morocco Netherlands (1929) New Zealand (1991) Norway (1875) Pakistan (1973) Poland (1925) Portugal (1876) Romania (1884) Russian Federation (1875) Saudi Arabia (2011) Serbia (2001) Singapore (1994) Slovakia (1922) Slovenia (2016) South Africa (1964) South Korea (1959) Spain (1875) Sweden (1875) Switzerland (1875) Thailand (1912) Tunisia (2012) Turkey (1875) Ukraine (2018) United Arab Emirates (2015) United Kingdom (1884) United States (1878) Uruguay (1908) Notes Associates At the 21st meeting of the CGPM in October 1999, the category of "associate" was created for states not yet BIPM members and for economic unions. Albania (2007) Azerbaijan (2015) Bangladesh (2010) Bolivia (2008) Bosnia and Herzegovina (2011) Botswana (2012) Cambodia Caribbean Community (2005) Chinese Taipei (2002) Costa Rica (2004) Cuba (2000) Ethiopia (2018) Georgia (2008) Ghana (2009) Hong Kong (2000) Jamaica (2003) Kuwait (2018) Latvia (2001) Luxembourg (2014) Malta (2001) Mauritius (2010) Moldova (2007) Mongolia (2013) Namibia (2012) North Macedonia (2006) Oman (2012) Panama (2003) Paraguay (2009) Peru (2009) Philippines (2002) Qatar (2016) Seychelles (2010) Sri Lanka (2007) Sudan (2014) Syria (2012) Tanzania (2018) Uzbekistan (2018) Vietnam (2003) Zambia (2010) CGPM meetings International Committee for Weights and Measures The International Committee for Weights and Measures consists of eighteen persons, each of a different nationality, elected by the General Conference on Weights and Measures (CGPM)
Its most prominent themes are existential ennui, loneliness, and the inability to escape one's past. Cowboy Bebop was dubbed into English by Animaze and ZRO Limit Productions, and was originally licensed in North America by Bandai Entertainment (and is now licensed by Funimation) and in Britain by Beez Entertainment (now by Anime Limited); Madman Entertainment owns the license in Australia and New Zealand. In 2001, the series became the first anime title to be broadcast on Adult Swim. Since its debut, Cowboy Bebop has been hailed as one of the greatest animated television series of all time. It was a critical and commercial success both in Japanese and international markets, most notably in the United States. It garnered several major anime and science-fiction awards upon its release, and received unanimous praise for its style, characters, story, voice acting, animation, and soundtrack. The English dub was particularly lauded, and is regarded as one of the best anime dubs. Credited with helping to introduce anime to a new wave of Western viewers in the early 2000s, Cowboy Bebop has also been called a gateway series for anime in general. Plot In 2071, roughly fifty years after an accident with a hyperspace gateway which made Earth almost uninhabitable, humanity has colonized most of the rocky planets and moons of the Solar System. Amid a rising crime rate, the Inter Solar System Police (ISSP) set up a legalized contract system, in which registered bounty hunters (also referred to as "Cowboys") chase criminals and bring them in alive in return for a reward. The series' protagonists are bounty hunters working from the spaceship Bebop. The original crew are Spike Spiegel, an exiled former hitman of the criminal Red Dragon Syndicate, and Jet Black, a former ISSP officer. They are later joined by Faye Valentine, an amnesiac con artist; Edward, an eccentric child skilled in hacking; and Ein, a genetically-engineered Pembroke Welsh Corgi with human-like intelligence. Over the course of the series, the team get involved in disastrous mishaps leaving them without money, while often confronting faces and events from their past: these include Jet's reasons for leaving the ISSP, and Faye's past as a young woman from Earth injured in an accident and cryogenically frozen to save her life. While much of the show is episodic in nature, the main story arc focuses on Spike and his deadly rivalry with Vicious, an ambitious criminal affiliated with the Red Dragon Syndicate. Spike and Vicious were once partners and friends, but when Spike began an affair with Vicious's girlfriend Julia and resolved to leave the Syndicate with her, Vicious sought to eliminate Spike by blackmailing Julia into killing him. Julia goes into hiding to protect herself and Spike fakes his death to escape the Syndicate. In the present, Julia comes out of hiding and reunites with Spike, intending to complete their plan. Vicious, having staged a coup d'état and taken over the Syndicate, sends hitmen after the pair. Julia is killed, leaving Spike alone. Spike leaves the Bebop after saying a final goodbye to Faye and Jet. Upon infiltrating the syndicate, he finds Vicious on the top floor of the building and confronts him after dispatching the remaining Red Dragon members. The final battle ends with Spike killing Vicious, only to be seriously wounded himself in the ensuing confrontation. The series concludes as Spike descends the main staircase of the building into the rising sun before eventually falling to the ground. 
Genre and themes Watanabe created a special tagline for the series to promote it during its original presentation, calling it "a new genre unto itself". The line was inserted before and after commercial breaks during its Japanese and US broadcasts. Later, Watanabe called the phrase an "exaggeration". The show is a hybrid of multiple genres, including westerns and pulp fiction. One reviewer described it as "space opera meets noir, meets comedy, meets cyberpunk". It has also been called a "genre-busting space Western". The musical style was emphasized in many of the episode titles. Multiple philosophical themes are explored using the characters, including existentialism, existential boredom, loneliness, and the effect of the past on the protagonists. Other concepts referenced include environmentalism and capitalism. The series also makes specific references to or pastiches multiple films, including the works of John Woo and Bruce Lee, Midnight Run, 2001: A Space Odyssey, and Alien. The series also includes extensive references and elements from science fiction, bearing strong similarities to the cyberpunk fiction of William Gibson. Several planets and space stations in the series are made in Earth's image. The streets of celestial objects such as Ganymede resemble a modern port city, while Mars features shopping malls, theme parks, casinos and cities. Cowboy Bebops universe is filled with video players and hyperspace gates, eco-politics and fairgrounds, spaceships and Native American shamans. This setting has been described as "one part Chinese diaspora and two parts wild west". Characters The characters were created by Watanabe and character designed by Toshihiro Kawamoto. Watanabe envisioned each character as an extension of his own personality, or as an opposite person to himself. Each character, from the main cast to supporting characters, were designed to be outlaws unable to fit into society. Kawamoto designed the characters so they were easily distinguished from one another. All the main cast are characterized by a deep sense of loneliness or resignation to their fate and past. From the perspective of Brian Camp and Julie Davis, the main characters resemble the main characters of the anime series Lupin III, if only superficially, given their more troubled pasts and more complex personalities. The show focuses on the character of Spike Spiegel, an iconic space cowboy with green hair and often seen wearing a blue suit, with the overall theme of the series being Spike's past and its karmic effect on him. Spike was portrayed as someone who had lost his expectations for the future, having lost the woman he loved, and so was in a near-constant lethargy. Spike's artificial eye was included as Watanabe wanted his characters to have flaws. He was originally going to be given an eyepatch, but this decision was vetoed by producers. Jet is shown as someone who lost confidence in his former life and has become cynical about the state of society. Spike and Jet were designed to be opposites, with Spike being thin and wearing smart attire, while Jet was bulky and wore more casual clothing. The clothing, which was dark in color, also reflected their states of mind. Faye Valentine, Edward Wong, and Ein joined the crew in later episodes. Their designs were intended to contrast against Spike. Faye was described by her voice actress as initially being an "ugly" woman, with her defining traits being her liveliness, sensuality and humanity. 
To emphasize her situation when first introduced, she was compared to Poker Alice, a famous Western figure. Edward and Ein were the only main characters to have real-life models. The former had her behavior based on the antics of Yoko Kanno as observed by Watanabe when he first met her. While generally portrayed as carefree and eccentric, Edward is motivated by a sense of loneliness after being abandoned by her father. Kawamoto initially based Ein's design on a friend's pet corgi, later getting one himself to use as a motion model. Production Cowboy Bebop was developed by animation studio Sunrise and created by Hajime Yatate, the well-known pseudonym for the collective contributions of Sunrise's animation staff. The leader of the series' creative team was director Shinichirō Watanabe, most notable at the time for directing Macross Plus and Mobile Suit Gundam 0083: Stardust Memory. Other leading members of Sunrise's creative team were screenwriter Keiko Nobumoto, character designer Toshihiro Kawamoto, mechanical art designer Kimitoshi Yamane, composer Yoko Kanno, and producers Masahiko Minami and Yoshiyuki Takei. Most of them had previously worked together, in addition to having credits on other popular anime titles. Nobumoto had scripted Macross Plus, Kawamoto had designed the characters for Gundam, and Kanno had composed the music for Macross Plus and The Vision of Escaflowne. Yamane had not worked with Watanabe yet, but his credits in anime included Bubblegum Crisis and The Vision of Escaflowne. Minami joined the project as he wanted to do something different from his previous work on mecha anime. Concept Cowboy Bebop was Watanabe's first project as solo director, as he had been co-director in his previous works. His original concept was for a movie, and during production he treated each episode as a miniature movie. His main inspiration for Cowboy Bebop was Lupin III, a crime anime series focusing on the exploits of the series' titular character. When developing the series' story, Watanabe began by creating the characters first. He explained, "the first image that occurred to me was one of Spike, and from there I tried to build a story around him, trying to make him cool." While the original dialogue of the series was kept clean to avoid any profanities, its level of sophistication was made appropriate to adults in a criminal environment. Watanabe described Cowboy Bebop as "80% serious story and 20% humorous touch". The comical episodes were harder for the team to write than the serious ones, and though several events in them seemed random, they were carefully planned in advance. Watanabe conceived the series' ending early on, and each episode involving Spike and Vicious was meant to foreshadow their final confrontation. Some of the staff were unhappy about this approach as a continuation of the series would be difficult. While he considered altering the ending, he eventually settled with his original idea. The reason for creating the ending was that Watanabe did not want the series to become like Star Trek, with him being tied to doing it for years. Development The project had initially originated with Bandai's toy division as a sponsor, with the goal of selling spacecraft toys. Watanabe recalled his only instruction was "So long as there's a spaceship in it, you can do whatever you want." But upon viewing early footage, it became clear that Watanabe's vision for the series didn't match with that of Bandai's. 
Believing the series would never sell toy merchandise, Bandai pulled out of the project, leaving it in development hell until sister company Bandai Visual stepped in to sponsor it. Since there was no need to merchandise toys with the property any more, Watanabe had free rein in the development of the series. Watanabe wanted to design not just a space adventure series for adolescent boys but a program that would also appeal to sophisticated adults. During the making of Bebop, Watanabe often attempted to rally the animation staff by telling them that the show would be something memorable up to three decades later. While some of them were doubtful of that at the time, Watanabe many years later expressed his happiness to have been proven right in retrospect. He joked that if Bandai Visual hadn't intervened then "you might be seeing me working the supermarket checkout counter right now." The city locations were generally inspired by the cities of New York and Hong Kong. The atmospheres of the planets and the ethnic groups in Cowboy Bebop mostly originated from Watanabe's ideas, with some collaboration from set designers Isamu Imakake, Shoji Kawamori, and Dai Satō. The animation staff established the particular planet atmospheres early in the production of the series before working on the ethnic groups. It was Watanabe who wanted to have several groups of ethnic diversity appear in the series. Mars was the planet most often used in Cowboy Bebops storylines, with Satoshi Toba, the cultural and setting producer, explaining that the other planets "were unexpectedly difficult to use". He stated that each planet in the series had unique features, and the producers had to take into account the characteristics of each planet in the story. For the final episode, Toba explained that it was not possible for the staff to have the dramatic rooftop scene occur on Venus, so the staff "ended up normally falling back to Mars". In creating the backstory, Watanabe envisioned a world that was "multinational rather than stateless". In spite of certain American influences in the series, he stipulated that the country had been destroyed decades prior to the story, later saying the notion of the United States as the center of the world repelled him. Music The music for Cowboy Bebop was composed by Yoko Kanno. Kanno formed the blues and jazz band Seatbelts to perform the series’s music. According to Kanno, the music was one of the first aspects of the series to begin production, before most of the characters, story, or animation had been finalized. The genres she used for its composition were western, opera, and jazz. Watanabe noted that Kanno did not score the music exactly the way he told her to. He stated, "She gets inspired on her own, follows up on her own imagery, and comes to me saying 'this is the song we need for Cowboy Bebop,' and composes something completely on her own." Kanno herself was sometimes surprised at how pieces of her music were used in scenes, sometimes wishing it had been used elsewhere, though she also felt that none of their uses were "inappropriate". She was pleased with the working environment, finding the team very relaxed in comparison with other teams she had worked with. Watanabe further explained that he would take inspiration from Kanno's music after listening to it and create new scenes for the story from it. These new scenes in turn would inspire Kanno and give her new ideas for the music and she would come to Watanabe with even more music. 
Watanabe cited as an example, "some songs in the second half of the series, we didn't even ask her for those songs, she just made them and brought them to us." He commented that while Kanno's method was normally "unforgivable and unacceptable", it was ultimately a "big hit" with Cowboy Bebop. Watanabe described his collaboration with Kanno as "a game of catch between the two of us in developing the music and creating the TV series Cowboy Bebop". Since the series' broadcast, Kanno and the Seatbelts have released seven original soundtrack albums, two singles and extended plays, and two compilations through label Victor Entertainment. Weapons The guns on the show were chosen by the director, Watanabe, and in discussion with set designer, Isamu Imakake, and mechanical designer, Kimitoshi Yamane. Setting producer, Satoshi Toba said, "They talked about how they didn’t want common guns, because that wouldn’t be very interesting, and so they decided on these guns." Distribution Broadcast Cowboy Bebop debuted on TV Tokyo, one of the main broadcasters of anime in Japan, airing from April 3 until June 26, 1998. Due to its 6:00 PM timeslot and depictions of graphic violence, the show's first run only included episodes 2, 3, 7 to 15, 18 and a special. Later that year, the series was shown in its entirety from October 24 until April 24, 1999, on satellite network Wowow. The full series has also been broadcast across Japan by anime television network Animax, which has also aired the series via its respective networks across Southeast Asia, South Asia and East Asia. The first non-Asian country to air Cowboy Bebop was Italy. There, it was first aired on October 21, 1999, on MTV, where it inaugurated the 9:00–10:30 PM Anime Night programming block. In the United States, Cowboy Bebop was one of the programs shown when Cartoon Network's late night block Adult Swim debuted on September 2, 2001, being the first anime shown on the block that night at midnight ET. During its original run on Adult Swim, episodes 6, 8, and 22 were skipped due to their violent themes in wake of the September 11 attacks. By the third run of the series, all these episodes had premiered for the first time. Cowboy Bebop was successful enough to be broadcast repeatedly for four years. It has been run at least once every year since 2007, and HD remasters of the show began broadcasting in 2015. In the United Kingdom, it was first broadcast in 2002 on the adult-oriented channel CNX. From November 6, 2007, it was repeated on AnimeCentral until the channel's closure in August 2008. In Australia, Cowboy Bebop was first broadcast on pay television in 2002 on Adult Swim in Australia. It was broadcast on Sci-Fi Channel on Foxtel. In Australia, Cowboy Bebop was first broadcast on free-to-air-TV on ABC2 (the national digital public television channel) on January 2, 2007. It has been repeated several times, most recently starting in 2008. Cowboy Bebop: The Movie also aired again on February 23, 2009, on SBS (a hybrid-funded Australian public broadcasting television network). In Canada, Cowboy Bebop was first broadcast on December 24, 2006, on Razer. In Latin America, the series was first broadcast on pay-TV in 2001 on Locomotion. It aired again on January 9, 2016 on I.Sat. Home media Cowboy Bebop has been released in four separate editions in North America. The first release was sold in VHS format either as a box set or as seven individual tapes. The tapes were sold through Anime Village, a division of Bandai. 
The second release was sold in 2000 individually, and featured uncut versions of the original 26 episodes. In 2001, these DVDs were collected in the special edition Perfect Sessions which included the first 6 DVDs, the first Cowboy Bebop soundtrack, and a collector's box. At the time of release, the art box from the Perfect Sessions was made available for purchase on The Right Stuff International as a solo item for collectors who already owned the series. The third release, The Best Sessions, was sold in 2002 and featured what Bandai considered to be the best 6 episodes of the series remastered in Dolby Digital 5.1 and DTS surround sound. The fourth release, Cowboy Bebop Remix, was also distributed on 6 discs and included the original 26 uncut episodes, with sound remastered in Dolby Digital 5.1 and video remastered under the supervision of Shinichiro Watanabe. This release also included various extras that were not present in the original release. Cowboy Bebop Remix was itself collected as the Cowboy Bebop Remix DVD Collection in 2008. A fourth release in Blu-ray format was released on December 21, 2012 exclusively in Japan. In
December 2012, newly founded distributor Anime Limited announced via Facebook and Twitter that they had acquired the home video license for the United Kingdom. Part 1 of the Blu-ray collection was released on July 29, 2013, while Part 2 was released on October 14. The standard DVD Complete Collection was originally meant to be released on September 23, 2013 with Part 2 of the Blu-ray release but due to mastering and manufacturing errors, the Complete Collection was delayed until November 27. Following the closure of Bandai Entertainment in 2012, Funimation and Sunrise had announced that they rescued Cowboy Bebop, along with a handful of other former Bandai Entertainment properties, for home video and digital release. Funimation released the series on Blu-ray and DVD on December 16, 2014. The series was released in four separate editions: standard DVD, standard Blu-ray, an Amazon.com exclusive Blu-ray/DVD combo, and a Funimation.com exclusive Blu-ray/DVD combo. Streaming Netflix acquired the streaming rights to the original
fifth book returns to the subject of faith. Clement argues that truth, justice, and goodness can be seen only by the mind, not the eye; faith is a way of accessing the unseeable. He stresses that knowledge of God can only be achieved through faith once one's moral faults have been corrected. This parallels Clement's earlier insistence that martyrdom can only be achieved by those who practice their faith in Christ through good deeds, not those who simply profess their faith. God transcends matter entirely, and thus the materialist cannot truly come to know God. Although Christ was God incarnate, it is spiritual, not physical comprehension of him that is important. In the beginning of the sixth book, Clement intends to demonstrate that the works of Greek poets were derived from the prophetic books of the Bible. In order to reinforce his position that the Greeks were inclined toward plagiarism, he cites numerous instances of such inappropriate appropriation by classical Greek writers, reported second-hand from On Plagiarism, an anonymous 3rd-century BC work sometimes ascribed to Aretades. Clement then digresses to the subject of sin and hell, arguing that Adam was not perfect when created, but given the potential to achieve perfection. He espouses broadly universalist doctrine, holding that Christ's promise of salvation is available to all, even those condemned to hell. The final extant book begins with a description of the nature of Christ, and that of the true Christian, who aims to be as similar as possible to both the Father and the Son. Clement then criticizes the simplistic anthropomorphism of most ancient religions, quoting Xenophanes' famous description of African, Thracian, and Egyptian deities. He indicates that the Greek deities may also have had their origins in the personification of material objects: Ares representing iron, and Dionysus wine. Prayer, and the relationship between love and knowledge are then discussed. Corinthians 13:8 seems to contradict the characterization of the true Christian as one who knows; but to Clement knowledge vanishes only in that it is subsumed by the universal love expressed by the Christian in reverence for the Creator. Following Socrates, he argues that vice arises from a state of ignorance, not from intention. The Christian is a "laborer in God's vineyard", responsible both for one's own path to salvation and that of one's neighbor. The work ends with an extended passage against the contemporary divisions and heresies within the church. Other works Besides the great trilogy, Clement's only other extant work is the treatise Salvation for the Rich, also known as Who is the Rich Man who is Saved? written c. 203 AD Having begun with a scathing criticism of the corrupting effects of money and misguided servile attitudes toward the wealthy, Clement discusses the implications of Mark 10:25. The rich are either unconvinced by the promise of eternal life, or unaware of the conflict between the possession of material and spiritual wealth, and the good Christian has a duty to guide them toward a better life through the Gospel. Jesus' words are not to be taken literally — the supercelestial () meanings should be sought in which the true route to salvation is revealed. The holding of material wealth in itself is not a wrong, so long as it is used charitably, but Christians should be careful not to let their wealth dominate their spirit. It is more important to give up sinful passions than external wealth. 
If the rich are to be saved, all they must do is to follow the two commandments, and while material wealth is of no value to God, it can be used to alleviate the suffering of neighbors. Other known works exist in fragments alone, including the four eschatological works in the secret tradition: Hypotyposes, Excerpta ex Theodoto, Eclogae Propheticae, and the Adumbrationes. These cover Clement's celestial hierarchy, a complex schema in which the universe is headed by the Face of God, below which lie seven protoctists, followed by archangels, angels, and humans. According to Jean Daniélou, this schema is inherited from a Judaeo-Christian esotericism, followed by the Apostles, which was only imparted orally to those Christians who could be trusted with such mysteries. The protoctists are the first beings created by God, and act as priests to the archangels. Clement identifies them both as the "Eyes of the Lord" and with the Thrones. Clement characterizes the celestial forms as entirely different from anything earthly, although he argues that members of each order only seem incorporeal to those of lower orders. According to the Eclogae Propheticae, every thousand years every member of each order moves up a degree, and thus humans can become angels. Even the protoctists can be elevated, although their new position in the hierarchy is not clearly defined. The apparent contradiction between there being only seven protoctists and a vast number of archangels due to be promoted to their order is problematical. One modern solution regards the story as an example of "interiorized apocalypticism": imagistic details are not to be taken literally, but as symbolizing interior transformation. The titles of several lost works are known because of a list in Eusebius' Ecclesiastical History, 6.13.1–3. They include the Outlines, in eight books, and Against Judaizers. Others are known only from mentions in Clement's own writings, including On Marriage and On Prophecy, although few are attested by other writers and it is difficult to separate works that he intended to write from those that were completed. The Mar Saba letter was attributed to Clement by Morton Smith, but there remains much debate today over whether it is an authentic letter from Clement, an ancient pseudepigraph, or a modern forgery. If authentic, its main significance would be in its relating that the Apostle Mark came to Alexandria from Rome and there wrote a more spiritual Gospel, which he entrusted to the Church in Alexandria on his death; if genuine, the letter pushes back the tradition related by Eusebius connecting Mark with Alexandria by a century. Legacy Eusebius is the first writer to provide an account of Clement's life and works, in his Ecclesiastical History, 5.11.1–5 and 6.6.1. Eusebius provides a list of Clement's works, biographical information, and an extended quotation from the Stromata. Photios I of Constantinople writes against Clement's theology in the Bibliotheca, although he is appreciative of Clement's learning and the literary merits of his work. In particular, he is highly critical
of the Hypotyposes, a work of biblical exegesis of which only a few fragments have survived. Photios compared Clement's treatise, which, like his other works, was highly syncretic, featuring ideas of Hellenistic, Jewish, and Gnostic origin, unfavorably against the prevailing orthodoxy of the 9th century. Amongst the particular ideas Photios deemed heretical were:
His belief that matter and thought are eternal, and thus did not originate from God, contradicting the doctrine of Creatio ex nihilo
His belief in cosmic cycles predating the creation of the world, following Heraclitus, which is extra-Biblical in origin
His belief that Christ, as Logos, was in some sense created, contrary to John 1, but following Philo
His ambivalence toward docetism, the heretical doctrine that Christ's earthly body was an illusion
His belief that Eve was created from Adam's sperm after he ejaculated during the night
His belief that Genesis 6:2 implies that angels indulged in coitus with human women (in Chalcedonian theology, angels are considered sexless)
His belief in reincarnation, i.e., the transmigration of souls
As one of the earliest of the Church fathers whose works have survived, he is the subject of a significant amount of recent academic work, focusing on, among other things, his exegesis of scripture, his Logos-theology and pneumatology, the relationship between his thought and non-Christian philosophy, and his influence on Origen. Veneration Up until the 17th century Clement was venerated as a saint in the Roman Catholic Church. His name was to be found in the martyrologies, and his feast fell on the fourth of December, but when the Roman Martyrology was revised by Pope Clement VIII his name was dropped from the calendar on the advice of Cardinal Baronius. Benedict XIV maintained this decision of his predecessor on the grounds that Clement's life was little known, that he had never obtained public cultus in the Church, and that some of his doctrines were, if not erroneous, at least suspect. Although Clement is not widely venerated in Eastern Christianity, the Prologue of Ohrid repeatedly refers to him as a saint, as do various Orthodox authorities including the Greek Metropolitan Kallinikos of Edessa. The Coptic tradition considers Clement a saint. Saint Clement Coptic Orthodox Christian Academy in Nashville, Tennessee, is specifically named after him. Clement is commemorated in Anglicanism. The independent Universal Catholic Church's cathedral in Dallas is also dedicated to him. Works Editions Sylburg, Friedrich (ed.) (1592). 
Clementis Alexandrini Opera Quae Extant. Heidelberg: ex typographeio Hieronymi Commelini.
Heinsius, Daniel (ed.) (1616). Clementis Alexandrini Opera Graece et Latine Quae Extant. Leiden: excudit Ioannes Patius academiae typographus.
Potter, John (ed.) (1715). Clementis Alexandrini Opera, 2 vols. Oxonii: e theatro Sheldoniano. Vol. 1. Cohortatio ad gentes. Paedagogus. Stromatum I-IV. Vol. 2. Stromatum V-VIII. Quis dives salvetur. Excerpta Theodoti. Prophetarum eclogae. Fragmenta.
Klotz, Reinhold (ed.) (1831–34). Titi Flaui Clementis Alexandrini Opera Omnia, 4 vols. Leipzig: E. B. Schwickert. Vol. 1. Protrepticus. Paedagogus. Vol. 2. Stromatorum I-IV. Vol. 3. Stromatorum V-VIII. Quis dives salvetur. Vol. 4. Fragmenta. Scholia. Annotationes. Indices.
Migne, J.-P. (ed.) (1857). Clementis Alexandrini Opera Quae Exstant Omnia, 2 toms. (= PG 8, 9) Paris: J.-P. Migne. Tom. 1. Cohortatio ad gentes. Paedagogus. Stromata I-IV. Tom. 2. Stromata V-VIII. Quis dives salvetur. Fragmenta.
Dindorf, Wilhelm (ed.) (1869). Clementis Alexandrini Opera, 4 vols. Oxonii: e typographeo Clarendoniano. Vol. 1. Protrepticus. Paedagogus. Vol. 2. Stromatum I-IV. Vol. 3. Stromatum V-VIII. Vol. 4. Annotationes. Interpretum.
Barnard, P. Mordaunt (ed.) (1897). Clement of Alexandria, Quis dives salvetur. Texts and Studies 5/2. Cambridge: Cambridge University Press.
Otto Stählin (ed.) (1905–36). Clemens Alexandrinus, 4 bds. (= GCS 12, 15, 17, 39) Leipzig: J. C. Hinrichs. Bd. 1. Protrepticus und Paedagogus. Bd. 2. Stromata I-VI. Bd. 3. Stromata VII-VIII. Excerpta ex
generic" (cf. gnomic aspect). Also following Lyons, Ann Banfield writes, "In order for the statement on which Descartes's argument depends to represent certain knowledge,… its tense must be a true present—in English, a progressive,… not as 'I think' but as 'I am thinking, in conformity with the general translation of the Latin or French present tense in such nongeneric, nonstative contexts." Or in the words of Simon Blackburn, "Descartes’s premise is not ‘I think’ in the sense of ‘I ski’, which can be true even if you are not at the moment skiing. It is supposed to be parallel to ‘I am skiing’." The similar translation “I am thinking, therefore I exist” of Descartes's correspondence in French (“, ”) appears in The Philosophical Writings of Descartes by Cottingham et al. (1988). The earliest known translation as "I am thinking, therefore I am" is from 1872 by Charles Porterfield Krauth. Fumitaka Suzuki writes "Taking consideration of Cartesian theory of continuous creation, which theory was developed especially in the Meditations and in the Principles, we would assure that 'I am thinking, therefore I am/exist' is the most appropriate English translation of 'ego cogito, ergo sum'." "I exist" vs. "I am" Alexis Deodato S. Itao notes that is "literally 'I think, therefore I am'." Others differ: 1) "[A] precise English translation will read as 'I am thinking, therefore I exist'.; and 2) "[S]ince Descartes … emphasized that existence is such an important 'notion,' a better translation is 'I am thinking, therefore I exist.'" Punctuation Descartes wrote this phrase as such only once, in the posthumously published lesser-known work noted above,The Search for Truth by Natural Light. It appeared there mid-sentence, uncapitalized, and with a comma. (Commas were not used in Classical Latin but were a regular feature of scholastic Latin, the Latin Descartes "had learned in a Jesuit college at La Flèche.") Most modern reference works show it with a comma, but it is often presented without a comma in academic work and in popular usage. In Descartes's Principia Philosophiae, the proposition appears as ego cogito, ergo sum'''. Interpretation As put succinctly by Krauth (1872), "That cannot doubt which does not think, and that cannot think which does not exist. I doubt, I think, I exist." The phrase cogito, ergo sum is not used in Descartes's Meditations on First Philosophy but the term "the cogito" is used to refer to an argument from it. In the Meditations, Descartes phrases the conclusion of the argument as "that the proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind" (Meditation II). At the beginning of the second meditation, having reached what he considers to be the ultimate level of doubt—his argument from the existence of a deceiving god—Descartes examines his beliefs to see if any have survived the doubt. In his belief in his own existence, he finds that it is impossible to doubt that he exists. Even if there were a deceiving god (or an evil demon), one's belief in their own existence would be secure, for there is no way one could be deceived unless one existed in order to be deceived. But I have convinced myself that there is absolutely nothing in the world, no sky, no earth, no minds, no bodies. Does it now follow that I, too, do not exist? No. If I convinced myself of something [or thought anything at all], then I certainly existed. But there is a deceiver of supreme power and cunning who deliberately and constantly deceives me. 
In that case, I, too, undoubtedly exist, if he deceives me; and let him deceive me as much as he can, he will never bring it about that I am nothing, so long as I think that I am something. So, after considering everything very thoroughly, I must finally conclude that the proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind. (AT VII 25; CSM II 16–17) There are three important notes to keep in mind here. First, he claims only the certainty of his own existence from the first-person point of view — he has not proved the existence of other minds at this point. This is something that has to be thought through by each of us for ourselves, as we follow the course of the meditations. Second, he does not say that his existence is necessary; he says that if he thinks, then necessarily he exists (see the instantiation principle). Third, this proposition "I am, I exist" is held true not based on a deduction (as mentioned above) or on empirical induction but on the clarity and self-evidence of the proposition. Descartes does not use this first certainty, the cogito, as a foundation upon which to build further knowledge; rather, it is the firm ground upon which he can stand as he works to discover further truths. As he puts it: Archimedes used to demand just one firm and immovable point in order to shift the entire earth; so I too can hope for great things if I manage to find just one thing, however slight, that is certain and unshakable. (AT VII 24; CSM II 16) According to many Descartes specialists, including Étienne Gilson, the goal of Descartes in establishing this first truth is to demonstrate the capacity of his criterion — the immediate clarity and distinctiveness of self-evident propositions — to establish true and justified propositions despite having adopted a method of generalized doubt. As a consequence of this demonstration, Descartes considers science and mathematics to be justified to the extent that their proposals are established on a similarly immediate clarity, distinctiveness, and self-evidence that presents itself to the mind. The originality of Descartes's thinking, therefore, is
thank Andreas Colvius (a friend of Descartes's mentor, Isaac Beeckman) for drawing his attention to Augustine: I am obliged to you for drawing my attention to the passage of St Augustine relevant to my I am thinking, therefore I exist. I went today to the library of this town to read it, and I do indeed find that he does use it to prove the certainty of our existence. He goes on to show that there is a certain likeness of the Trinity in us, in that we exist, we know that we exist, and we love the existence and the knowledge we have. I, on the other hand, use the argument to show that this I which is thinking is an immaterial substance with no bodily element. These are two very different things. In itself it is such a simple and natural thing to infer that one exists from the fact that one is doubting that it could have occurred to any writer. But I am very glad to find myself in agreement with St Augustine, if only to hush the little minds who have tried to find fault with the principle. Another predecessor was Avicenna's "Floating Man" thought experiment on human self-awareness and self-consciousness. The 8th-century Hindu philosopher Adi Shankara wrote, in a similar fashion, that no one thinks 'I am not', arguing that one's existence cannot be doubted, as there must be someone there to doubt. The central idea of cogito, ergo sum is also the topic of the Mandukya Upanishad. Spanish philosopher Gómez Pereira in his 1554 work De Inmortalitate Animae, published in 1749, wrote "nosco me aliquid noscere, & quidquid noscit, est, ergo ego sum" ('I know that I know something, anyone who knows exists, then I exist'). López, Modesto Santos. 1986. "Gómez Pereira, médico y filósofo medinense." In Historia de Medina del Campo y su Tierra, volumen I: Nacimiento y expansión, edited by E. L. Sanz. Critique Use of "I" In Descartes, The Project of Pure Enquiry, Bernard Williams provides a history and full evaluation of this issue. The first to raise the "I" problem was Pierre Gassendi, who, as noted by Saul Fisher, "points out that recognition that one has a set of thoughts does not imply that one is a particular thinker or another. …[T]he only claim that is indubitable here is the agent-independent claim that there is cognitive activity present." The objection, as presented by Georg Lichtenberg, is that rather than supposing an entity that is thinking, Descartes should have said: "thinking is occurring." That is, whatever the force of the cogito, Descartes draws too much from it; the existence of a thinking thing, the reference of the "I," is more than the cogito can justify. Friedrich Nietzsche criticized the phrase on the grounds that it presupposes that there is an "I", that there is such an activity as "thinking", and that "I" know what "thinking" is. He suggested a more appropriate phrase would be "it thinks" wherein the "it" could be an impersonal subject as in the sentence "It is raining." Kierkegaard The Danish philosopher Søren Kierkegaard calls the phrase a tautology in his Concluding Unscientific Postscript. He argues that the cogito already presupposes the existence of "I", and therefore concluding with existence is logically trivial. Kierkegaard's argument can be made clearer if one extracts the premise "I think" into the premises "'x' thinks" and "I am that 'x'", where "x" is used as a placeholder in order to disambiguate the "I" from the thinking thing. Here, the cogito has already assumed the "I"'s existence as that which thinks. 
For Kierkegaard, Descartes is merely "developing the content of a concept", namely that the "I", which already exists, thinks. As Kierkegaard argues, the proper logical flow of argument is that existence is already assumed or presupposed in order for thinking to occur, not that existence is concluded from that thinking. Williams Bernard Williams claims that what we are dealing with when we talk of thought, or when we say "I am thinking," is something conceivable from a third-person perspective—namely objective "thought-events" in the former case, and an objective thinker in the latter. He argues, first, that it is impossible to make sense of "there is thinking" without relativizing it to something. However, this something cannot be Cartesian egos, because it is impossible to differentiate objectively between things just on the basis of the pure content of consciousness. The obvious problem is that, through introspection, or our experience of consciousness, we have no way of moving to conclude the existence of any third-personal fact, to conceive of which would require something above and beyond just the purely subjective contents of the mind. Heidegger As a critic of Cartesian subjectivity, Heidegger sought to ground human subjectivity in death as that certainty which individualizes and authenticates our being. As he wrote in 1925 in History of the Concept of Time: This certainty, that "I myself am in that I will die," is the basic certainty of Dasein itself. It is a genuine statement of Dasein, while cogito sum is only the semblance of such a statement. If such pointed formulations mean anything at all, then the appropriate statement pertaining to Dasein in its being would have to be sum moribundus [I am in dying], moribundus not as someone gravely ill or wounded, but insofar as I am, I am moribundus. The MORIBUNDUS first gives the SUM its sense. John Macmurray The Scottish philosopher John Macmurray rejects the cogito outright in order to place action at the center of a philosophical system he entitles the Form of the Personal. "We must reject this, both as standpoint and as method. If this be philosophy, then philosophy is a bubble floating in an atmosphere of unreality." The reliance on thought creates an irreconcilable dualism between thought and action in which the unity of experience is lost, thus dissolving the integrity of our selves, and destroying any connection with reality. In order to formulate a more adequate cogito, Macmurray proposes the substitution of "I do" for "I think," ultimately leading to a belief in God as an agent to whom all persons stand in relation. See also
Cartesian doubt
Floating man
List of Latin phrases
Solipsism
Academic skepticism
Brain in a vat
I Am that I Am
Notes References Further reading
Abraham, W. E. 1974. "Disentangling the Cogito." Mind 83:329.
Baird, Forrest E., and Walter Kaufmann. 2008. From Plato to Derrida. Upper Saddle River, NJ: Pearson Prentice Hall.
Boufoy-Bastick, Z. 2005. "Introducing 'Applicable Knowledge' as a Challenge to the Attainment of Absolute Knowledge." Sophia Journal of Philosophy 8:39–52.
Christofidou, A. 2013. Self, Reason, and Freedom: A New Light on Descartes' Metaphysics. Routledge.
Hatfield, G. 2003. Routledge Philosophy Guidebook to Descartes and the Meditations. Routledge.
Kierkegaard, Søren. [1844] 1985. Philosophical Fragments. Princeton.
— [1846] 1985. Concluding Unscientific Postscript. Princeton.
cruel bumps and bruises of everyday life with the nephews often acting as a Greek chorus commenting on the unfolding disasters Donald wrought upon himself. Yet while seemingly defeatist in tone, the humanity of the characters shines through in their persistence despite the obstacles. These stories found popularity not only among young children but adults as well. Despite the fact that Barks had done little traveling, his adventure stories often had the duck clan globe-trotting to the most remote or spectacular of places. This allowed Barks to indulge his penchant for elaborate backgrounds that hinted at his thwarted ambitions of doing realistic stories in the vein of Hal Foster's Prince Valiant. Third marriage As Barks blossomed creatively, his marriage to Clara deteriorated. This is the period referred to in Barks' famed quip that he could feel his creative juices flowing while the whiskey bottles hurled at him by a tipsy Clara flew by his head. They were divorced in 1951, his second and last divorce. In this period Barks dabbled in fine art, exhibiting paintings at local art shows. It was at one of these in 1952 that he became acquainted with fellow exhibitor Margaret Wynnfred Williams (1917 – March 10, 1993), nicknamed Garé. She was an accomplished landscape artist, some of whose paintings are in the collection of the Leanin' Tree Museum of Western Art. During her lifetime, and to this day, note cards of her paintings are available from Leanin' Tree. Her nickname appears as a store name in the story "Christmas in Duckburg", featured on page 1 of Walt Disney's Christmas Parade #9, published in 1958. Soon after they met, she started assisting Barks, handling the solid blacks and lettering, both of which he had found onerous. They married in 1954 and the union lasted until her death. No longer anonymous People who worked for Disney (and its comic book licensees) generally did so in relative anonymity; stories would only carry Walt Disney's name and (sometimes) a short identification number. Prior to 1960, Barks' identity remained a mystery to his readers. However, many readers recognized Barks' work and drawing style and began to call him the Good Duck Artist, a label that stuck even after his true identity was discovered by fans in the late 1950s. Malcolm Willits was the first person to learn Barks's name and address, but two brothers named John and Bill Spicer became the first fans to contact Barks after independently discovering the same information. After Barks received a 1960 visit from the Spicer brothers and Ron Leonard, he was no longer anonymous, as word of his identity spread through the emerging network of comic book fandom fanzines and conventions. Later life Carl Barks retired in 1966, but was persuaded by editor Chase Craig to continue to script stories for Western. The last new comic book story drawn by Carl Barks was a Daisy Duck tale ("The Dainty Daredevil") published in Walt Disney Comics Digest issue 5 (Nov. 1968). When bibliographer Michael Barrier asked Barks why he drew it, Barks' vague recollection was that no one was available and he was asked to do it as a favor by Craig. He wrote one Uncle Scrooge story, and three Donald Duck stories. From 1970 to 1974, Barks was the main writer for the Junior Woodchucks comic book (issues 6 through 25). The latter included environmental themes that Barks first explored in 1957 ["Land of the Pygmy Indians", Uncle Scrooge #18]. Barks also sold a few sketches to Western that were redrawn as covers. 
For a time the Barkses lived in Goleta, California, before returning to the Inland Empire by moving to Temecula. To make a little extra money beyond what his pension and scripting earnings brought in, Barks started doing oil paintings to sell at the local art shows where he and Garé exhibited. Subjects included humorous depictions of life on the farm and portraits of Native American princesses. These skillfully rendered paintings encouraged fan Glenn Bray to ask Barks if he could commission a painting of the ducks ("A Tall Ship and a Star to Steer Her By", taken from the cover of Walt Disney's Comics and Stories #108 by Barks). This prompted Barks to contact George Sherman at Disney's Publications Department to request permission to produce and sell oil paintings of scenes from his stories. In July 1971 Barks was granted a royalty-free license by Disney. When word spread that Barks was taking commissions from those interested in purchasing an oil of the ducks, much to his astonishment the response quickly outstripped what he reasonably could produce in the next few years. When Barks expressed dismay at coping with the backlog of orders he faced, fan/dealers Bruce Hamilton and Russ Cochran suggested Barks instead auction his paintings at conventions and via Cochran's catalog Graphic Gallery. By September 1974 Barks had discontinued taking commissions. At Boston's NewCon convention, in October 1975, the first Carl Barks oil painting auctioned at a comic book convention ("She Was Spangled and Flashy") sold for $2,500. Subsequent offerings saw an escalation in the prices realized. In 1976, Barks and Garé went to Boston for the NewCon show, their first comic convention appearance. Among the other attendees was famed Little Lulu comic book scripter John Stanley; despite both having worked for Western Publishing, this was the first time they met. The highlight of the convention was the auctioning of what was up to that time the largest duck oil painting Barks had done, "July Fourth in Duckburg", which included depictions of several prominent Barks fans and collectors. It sold for a then-record amount: $6,400. Soon thereafter a fan sold unauthorized prints of some of the Scrooge McDuck paintings, leading Disney to withdraw permission for further paintings. To meet demand for new work Barks embarked on a series of paintings of non-Disney ducks and fantasy subjects such as Beowulf and Xerxes. These were eventually collected in the limited-edition book Animal Quackers. As the result of heroic efforts by Star Wars producer Gary Kurtz and screenwriter Edward Summer, Disney relented and, in 1981, allowed Barks to do a now seminal oil painting called Wanderers of Wonderlands for a breakthrough limited edition book entitled Uncle Scrooge McDuck: His Life and Times. The book collected 11 classic Barks stories of Uncle Scrooge colored by artist Peter Ledger along with a new Scrooge story by Barks done storybook style with watercolor illustrations, "Go Slowly, Sands of Time". After being turned down by every major publisher in New York City, Kurtz and Summer published the book through Celestial Arts, which Kurtz acquired partly for this purpose. The book went on to become the model for virtually every important collection of comic book stories. It was the first book of its kind ever reviewed in Time magazine and subsequently in Newsweek, and the first book review in Time with large color illustrations. In 1977 and 1982, Barks attended the San Diego Comic-Con. 
As with his appearance in Boston, the response to his presence was overwhelming, with long lines of fans waiting to meet Barks and get his autograph. In 1981, Bruce Hamilton and Russ Cochran, two long-time Disney comics fans, decided to combine forces to bring greater recognition to the works of Carl Barks. Their first efforts went into establishing Another Rainbow Publishing, the banner under which they produced and issued the award-winning book The Fine Art of Walt Disney's Donald Duck by Carl Barks, a comprehensive collection of the Disney duck paintings of this artist and storyteller. Not long after, the company began producing fine art lithographs of many of these paintings, in strictly limited editions, all signed by Barks, who eventually produced many original works for the series. In 1983, Barks relocated one last time to Grants Pass, Oregon, near where he grew up, partly at the urging of friend and Broom Hilda artist Russell Myers, who lived in the area. The move also was motivated, Barks stated in another famous quip, by Temecula being too close to Disneyland and thus facilitating a growing torrent of drop-in visits by vacationing fans. In this period Barks made only one public appearance, at a comic book shop near Grants Pass. In 1983, Another Rainbow took up the daunting task of collecting the entire Disney comic book oeuvre of Barks—over 500 stories in all—in the ten-set, thirty-volume Carl Barks Library. These oversized hardbound volumes reproduced Barks' pages in pristine black and white line art, as close as possible to the way he would originally draw them, and included mountains of special features, articles, reminiscences, interviews, storyboards, critiques, and more than a few surprises. This monumental project was finally completed in mid-1990. In 1985, a new division was founded, Gladstone Publishing, which took up the then-dormant Disney comic book license. Gladstone introduced a new generation of Disney comic book readers to the storytelling of Barks, Paul Murry, and Floyd Gottfredson, as well as presenting the first works of modern Disney comics artists Don Rosa and William Van Horn. Seven years after Gladstone's founding, the Carl Barks Library was revived as the Carl Barks Library in Color, as full-color, high-quality squarebound comic albums (including the first-ever Carl Barks trading cards). From 1993 to 1998, Barks' career was managed by the "Carl Barks Studio" (Bill Grandey and Kathy Morby—they had sold Barks' original art since 1979). This involved numerous art projects and activities, including a tour of 11 European countries in 1994, Iceland being the first foreign country he ever visited. Barks appeared at the first of many Disneyana conventions in 1993. Silk screen prints of paintings along with high-end art objects (such as original water colors, bronze figurines and ceramic tiles) were produced based on designs by Barks. From the summer of 1994 until his death, Barks and his studio appointed Peter Reichelt, a museum exhibition producer from Mannheim, Germany, as his agent for Europe. Publisher "Edition 313" put out numerous lithographs. In 1997, tensions between Barks and the Studio eventually resulted in a lawsuit that was settled with an agreement that included the disbanding of the Studio. Barks never traveled to make another Disney appearance. He was represented by Ed Bergen as he completed a final project. Gerry Tank and Jim Mitchell were to assist Barks in his final years. 
During his Carl Barks Studio years, Barks created two more stories: the script for the final Uncle Scrooge story "Horsing Around with History", first published in Denmark in 1994 with art by William Van Horn, and the outline for his final Donald Duck story "Somewhere in Nowhere", first published in 1997 in Italy with art by Pat Block. Austrian artist Gottfried Helnwein curated and organized the first solo museum exhibition of Barks' work. Between 1994 and 1998 the retrospective was shown in ten European museums and seen by more than 400,000 visitors. At the same time in spring 1994, Reichelt and Ina Brockmann designed a special museum exhibition tour about Barks' life and work. Also represented for the first time at this exhibition were Disney artists Al Taliaferro and Floyd Gottfredson. Since 1995, more than 500,000 visitors have attended the shows in Europe. Reichelt also translated Michael Barrier's biography of Barks into German and published it in 1994. Final days and death Barks spent his final years in a new home in Grants Pass, Oregon, which he and Garé, who died in 1993, had built next door to their original home. In July 1999, he was diagnosed with chronic lymphocytic leukemia, a form of cancer arising from the white blood cells in the bone marrow, for which he received oral chemotherapy. However, as the disease progressed, causing him great discomfort, the ailing Barks decided to stop receiving treatment in June 2000. In spite of his terminal condition, Barks remained, according to caregiver Serene Hunicke, "funny up to the end". The year before, Barks, an atheist, had told the university professor Donald Ault: I have no apprehension, no fear of death. I do not believe in an afterlife. ... I think of death as total peace. You're beyond the clutches of all those who would crush you. On August 25, 2000, shortly after midnight, Carl Barks died quietly in his sleep at the age of 99. He was interred in Hillcrest Memorial Cemetery in Grants Pass, beside Garé's grave. Influence Barks' Donald Duck stories were rated #7 on The Comics Journal list of 100 top comics; his Uncle Scrooge stories were rated #20. Steven Spielberg and George Lucas have acknowledged that the rolling-boulder booby trap in the opening scene of Raiders of the Lost Ark was inspired by the 1954 Carl Barks Uncle Scrooge adventure "The Seven Cities of Cibola" (Uncle Scrooge #7). Lucas and Spielberg have also said that some of Barks' stories about space travel and the depiction of aliens had an influence on them. Lucas wrote the foreword to the 1982 Uncle Scrooge McDuck: His Life and Times. In it he calls Barks' stories "cinematic" and "a priceless part of our literary heritage". The Walt Disney Treasures DVD set Chronological Donald, Volume 2 includes a salute to Barks. Carl Barks has an asteroid named after him, 2730 Barks. In Almere, Netherlands, a street was named after him: Carl Barksweg. The same neighborhood also includes a Donald Ducklaan and a Goofystraat. Japanese animator and cartoonist Osamu Tezuka, who created manga such as Astro Boy and Black Jack, was a fan of Barks' work. New Treasure Island, one of Tezuka's first works, was partly influenced by "Donald Duck Finds Pirate Gold". A 1949 Donald Duck ten-pager features Donald raising a yacht from the ocean floor by filling it with ping pong balls. In December 1965 Karl Krøyer, a Dane, lifted the sunken freight vessel Al Kuwait in Kuwait Harbor by filling the hull with 27 million tiny inflatable balls of polystyrene. 
Krøyer denies having been inspired by this Barks story. Some sources claim Krøyer was denied a Dutch patent registration (application number NL 6514306) for his invention on the grounds that the Barks story was a prior publication of the invention. Krøyer later successfully raised another ship off Greenland using the same method, and several other sunken vessels worldwide have since been raised by modified versions of this concept. The television show MythBusters also tested this method and was able to raise a small boat. Don Rosa, one of the most popular living Disney artists, and possibly the one who has been most keen on connecting the various stories into a coherent universe and chronology, considers (with few exceptions) all Barks' duck stories as canon, and all others as apocryphal. Rosa has said that a number of novelists and movie-makers cite Carl Barks as their 'major influence and inspiration'. While the news of Barks' passing was hardly covered by the press in America, "in Europe the sad news was flashed instantly across the airwaves and every newspaper — they realized the world had lost one of the most beloved, influential and well-known creators in international culture." In 2010 Oregon Cartoon Institute produced a video about the influence of Carl Barks and Basil Wolverton on Robert Crumb. The video game Donald Duck: Goin' Quackers is dedicated to the memory of Carl Barks. Carl Barks drew an early Andy Panda comic book story published in New Funnies #76, 1943. It is one of his few stories to feature humans interacting with talking animal characters (another is Dangerous Disguise, Four Color #308, 1951). The life story of Carl Barks, largely drawing upon his relationship with Disney and the phonetic similarity of his name to Karl Marx, serves as a loose inspiration for one of the subplots in The Last Song of Manuel Sendero by Ariel Dorfman. The first image ever to be displayed on an Apple Macintosh was a scan of Carl Barks' Scrooge McDuck. Bibliography
Coo Coo #1, Hamilton Comics, 1997 (a facsimile of one of the racy magazines Barks did cartoons for in the thirties).
The Carl Barks' Big Book of Barney Bear, 2011 collection edited by Craig Yoe and published by IDW of the Barney Bear and Benny Burro stories that originally appeared in Our Gang Comics #11–36 (May/June 1944 – June 1947); Barks' one substantial non-Disney series.
Carl Barks Library, 1984–1990, 30 hardback volumes in black and white published by Another Rainbow Publishing.
Carl Barks Library (graphic album format, in color) 1992–1998
O Melhor da Disney: As Obras Completas de Carl Barks 2004–2008, 41-volume limited edition published by Abril Jovem in Brazil, compiling all the stories written by Barks, with his oil paintings as the cover art.
The Carl Barks Collection 2005–2009, 30-volume limited edition published by Egmont in Norway, Sweden, Denmark, and Germany, and by Sanoma in Finland. Edited by Barks expert Geoffrey Blum.
The Complete Carl Barks Disney Library 2011–?, hardback volumes with separate Uncle Scrooge and Donald Duck volumes from Fantagraphics Books.
Uack! and Uack! presenta April 2014-ongoing, 26-volume edition with the collected stories written by Barks, including a few drawn by other artists, and previously unpublished stories, enriched with sketches and photographs. After the 23rd volume, the series got the name of "Uack! presenta, and
goes back to a proposal in 1832 by the German mathematician Carl Friedrich Gauss to base a system of absolute units on the three fundamental units of length, mass and time. Gauss chose the units of millimetre, milligram and second. In 1873, a committee of the British Association for the Advancement of Science, including physicists James Clerk Maxwell and William Thomson, recommended the general adoption of centimetre, gram and second as fundamental units, and to express all derived electromagnetic units in these fundamental units, using the prefix "C.G.S. unit of ...". The sizes of many CGS units turned out to be inconvenient for practical purposes. For example, many everyday objects are hundreds or thousands of centimetres long, such as humans, rooms and buildings. Thus the CGS system never gained wide general use outside the field of science. Starting in the 1880s, and more significantly by the mid-20th century, CGS was gradually superseded internationally for scientific purposes by the MKS (metre–kilogram–second) system, which in turn developed into the modern SI standard. Since the international adoption of the MKS standard in the 1940s and the SI standard in the 1960s, the technical use of CGS units has gradually declined worldwide. SI units are predominantly used in engineering applications and physics education, while Gaussian CGS units are commonly used in theoretical physics, describing microscopic systems, relativistic electrodynamics, and astrophysics. CGS units are today no longer accepted by the house styles of most scientific journals, textbook publishers, or standards bodies, although they are commonly used in astronomical journals such as The Astrophysical Journal. The continued usage of CGS units is prevalent in magnetism and related fields because the B and H fields have the same units in free space and there is a lot of potential for confusion when converting published measurements from CGS to MKS. The units gram and centimetre remain useful as noncoherent units within the SI system, as with any other prefixed SI units. Definition of CGS units in mechanics In mechanics, the quantities in the CGS and SI systems are defined identically. The two systems differ only in the scale of the three base units (centimetre versus metre and gram versus kilogram, respectively), with the third unit (second) being the same in both systems. There is a direct correspondence between the base units of mechanics in CGS and SI. Since the formulae expressing the laws of mechanics are the same in both systems and since both systems are coherent, the definitions of all coherent derived units in terms of the base units are the same in both systems, and there is an unambiguous correspondence of derived units:
v = dx/dt (definition of velocity)
F = m⋅d²x/dt² (Newton's second law of motion)
E = ∫ F⋅dx (energy defined in terms of work)
p = F/L² (pressure defined as force per unit area)
η = τ/(dv/dx) (dynamic viscosity defined as shear stress per unit velocity gradient).
Thus, for example, the CGS unit of pressure, barye, is related to the CGS base units of length, mass, and time in the same way as the SI unit of pressure, pascal, is related to the SI base units of length, mass, and time:
1 unit of pressure = 1 unit of force/(1 unit of length)² = 1 unit of mass/(1 unit of length⋅(1 unit of time)²)
1 Ba = 1 g/(cm⋅s²)
1 Pa = 1 kg/(m⋅s²).
Expressing a CGS derived unit in terms of the SI base units, or vice versa, requires combining the scale factors that relate the two systems:
1 Ba = 1 g/(cm⋅s²) = 10⁻³ kg/(10⁻² m⋅s²) = 10⁻¹ kg/(m⋅s²) = 10⁻¹ Pa.
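The same scale-factor bookkeeping can be sketched in a few lines of Python. This is only an illustrative sketch, not part of any standard library: the variable names are arbitrary, and the numbers are just the centimetre-to-metre and gram-to-kilogram factors used above.

CM = 1e-2   # 1 cm = 10^-2 m
G  = 1e-3   # 1 g  = 10^-3 kg
S  = 1.0    # the second is the same in both systems

# Coherent derived units follow from the same defining relations in both systems,
# so the CGS-to-SI factor of a derived unit is the same combination of base factors.
dyne  = G * CM / S**2    # force  = mass * acceleration  -> value in newtons
erg   = dyne * CM        # energy = force * length       -> value in joules
barye = dyne / CM**2     # pressure = force / area       -> value in pascals

print(f"1 dyn = {dyne:.0e} N")    # 1e-05 N
print(f"1 erg = {erg:.0e} J")     # 1e-07 J
print(f"1 Ba  = {barye:.0e} Pa")  # 1e-01 Pa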
Definitions and conversion factors of CGS units in mechanics Derivation of CGS units in electromagnetism CGS approach to electromagnetic units The conversion factors relating electromagnetic units in the CGS and SI systems are made more complex by the differences in the formulae expressing physical laws of electromagnetism as assumed by each system of units, specifically in the nature of the constants that appear in these formulae. This illustrates the fundamental difference in the ways the two systems are built: In SI, the unit of electric current, the ampere (A), was historically defined such that the magnetic force exerted by two infinitely long, thin, parallel wires 1 metre apart and carrying a current of 1 ampere is exactly 2 × 10⁻⁷ newtons per metre of length. This definition results in all SI electromagnetic units being numerically consistent (subject to factors of some integer powers of 10) with those of the CGS-EMU system described in further sections. The ampere is a base unit of the SI system, with the same status as the metre, kilogram, and second. Thus the relationship in the definition of the ampere with the metre and newton is disregarded, and the ampere is not treated as dimensionally equivalent to any combination of other base units. As a result, electromagnetic laws in SI require an additional constant of proportionality (see Vacuum permeability) to relate electromagnetic units to kinematic units. (This constant of proportionality is derivable directly from the above definition of the ampere.) All other electric and magnetic units are derived from these four base units using the most basic common definitions: for example, electric charge q is defined as current I multiplied by time t, resulting in the unit of electric charge, the coulomb (C), being defined as 1 C = 1 A⋅s. The CGS
and the magnetic field H in a medium other than vacuum, we need to also define the constants ε₀ and μ₀, which are the vacuum permittivity and permeability, respectively. Then we have (generally) D = ε₀E + λP and H = B/μ₀ − λ′M, where P and M are polarization density and magnetization vectors. The units of P and M are usually so chosen that the factors λ and λ′ are equal to the "rationalization constants" and , respectively. If the rationalization constants are equal, then . If they are equal to one, then the system is said to be "rationalized": the laws for systems of spherical geometry contain factors of 4π (for example, point charges), those of cylindrical geometry – factors of 2π (for example, wires), and those of planar geometry contain no factors of π (for example, parallel-plate capacitors). However, the original CGS system used λ = λ′ = 4π, or, equivalently, . Therefore, Gaussian, ESU, and EMU subsystems of CGS (described below) are not rationalized. Various extensions of the CGS system to electromagnetism The table below shows the values of the above constants used in some common CGS subsystems: Also, note the following correspondence of the above constants to those in Jackson and Leung: Of these variants, only in the Gaussian and Heaviside–Lorentz systems does the constant coupling the electric and magnetic fields equal c⁻¹ rather than 1. As a result, vectors E and B of an electromagnetic wave propagating in vacuum have the same units and are equal in magnitude in these two variants of CGS. In each of these systems the quantities called "charge" etc. may be a different quantity; they are distinguished here by a superscript. The corresponding quantities of each system are related through a proportionality constant. Maxwell's equations can be written in each of these systems as: Electrostatic units (ESU) In the electrostatic units variant of the CGS system (CGS-ESU), charge is defined as the quantity that obeys a form of Coulomb's law without a multiplying constant (and current is then defined as charge per unit time): F = q₁q₂/r². The ESU unit of charge, franklin (Fr), also known as statcoulomb or esu charge, is therefore defined as follows: two equal point charges spaced 1 centimetre apart are said to be of 1 franklin each if the electric force between them is 1 dyne. Therefore, in CGS-ESU, a franklin is equal to a centimetre times square root of dyne: 1 Fr = 1 dyn^(1/2)⋅cm = 1 g^(1/2)⋅cm^(3/2)⋅s^(−1). The unit of current is defined as: 1 Fr/s = 1 g^(1/2)⋅cm^(3/2)⋅s^(−2). Dimensionally in the CGS-ESU system, charge q is therefore equivalent to M^(1/2)L^(3/2)T^(−1). In CGS-ESU, all electric and magnetic quantities are dimensionally expressible in terms of length, mass, and time, and none has an independent dimension. Such a system of units of electromagnetism, in which the dimensions of all electric and magnetic quantities are expressible in terms of the mechanical dimensions of mass, length, and time, is traditionally called an 'absolute system'. ESU notation All electromagnetic units in the ESU CGS system that do not have proper names are denoted by a corresponding SI name with an attached prefix "stat" or with a separate abbreviation "esu". Electromagnetic units (EMU) In another variant of the CGS system, electromagnetic units (EMUs), current is defined via the force existing between two thin, parallel, infinitely long wires carrying it, and charge is then defined as current multiplied by time. (This approach was eventually used to define the SI unit of ampere as well). In the EMU CGS subsystem, this is done by setting the Ampère force constant equal to 1, so that Ampère's force law simply contains 2 as an explicit prefactor. The EMU unit of current, biot (Bi), also known as abampere or emu current, is therefore defined as follows: the biot is that constant current which, if maintained in two straight parallel conductors of infinite length and negligible cross-section, placed one centimetre apart in vacuum, would produce between these conductors a force equal to two dynes per centimetre of length. Therefore, in electromagnetic CGS units, a biot is equal to a square root of dyne: 1 Bi = 1 dyn^(1/2) = 1 g^(1/2)⋅cm^(1/2)⋅s^(−1).
The unit of charge in CGS EMU, the abcoulomb (abC), is: 1 abC = 1 Bi⋅s = 1 dyn^(1/2)⋅s = 1 g^(1/2)⋅cm^(1/2). Dimensionally in the EMU CGS system, charge q is therefore equivalent to M^(1/2)L^(1/2). Hence, neither charge nor current is an independent physical quantity in EMU CGS. EMU notation All electromagnetic units in EMU CGS system that do not have proper names are denoted by a corresponding SI name with an attached prefix "ab" or with a separate abbreviation "emu". Relations between ESU and EMU units The ESU and EMU subsystems of CGS are connected by the fundamental relationship (see above), where c = 29979245800 ≈ 3.00 × 10¹⁰ is the speed of light in vacuum in centimetres per second. Therefore, the ratio of the corresponding "primary" electrical and magnetic units (e.g. current, charge, voltage, etc. – quantities proportional to those that enter directly into Coulomb's law or Ampère's force law) is equal either to c⁻¹ or c. Units derived from these may have ratios equal to higher powers of c, for example the EMU and ESU units of capacitance differ by a factor of c² (1 abF ≘ c² statF). Practical CGS units The practical CGS system is a hybrid system that uses the volt and the ampere as the unit of voltage and current respectively. Doing this avoids the inconveniently large and small quantities that arise for electromagnetic units in the esu and emu systems. This system was at one time widely used by electrical engineers because the volt and ampere had been adopted as international standard units by the International Electrical Congress of 1881. As well as the volt and amp, the farad (capacitance), ohm (resistance), coulomb (electric charge), and henry are consequently also used in the practical system and are the same as the SI units. Other variants There were at various points in time about half a dozen systems of electromagnetic units in use, most based on the CGS system. These include the Gaussian units and the Heaviside–Lorentz units. Electromagnetic units in various CGS systems In this table, c = 29979245800 is the dimensionless numeric value of the speed of light in vacuum when expressed in units of centimetres per second. The symbol "≘" is used instead of "=" as a reminder that the quantities are corresponding but not in general equal, even between CGS variants. For example, according to the next-to-last row of the table, if a capacitor has a capacitance of 1 F in SI, then it has a capacitance of (10⁻⁹ c²) cm in ESU; but it is incorrect to replace "1 F" with "(10⁻⁹ c²) cm" within an equation or formula. (This warning is a special aspect of electromagnetism units in CGS. By contrast, for example, it is always correct to replace "1 m" with "100 cm" within an equation or formula.) One can think of the SI value of the Coulomb constant kC as: kC = 1/(4πε₀) ≈ 8.988 × 10⁹ N⋅m²/C², i.e. 10⁻⁷ times the square of the numerical value of the speed of light expressed in m/s. This explains why SI to ESU conversions involving factors of c² lead to significant simplifications of the ESU units, such as 1 statF = 1 cm and 1 statΩ = 1 s/cm: this is the consequence of the fact that in the ESU system kC = 1. For example, a centimetre of capacitance is the capacitance of a sphere of radius 1 cm in vacuum. The capacitance C between two concentric spheres of radii R and r in the ESU CGS system is: C = rR/(R − r). By taking the limit as R goes to infinity we see C equals r. Physical constants in CGS units Advantages and disadvantages While the absence of constant coefficients in the formulae expressing some relation between the quantities in some CGS subsystems simplifies some calculations, it has the disadvantage that sometimes the units in CGS are hard to define through experiment.
Also, lack of unique unit names leads to great confusion: thus "15 emu" may mean either 15 abvolts, or 15 emu units of electric dipole moment, or 15 emu units of magnetic susceptibility, sometimes (but not always) per gram, or per mole. On the other hand, SI starts with a unit of current, the ampere, that is easier to determine through experiment, but which requires extra coefficients in the electromagnetic equations. With its system of uniquely named units, the SI also removes any confusion in usage: 1 ampere is a fixed value of a specified quantity, and so are 1 henry, 1 ohm, and 1 volt. An advantage of the Gaussian CGS system is that electric and magnetic fields have the same units, 4πε₀ is replaced by 1, and the only dimensional constant appearing in the Maxwell equations is c, the speed of light.
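For the electromagnetic units, the powers of c described above can likewise be tabulated directly. The short Python sketch below is an added illustration (not part of the source text); the factors it encodes are the standard SI-to-ESU correspondences, with c taken as the numeric value of the speed of light in centimetres per second:

# Numerical correspondence factors ("≘") between SI and CGS-ESU units.
c = 29_979_245_800  # speed of light in vacuum, numeric value in cm/s

conversions = {
    "1 C (charge)":       (c / 10,      "statC (Fr)"),
    "1 A (current)":      (c / 10,      "statA"),
    "1 V (potential)":    (1e8 / c,     "statV"),
    "1 F (capacitance)":  (1e-9 * c**2, "cm (statF)"),
    "1 Ω (resistance)":   (1e9 / c**2,  "s/cm (statΩ)"),
}

for si_unit, (factor, esu_unit) in conversions.items():
    print(f"{si_unit:19} ≘ {factor:.4g} {esu_unit}")

With c ≈ 3.00 × 10¹⁰ this reproduces, for example, the "1 F ≘ (10⁻⁹ c²) cm ≈ 8.99 × 10¹¹ cm" correspondence quoted above.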
from what would otherwise be the consequences of sin. The earliest Christian writings gave several titles to Jesus, such as Son of Man, Son of God, Messiah, and Kyrios, which were all derived from the Hebrew scriptures. These terms centered around two opposing themes, namely "Jesus as a preexistent figure who becomes human and then returns to God", versus adoptionism – that Jesus was human who was "adopted" by God at his baptism, crucifixion, or resurrection. From the second to the fifth centuries, the relation of the human and divine nature of Christ was a major focus of debates in the early church and at the first seven ecumenical councils. The Council of Chalcedon in 451 issued a formulation of the hypostatic union of the two natures of Christ, one human and one divine, "united with neither confusion nor division". Most of the major branches of Western Christianity and Eastern Orthodoxy subscribe to this formulation, while many branches of Oriental Orthodox Churches reject it, subscribing to miaphysitism. Definition and approaches Christology (from Greek Χριστός Khristós and , -logia), literally "the understanding of Christ," is the study of the nature (person) and work (role in salvation) of Jesus Christ. It studies Jesus Christ's humanity and divinity, and the relation between these two aspects; and the role he plays in salvation. "Ontological Christology" analyzes the nature or being of Jesus Christ. "Functional Christology" analyzes the works of Jesus Christ, while "soteriological Christology" analyzes the "salvific" standpoints of Christology. Several approaches can be distinguished within Christology. The term "Christology from above" or "high Christology" refers to approaches that include aspects of divinity, such as Lord and Son of God, and the idea of the pre-existence of Christ as the Logos (the Word), as expressed in the prologue to the Gospel of John. These approaches interpret the works of Christ in terms of his divinity. According to Pannenberg, Christology from above "was far more common in the ancient Church, beginning with Ignatius of Antioch and the second century Apologists." The term "Christology from below" or "low Christology" refers to approaches that begin with the human aspects and the ministry of Jesus (including the miracles, parables, etc.) and move towards his divinity and the mystery of incarnation. Person of Christ A basic christological teaching is that the person of Jesus Christ is both human and divine. The human and divine natures of Jesus Christ apparently (prosopic) form a duality, as they coexist within one person (hypostasis). There are no direct discussions in the New Testament regarding the dual nature of the Person of Christ as both divine and human, and since the early days of Christianity, theologians have debated various approaches to the understanding of these natures, at times resulting in ecumenical councils, and schisms. 
Some historical christological doctrines gained broad support: Monophysitism (monophysite controversy, 3rd–8th centuries) After the union of the divine and the human in the historical incarnation, Jesus Christ had only a single nature Miaphysitism (Oriental Orthodox churches) In the person of Jesus Christ, divine nature and human nature are united in a compound nature ("physis") Dyophysitism (Chalcedonian Creed) Christ maintained two natures, one divine and one human, after the Incarnation Monarchianism (Adoptionism (2nd century onwards) and Modalism) God as one, in contrast to the doctrine of the Trinity Influential Christologies which were broadly condemned as heretical are: Docetism (3rd–4th centuries) claimed the human form of Jesus was mere semblance without any true reality Arianism (4th century) viewed the divine nature of Jesus, the Son of God, as distinct and inferior to God the Father, e.g., by having a beginning in time Nestorianism (5th century) considered the two natures (human and divine) of Jesus Christ almost entirely distinct Various church councils, mainly in the 4th and 5th centuries, resolved most of these controversies, making the doctrine of the Trinity orthodox in nearly all branches of Christianity. Among them, only the Dyophysite doctrine was recognized as true and not heretical, belonging to the Christian orthodoxy and deposit of faith. Salvation In Christian theology, atonement is the method by which human beings can be reconciled to God through Christ's sacrificial suffering and death. Atonement is the forgiving or pardoning of sin in general and original sin in particular through the suffering, death and resurrection of Jesus, enabling the reconciliation between God and his creation. Due to the influence of Gustaf Aulèn's (1879–1978) Christus Victor (1931), the various theories or paradigma's of atonement are often grouped as "classical paradigm," "objective paradigm," and the "subjective paradigm": Classical paradigm: Ransom theory of atonement, which teaches that the death of Christ was a ransom sacrifice, usually said to have been paid to Satan or to death itself, in some views paid to God the Father, in satisfaction for the bondage and debt on the souls of humanity as a result of inherited sin. Gustaf Aulén reinterpreted the ransom theory, calling it the Christus Victor doctrine, arguing that Christ's death was not a payment to the Devil, but defeated the powers of evil, which had held humankind in their dominion.; Recapitulation theory, which says that Christ succeeded where Adam failed. Theosis ("divinization") is a "corollary" of the recapitulation. Objective paradigm: Satisfaction theory of atonement, developed by Anselm of Canterbury (1033/4–1109), which teaches that Jesus Christ suffered crucifixion as a substitute for human sin, satisfying God's just wrath against humankind's transgression due to Christ's infinite merit. Penal substitution, also called "forensic theory" and "vicarious punishment," which was a development by the Reformers of Anselm's satisfaction theory. Instead of considering sin as an affront to God's honour, it sees sin as the breaking of God's moral law. Penal substitution sees sinful man as being subject to God's wrath, with the essence of Jesus' saving work being his substitution in the sinner's place, bearing the curse in the place of man. Governmental theory of atonement, "which views God as both the loving creator and moral Governor of the universe." 
Subjective paradigm: Moral influence theory of atonement, developed, or most notably propagated, by Abelard (1079–1142), who argued that "Jesus died as the demonstration of God's love," a demonstration which can change the hearts and minds of the sinners, turning back to God. Moral example theory, developed by Faustus Socinus (1539–1604) in his work De Jesu Christo servatore (1578), who rejected the idea of "vicarious satisfaction." According to Socinus, Jesus' death offers us a perfect example of self-sacrificial dedication to God." Other theories are the "embracement theory" and the "shared atonement" theory. Early Christologies (1st century) Early notions of Christ The earliest christological reflections were shaped by both the Jewish background of the earliest Christians, and by the Greek world of the eastern Mediterranean in which they operated. The earliest Christian writings give several titles to Jesus, such as Son of Man, Son of God, Messiah, and Kyrios, which were all derived from the Hebrew scriptures. According to Matt Stefon and Hans J. Hillerbrand, Historically in the Alexandrian school of thought (fashioned on the Gospel of John), Jesus Christ is the eternal Logos who already possesses unity with the Father before the act of Incarnation. In contrast, the Antiochian school viewed Christ as a single, unified human person apart from his relationship to the divine. Pre-existence The notion of pre-existence is deeply rooted in Jewish thought, and can be found in apocalyptic thought and among the rabbis of Paul's time, but Paul was most influenced by Jewish-Hellenistic wisdom literature, where Wisdom' is extolled as something existing before the world and already working in creation. According to Witherington, Paul "subscribed to the christological notion that Christ existed prior to taking on human flesh[,] founding the story of Christ ... on the story of divine Wisdom". Kyrios The title Kyrios for Jesus is central to the development of New Testament Christology. In the Septuagint it translates the Tetragrammaton, the holy Name of God. As such, it closely links Jesus with God – in the same way a verse such as Matthew 28:19, "The Name (singular) of the Father, the Son, and the Holy Spirit". Kyrios is also conjectured to be the Greek translation of Aramaic Mari, which in everyday Aramaic usage was a very respectful form of polite address, which means more than just "teacher" and was somewhat similar to rabbi. While the term Mari expressed the relationship between Jesus and his disciples during his life, the Greek Kyrios came to represent his lordship over the world. The early Christians placed Kyrios at the center of their understanding, and from that center attempted to understand the other issues related to the Christian mysteries. The question of the deity of Christ in the New Testament is inherently related to the Kyrios title of Jesus used in the early Christian writings and its implications for the absolute lordship of Jesus. In early Christian belief, the concept of Kyrios included the pre-existence of Christ, for they believed if Christ is one with God, he must have been united with God from the very beginning. Development of "low Christology" and "high Christology" Two fundamentally different Christologies developed in the early Church, namely a "low" or adoptionist Christology, and a "high" or "incarnation" Christology. The chronology of the development of these early Christologies is a matter of debate within contemporary scholarship. 
The "low Christology" or "adoptionist Christology" is the belief "that God exalted Jesus to be his Son by raising him from the dead", thereby raising him to "divine status". According to the "evolutionary model" c.q. "evolutionary theories", the christological understanding of Christ developed over time, as witnessed in the Gospels, with the earliest Christians believing that Jesus was a human who was exalted, c.q. adopted as God's Son, when he was resurrected. Later beliefs shifted the exaltation to his baptism, birth, and subsequently to the idea of his pre-existence, as witnessed in the Gospel of John. This "evolutionary model" was proposed by proponents of the Religionsgeschichtliche Schule, especially Wilhelm Boussets influential Kyrios Christos (1913). This evolutionary model was very influential, and the "low Christology" has long been regarded as the oldest Christology. The other early Christology is "high Christology", which is "the view that Jesus was a pre-existent divine being who became a human, did the Father's will on earth, and then was taken back up into heaven whence he had originally come", and from where he appeared on earth. According to Bousset, this "high Christology" developed at the time of Paul's writing, under the influence of Gentile Christians, who brought their pagan Hellenistic traditions to the early Christian communities, introducing divine honours to Jesus. According to Casey and Dunn, this "high Christology" developed after the time of Paul, at the end of the first century CE when the Gospel according to John was written. Since the 1970s, these late datings for the development of a "high Christology" have been contested, and a majority of scholars argue that this "high Christology" existed already before the writings of Paul. According to the "New Religionsgeschichtliche Schule", c.q. "Early High Christology Club", which includes Martin Hengel, Larry Hurtado, N. T. Wright, and Richard Bauckham, this "incarnation Christology" or "high Christology" did not evolve over a longer time, but was a "big bang" of ideas which were already present at the start of Christianity, and took further shape in the first few decades of the church, as witnessed in the writings of Paul. Some 'Early High Christology' proponents scholars argue that this "high Christology" may go back to Jesus himself. There is a controversy regarding whether Jesus himself claimed to be divine. In Honest to God, then-Bishop of Woolwich John A. T. Robinson, questioned the idea. John Hick, writing in 1993, mentioned changes in New Testament studies, citing "broad agreement" that scholars do not today support the view that Jesus claimed to be God, quoting as examples Michael Ramsey (1980), C. F. D. Moule (1977), James Dunn (1980), Brian Hebblethwaite (1985) and David Brown (1985). Larry Hurtado, who argues that the followers of Jesus within a very short period developed an exceedingly high level of devotional reverence to Jesus, at the same time rejects the view that Jesus made a claim to messiahship or divinity to his disciples during his life as "naive and ahistorical". According to Gerd Lüdemann, the broad consensus among modern New Testament scholars is that the proclamation of the divinity of Jesus was a development within the earliest Christian communities. N. T. Wright points out that arguments over the claims of Jesus regarding divinity have been passed over by more recent scholarship, which sees a more complex understanding of the idea of God in first century Judaism. 
But Andrew Loke argues that if Jesus did not claim and show himself to be truly divine and rise from the dead, the earliest Christian leaders who were devout ancient monotheistic Jews would have regarded Jesus as merely a teacher or a prophet, but not as truly divine, which they did. New Testamentical writings The study of the various Christologies of the Apostolic Age is based on early Christian documents. Paul The oldest Christian sources are the writings of Paul. The central Christology of Paul conveys the notion of Christ's pre-existence and the identification of Christ as Kyrios. Both notions already existed before him in the early Christian communities, and Paul deepened them and used them for preaching in the Hellenistic communities. What exactly Paul believed about the nature of Jesus cannot be determined decisively. In Philippians 2, Paul states that Jesus was preexistent and came to Earth "by taking the form of a servant, being made in human likeness". This sounds like an incarnation Christology. In Romans 1:4, however, Paul states that Jesus "was declared with power to be the Son of God by his resurrection from the dead", which sounds like an adoptionistic Christology, where Jesus was a human being who was "adopted" after his death. Different views would be debated for centuries by Christians and finally settled on the idea that he was both fully human and fully divine by the middle of the 5th century in the Council of Ephesus. Paul's thoughts on Jesus' teachings, versus his nature and being, is more defined, in that Paul believed Jesus was sent as an atonement for the sins of everyone. The Pauline epistles use Kyrios to identify Jesus almost 230 times, and
must, as soon as practicable initiate a conference between the parties to plan for the rest of the discovery process and then the parties should submit a proposed discovery plan to the judge within 14 days after the conference. In many U.S. jurisdictions, a complaint submitted to a court must be accompanied by a Case Information Statement, which sets forth specific key information about the case and the lawyers representing the parties. This allows the judge to make determinations about which deadlines to set for different phases of the case, as it moves through the court system. There are also freely accessible web search engines to assist parties in finding court decisions that can be cited in the complaint as an example or analogy to resolve similar questions of law. Google Scholar is the biggest database of full text state and federal courts decisions that can be accessed without charge. These web search engines often allow one to select specific state courts to search. Federal courts created the Public Access to Court Electronic Records (PACER) system to obtain case and docket information from the United States district courts, United States courts of appeals, and United States bankruptcy courts. The system is managed by the Administrative Office of the United States Courts; it allows lawyers and self-represented clients to obtain documents entered in the case much faster than regular mail. Filing and privacy In addition to Federal Rules of Civil Procedure, many of the U.S. district courts have developed their own requirements included in Local Rules for filing with the Court. Local Rules can set up a limit on the number of pages, establish deadlines for motions and responses, explain whether it is acceptable to combine a motion petition with a response, specify if a judge needs an additional copy of the documents (called "judge’s copy"), etc. Local Rules can define page layout elements like: margins, text font/size, distance between lines, mandatory footer text, page numbering, and provide directions on how the pages need to be bound together – i.e. acceptable fasteners, number and location of fastening holes, etc. If the filed motion does not comply with the Local Rules then the judge can choose to strike the motion completely, or order the party to re-file its motion, or grant a special exception to the Local Rules. According to Federal Rules of Civil Procedure (FRCP) , sensitive text like Social Security number, Taxpayer Identification Number, birthday, bank accounts and children’s names, should be redacted from the filings made with the court and accompanying exhibits, (exhibits normally do not need to be attached to the original complaint, but should be presented to Court after the discovery). The redacted text can be erased with black-out or white-out, and the page should have an indication that it was redacted - most often by stamping word "redacted" on the bottom. Alternately, the filing party may ask the court’s permission to file some exhibits completely under seal. A minor's name of
often associated with misdemeanor criminal charges presented by the prosecutor without the grand jury process. In most U.S. jurisdictions, the charging instrument presented to and authorized by a grand jury is referred to as an indictment. United States Virtually every U.S. state has some forms available on the web for most common complaints for lawyers and self-representing litigants; if a petitioner cannot find an appropriate form in their state, they often can modify a form from another state to fit his or her request. Several United States federal courts publish general guidelines for the petitioners and Civil Rights complaint forms. A complaint generally has the following structural elements: Caption and heading - lists name, address and telephone number of the filing attorney or self-representing litigant at the top of the complaint. The case caption usually also indicates the court in which the case originates, names of the parties and a brief description of the document. Jurisdiction and venue - this section describes why the case should be heard in the selected court rather than some other court or forum. Parties - identifies plaintiffs and defendants. Definitions - optional section which defines some terms used throughout the document. The main purpose of a definition is to achieve clarity without needless repetition. Statement of facts - lists facts that brought the case to the court. Cause of action - a numbered list of legal allegations (called "counts"), with specific details about application of the governing law to each count. In this section the plaintiff usually cites existing Law, previous decisions of the court where the case is being processed, decisions of the higher appellate courts, and cases from other courts, - as an analogy to resolve similar questions of law. Injury - plaintiff explains to the judge how the actions of the defendant(s) harmed his rights. Demand for relief (also known as the prayer for relief or the ad damnum clause) - describes the relief that plaintiff is seeking as a result of the lawsuit. The relief can include a request for declaratory judgment, a request for injunctive relief (non-monetary relief), compensatory and actual damages (such as monetary relief), punitive damages (non-compensatory), and other relief. After the complaint has been filed with the court, it has to be properly served to the opposite parties, but usually petitioners are not allowed to serve the complaint personally. The court also can issue a summons – an official summary document which the plaintiff needs to have served together with the complaint. The defendants have limited time to respond, depending on the State or Federal rules. A defendant's failure to answer a complaint can result in a default judgment in favor of the petitioner. For example, in United States federal courts, any person who is at least 18 years old and not a party may serve a summons and complaint in a civil case. The defendant must submit an answer within 21 days after being served with the summons and complaint, or request a waiver, according to FRCP Rule 12. After the civil complaint has been served to the defendants, the plaintiff must, as soon as practicable initiate a conference between the parties to plan for the rest of the discovery process and then the parties should submit a proposed discovery plan to the judge within 14 days after the conference. In many U.S. 
distinction, since it was the second university founded in Central Europe, after the Charles University in Prague. Politics and Expansion Casimir demonstrated competence in foreign diplomacy and managed to double the size of the kingdom. He neutralized the relations with potential enemies in the West and the North, and set on an expansion eastward. He took over the Ruthenian kingdom of Halych and Volodymyr (a territory in the modern-day Ukraine), known in Polish history as Red Ruthenia and Volhynia. By extending the borders far south-east, the Polish kingdom gained access to the lucrative Black Sea trade. Succession In 1355, in Buda, Casimir designated his nephew Louis I of Hungary as his successor should he produce no male heir, just as his father had with Charles I of Hungary to gain help against Bohemia. In exchange Casimir gained a favourable Hungarian attitude, needed in disputes with the hostile Teutonic Order and the Kingdom of Bohemia. At the time Casimir was 45 years old, and so producing a son did not seem unreasonable (he already had a few children). Casimir left no legal son, however, begetting five daughters instead. He tried to adopt his grandson, Casimir IV, Duke of Pomerania, in his last will. The child had been born to his eldest daughter, Elisabeth, Duchess of Pomerania, in 1351. This part of the testament was invalidated by Louis I of Hungary, however, who had traveled to Kraków quickly after Casimir died (in 1370) and bribed the nobles with future privileges. Casimir III also had a son-in-law, Louis VI of Bavaria, Margrave and Prince-elector of Brandenburg, who was considered a possible successor, but he was deemed ineligible as his wife, Casimir's daughter Cunigunde, had died in 1357 without issue. Thus King Louis I of Hungary became successor in Poland. Louis was proclaimed king upon Casimir's death in 1370, though Casimir's sister Elisabeth (Louis's mother) held much of the real power until her death in 1380. Society under the reign of Casimir Casimir was facetiously named "the Peasants' King". He introduced the codes of law of Greater and Lesser Poland as an attempt to end the overwhelming superiority of the nobility. During his reign all three major classes — the nobility, priesthood, and bourgeoisie — were more or less counterbalanced, allowing Casimir to strengthen his monarchic position. He was known for siding with the weak when the law did not protect them from nobles and clergymen. He reportedly even supported a peasant whose house had been demolished by his own mistress, after she had ordered it to be pulled down because it disturbed her enjoyment of the beautiful landscape. His popularity with the peasants helped to rebuild the country, as part of the reconstruction program was funded by a land tax paid by the lower social class. Relationship with Jews On 9 October 1334, Casimir confirmed the privileges granted to Jews in 1264 by Bolesław V the Chaste. Under penalty of death, he prohibited the kidnapping of Jewish children for the purpose of enforced Christian baptism, and he inflicted heavy punishment for the desecration of Jewish cemeteries. While Jews had lived in Poland since before his reign, Casimir allowed them to settle in Poland in great numbers and protected them as people of the king. Casimir's legendary Jewish mistress Esterka remains unconfirmed by direct historical evidence. Relationships and children Casimir III was married four times: Aldona of Lithuania On 30 April or 16 October 1325, Casimir married Aldona of Lithuania. 
She was also known as Anna, possibly a baptismal name. She was a daughter of Grand Duke Gediminas of Lithuania and Jewna. They had two children: Elisabeth of Poland (ca. 1326–1361); married Duke Bogislaus V of Pomerania Cunigunde of Poland (1334–1357); married Louis VI the Roman, the son of Louis IV, Holy Roman Emperor Aldona died on 26 May 1339. Casimir remained a widower for two years. Adelheid of Hesse On 29 September 1341, Casimir married his second wife, Adelaide of Hesse. She was a daughter of Henry II, Landgrave of Hesse, and Elizabeth of Meissen. They had no children. Casimir started living separately from Adelaide soon after the marriage. Their loveless marriage lasted until 1356, when he declared himself divorced. Christina Rokiczana After Casimir "divorced" Adelaide he married his mistress Christina Rokiczana, the widow of Miklusz Rokiczani, a wealthy merchant. Her own origins are unknown. Following the death of her first husband she had entered the court of Bohemia in Prague as a
lady-in-waiting. Casimir brought her with him from Prague and convinced the abbot of the Benedictine abbey of Tyniec to marry them. The marriage was held in a secret ceremony but soon became known. Queen Adelaide renounced it as bigamous and returned to Hesse. Casimir continued living with Christine despite complaints by Pope Innocent VI on behalf of Queen Adelaide. This marriage lasted until 1363–64 when Casimir again declared himself divorced. They had no children. Hedwig of Żagań In about 1365, Casimir married his fourth wife Hedwig of Żagań. She was a daughter of Henry V of Iron, Duke of Żagań and Anna of Mazovia. They had three children: Anna of Poland, Countess of Celje (1366 – 9 June 1422); married firstly William of Celje; their only daughter was Anne of Celje, who married Jogaila of Lithuania when he was king of Poland (as Władysław II Jagiełło). Anna married secondly Ulrich, Duke of Teck; they had no children. Kunigunde of Poland (1367 – 1370) Jadwiga of Poland (1368 – ca. 1382) As Adelheid was still alive (and possibly Christina as well), the marriage to Hedwig was also considered bigamous. Because of this, the legitimacy of his three young daughters was disputed. Casimir managed to have Anna and Kunigunde legitimated by Pope Urban V on 5 December 1369. Jadwiga the younger was legitimated by Pope Gregory XI on 11 October 1371 (after Casimir's death).
Title and style Casimir's full title was: Casimir by the grace of God king of Poland and Rus' (Ruthenia), lord and heir of
complexity as time of computation is smaller when multitape Turing machines are used than when Turing machines with one tape are used. Random Access Machines allow one to even more decrease time complexity (Greenlaw and Hoover 1998: 226), while inductive Turing machines can decrease even the complexity class of a function, language or set (Burgin 2005). This shows that tools of activity can be an important factor of complexity. Varied meanings In several scientific fields, "complexity" has a precise meaning: In computational complexity theory, the amounts of resources required for the execution of algorithms is studied. The most popular types of computational complexity are the time complexity of a problem equal to the number of steps that it takes to solve an instance of the problem as a function of the size of the input (usually measured in bits), using the most efficient algorithm, and the space complexity of a problem equal to the volume of the memory used by the algorithm (e.g., cells of the tape) that it takes to solve an instance of the problem as a function of the size of the input (usually measured in bits), using the most efficient algorithm. This allows classification of computational problems by complexity class (such as P, NP, etc.). An axiomatic approach to computational complexity was developed by Manuel Blum. It allows one to deduce many properties of concrete computational complexity measures, such as time complexity or space complexity, from properties of axiomatically defined measures. In algorithmic information theory, the Kolmogorov complexity (also called descriptive complexity, algorithmic complexity or algorithmic entropy) of a string is the length of the shortest binary program that outputs that string. Minimum message length is a practical application of this approach. Different kinds of Kolmogorov complexity are studied: the uniform complexity, prefix complexity, monotone complexity, time-bounded Kolmogorov complexity, and space-bounded Kolmogorov complexity. An axiomatic approach to Kolmogorov complexity based on Blum axioms (Blum 1967) was introduced by Mark Burgin in the paper presented for publication by Andrey Kolmogorov. The axiomatic approach encompasses other approaches to Kolmogorov complexity. It is possible to treat different kinds of Kolmogorov complexity as particular cases of axiomatically defined generalized Kolmogorov complexity. Instead of proving similar theorems, such as the basic invariance theorem, for each particular measure, it is possible to easily deduce all such results from one corresponding theorem proved in the axiomatic setting. This is a general advantage of the axiomatic approach in mathematics. The axiomatic approach to Kolmogorov complexity was further developed in the book (Burgin 2005) and applied to software metrics (Burgin and Debnath, 2003; Debnath and Burgin, 2003). In information theory, information fluctuation complexity is the fluctuation of information about information entropy. It is derivable from fluctuations in the predominance of order and chaos in a dynamic system and has been used as a measure of complexity in many diverse fields. In information processing, complexity is a measure of the total number of properties transmitted by an object and detected by an observer. Such a collection of properties is often referred to as a state. In physical systems, complexity is a measure of the probability of the state vector of the system. 
This should not be confused with entropy; it is a distinct mathematical measure, one in which two distinct states are never conflated and considered equal, as is done for the notion of entropy in statistical mechanics. In dynamical systems, statistical complexity measures the size of the minimum program able to statistically reproduce the patterns (configurations) contained in the data set (sequence). While the algorithmic complexity implies a deterministic description of an object (it measures the information content of an individual sequence), the statistical complexity, like forecasting complexity, implies a statistical description, and refers to an ensemble of sequences generated by a certain source. Formally, the statistical complexity reconstructs a minimal model comprising the collection of all histories sharing a similar probabilistic future, and measures the entropy of the probability distribution of the states within this model. It is a computable and observer-independent measure based only on the internal dynamics of the system, and has been used in studies of emergence and self-organization. In mathematics, Krohn–Rhodes complexity is an important topic in the study of finite semigroups and automata. In Network theory complexity is the product of richness in the connections between components of a system, and defined by a very unequal distribution of certain measures (some elements being highly connected and some very few, see complex network). In software engineering, programming complexity is a measure of the interactions of the various elements of the software. This differs from the computational complexity described above in that it is a measure of the design of the software. In abstract sense – Abstract Complexity, is based on visual structures perception It is complexity of binary string defined as a square of features number divided by number of elements (0's and 1's). Features comprise here all distinctive arrangements of 0's and 1's. Though the features number have to be always approximated the definition is precise and meet intuitive criterion. Other fields introduce less precisely defined notions of complexity: A complex adaptive system has some or all of the following attributes: The number of parts (and types of parts) in the system and the number of relations between the parts is non-trivial – however, there is no general rule to separate "trivial" from "non-trivial"; The system has memory or includes feedback; The system can adapt itself according to its history or feedback; The relations between the system and its environment are non-trivial or non-linear; The system can be influenced by, or can adapt itself to, its environment; The system is highly sensitive to initial conditions. Study Complexity has always been a part of our environment, and therefore many scientific fields have dealt with complex systems and phenomena. From one perspective, that which is somehow complex – displaying variation without being random – is most worthy of interest given the rewards found in the depths of exploration. The use of the term complex is often confused with the term complicated. In today's systems, this is the difference between myriad connecting "stovepipes" and effective "integrated" solutions. This means that complex is the opposite of independent, while complicated is the opposite of simple. 
While this has led some fields to come up with specific definitions of complexity, there is a more recent movement to regroup observations from different fields to study complexity in itself, whether it appears in anthills, human brains, economic systems, or social systems. One such interdisciplinary group of fields is relational order theories. Topics Behaviour The behavior of a complex system is often said to be due to emergence and self-organization. Chaos theory has investigated the sensitivity of systems to variations in initial conditions as one cause of complex behaviour. Mechanisms Recent developments in artificial life, evolutionary computation and genetic algorithms have led to an increasing emphasis on complexity and complex adaptive systems. Simulations In social science, the study of the emergence of macro-properties from micro-properties is known as the macro-micro view in sociology. The topic is commonly recognized as social complexity, which is often related to the use of computer simulation in social science, i.e. computational sociology. Systems Systems theory has long been concerned with the study of complex systems (in recent times, complexity theory and complex systems have also been used as names of the field). These systems are present in the research of a variety of disciplines, including biology, economics, social studies and technology. Recently, complexity has become a natural domain of interest in real-world socio-cognitive systems and emerging systemics research. Complex systems tend to be high-dimensional, non-linear, and difficult to model. In specific circumstances, they may exhibit low-dimensional behaviour. Data In information theory, algorithmic information theory is concerned with the complexity of strings of data. Complex strings are harder to compress. While intuition tells us that this may depend on the codec used to compress a string (a codec could be theoretically created in any arbitrary language, including one in which the very small command "X" could cause the computer to output a very complicated string like "18995316"), any two Turing-complete languages can be implemented in each other, meaning that the
5 rules in Holy Living (1650), including abstaining from marrying "so long as she is with child by her former husband" and "within the year of mourning". Celibacy In the Roman Catholic Church, celibacy is vowed or promised as one of the evangelical counsels by the persons of the consecrated life. Furthermore, in 306, the Synod of Elvira prohibited clergy from marrying. This was unevenly enforced until the Second Lateran Council in 1139 and found its way into Canon law. Unmarried deacons promise celibacy to their local bishop when ordained. Eastern Catholic priests are permitted to marry, provided they do so before ordination and outside monastic life. Vows of chastity Vows of chastity can be taken either as part of an organised religious life (such as Roman Catholic Beguines and Beghards in the past) or on an individual basis: as a voluntary act of devotion, or as part of an ascetic lifestyle (often devoted to contemplation), or both. Some Protestant religious communities, such as the Bruderhof, take vows of chastity as part of the church membership process. Teaching by denomination Catholicism Chastity is a central and pivotal concept in Roman Catholic praxis. Chastity's importance in traditional Roman Catholic teaching stems from the fact that it is regarded as essential in maintaining and cultivating the unity of body with spirit and thus the integrity of the human being. It is also regarded as fundamental to the practice of the Catholic life because it involves an apprenticeship in self-mastery. When one attains mastery over one's passions, reason, will, and desire can work together harmoniously to do what is good. Lutheranism The theology of the body of the Lutheran Churches emphasizes the role of the Holy Spirit, who has sanctified the bodies of Christians to be God's temple. Many Lutheran monks and Lutheran nuns practice celibacy, though in other Lutheran religious orders it is not compulsory. The Church of Jesus Christ of Latter-day Saints In The Church of Jesus Christ of Latter-day Saints, chastity is regarded as very important; as its teachings state: "Physical intimacy between husband and wife is a beautiful and sacred part of God's plan for His children. It is an expression of love within marriage and allows husband and wife to participate in the creation of life. God has commanded that this sacred power be expressed only between a man and a woman who are legally married. The law of chastity applies to both men and women. It includes strict abstinence from sexual relations before marriage and complete fidelity and loyalty to one's spouse after marriage. "The law of chastity requires that sexual relations be reserved for marriage between a man and a woman. "In addition to reserving sexual intimacy for marriage, we obey the law of chastity by controlling our thoughts, words, and actions. Jesus Christ taught, "Ye have heard that it was said by them of old time, Thou shalt not commit adultery: but I say unto you, That whosoever looketh on a woman to lust after her hath committed adultery with her already in his heart" (Matthew 5:27–28)." Teachings of the Church of Jesus Christ of Latter-day Saints also hold that sexual expression within marriage is an important dimension of spousal bonding apart from, though not necessarily avoiding, its procreative result. Islam
meet a penalty. Multiplied for him is the punishment on the Day of Resurrection, and he will abide therein humiliated -Except for those who repent, believe and do righteous work. For them Allah will replace their evil deeds with good. And ever is Allah Forgiving and Merciful." (25:68-70) In a list of commendable deeds the Quran says: "Indeed, the Muslim men and Muslim women, the believing men and believing women, the obedient men and obedient women, the truthful men and truthful women, the patient men and patient women, the humble men and humble women, the charitable men and charitable women, the fasting men and fasting women, the men who guard their private parts and the women who do so, and the men who remember Allah often and the women who do so - for them Allah has prepared forgiveness and a great reward." (33:35) Because sexual desire usually develops before a man is financially capable of marriage, love of God and mindfulness of Him should be sufficient motive for chastity: "But let them who find not [the means for] marriage abstain [from sexual relations] until Allah enriches them from His bounty. And those who seek a contract [for eventual emancipation] from among whom your right hands possess - then make a contract with them if you know there is within them goodness and give them from the wealth of Allah which He has given you. And do not compel your slave girls to prostitution, if they desire chastity, to seek [thereby] the temporary interests of worldly life. And if someone should compel them, then indeed, Allah is [to them], after their compulsion, Forgiving and Merciful." (24:33) Sharia (Law) Chastity is mandatory in Islam. Sex outside a legitimate marriage is prohibited for both men and women, whether married or unmarried. The injunctions and prohibitions of Islam apply equally to men and women. The legal punishment for adultery is equal for men and women. Social hypocrisy in many societies throughout history has led to a double standard when considering sin committed by men versus sin committed by women, with society tending to be more lenient and permissive towards men, forgiving them for sins that would not be forgiven when committed by women. The Prophet's prescription to the youth was: "Those of you who own the means should marry, for this should keep their eyes uncraving and their chastity secure. Those who don't may practise fasting, for it curbs desire." (Ibn Massoud) Chastity is an attitude and a way of life. In Islam it is both a personal and a social value. A Muslim society should not condone relations entailing or conducive to sexual license. Social patterns and practices calculated to inflame sexual desire are frowned upon by Islam; such incitements to immorality include permissive ideologies, titillating works of art, and the failure to inculcate sound moral principles in the young. At the heart of such a view of human sexuality lies the conviction that the notion of personal freedom should never be misconstrued as the freedom to flout God's laws by overstepping the bounds which, in His infinite wisdom, He has set upon the relations of the sexes. Baháʼí Faith Chastity is highly prized in the Baháʼí Faith. Similar to other Abrahamic religions, Baháʼí teachings call for the restriction of sexual activity to that between a wife and husband in Baháʼí marriage, and discourage members
Yakov Zel'dovich in the early 1960s, and independently predicted by Robert Dicke at the same time. The first published recognition of the CMB radiation as a detectable phenomenon appeared in a brief paper by Soviet astrophysicists A. G. Doroshkevich and Igor Novikov, in the spring of 1964. In 1964, David Todd Wilkinson and Peter Roll, Dicke's colleagues at Princeton University, began constructing a Dicke radiometer to measure the cosmic microwave background. In 1964, Arno Penzias and Robert Woodrow Wilson at the Crawford Hill location of Bell Telephone Laboratories in nearby Holmdel Township, New Jersey had built a Dicke radiometer that they intended to use for radio astronomy and satellite communication experiments. On 20 May 1964 they made their first measurement clearly showing the presence of the microwave background, with their instrument having an excess 4.2K antenna temperature which they could not account for. After receiving a telephone call from Crawford Hill, Dicke said "Boys, we've been scooped." A meeting between the Princeton and Crawford Hill groups determined that the antenna temperature was indeed due to the microwave background. Penzias and Wilson received the 1978 Nobel Prize in Physics for their discovery. The interpretation of the cosmic microwave background was a controversial issue in the 1960s with some proponents of the steady state theory arguing that the microwave background was the result of scattered starlight from distant galaxies. Using this model, and based on the study of narrow absorption line features in the spectra of stars, the astronomer Andrew McKellar wrote in 1941: "It can be calculated that the 'rotational temperature' of interstellar space is 2 K." However, during the 1970s the consensus was established that the cosmic microwave background is a remnant of the big bang. This was largely because new measurements at a range of frequencies showed that the spectrum was a thermal, black body spectrum, a result that the steady state model was unable to reproduce. Harrison, Peebles, Yu and Zel'dovich realized that the early universe would have to have inhomogeneities at the level of 10−4 or 10−5. Rashid Sunyaev later calculated the observable imprint that these inhomogeneities would have on the cosmic microwave background. Increasingly stringent limits on the anisotropy of the cosmic microwave background were set by ground-based experiments during the 1980s. RELIKT-1, a Soviet cosmic microwave background anisotropy experiment on board the Prognoz 9 satellite (launched 1 July 1983) gave upper limits on the large-scale anisotropy. The NASA COBE mission clearly confirmed the primary anisotropy with the Differential Microwave Radiometer instrument, publishing their findings in 1992. The team received the Nobel Prize in physics for 2006 for this discovery. Inspired by the COBE results, a series of ground and balloon-based experiments measured cosmic microwave background anisotropies on smaller angular scales over the next decade. The primary goal of these experiments was to measure the scale of the first acoustic peak, which COBE did not have sufficient resolution to resolve. This peak corresponds to large scale density variations in the early universe that are created by gravitational instabilities, resulting in acoustical oscillations in the plasma. The first peak in the anisotropy was tentatively detected by the Toco experiment and the result was confirmed by the BOOMERanG and MAXIMA experiments. 
These measurements demonstrated that the geometry of the universe is approximately flat, rather than curved. They ruled out cosmic strings as a major component of cosmic structure formation and suggested cosmic inflation was the right theory of structure formation. The second peak was tentatively detected by several experiments before being definitively detected by WMAP, which has tentatively detected the third peak. As of 2010, several experiments to improve measurements of the polarization and the microwave background on small angular scales are ongoing. These include DASI, WMAP, BOOMERanG, QUaD, Planck spacecraft, Atacama Cosmology Telescope, South Pole Telescope and the QUIET telescope. Relationship to the Big Bang The cosmic microwave background radiation and the cosmological redshift-distance relation are together regarded as the best available evidence for the Big Bang theory. Measurements of the CMB have made the inflationary Big Bang theory the Standard Cosmological Model. The discovery of the CMB in the mid-1960s curtailed interest in alternatives such as the steady state theory. In the late 1940s Alpher and Herman reasoned that if there was a big bang, the expansion of the universe would have stretched the high-energy radiation of the very early universe into the microwave region of the electromagnetic spectrum, and down to a temperature of about 5 K. They were slightly off with their estimate, but they had the right idea. They predicted the CMB. It took another 15 years for Penzias and Wilson to stumble into discovering that the microwave background was actually there. The CMB gives a snapshot of the universe when, according to standard cosmology, the temperature dropped enough to allow electrons and protons to form hydrogen atoms, thereby making the universe nearly transparent to radiation because light was no longer being scattered off free electrons. When it originated some 380,000 years after the Big Bang—this time is generally known as the "time of last scattering" or the period of recombination or decoupling—the temperature of the universe was about 3000 K. This corresponds to an energy of about 0.26 eV, which is much less than the 13.6 eV ionization energy of hydrogen. Since decoupling, the color temperature of the background radiation has dropped by an average factor of 1090 due to the expansion of the universe. As the universe expands, the CMB photons are redshifted, causing them to decrease in energy. The color temperature of this radiation stays inversely proportional to a parameter that describes the relative expansion of the universe over time, known as the scale length. The color temperature Tr of the CMB as a function of redshift, z, can be shown to be proportional to the color temperature of the CMB as observed in the present day (2.725 K or 0.2348 meV): Tr = 2.725 ⋅ (1 + z) For details about the reasoning that the radiation is evidence for the Big Bang, see Cosmic background radiation of the Big Bang. Primary anisotropy The anisotropy, or directional dependency, of the cosmic microwave background is divided into two types: primary anisotropy, due to effects that occur at the surface of last scattering and before; and secondary anisotropy, due to effects such as interactions of the background radiation with intervening hot gas or gravitational potentials, which occur between the last scattering surface and the observer. 
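The temperature–redshift relation quoted above, Tr = 2.725 ⋅ (1 + z), can be evaluated directly. The short sketch below plugs in z ≈ 1089, a commonly quoted decoupling redshift chosen here for illustration because it is consistent with the factor-of-1090 drop mentioned earlier:

```python
T0_KELVIN = 2.725  # present-day CMB temperature, as quoted in the text

def cmb_temperature(z: float) -> float:
    # T_r(z) = T_0 * (1 + z): the CMB temperature scales with (1 + redshift).
    return T0_KELVIN * (1.0 + z)

print(cmb_temperature(0))     # 2.725 K today
print(cmb_temperature(1089))  # ~2970 K near decoupling, matching the ~3000 K figure above
```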
The structure of the cosmic microwave background anisotropies is principally determined by two effects: acoustic oscillations and diffusion damping (also called collisionless damping or Silk damping). The acoustic oscillations arise because of a conflict in the photon–baryon plasma in the early universe. The pressure of the photons tends to erase anisotropies, whereas the gravitational attraction of the baryons, moving at speeds much slower than light, makes them tend to collapse to form overdensities. These two effects compete to create acoustic oscillations, which give the microwave background its characteristic peak structure. The peaks correspond, roughly, to resonances in which the photons decouple when a particular mode is at its peak amplitude. The peaks contain interesting physical signatures. The angular scale of the first peak determines the curvature of the universe (but not the topology of the universe). The next peak—ratio of the odd peaks to the even peaks—determines the reduced baryon density. The third peak can be used to get information about the dark-matter density. The locations of the peaks give important information about the nature of the primordial density perturbations. There are two fundamental types of density perturbations called adiabatic and isocurvature. A general density perturbation is a mixture of both, and different theories that purport to explain the primordial density perturbation spectrum predict different mixtures. Adiabatic density perturbationsIn an adiabatic density perturbation, the fractional additional number density of each type of particle (baryons, photons ...) is the same. That is, if at one place there is a 1% higher number density of baryons than average, then at that place there is a 1% higher number density of photons (and a 1% higher number density in neutrinos) than average. Cosmic inflation predicts that the primordial perturbations are adiabatic. Isocurvature density perturbationsIn an isocurvature density perturbation, the sum (over different types of particle) of the fractional additional densities is zero. That is, a perturbation where at some spot there is 1% more energy in baryons than average, 1% more energy in photons than average, and 2% energy in neutrinos than average, would be a pure isocurvature perturbation. Hypothetical cosmic strings would produce mostly isocurvature primordial perturbations. The CMB spectrum can distinguish between these two because these two types of perturbations produce different peak locations. Isocurvature density perturbations produce a series of peaks whose angular scales (ℓ values of the peaks) are roughly in the ratio 1 : 3 : 5 : ..., while adiabatic density perturbations produce peaks whose locations are in the ratio 1 : 2 : 3 : ... Observations are consistent with the primordial density perturbations being entirely adiabatic, providing key support for inflation, and ruling out many models of structure formation involving, for example, cosmic strings. Collisionless damping is caused by two effects, when the treatment of the primordial plasma as fluid begins to break down: the increasing mean free path of the photons as the primordial plasma becomes increasingly rarefied in an expanding universe, the finite depth of the last scattering surface (LSS), which causes the mean free path to increase rapidly during decoupling, even while some Compton scattering is still occurring. 
These effects contribute about equally to the suppression of anisotropies at small scales and give rise to the characteristic exponential damping tail seen in the very small angular scale anisotropies. The depth of the LSS refers to the fact that the decoupling of the photons and baryons does not happen instantaneously, but instead requires an appreciable fraction of the age of the universe up to that era. One method of quantifying how long this process took uses the photon visibility function (PVF). This function is defined so that, denoting the PVF by P(t), the probability that a CMB photon last scattered between time t and t + dt is given by P(t)dt. The maximum of the PVF (the time when it is most likely that a given CMB photon last scattered) is known quite precisely. The first-year WMAP results put the time at which P(t) has a maximum as 372,000 years. This is often taken as the "time" at which the CMB formed. However, to figure out how long it took the photons and baryons to decouple, we need a measure of the width of the PVF. The WMAP team finds that the PVF is greater than half of its maximal value (the "full width at half maximum", or FWHM) over an interval of 115,000 years. By this measure, decoupling took place over roughly 115,000 years, and when it was complete, the universe was roughly 487,000 years old. Late time anisotropy Since the CMB came into existence, it has apparently been modified by several subsequent physical processes, which are collectively referred to as late-time anisotropy, or secondary anisotropy. When the CMB photons became free to travel unimpeded, ordinary matter in the universe was mostly in the form of neutral hydrogen and helium atoms. However, observations of galaxies today seem to indicate that most of the volume of the intergalactic medium (IGM) consists of ionized material (since there are few absorption lines due to hydrogen atoms). This implies a period of reionization during which some of the material of the universe was broken into hydrogen ions. The CMB photons are scattered by free charges such as electrons that are not bound in atoms. In an ionized universe, such charged particles have been liberated from neutral atoms by ionizing (ultraviolet) radiation. Today these free charges are at sufficiently low density in most of the volume of the universe that they do not measurably affect the CMB. However, if the IGM was ionized at very early times when the universe was still denser, then there are two main effects on the CMB: Small scale anisotropies are erased. (Just as when looking at an object through fog, details of the object appear fuzzy.) The physics of how photons are scattered by free electrons (Thomson scattering) induces polarization anisotropies on large angular scales. This broad angle polarization is correlated with the broad angle temperature perturbation. Both of these effects have been observed by the WMAP spacecraft, providing evidence that the universe was ionized at very early times, at a redshift of more than 17. The detailed provenance of this early ionizing radiation is still a matter of scientific debate. It may have included starlight from the very first population of stars (population III stars), supernovae when these first stars reached the end of their lives, or the ionizing radiation produced by the accretion disks of massive black holes.
The time following the emission of the cosmic microwave background—and before the observation of the first stars—is semi-humorously referred to by cosmologists as the Dark Age, and is a period which is under intense study by astronomers (see 21 centimeter radiation). Two other effects which occurred between reionization and our observations of the cosmic microwave background, and which appear to cause anisotropies, are the Sunyaev–Zel'dovich effect, where a cloud of high-energy electrons scatters the radiation, transferring some of its energy to the CMB photons, and the Sachs–Wolfe effect, which causes photons from the Cosmic Microwave Background to be gravitationally redshifted or blueshifted due to changing gravitational fields. Polarization The cosmic microwave background is polarized at the level of a few microkelvin. There are two types of polarization, called E-modes and B-modes. This is in analogy to electrostatics, in which the electric field (E-field) has a vanishing curl and the magnetic field (B-field) has a vanishing divergence. The E-modes arise naturally from Thomson scattering in a heterogeneous plasma. The B-modes are not produced by standard scalar type perturbations. Instead they can be created by two mechanisms: the first one is by gravitational lensing of E-modes, which has been measured by the South Pole Telescope in 2013; the second one is from gravitational waves arising from cosmic inflation. Detecting the B-modes is extremely difficult, particularly as the degree of foreground contamination is unknown, and the weak gravitational lensing signal mixes the relatively strong E-mode signal with the B-mode signal. E-modes E-modes were first seen in 2002 by the Degree Angular Scale Interferometer (DASI). B-modes Cosmologists predict two types of B-modes, the first generated during cosmic inflation shortly after the big bang, and the second generated by gravitational lensing at later times. Primordial gravitational waves Primordial gravitational waves are gravitational waves that could be observed in the polarisation of the cosmic microwave background and having their origin in the early universe. Models of cosmic inflation predict that such gravitational waves should appear; thus, their detection supports the theory of inflation, and their strength can confirm and exclude different models of inflation. It is the result of three things: inflationary expansion of space itself, reheating after inflation, and turbulent fluid mixing of matter and radiation. On 17 March 2014 it was announced that the BICEP2 instrument had detected the first type of B-modes, consistent with inflation and gravitational waves in the early universe at the level of , which is the amount of power present in gravitational waves compared to the amount of power present in other scalar density perturbations in the very early universe. Had this been confirmed it would have provided strong evidence for cosmic inflation and the Big Bang and against the ekpyrotic model of Paul Steinhardt and Neil Turok. However, on 19 June 2014, considerably lowered confidence in confirming the findings was reported and on 19 September 2014 new results of the Planck experiment reported that the results of BICEP2 can be fully attributed to cosmic dust. Gravitational lensing The second type of B-modes was discovered in 2013 using the South Pole Telescope with help from the Herschel Space Observatory. In October 2014, a measurement of the B-mode polarization at 150 GHz was published by the POLARBEAR experiment. 
Compared to BICEP2, POLARBEAR focuses on a smaller patch of the sky and is less susceptible to dust effects. The team reported that POLARBEAR's measured B-mode polarization was of cosmological origin (and not just due to dust) at a 97.2% confidence level. Microwave background observations Subsequent to the discovery of the CMB, hundreds of cosmic microwave background experiments have been conducted to measure and characterize the signatures of the radiation. The most famous experiment is probably the NASA Cosmic Background Explorer (COBE) satellite that orbited in 1989–1996 and which detected and quantified the large scale anisotropies at the limit of its detection capabilities. Inspired by the initial
COBE results of an extremely isotropic and homogeneous background, a series of ground- and balloon-based experiments quantified CMB anisotropies on smaller angular scales over the next decade. The primary goal of these experiments was to measure the angular scale of the first acoustic peak, for which COBE did not have sufficient resolution. These measurements were able to rule out cosmic strings as the leading theory of cosmic structure formation, and suggested cosmic inflation was the right theory.
During the 1990s, the first peak was measured with increasing sensitivity and by 2000 the BOOMERanG experiment reported that the highest power fluctuations occur at scales of approximately one degree. Together with other cosmological data, these results implied that the geometry of the universe is flat. A number of ground-based interferometers provided measurements of the fluctuations with higher accuracy over the next three years, including the Very Small Array, Degree Angular Scale Interferometer (DASI), and the Cosmic Background Imager (CBI). DASI made the first detection of the polarization of the CMB and the CBI provided the first E-mode polarization spectrum with compelling evidence that it is out of phase with the T-mode spectrum. All-sky mollweide map of the CMB, created from Planck spacecraft data In June 2001, NASA launched a second CMB space mission, WMAP, to make much more precise measurements of the large scale anisotropies over the full sky. WMAP used symmetric, rapid-multi-modulated scanning, rapid switching radiometers to minimize non-sky signal noise. The first results from this mission, disclosed in 2003, were detailed measurements of the angular power spectrum at a scale of less than one degree, tightly constraining various cosmological parameters. The results are broadly consistent with those expected from cosmic inflation as well as various other competing theories, and are available in detail at NASA's data bank for Cosmic Microwave Background (CMB) (see links below). Although WMAP provided very accurate measurements of the large scale angular fluctuations in the CMB (structures about as broad in the sky as the moon), it did not have the angular resolution to measure the smaller scale fluctuations which had been observed by former ground-based interferometers. A third space mission, the ESA (European Space Agency) Planck Surveyor, was launched in May 2009 and performed an even more detailed investigation until it was shut down in October 2013. Planck employed both HEMT radiometers and bolometer technology and measured the CMB at a smaller scale than WMAP. Its detectors were trialled in the Antarctic Viper telescope as ACBAR (Arcminute Cosmology Bolometer Array Receiver) experiment—which has produced the most precise measurements at small angular scales to date—and in the Archeops balloon telescope. On 21 March 2013, the European-led research team behind the Planck cosmology probe released the mission's all-sky map (565x318 jpeg, 3600x1800 jpeg) of the cosmic microwave background. The map suggests the universe is slightly older than researchers expected. According to the map, subtle fluctuations in temperature were imprinted on the deep sky when the cosmos was about years old. The imprint reflects ripples that arose as early, in the existence of the universe, as the first nonillionth of a second. Apparently, these ripples gave rise to the present vast cosmic web of galaxy clusters and dark matter. Based on the 2013 data, the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. On 5 February 2015, new data was released by the Planck mission, according to which the age of the universe is billion years old and the Hubble constant was measured to be . Additional ground-based instruments such as the South Pole Telescope in Antarctica and the proposed Clover Project, Atacama Cosmology Telescope and the QUIET telescope in Chile will provide additional data not available from satellite observations, possibly including the B-mode polarization. 
Data reduction and analysis Raw CMBR data, even from space vehicles such as WMAP or Planck, contain foreground effects that completely obscure the fine-scale structure of the cosmic microwave background. The fine-scale structure is superimposed on the raw CMBR data but is too small to be seen at the scale of the raw data. The most prominent of the foreground effects is the dipole anisotropy caused by the Sun's motion relative to the CMBR background. The dipole anisotropy and others due to Earth's annual motion relative to the Sun and numerous microwave sources in the galactic plane and elsewhere must be subtracted out to reveal the extremely tiny variations characterizing the fine-scale structure of the CMBR background. The detailed analysis of CMBR data to produce maps, an angular power spectrum, and ultimately cosmological parameters is a complicated, computationally difficult problem. Computing a power spectrum from a map is in principle a simple Fourier transform: the map of the sky is decomposed into spherical harmonics, T(θ, φ) = Σℓ Σm aℓm Yℓm(θ, φ), where the ℓ = 0 term measures the mean temperature and the higher-ℓ terms account for the fluctuations; here Yℓm(θ, φ) refers to a spherical harmonic, ℓ is the multipole number, and m is the azimuthal number. By applying the angular correlation function, the sum can be reduced to an expression that involves only ℓ and the power spectrum term Cℓ ≡ ⟨|aℓm|²⟩. The angled brackets indicate the average with respect to all observers in the universe; since the universe is homogeneous and isotropic, there is no preferred observing direction, and thus Cℓ is independent of m. Different choices of ℓ correspond to multipole moments of the CMB. In practice it is hard to take the effects of noise and foreground sources into account. In particular, these foregrounds are dominated by galactic emissions such as bremsstrahlung, synchrotron, and dust that emit in the microwave band; in practice, the galaxy has to be removed, resulting in a CMB map that is not a full-sky map. In addition, point sources like galaxies and clusters represent another source of foreground which must be removed so as not to distort the short-scale structure of the CMB power spectrum. Constraints on many cosmological parameters can be obtained from their effects on the power spectrum, and results are often calculated using Markov chain Monte Carlo sampling techniques. CMBR monopole term (ℓ = 0) When ℓ = 0, the Yℓm term reduces to a constant, and what is left is just the mean temperature of the CMB. This "mean" is called the CMB monopole, and it is observed to have an average temperature of about Tγ = 2.7255 ± 0.0006 K with one standard deviation confidence. The accuracy of this mean temperature may be impaired by the diverse measurements done by different mapping measurements. Such measurements demand absolute temperature devices, such as the FIRAS instrument on the COBE satellite. The measured kTγ is equivalent to 0.234 meV or 4.6 × 10−10 me c². The photon number density of a blackbody having such temperature is . Its energy density is and the ratio to the critical density is Ωγ = 5.38 × 10−5. CMBR dipole anisotropy (ℓ = 1) The CMB dipole represents the largest anisotropy, which is in the first spherical harmonic (ℓ = 1). When ℓ = 1, the Yℓm term reduces to one cosine function and thus encodes an amplitude fluctuation. The amplitude of the CMB dipole is around 3.3621 ± 0.0010 mK. Since the universe is presumed to be homogeneous and isotropic, an observer should see the blackbody spectrum with temperature T at every point in the sky.
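Under the standard kinematic interpretation of the dipole, the amplitude quoted above (about 3.3621 mK against the 2.7255 K monopole) fixes, to first order in v/c, the speed of the solar system relative to the CMB rest frame. The following sketch simply carries out that arithmetic with the values from the text:

```python
C_KM_S = 299_792.458   # speed of light in km/s
T0_K = 2.7255          # CMB monopole temperature quoted above
DIPOLE_K = 3.3621e-3   # CMB dipole amplitude quoted above (3.3621 mK)

# To first order in v/c, the Doppler-induced dipole satisfies dT/T ~ v/c,
# so the inferred speed relative to the CMB rest frame is:
v_km_s = C_KM_S * DIPOLE_K / T0_K
print(round(v_km_s, 1))  # ~369.8 km/s, the conventional "CMB dipole" velocity
```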
The spectrum of the dipole has been confirmed to be the differential of a blackbody spectrum. The CMB dipole is frame-dependent. The CMB dipole moment could also be interpreted as arising from the peculiar motion of the Earth relative to the CMB. Its amplitude depends on time due to the Earth's orbit about the barycenter of the solar system. This enables us to add a time-dependent term to the dipole expression. The modulation of this term has a period of 1 year, which fits the observations made by COBE FIRAS. The dipole moment does not encode any primordial information. From the CMB data, it is seen that the Sun appears to be moving at relative to the reference frame of the CMB (also called the CMB rest frame, or the frame of reference in which there is no motion through the CMB). The Local Group — the galaxy group that includes our own Milky Way galaxy — appears to be moving at in the direction of galactic longitude ℓ = , b = . This motion results in an anisotropy of the data (CMB appearing slightly warmer in the direction of movement than in the opposite direction). The standard interpretation of this temperature variation is a simple velocity redshift and blueshift due to motion relative to the CMB, but alternative cosmological models can explain some fraction of the observed dipole temperature distribution in the CMB. A recent study of the Wide-field Infrared Survey Explorer questions the kinematic interpretation of the CMB anisotropy with high statistical confidence. Multipole (ℓ ≥ 2) The temperature variation in the CMB temperature maps at higher multipoles, or ℓ ≥ 2, is considered to be the result of perturbations of the density in the early Universe, before the recombination epoch. Before recombination, the Universe consisted of a hot, dense plasma of electrons and baryons. In such a hot dense environment, electrons and protons could not form any neutral atoms. The baryons in the early Universe remained highly ionized and so were tightly coupled with photons through the effect of Thomson scattering. These phenomena caused the pressure and gravitational effects to act against each other, and triggered fluctuations in the photon-baryon plasma. Quickly after the recombination epoch, the rapid expansion of the universe caused the plasma to cool down and these fluctuations were "frozen into" the CMB maps we observe today. This occurred at a redshift of around z ≈ 1100. Other anomalies With the increasingly precise data provided by WMAP, there have been a number of claims that the CMB exhibits anomalies, such as very large scale anisotropies, anomalous alignments, and non-Gaussian distributions. The most longstanding of these is the low-ℓ multipole controversy. Even in the COBE map, it was observed that the quadrupole (ℓ = 2, spherical harmonic) has a low amplitude compared to the predictions of the Big Bang. In particular, the quadrupole and octupole (ℓ = 3) modes appear to have an unexplained alignment with each other and with both the ecliptic plane and equinoxes. A number of groups have suggested that this could be the signature of new physics at the greatest observable scales; other groups suspect systematic errors in the data. Ultimately, due to the foregrounds and the cosmic variance problem, the greatest modes will never be as well measured as the small angular scale modes. The analyses were performed on two maps that have had the foregrounds removed as far as possible: the "internal linear combination" map of the WMAP collaboration and a similar map prepared by Max Tegmark and others.
Later analyses have pointed out that these are the modes most susceptible to foreground contamination from synchrotron, dust, and bremsstrahlung emission, and from experimental uncertainty in the monopole and dipole. A full Bayesian analysis of the WMAP power spectrum demonstrates that the quadrupole prediction of Lambda-CDM cosmology is consistent with the data at the 10% level and that the observed octupole is not remarkable. Carefully accounting for the procedure used to remove the foregrounds from the full sky map further reduces the significance of the alignment by ~5%. Recent observations with the Planck telescope, which is very much more sensitive than WMAP and has a higher angular resolution, record the same anomaly, and so instrumental error (but not foreground contamination) appears to be ruled out. Coincidence is a possible explanation: the chief scientist of WMAP, Charles L. Bennett, suggested that coincidence and human psychology were involved, "I do think there is
foreign legal systems, even where no explicit comparison is undertaken. The importance of comparative law has increased enormously in the present age of internationalism, economic globalization, and democratization. History The origins of modern comparative law can be traced back to Gottfried Wilhelm Leibniz in 1667 in his Latin-language book Nova Methodus Discendae Docendaeque Iurisprudentiae (New Methods of Studying and Teaching Jurisprudence). Chapter 7 (Presentation of Law as the Project for all Nations, Lands and Times) introduces the idea of classifying legal systems into several families. Notably, a few years later, Leibniz introduced the idea of language families. Although every legal system is unique, comparative law, through studies of their similarities and differences, allows for the classification of legal systems, with law families as the basic level of classification. The main differences between law families are found in the source(s) of law, the role of court precedents, and the origin and development of the legal system. Montesquieu is generally regarded as an early founding figure of comparative law. His comparative approach is obvious in the following excerpt from Chapter III of Book I of his masterpiece, De l'esprit des lois (1748; first translated by Thomas Nugent, 1750): Also, in Chapter XI (entitled 'How to compare two different Systems of Laws') of Book XXIX, discussing the French and English systems for punishment of false witnesses, he advises that "to determine which of those systems is most agreeable to reason, we must take them each as a whole and compare them in their entirety." Yet another place where Montesquieu's comparative approach is evident is the following, from Chapter XIII of Book XXIX: The modern founding figure of comparative and anthropological jurisprudence was Sir Henry Maine, a British jurist and legal historian. In his 1861 work Ancient Law: Its Connection with the Early History of Society, and Its Relation to Modern Ideas, he set out his views on the development of legal institutions in primitive societies and engaged in a comparative discussion of Eastern and Western legal traditions. This work placed comparative law in its historical context and was widely read and influential. The first university course on the subject was established at the University of Oxford in 1869, with Maine taking up the position of professor. Comparative law was brought to the US by Rudolf Schlesinger, a legal scholar fleeing persecution in Germany. Schlesinger eventually became professor of comparative law at Cornell Law School, helping to spread the discipline throughout the US. Purpose Comparative law is an academic discipline that involves the study of legal systems, including their constitutive elements and how they differ, and how their elements combine into a system. Several disciplines have developed as separate branches of comparative law, including comparative constitutional law, comparative administrative law, comparative civil law (in the sense of the law of torts, contracts, property and obligations), comparative commercial law (in the sense of business organisations and trade), and comparative criminal law. Studies of these specific areas may be viewed as micro- or macro-comparative legal analysis, i.e. detailed comparisons of two countries, or broad-ranging studies of several countries. Comparative civil law studies, for instance, show how the law of private relations is organised, interpreted and used in different systems or countries.
The purposes of comparative law are: To attain a deeper knowledge of the legal systems in effect To perfect the legal systems in effect Possibly, to contribute to a unification of legal systems, of a smaller or larger scale (cf. for instance, the UNIDROIT initiative) Relationship with other legal subjects Comparative law is different from general jurisprudence (i.e. legal theory) and from public and private international law. However, it helps inform all of these areas of normativity. For example, comparative law can help international legal institutions, such as those of the United Nations System, in analyzing the laws of different countries regarding their treaty obligations. Comparative law would be applicable to private international law when developing an approach to interpretation in a conflicts analysis. Comparative law may contribute to legal theory by creating categories and concepts of general application. Comparative law may also provide insights into the question of legal transplants, i.e. the transplanting of law and legal institutions from one system to another. The notion of legal transplants was coined by Alan Watson, one of the world's renowned legal scholars specializing in comparative law. Also, the usefulness of comparative law for sociology of law and law and economics (and vice versa) is considerable. The comparative study of the various legal systems may show how different legal regulations for the same problem function in practice. Conversely, sociology of law and law & economics may help comparative law answer questions such as: How do regulations in different legal systems really function in the respective societies? Are legal rules comparable? How do the similarities and differences between legal systems get explained? Classifications of legal systems Arminjon, Nolde, and Wolff Arminjon, Nolde, and Wolff believed that, for purposes of classifying the (then) contemporary legal systems of the world, it was necessary that those systems be studied in themselves, irrespective of external factors such as geographical ones. They proposed the classification of legal systems into seven groups, or so-called 'families', in particular the French group, under which they also included the countries that codified their law either in the 19th century or in the first half of the
20th century, using the Napoleonic code civil of year 1804 as a model; this includes countries and jurisdictions such as Italy, Portugal, Spain, Romania, Louisiana, various South American states such as Brazil, Quebec, Saint Lucia, the Ionian Islands, Egypt, and Lebanon German group Scandinavian group, comprising the laws of Denmark, Norway, Sweden, Finland, and Iceland English group, including, inter alia, England, the United States, Canada, Australia, and New Zealand Russian group Islamic group (used in the Muslim world) Hindu group David René David proposed the classification of legal systems, according to the different ideology inspiring each one, into five groups or families: Western laws, a group subdivided into the: Civil law subgroup (whose jurisprudence is based on post-classical Roman Law) Common law subgroup (originating in English law) Soviet Law Muslim Law Hindu Law Chinese Law Jewish Law Especially with respect to the aggregating by David of the Civil and Common laws into a single family, David argued that the antithesis between the Common law and Civil law systems, is of a technical rather than of an ideological nature. Of a different kind is, for instance, the antithesis between, say, Italian and American laws, and of a different kind than between the Soviet, Muslim, Hindu, or Chinese laws. According to David, the Civil law legal systems included those countries where legal science was formulated according to Roman law, whereas Common law countries are those dominated by judge-made law. The characteristics that he believed uniquely differentiate the Western legal family from the other four are: liberal democracy capitalist economy Christian religion Zweigert and Kötz Konrad Zweigert and Hein Kötz propose a different, multidimensional methodology for categorizing laws, i.e. for ordering families of laws. They maintain that, to determine such families, five criteria should be taken into account, in particular: the historical background, the characteristic way of thought, the different institutions, the recognized sources of law, and the dominant ideology. Using the aforementioned criteria, they classify the legal systems of the world into six families: Roman family German family Common law family Nordic family Family of the laws of the Far East (China and Japan) Religious family (Jewish, Muslim, and Hindu law) Up to the second German edition of their introduction to comparative law, Zweigert and Kötz also used to mention Soviet or socialist law as another family of laws. Professional associations American Association of Law Libraries American Society of Comparative Law International Association of Judicial Independence and World Peace International Association of Procedural Law International Law Association Comparative law periodicals American Journal of Comparative Law German Law Journal Journal of Comparative Legislation and International Law The Journal of Comparative Law See also Annual Bulletin of the Comparative Law Bureau (American Bar Association: 1908–1914, 1933), the first comparative law journal in the U.S. Comparative criminal justice Comparative law wiki, online wikis where jurists can complete questionnaires regarding their home legal system Friedrich Carl von Savigny (1779–1861) – a German legal scholar who wrote on comparative law List of national legal systems Rule according to higher law Rule of law Notes References Billis, Emmanouil. 
'On the methodology of comparative criminal law research: Paradigmatic approaches to the research method of functional comparison and the heuristic device of ideal types', Maastricht Journal of European and Comparative Law 6 (2017): 864–881. H Collins, ‘Methods and Aims of Comparative Contract Law’ (1989) 11 OJLS 396. Cotterrell, Roger (2006). Law, Culture and Society: Legal Ideas in the Mirror of Social Theory. Aldershot: Ashgate. De Cruz, Peter (2007) Comparative Law in a Changing World, 3rd edn (1st edn 1995). London: Routledge-Cavendish. Donahue, Charles (2008) ‘Comparative Law before the “Code Napoléon”’ in The Oxford Handbook of Comparative Law. Eds. Mathias Reimann & Reinhard Zimmermann. Oxford: Oxford University Press. Glanert, Simone (2008) 'Speaking Language to Law: The Case of Europe', Legal Studies 28: 161–171. Hamza, Gabor (1991). Comparative Law and Antiquity. Budapest: Akademiai Kiado. Husa, Jaakko. A New Introduction to Comparative Law. Oxford–Portland (Oregon): Hart, 2015. O Kahn-Freund, ‘Comparative Law as an Academic Subject’ (1966) 82 LQR 40. Kischel, Uwe. Comparative Law. Trans. Andrew Hammel. Oxford: Oxford University Press, 2019. Legrand, Pierre (1996). 'European Legal Systems Are Not Converging', International and Comparative Law Quarterly 45: 52–81. Legrand, Pierre (1997). 'Against a European Civil Code', Modern Law Review 60: 44–63. Legrand, Pierre and Roderick Munday, eds. (2003). Comparative Legal Studies: Traditions and Transitions. Cambridge: Cambridge University Press. Legrand, Pierre (2003). 'The Same and the Different', in Comparative Legal Studies: Traditions and Transitions. Eds. Pierre Legrand & Roderick Munday. Cambridge: Cambridge University Press. Leibniz, Gottfried Wilhelm (2017) The New Method of Learning and Teaching Jurisprudence... Translation of the 1667 Frankfurt Edition. Clark, NJ: Talbot Publishing. Lundmark, Thomas, Charting the divide between common and civil law, Oxford University Press, 2012. MacDougal, M.S. ‘The Comparative Study of Law for Policy Purposes: Value Clarification as an Instrument of Democratic World Order’ (1952) 61 Yale Law Journal 915 (difficulties and requirements of good comparative law). Menski, Werner (2006) Comparative Law in a Global Context: the Legal Traditions of Asia and Africa. Cambridge: Cambridge University Press. Orucu, Esin and David Nelken, eds. Comparative Law: A Handbook. Oxford: Hart, 2007. Reimann, Mathias & Reinhard Zimmermann, eds. The Oxford Handbook of Comparative Law, 2nd edn. Oxford: Oxford University Press, 2019 (1st edn. 2008). Samuel, Geoffrey. An Introduction to Comparative Law Theory and Method. Oxford: Hart, 2014. Siems, Mathias. Comparative Law. Cambridge: Cambridge University Press, 2014. Watson, Alan. Legal Transplants: An Approach to Comparative Law, 2nd edn. University of Georgia Press, 1993. Zweigert, Konrad & Hein Kötz. An Introduction to Comparative Law, 3rd edn.
current working directory in operating systems continuous delivery, a software development design practice continuous deployment, a software development design practice collision detection, as in CSMA/CD Mathematics cd (elliptic function), one of Jacobi's elliptic functions Other uses in science and technology Cadmium or Cd, a chemical element Candela or cd, a unit of luminous intensity -CD, the North American call sign suffix for Class A low-power television stations operating with digital signals Circular dichroism, a form of spectroscopy Critical Dimension, the minimum feature size that a projection system can print in photolithography Drag coefficient or cd, a dimensionless quantity used to quantify the drag of an object in a fluid Businesses and organizations Government, military, and political Canadian Forces Decoration, identified by the post-nominal letters CD Centre Democrats (Denmark), a former Danish political party Centre Democrats (Netherlands), a former political party of the Netherlands Civil defense, an effort to protect the citizens of a state from military attack and natural disasters Community of Democracies, an intergovernmental organization of democracies and democratizing countries Conference on Disarmament, an international forum that negotiates multilateral arms control and disarmament agreements Corps Diplomatique, the
business and organizations Certificate of deposit, a bank account in the United States with a fixed maturity date České dráhy or ČD, a railway operator of the Czech Republic Commander of the Order of Distinction, a rank in the Jamaican Orders of Societies of Honour Places Central District, Seattle, a district in Seattle Democratic Republic of the Congo, by ISO 3166-1 alpha-2 country code .cd, the Internet domain of the Democratic Republic of the Congo cd., abbreviation for caddesi, street, in Turkish Other uses 400 (number), written CD in Roman numerals 205 (number), written CD in hexadecimal AD 400 (CD), a year of the Common Era "CD", a song by T2 (band) Geely CD, a coupe automobile made by Geely Automobile Cairo Damascus or Damascus Document, a text found among the Dead Sea Scrolls Committee Draft, a status in the International Organization for Standardization Companion dog (title),
for example, might be metaphorically said to "exist in cyberspace". According to this interpretation, events taking place on the Internet are not happening in the locations where participants or servers are physically located, but "in cyberspace". The philosopher Michel Foucault used the term heterotopias, to describe such spaces which are simultaneously physical and mental. Firstly, cyberspace describes the flow of digital data through the network of interconnected computers: it is at once not "real", since one could not spatially locate it as a tangible object, and clearly "real" in its effects. There have been several attempts to create a concise model about how cyberspace works since it is not a physical thing that can be looked at. Secondly, cyberspace is the site of computer-mediated communication (CMC), in which online relationships and alternative forms of online identity were enacted, raising important questions about the social psychology of Internet use, the relationship between "online" and "offline" forms of life and interaction, and the relationship between the "real" and the virtual. Cyberspace draws attention to remediation of culture through new media technologies: it is not just a communication tool but a social destination and is culturally significant in its own right. Finally, cyberspace can be seen as providing new opportunities to reshape society and culture through "hidden" identities, or it can be seen as borderless communication and culture. The "space" in cyberspace has more in common with the abstract, mathematical meanings of the term (see space) than physical space. It does not have the duality of positive and negative volume (while in physical space, for example, a room has the negative volume of usable space delineated by positive volume of walls, Internet users cannot enter the screen and explore the unknown part of the Internet as an extension of the space they are in), but spatial meaning can be attributed to the relationship between different pages (of books as well as web servers), considering the unturned pages to be somewhere "out there." The concept of cyberspace, therefore, refers not to the content being presented to the surfer, but rather to the possibility of surfing among different sites, with feedback loops between the user and the rest of the system creating the potential to always encounter something unknown or unexpected. Video games differ from text-based communication in that on-screen images are meant to be figures that actually occupy a space and the animation shows the movement of those figures. Images are supposed to form the positive volume that delineates the empty space. A game adopts the cyberspace metaphor by engaging more players in the game, and then figuratively representing them on the screen as avatars. Games do not have to stop at the avatar-player level, but current implementations aiming for more immersive playing space (i.e. Laser tag) take the form of augmented reality rather than cyberspace, fully immersive virtual realities remaining impractical. Although the more radical consequences of the global communication network predicted by some cyberspace proponents (i.e. the diminishing of state influence envisioned by John Perry Barlow) failed to materialize and the word lost some of its novelty appeal, it remains current . 
Some virtual communities explicitly refer to the concept of cyberspace, for example Linden Lab calling their customers "Residents" of Second Life, while all such communities can be positioned "in cyberspace" for explanatory and comparative purposes (as did Sterling in The Hacker Crackdown, followed by many journalists), integrating the metaphor into a wider cyber-culture. The metaphor has been useful in helping a new generation of thought leaders to reason through new military strategies around the world, led largely by the US Department of Defense (DoD). The use of cyberspace as a metaphor has had its limits, however, especially in areas where the metaphor becomes confused with physical infrastructure. It has also been critiqued as being unhelpful for falsely employing a spatial metaphor to describe what is inherently a network. Alternate realities in philosophy and art Predating computers A forerunner of the modern ideas of cyberspace is the Cartesian notion that people might be deceived by an evil demon that feeds them a false reality. This argument is the direct predecessor of modern ideas of a brain in a vat and many popular conceptions of cyberspace take Descartes's ideas as their starting point. Visual arts have a tradition, stretching back to antiquity, of artifacts meant to fool the eye and be mistaken for reality. This questioning of reality occasionally led some philosophers and especially theologians to distrust art as deceiving people into entering a world which was not real (see Aniconism). The artistic challenge was resurrected with increasing ambition as art became more and more realistic with the invention of photography, film (see Arrival of a Train at La Ciotat), and immersive computer simulations. Influenced by computers Philosophy American counterculture exponents like William S. Burroughs (whose literary influence on Gibson and cyberpunk in general is widely acknowledged) and Timothy Leary were among the first to extol the potential of computers and computer networks for individual empowerment. Some contemporary philosophers and scientists (e.g. David Deutsch in The Fabric of Reality) employ virtual reality in various thought experiments. For example, Philip Zhai in Get Real: A Philosophical Adventure in Virtual Reality connects cyberspace to the Platonic tradition: Note that this brain-in-a-vat argument conflates cyberspace with reality, while the more common descriptions of cyberspace contrast it with the "real world". Cyber-Geography The “Geography of Notopia” (Papadimitriou, 2006) theorizes about the complex interplay of cyber-cultures and the geographical space. This interplay has several philosophical and psychological facets (Papadimitriou, 2009). A New Communication Model The technological convergence of the mass media is the result of a long adaptation process of their communicative resources to the evolutionary changes of each historical moment. Thus, the new media
became (plurally) an extension of the traditional media in cyberspace, allowing the public to access information on a wide range of digital devices. In other words, it is a cultural virtualization of human reality as a result of the migration from physical to virtual space (mediated by ICTs), ruled by codes, signs and particular social relationships. From this arise instant forms of communication and interaction and quick access to information, in which we are no longer mere senders, but also producers, reproducers, co-workers and providers. New technologies also help to "connect" people from different cultures outside the virtual space, which was unthinkable fifty years ago. In this giant web of relationships, we mutually absorb each other's beliefs, customs, values, laws and habits, cultural legacies perpetuated by a physical-virtual dynamic in constant metamorphosis (ibidem). In this sense, Marcelo Mendonça Teixeira created, in 2013, a new model of communication for the virtual universe, based on Claude Elwood Shannon's 1948 article "A Mathematical Theory of Communication". Art Having originated among writers, the concept of cyberspace remains most popular in literature and film. Although artists working with other media have expressed interest in the concept, such as Roy Ascott, "cyberspace" in digital art is mostly used as a synonym for immersive virtual reality and remains more discussed than enacted. Computer crime Cyberspace also brings together every service and facility imaginable to expedite money laundering. One can purchase anonymous credit cards, bank accounts, encrypted global mobile telephones, and false passports. From there one can pay professional advisors to set up IBCs (International Business Corporations, or corporations with anonymous ownership) or similar structures in OFCs (Offshore Financial Centers). Such advisors are loath to ask any penetrating questions about the wealth and activities of their clients, since the average fees criminals pay them to launder their money can be as much as 20 percent. 5-level model In 2010, a five-level model was designed in France. According to this model, cyberspace is composed of five layers based on information discoveries: 1) language, 2) writing, 3) printing, 4) the Internet, and 5) everything else, e.g. the noosphere, artificial life, and artificial intelligence. This original model links the world of information to telecommunication technologies. 
See also Augmented browsing Artificial intelligence Autonomy Computer security Cyber-HUMINT Cyberwarfare Cyber security standards Framework Programmes for Research and Technological Development Wired glove Online magazine Cybersex Crypto-anarchism Digital pet Esports Global commons Information superhighway Infosphere Internet art Legal aspects of computing Wikipedia:Link surfing Real life Metaverse Mixed reality Multi-agent system Noosphere Reality–virtuality continuum Simulated reality Social software Computer program Sentience Telepresence Virtual world Virtual reality World Wide Web Further reading Branch, J. (2020). "What's in a Name? Metaphors and Cybersecurity." International Organization. References Sources Cyberculture: The Key Concepts, edited by David Bell, Brian D. Loader, Nicholas Pleace and Douglas Schuler Christine Buci-Glucksmann, "L’art à l’époque virtuel", in Frontières esthétiques de l’art, Arts 8, Paris: L’Harmattan, 2004 William Gibson. Neuromancer: 20th Anniversary Edition. New York: Ace Books, 2004. Oliver Grau: Virtual Art. From Illusion to Immersion, MIT Press, Cambridge 2003 (4 editions). David Koepsell, The Ontology of Cyberspace, Chicago: Open Court, 2000. Irvine, Martin. "Postmodern Science Fiction and Cyberpunk", retrieved 2006-07-19. Slater, Don 2002, 'Social Relationships and Identity Online and Offline', in L. Lievrouw and S. Livingston (eds), The Handbook of New Media, Sage, London, pp. 533–46. Sterling, Bruce. The Hacker Crackdown: Law and Disorder On the Electronic Frontier. Spectra Books, 1992. Zhai, Philip. Get Real: A Philosophical Adventure in Virtual Reality. New York: Rowman & Littlefield Publishers, 1998. Teixeira, Marcelo Mendonça (2012). Cyberculture: From Plato To The Virtual Universe. The Architecture of Collective Intelligence. Munich: Grin Verlag. External links A Declaration of the Independence of Cyberspace by John Perry Barlow A Critique of the word "Cyberspace" at ZeroGeography Virtual Reality
fertile tidal marshes surrounding the southeastern and northeastern reaches of the Bay of Fundy being populated by French immigrants who called themselves Acadien. The Acadians eventually built small settlements throughout what is today mainland Nova Scotia and New Brunswick, as well as Île-Saint-Jean (Prince Edward Island), Île-Royale (Cape Breton Island), and other shorelines of the Gulf of St. Lawrence in present-day Newfoundland and Labrador, and Quebec. Acadian settlements had primarily agrarian economies. Early examples of Acadian fishing settlements developed in southwestern Nova Scotia and in Île-Royale, as well as along the south and west coasts of Newfoundland, the Gaspé Peninsula, and the present-day Côte-Nord region of Quebec. Most Acadian fishing activities were overshadowed by the much larger seasonal European fishing fleets that were based out of Newfoundland and took advantage of proximity to the Grand Banks. The growing English colonies along the American seaboard to the south and various European wars between England and France during the 17th and 18th centuries brought Acadia to the centre of world-scale geopolitical forces. In 1613, Virginian raiders captured Port-Royal, and in 1621 the British Crown granted Acadia to Scotland's Sir William Alexander, who renamed it Nova Scotia. By 1632, Acadia was returned from Scotland to France under the Treaty of Saint-Germain-en-Laye. The Port-Royal settlement was moved to the site of nearby present-day Annapolis Royal. More French immigrant settlers, primarily from the Brittany, Normandy, and Vienne regions of France, continued to populate the colony of Acadia during the latter part of the 17th and early part of the 18th centuries. Important settlements also began in the Beaubassin region of the present-day Isthmus of Chignecto, and in the Saint John River valley, as well as smaller communities on Île-Saint-Jean and Île-Royale. In 1654, New England raiders attacked Acadian settlements on the Annapolis Basin. Acadians lived with uncertainty throughout the English constitutional crises under Oliver Cromwell, and it was not until the Treaty of Breda in 1667 that France's claim to the region was reaffirmed. Colonial administration by France throughout the history of Acadia was a low priority. France's priorities lay in settling and strengthening its claim on the larger territory of New France and in the exploration and settlement of interior North America and the Mississippi River valley. Colonial wars Over 74 years (1689–1763) there were six colonial wars, which involved continuous warfare between New England and Acadia (see the French and Indian Wars, reflecting English and French tensions in Europe, as well as Father Rale's War and Father Le Loutre's War). Throughout these wars, New England was allied with the Iroquois Confederacy based around the southern Great Lakes and west of the Hudson River. Acadian settlers were allied with the Wabanaki Confederacy. In the first war, King William's War (the North American theatre of the Nine Years' War), natives from the Maritime region participated in numerous attacks with the French on the Acadia/New England border in southern Maine (e.g., the Raid on Salmon Falls). New England retaliatory raids on Acadia, such as the Raid on Chignecto (1696), were conducted by Benjamin Church. 
In the second war, Queen Anne's War (the North American theatre of the War of the Spanish Succession), the British conducted the Conquest of Acadia, while the region remained primarily in the control of the Maliseet, Acadian and Mi'kmaw militias. In 1719, to further protect strategic interests in the Gulf of St. Lawrence and St. Lawrence River, France began the 20-year construction of a large fortress at Louisbourg on Île-Royale. Massachusetts was increasingly concerned over reports of the capabilities of this fortress, and of privateers staging out of its harbour to raid New England fishermen on the Grand Banks. In the fourth war, King George's War (the North American theatre of the War of the Austrian Succession), the British engaged successfully in the Siege of Louisbourg (1745). The British returned control of Île-Royale to France with the fortress virtually intact three years later under the Treaty of Aix-la-Chapelle, and the French reestablished their forces there. In 1749, to counter the rising threat of Louisbourg, Halifax was founded and the Royal Navy established a major naval base and citadel. The founding of Halifax sparked Father Le Loutre's War. During the sixth and final colonial war, the French and Indian War (the North American theatre of the Seven Years' War), the military conflicts in Nova Scotia continued. The British Conquest of Acadia had taken place in 1710. Over the next forty-five years, the Acadians refused to sign an unconditional oath of allegiance to Britain. During this period Acadians participated in various militia operations against the British and maintained vital supply lines to the French Fortress of Louisbourg and Fort Beausejour. The British sought to neutralize any military threat Acadians posed and to interrupt the vital supply lines Acadians provided to Louisbourg by deporting Acadians from Acadia. The British began the Expulsion of the Acadians with the Bay of Fundy Campaign (1755). Over the next nine years more than 12,000 of the 15,000 Acadians were removed from Nova Scotia. In 1758, the fortress of Louisbourg was besieged for a second time within 15 years, this time by more than 27,000 British soldiers and sailors with over 150 warships. After the French surrender, Louisbourg was thoroughly destroyed by British engineers to ensure it would never be reclaimed. With the fall of Louisbourg, French and Mi'kmaw resistance in the region crumbled. British forces seized remaining French holdings in Acadia in the coming months, with Île-Saint-Jean falling in 1759 to British forces on their way to Quebec City for the Siege of Quebec and the ensuing Battle of the Plains of Abraham. The war ended with Britain in control of the entire Maritime region, and the Indigenous peoples signed the Halifax Treaties. American Revolution Following the Seven Years' War, empty Acadian lands were settled first by 8,000 New England Planters and then by immigrants brought from Yorkshire. Île-Royale was renamed Cape Breton Island and incorporated into the Colony of Nova Scotia. Some of the Acadians who had been deported returned, but settled on the eastern coasts of New Brunswick. Both the colonies of Nova Scotia (present-day Nova Scotia and New Brunswick) and St. John's Island (Prince Edward Island) were affected by the American Revolutionary War, largely through privateering against American shipping, but several coastal communities were also the targets of American raiders. Charlottetown, the capital of the new colony of St. 
John's Island, was ransacked in 1775 with the provincial secretary kidnapped and the Great Seal stolen. The largest military action in the Maritimes during the revolutionary war was the attack on Fort Cumberland (the renamed Fort Beausejour) in 1776 by a force of American sympathizers led by Jonathan Eddy. The fort was partially overrun after a month-long siege, but the attackers were ultimately repelled after the arrival of British reinforcements from Halifax. The most significant impact from this war was the settling of large numbers of Loyalist refugees in the region (34,000 to the 17,000 settlers already there), especially in Shelburne and Parrtown (Saint John). Following the Treaty of Paris in 1783, Loyalist settlers in what would become New Brunswick persuaded British administrators to split the Colony of Nova Scotia to create the new colony of New Brunswick in 1784. At the same time, another part of the Colony of Nova Scotia, Cape Breton Island, was split off to become the Colony of Cape Breton Island. The Colony of St. John's Island was renamed to Prince Edward Island on November 29, 1798. The War of 1812 had some effect on the shipping industry in the Maritime colonies of New Brunswick, Nova Scotia, Prince Edward Island, and Cape Breton Island; however, the significant Royal Navy presence in Halifax and other ports in the region prevented any serious attempts by American raiders. Maritime and American privateers targeted unprotected shipping of both the United States and Britain respectively, further reducing trade. New Brunswick's section of the Canada–US border did not have any significant action during this conflict, although British forces did occupy a portion of coastal Maine at one point. The most significant incident from this war which occurred in the Maritimes was the British capture and detention of the American frigate USS Chesapeake in Halifax. 19th century In 1820, the Colony of Cape Breton Island was merged back into the Colony of Nova Scotia for the second time by the British government. British settlement of the Maritimes, as the colonies of Nova Scotia, New Brunswick and Prince Edward Island came to be known, accelerated throughout the late 18th century and into the 19th century with significant immigration to the region as a result of Scottish migrants displaced by the Highland Clearances and Irish escaping the Great Irish Famine (1845–1849). As a result, significant portions of the three provinces are influenced by Celtic heritages, with Scottish Gaelic (and to a lesser degree, Irish Gaelic) having been widely spoken, particularly in Cape Breton, although it is less prevalent today. During the American Civil War, a significant number of Maritimers volunteered to fight for the armies of the Union, while a small handful joined the Confederate Army. However, the majority of the conflict's impact was felt in the shipping industry. Maritime shipping boomed during the war due to large-scale Northern imports of war supplies which were often carried by Maritime ships as Union ships were vulnerable to Confederate naval raiders. Diplomatic tensions between Britain and the Unionist North had deteriorated after some interests in Britain expressed support for the secessionist Confederate South. The Union navy, although much smaller than the British Royal Navy and no threat to the Maritimes, did posture off Maritime coasts at times chasing Confederate naval ships which sought repairs and reprovisioning in Maritime ports, especially Halifax. 
The immense size of the Union army (the largest on the planet toward the end of the Civil War), however, was viewed with increasing concern by Maritimers throughout the early 1860s. Another concern was the rising threat of Fenian raids on border communities in New Brunswick by those seeking to end British rule of Ireland. This combination of events, coupled with an ongoing decline in British military and economic support to the region as the Home Office favoured newer colonial endeavours in Africa and elsewhere, led to a call among Maritime politicians for a conference on Maritime Union, to be held in early September 1864 in Charlottetown – chosen in part because of Prince Edward Island's reluctance to give up its jurisdictional sovereignty in favour of uniting with New Brunswick and Nova Scotia into a single colony. New Brunswick and Nova Scotia felt that if the union conference were held in Charlottetown, they might be able to convince Island politicians to support the proposal. The Charlottetown Conference, as it came to be called, was also attended by a slew of visiting delegates from the neighbouring colony of Canada, who had largely arrived at their own invitation with their own agenda. This agenda saw the conference dominated by discussions of creating an even larger union of the entire territory of British North America into a united colony. The Charlottetown Conference ended with an agreement to meet the following month in Quebec City, where more formal discussions ensued, culminating with meetings in London and the signing of the British North America Act, 1867. Of the Maritime provinces, only Nova Scotia and New Brunswick were initially party to the BNA Act: Prince Edward Island's reluctance, combined with a booming agricultural and fishing export economy having led to that colony opting not to sign on. Major population centres The major communities of the region include Halifax and Cape Breton in Nova Scotia, Saint John, Fredericton and Moncton in New Brunswick, and Charlottetown in Prince Edward Island. Climate In spite of its name, The Maritimes has a humid continental climate of the warm-summer subtype. Especially in coastal Nova Scotia, differences between summers and winters are narrow compared to the rest of Canada. The inland climate of New Brunswick is in stark contrast during winter, resembling more continental areas. Summers are somewhat tempered by the marine influence throughout the provinces, but due to the southerly parallels still remain similar to more continental areas further west. Yarmouth in Nova Scotia has significant marine influence to have a borderline oceanic microclimate, but winter nights are still cold even in all coastal areas. The northernmost areas of New Brunswick are only just above subarctic with very cold continental winters. Demographics The Maritimes were predominantly rural until recent decades, having resource-based economies of fishing, agriculture, forestry, and coal mining. Maritimers are predominantly of west European origin: Scottish Canadians, Irish Canadians, English Canadians, and Acadians. New Brunswick, in general, differs from the other two Maritime Provinces in that it has a much higher francophone population. There was once a significant Canadian Gaelic speaking population. Helen Creighton recorded Celtic traditions of rural Nova Scotia in the mid-1900s. There are Black Canadians who are mostly descendants of Black Loyalists or black refugees from the War of 1812. This Maritime population is mainly among Black Nova Scotians. 
There are Mi'kmaq reserves in all three provinces, and a smaller population of the Maliseet in western New Brunswick. Economy Present status Given the small population of the region (compared with the Central Canadian provinces or the New England states), the regional economy is a net exporter of natural resources, manufactured goods, and services. The regional economy has long been tied to natural resource activities such as fishing, logging, farming, and mining. Significant industrialization in the second half of the 19th century brought steel to Trenton, Nova Scotia, and the subsequent creation of a widespread industrial base to take advantage of the region's large underground coal deposits. After Confederation,
however, this industrial base withered with technological change, and trading links to Europe and the U.S. were reduced in favour of those with Ontario and Quebec. In recent years the Maritime regional economy has seen renewed contributions from manufacturing and a steady transition to a service economy. Important manufacturing centres in the region include Pictou County, Truro, the Annapolis Valley and the South Shore, and the Strait of Canso area in Nova Scotia, as well as Summerside in Prince Edward Island, and the Miramichi area, the North Shore and the upper Saint John River valley of New Brunswick. Some predominantly coastal areas have become major tourist centres, such as parts of Prince Edward Island, Cape Breton Island, the South Shore of Nova Scotia and the Gulf of St. Lawrence and Bay of Fundy coasts of New Brunswick. Additional service-related industries in information technology, pharmaceuticals, insurance and financial sectors—as well as research-related spin-offs from the region's numerous universities and colleges—are significant economic contributors. Another important contribution to Nova Scotia's provincial economy is through spin-offs and royalties relating to off-shore petroleum exploration and development. Mostly concentrated on the continental shelf of the province's Atlantic coast in the vicinity of Sable Island, exploration activities began in the 1960s and resulted in the first commercial oil production field, beginning in the 1980s. 
Natural gas was also discovered in the 1980s during exploration work, and this is being commercially recovered, beginning in the late 1990s. Initial optimism in Nova Scotia about the potential of off-shore resources appears to have diminished with the lack of new discoveries, although exploration work continues and is moving farther off-shore into waters on the continental margin. Regional transportation networks have also changed significantly in recent decades with port modernizations, with new freeway and ongoing arterial highway construction, the abandonment of various low-capacity railway branchlines (including the entire railway system of Prince Edward Island and southwestern Nova Scotia), and the construction of the Canso Causeway and the Confederation Bridge. There have been airport improvements at various centres providing improved connections to markets and destinations in the rest of North America and overseas. Improvements in infrastructure and the regional economy notwithstanding, the three provinces remain one of the poorer regions of Canada. While urban areas are growing and thriving, economic adjustments have been harsh in rural and resource-dependent communities, and emigration has been an ongoing phenomenon for some parts of the region. Another problem is seen in the lower average wages and family incomes within the region. Property values are depressed, resulting in a smaller tax base for these three provinces, particularly when compared with the national average which benefits from central and western Canadian economic growth. This has been particularly problematic with the growth of the welfare state in Canada since the 1950s, resulting in the need to draw upon equalization payments to provide nationally mandated social services. Since the 1990s the region has experienced an exceptionally tumultuous period in its regional economy with the collapse of large portions of the ground fishery throughout Atlantic Canada, the closing of coal mines and a steel mill on Cape Breton Island, and the closure of military bases in all three provinces. That being said, New Brunswick has one of the largest military bases in the British Commonwealth (CFB Gagetown), which plays a significant role in the cultural and economic spheres of Fredericton, the province's capital city. Historical Growth While the economic underperformance of the Maritime economy has been long lasting, it has not always been present. The mid-19th century, especially the 1850s and 1860s, has long been seen as a "Golden Age" in the Maritimes. Growth was strong, and the region had one of British North America's most extensive manufacturing sectors as well as a large international shipping industry. The question of why the Maritimes fell from being a centre of Canadian manufacturing to being an economic hinterland is thus a central one to the study of the region's pecuniary difficulties. The period in which the decline occurred had a great many potential culprits. In 1867 Nova Scotia and New Brunswick merged with the Canadas in Confederation, with Prince Edward Island joining them six years later in 1873. Canada was formed only a year after free trade with the United States (in the form of the Reciprocity Agreement) had ended. In the 1870s John A. Macdonald's National Policy was implemented, creating a system of protective tariffs around the new nation. Throughout the period there was also significant technological change both in the production and transportation of goods. 
Reputed Golden Age Several scholars have explored the so-called "golden age" of the Maritimes in the years just before Confederation. In Nova Scotia, the population grew steadily from 277,000 in 1851 to 388,000 in 1871, mostly from natural increase since immigration was slight. The era has been called a golden age, but that was a myth created in the 1930s to lure tourists to a romantic era of tall ships and antiques. Recent historians using census data have shown that this is a fallacy. In 1851–1871 there was an overall increase in per capita wealth holding. However, most of the gains went to the urban elite class, especially businessmen and financiers living in Halifax. The wealth held by the top 10% rose considerably over the two decades, but there was little improvement in the wealth levels in rural areas, which comprised the great majority of the population. Likewise, Gwyn reports that gentlemen, merchants, bankers, colliery owners, shipowners, shipbuilders, and master mariners flourished. However, the great majority of families were headed by farmers, fishermen, craftsmen and laborers. Most of them—and many widows as well—lived in poverty. Out-migration became an increasingly necessary option. Thus the era was indeed a golden age, but only for a small, powerful, and highly visible elite. Economic decline The cause of economic malaise in the Maritimes is an issue of great debate and controversy among historians, economists, and geographers. The differing opinions can approximately be divided into the "structuralists," who argue that poor policy decisions are to blame, and the others, who argue that unavoidable technological and geographical factors caused the decline. The exact date that the Maritimes began to fall behind the rest of Canada is difficult to determine. Historian Kris Inwood places the date very early, at least in Nova Scotia, finding clear signs that the Maritimes' "Golden Age" of the mid-19th century was over by 1870, before Confederation or the National Policy could have had any significant impact. Richard Caves places the date closer to 1885. T.W. Acheson takes a similar view and provides considerable evidence that the early 1880s were in fact a booming period in Nova Scotia and that this growth was only undermined towards the end of that decade. David Alexander argues that any earlier declines were simply part of the global Long Depression, and that the Maritimes first fell behind the rest of Canada when the great boom period of the early 20th century had little effect on the region. E.R. Forbes, however, emphasizes that the precipitous decline did not occur until after the First World War, during the 1920s, when new railway policies were implemented. Forbes also contends that significant Canadian defence spending during the Second World War favoured powerful political interests in Central Canada such as C.D. Howe, while major Maritime shipyards and factories, as well as Canada's largest steel mill, located on Cape Breton Island, fared poorly. One of the most important changes, and one that almost certainly had an effect, was the revolution in transportation that occurred at this time. The Maritimes were connected to central Canada by the Intercolonial Railway in the 1870s, removing a longstanding barrier to trade. For the first time this placed the Maritime manufacturers in direct competition with those of Central Canada. 
Maritime trading patterns shifted considerably from mainly trading with New England, Britain, and the Caribbean, to being focused on commerce with the Canadian interior, a shift reinforced by the federal government's tariff policies. Coincident with the construction of railways in the region, the age of the wooden sailing ship began to come to an end, being replaced by larger and faster steel steamships. The Maritimes had long been a centre for shipbuilding, and this industry was hurt by the change. The larger ships were also less likely to call at the smaller population centres such as Saint John and Halifax, preferring to travel to cities like New York and Montreal. Even the Cunard Line, founded by Maritime-born Samuel Cunard, stopped making more than a single ceremonial voyage to Halifax each year. More controversial than the role of technology is the argument over the role of politics in the origins of the region's decline. Confederation and the tariff and railway freight policies that followed have often been blamed for having a deleterious effect on the Maritime economies. Arguments have been made that the Maritimes' poverty was caused by control over policy by Central Canada, which used the national structures for its own enrichment. This was the central view of the Maritime Rights Movement of the 1920s, which advocated greater local control over the region's finances. T.W. Acheson is one of the main proponents of this theory. He notes that growth occurred in Nova Scotia during the early years of the National Policy, and argues that railway freight rates and the tariff structure subsequently undermined it. Capitalists from Central Canada purchased the factories and industries of the Maritimes from their bankrupt local owners and proceeded to close down many of them, consolidating the industry in Central Canada. The policies in the early years of Confederation were designed by Central Canadian interests, and they reflected the needs of that region. The unified Canadian market and the introduction of railroads created a relative weakness in the Maritime economies. Central to this concept, according to Acheson, was the lack of metropolises in the Maritimes. Montreal and Toronto were well-suited to benefit from the development of large-scale manufacturing and extensive railway systems in Quebec and Ontario, these being the goals of the Macdonald and Laurier governments. In the Maritimes the situation was very different. Today New Brunswick has several mid-sized centres in Saint John, Moncton, and Fredericton but no single dominant population centre. Nova Scotia has a growing metropolitan area surrounding Halifax, but a contracting population in industrial Cape Breton, and several smaller centres in Bridgewater, Kentville, Yarmouth, and Pictou County. Prince Edward Island's only significant population centres are in Charlottetown and Summerside. During the late 19th and early 20th centuries, just the opposite was the case, with little to no population concentration in major industrial centres, as the predominantly rural, resource-dependent Maritime economy continued on the same path as it
more furious, eventually resorting to violence against the Christians. They plotted to flush the Christians out at night by running through the streets claiming that the Church of Alexander was on fire. When Christians responded to what they were led to believe was the burning down of their church, "the Jews immediately fell upon and slew them" by using rings to recognize one another in the dark and killing everyone else in sight. When the morning came, Cyril, along with many of his followers, took to the city's synagogues in search of the perpetrators of the massacre. According to Socrates Scholasticus, after Cyril rounded up all the Jews in Alexandria he ordered them to be stripped of all possessions, banished them from Alexandria, and allowed their goods to be pillaged by the remaining citizens of Alexandria. Scholasticus indicates that all the Jews were banished, while John of Nikiû says only those involved in the ambush. Susan Wessel says that, while it is not clear whether Scholasticus was a Novatianist (whose churches Cyril had closed), he was apparently sympathetic towards them, and makes clear Cyril's habit of abusing his episcopal power by infringing on the rights and duties of the secular authorities. Wessel says "...Socrates probably does not provide accurate and unambiguous information about Cyril's relationship to imperial authority". Nonetheless, with Cyril's banishment of the Jews, however many, "Orestes [...] was filled with great indignation at these transactions, and was excessively grieved that a city of such magnitude should have been suddenly bereft of so large a portion of its population." Because of this, the feud between Cyril and Orestes intensified, and both men wrote to the emperor regarding the situation. Eventually, Cyril attempted to reach out to Orestes through several peace overtures, including attempted mediation and, when that failed, showed him the Gospels, which he interpreted to indicate that the religious authority of Cyril would require Orestes' acquiescence in the bishop's policy. Nevertheless, Orestes remained unmoved by such gestures. This refusal almost cost Orestes his life. Nitrian monks came from the desert and instigated a riot against Orestes among the population of Alexandria. These monks had resorted to violence 15 years before, during a controversy between Theophilus (Cyril's uncle) and the "Tall Brothers"; the monks assaulted Orestes and accused him of being a pagan. Orestes rejected the accusations, showing that he had been baptised by the Archbishop of Constantinople. A monk named Ammonius threw a stone that hit Orestes in the head. The prefect had Ammonius tortured to death, whereupon the Patriarch honored him as a martyr. However, according to Scholasticus, the Christian community displayed a general lack of enthusiasm for Ammonius's case for martyrdom. The prefect then wrote to the emperor Theodosius II, as did Cyril (Wessel, pp. 35–36). Murder of Hypatia The Prefect Orestes enjoyed the political backing of Hypatia, an astronomer, philosopher and mathematician who had considerable moral authority in the city of Alexandria, and who had extensive influence. Indeed, many students from wealthy and influential families came to Alexandria purposely to study privately with Hypatia, and many of these later attained high posts in government and the Church. Several Christians thought that Hypatia's influence had caused Orestes to reject all conciliatory offerings by Cyril. 
Modern historians think that Orestes had cultivated his relationship with Hypatia to strengthen a bond with the pagan community of Alexandria, as he had done with the Jewish one, in order to better manage the tumultuous political life of the Egyptian capital. A mob, led by a lector named Peter, took Hypatia from her chariot and murdered her, hacking her body apart and burning the pieces outside the city walls (John of Nikiû, 84.88–100). The Neoplatonist historian Damascius (c. 458 – c. 538) was "anxious to exploit the scandal of Hypatia's death", and attributed responsibility for her murder to Bishop Cyril and his Christian followers. Damascius's account of the Christian murder of Hypatia is the sole historical source attributing direct responsibility to Bishop Cyril. Some modern studies represent Hypatia's death as the result of a struggle between two Christian factions, the moderate Orestes, supported by Hypatia, and the more rigid Cyril. According to lexicographer William Smith, "She was accused of too much familiarity with Orestes, prefect of Alexandria, and the charge spread among the clergy, who took up the notion that she interrupted the friendship of Orestes with their archbishop, Cyril." Scholasticus writes that Hypatia ultimately fell "victim to the political jealousy which at the time prevailed". News of Hypatia's murder provoked great public denunciation, not only of Cyril but of the whole Alexandrian Christian community. Conflict with Nestorius Another major conflict was between the Alexandrian and Antiochian schools of ecclesiastical reflection, piety, and discourse. This long-running conflict widened with the third canon of the First Council of Constantinople, which granted the see of Constantinople primacy over the older sees of Alexandria and Antioch. Thus, the struggle between the sees of Alexandria and Antioch now included Constantinople. The conflict came to a head in 428 after Nestorius, who originated in Antioch, was made Archbishop of Constantinople. Cyril gained an opportunity to restore Alexandria's pre-eminence over both Antioch and Constantinople when an Antiochene priest who was in Constantinople at Nestorius' behest began to preach against calling Mary the "Mother of God" (Theotokos). As the term "Mother of God" had long been attached to Mary, the laity in Constantinople complained against the priest. Rather than repudiating the priest, Nestorius intervened on his behalf. Nestorius argued that Mary was neither a "Mother of Man" nor "Mother of God" as these referred to Christ's two natures; rather, Mary was the "Mother of Christ" (Greek: Christotokos). Christ, according to Nestorius, was the conjunction of the Godhead with his "temple" (which Nestorius was fond of calling his human nature). The controversy seemed to be centered on the issue of the suffering of Christ. Cyril maintained that the Son of God, or the divine Word, truly suffered "in the flesh." However, Nestorius claimed that the Son of God was altogether incapable of suffering, even within his union with the flesh. Eusebius of Dorylaeum went so far as to accuse Nestorius of adoptionism. By this time, news of the controversy in the capital had reached Alexandria. At Easter 429 A.D., Cyril wrote a letter to the Egyptian monks warning them of Nestorius's views. A copy of this letter reached Constantinople, where Nestorius preached a sermon against it. This began a series of letters between Cyril and Nestorius which gradually became more strident in tone. 
Finally, Emperor Theodosius II convoked the Council of Ephesus (in 431) to solve the dispute. Cyril selected Ephesus as the venue since it supported the veneration of Mary. The council was convoked before Nestorius's supporters from Antioch and Syria had arrived and thus Nestorius refused to attend when summoned. Predictably, the Council ordered the deposition and exile of Nestorius for heresy. However, when John of Antioch and the other pro-Nestorius bishops finally reached Ephesus, they assembled their own Council, condemned Cyril for heresy, deposed him from his see, and labelled him as a "monster, born and educated for the destruction of the church". Theodosius, by now old enough to hold power by himself, annulled the verdict of the Council and arrested Cyril, but Cyril eventually escaped. Having fled to Egypt, Cyril bribed Theodosius's courtiers, and sent a mob led by Dalmatius, a hermit, to besiege Theodosius's palace, and shout abuse; the emperor eventually gave in, sending Nestorius into minor exile (Upper Egypt). Cyril died about 444, but the controversies were to continue for decades, from the "Robber Synod" of Ephesus (449) to the Council of Chalcedon (451) and beyond. Theology Cyril regarded the embodiment of God in the person of Jesus Christ to be so mystically powerful that it spread out from the body of the God-man into the rest of the race, to reconstitute human nature into a graced and deified condition of the saints, one that promised immortality and transfiguration to believers. Nestorius, on the other hand, saw the incarnation as primarily a moral and ethical example to the faithful, to follow in the footsteps of Jesus. Cyril's constant stress was on the simple idea that it was God who walked the streets of Nazareth (hence Mary was Theotokos, meaning "God bearer", which became in Latin "Mater Dei or Dei Genitrix", or Mother of God), and God who had appeared in a transfigured humanity. Nestorius spoke of the distinct "Jesus the man" and "the divine Logos" in ways that Cyril thought were too dichotomous, widening the ontological gap between man and God in a way that some of his contemporaries believed would annihilate the person of Christ. The main issue that prompted this dispute between Cyril and Nestorius was the question which arose at the Council of Constantinople: What exactly was the being to which Mary gave birth? Cyril affirmed that the Holy Trinity consists of a singular divine nature, essence, and being (ousia) in three distinct aspects, instantiations, or subsistencies of being (hypostases). These distinct hypostases are the Father or God in Himself, the Son or Word (Logos), and the Holy Spirit. Then, when the Son became flesh and entered the world, the pre-Incarnate divine nature and assumed human nature both remained, but became united in the person of Jesus. This resulted in the miaphysite slogan "One Nature united out of two" being used to encapsulate the theological position of this Alexandrian bishop. According to Cyril's theology, there were two states for the Son of God: the state that existed prior to the Son (or Word/Logos) becoming enfleshed in the person of Jesus and the state that actually became enfleshed. The Logos Incarnate suffered and died on the Cross, and therefore the Son was able to suffer without suffering. Cyril passionately argued for the continuity of a single subject, God the Word, from the pre-Incarnate state
Nestorius, on the other hand, saw the incarnation as primarily a moral and ethical example to the faithful, to follow in the footsteps of Jesus. Cyril's constant stress was on the simple idea that it was God who walked the streets of Nazareth (hence Mary was Theotokos, meaning "God bearer", which became in Latin "Mater Dei or Dei Genitrix", or Mother of God), and God who had appeared in a transfigured humanity. Nestorius spoke of the distinct "Jesus the man" and "the divine Logos" in ways that Cyril thought were too dichotomous, widening the ontological gap between man and God in a way that some of his contemporaries believed would annihilate the person of Christ. The main issue that prompted this dispute between Cyril and Nestorius was the question which arose at the Council of Constantinople: What exactly was the being to which Mary gave birth? Cyril affirmed that the Holy Trinity consists of a singular divine nature, essence, and being (ousia) in three distinct aspects, instantiations, or subsistencies of being (hypostases). These distinct hypostases are the Father or God in Himself, the Son or Word (Logos), and the Holy Spirit. Then, when the Son became flesh and entered the world, the pre-Incarnate divine nature and assumed human nature both remained, but became united in the person of Jesus. This resulted in the miaphysite slogan "One Nature united out of two" being used to encapsulate the theological position of this Alexandrian bishop. According to Cyril's theology, there were two states for the Son of God: the state that existed prior to the Son (or Word/Logos) becoming enfleshed in the person of Jesus and the state that actually became enfleshed. The Logos Incarnate suffered and died on the Cross, and therefore the Son was able to suffer without suffering. Cyril passionately argued for the continuity of a single subject, God the Word, from the pre-Incarnate state to the Incarnate state. The divine Logos was really present in the flesh and in the world—not merely bestowed upon, semantically affixed to, or morally associated with the man Jesus, as the adoptionists and, he believed, Nestorius had taught. Mariology Cyril of Alexandria became noted in Church history because of his spirited fight for the title "Theotokos" during the First Council of Ephesus (431). His writings include the homily given in Ephesus and several other sermons. Some of his alleged homilies are in dispute as to his authorship. In several writings, Cyril focuses on the love of Jesus to his mother. On the Cross, he overcomes his pain and thinks of his mother. At the wedding in Cana, he bows to her wishes. Cyril created the basis for all other mariological developments through his teaching of the blessed Virgin Mary, as the "Mother of God." The conflict with Nestorius was mainly over this issue, and it has often been misunderstood. "[T]he debate was not so much about Mary as about Jesus. The question was not what honors were due to Mary, but how one was to speak of the birth of Jesus." St. Cyril received an important recognition of his preachings by the Second Council of Constantinople (553 d.C.) which declared; "St. Cyril who announced the right faith of Christians" (Anathematism XIV, Denzinger et Schoenmetzer 437). Works Cyril was a scholarly archbishop and a prolific writer. In the early years of his active life in the Church he wrote several exegetical documents. Among these were: Commentaries
"to inscribe for baptism is to write one's name in the register of the elect in heaven". Eschatology Oded Irshai observed that Cyril lived in a time of intense apocalyptic expectation, when Christians were eager to find apocalyptic meaning in every historical event or natural disaster. Cyril spent a good part of his episcopacy in intermittent exile from Jerusalem. Abraham Malherbe argued that when a leader's control over a community is fragile, directing attention to the imminent arrival of the antichrist effectively diverts attention from that fragility. Soon after his appointment, Cyril in his Letter to Constantius of 351 recorded the appearance of a cross of light in the sky above Golgotha, witnessed by the whole population of Jerusalem. The Greek church commemorates this miracle on 7 May. Though in modern times the authenticity of the Letter has been questioned, on the grounds that the word homoousios occurs in the final blessing, many scholars believe this may be a later interpolation, and accept the letter's authenticity on the grounds of other pieces of internal evidence. Cyril interpreted this as both a sign of support for Constantius, who was soon to face the usurper Magnentius, and as announcing the Second Coming, which was soon to take place in Jerusalem. Not surprisingly, in Cyril's eschatological analysis, Jerusalem holds a central position. Matthew 24:6 speaks of "wars and reports of wars", as a sign of the End Times, and it is within this context that Cyril read Julian's war with the Persians. Matthew 24:7 speaks of "earthquakes from place to place", and Jerusalem experienced an earthquake in 363 at a time when Julian was attempting to rebuild the temple in Jerusalem. Embroiled in a rivalry with Acacius of Caesarea over the relative primacy of their respective sees, Cyril saw even ecclesial discord a sign of the Lord's coming. Catechesis 15 would appear to cast Julian as the antichrist, although Irshai views this as a later interpolation. “In His first coming, He endured the Cross, despising shame; in His second, He comes attended by a host of Angels, receiving glory. We rest not then upon His first advent only, but look also for His second." He looked forward to the Second Advent which would bring an end to the world and then the created world to be re-made anew. At the Second Advent he expected to rise in the resurrection if it came after his time on earth. Mystagogic Catecheses There has been considerable controversy over the date and authorship of the Mystagogic Catecheses, addressed to the newly baptized, in preparation for the reception of Holy Communion, with some scholars having attributed them to Cyril's successor as Bishop of Jerusalem, John. Many scholars would currently view the Mystagogic Catecheses as being written by Cyril, but in the 370s or 380s, rather than at the same time as the Catechetical Lectures. According to the Spanish pilgrim Egeria, these mystagogical catecheses were given to the newly baptised in the Church of the Anastasis in the course of Easter Week. Works Editions W. C. Reischl, J. Rupp (1848; 1860). Cyrilli Hierosolymarum Archiepiscopi opera quae supersunt omnia. München. Christa Müller-Kessler and Michael Sokoloff (1999). The Catechism of Cyril of Jerusalem in the Christian Palestinian Aramaic Version, A Corpus of Christian Palestinian Aramaic, vol. V. Groningen: STYX-Publications. Christa Müller-Kessler (2014). Codex Sinaiticus Rescriptus (CSRG/O/P/S). A Collection of Christian Palestinian Aramaic Manuscripts, Le Muséon 127, pp. 281–288. 
Modern translations McCauley, Leo P., and Anthony A. Stephenson (1969, 1970). The works of Saint Cyril of Jerusalem. 2 vols. Washington: Catholic University of America Press [contains an introduction, and English translations of: Vol 1: The introductory lecture (Procatechesis). Lenten lectures (Catecheses). Vol 2: Lenten lectures (Katēchēseis). Mystagogical lectures (Katēchēseis mystagōgikai). Sermon on the paralytic (Homilia eis ton paralytikon ton epi tēn Kolymbēthran). Letter to Constantius (Epistolē pros Kōnstantion). Fragments.] Telfer, W. (1955). Cyril of Jerusalem and Nemesius of Emesa. The Library of Christian Classics, v. 4. Philadelphia: Westminster Press. Yarnold, E. (2000). Cyril of Jerusalem.
sin and Jesus' sacrificing himself to save us from our sins. The lecture then treats the burial and the Resurrection, which occurred three days later, as proof of the divinity of Jesus Christ and the loving nature of the Father. Cyril was adamant that Jesus went to his death with full knowledge and willingness. Not only did he go willingly, but throughout the process he maintained his faith and forgave all those who betrayed him and took part in his execution. Cyril writes, "who did not sin, neither was deceit found in his mouth, who, when he was reviled, did not revile, when he suffered did not threaten". This line shows Cyril's belief in the selflessness of Jesus in this final act of love. The lecture also gives some insight into what Jesus may have felt during the execution, from the whippings and beatings to the crown of thorns and the nailing to the cross. Cyril intertwines the narrative with teachings Jesus had given during his life that bear on this final act. For example, Cyril writes, "I gave my back to those who beat me and my cheeks to blows; and my face I did not shield from the shame of spitting". This reflects Jesus' teaching to turn the other cheek and not to answer violence with violence, since violence only begets more violence. The passage reflects the voice Cyril maintained in all of his writing: he keeps to the central message of the Bible rather than adding his own interpretive beliefs, and remains grounded in biblical teaching. Danielou sees the baptism rite as carrying eschatological overtones, in that "to inscribe for baptism is to write one's name in the register of the elect in heaven".
by some of the American Jewish community as a way to adapt to American life, re-inventing the festival in "the language of individualism and personal conscience derived from both Protestantism and the Enlightenment". The reason for the Hanukkah lights is not for the "lighting of the house within", but rather for the "illumination of the house without," so that passersby should see it and be reminded of the holiday's miracle (i.e. that the sole cruse of pure oil found which held enough oil to burn for one night actually burned for eight nights). Accordingly, lamps are set up at a prominent window or near the door leading to the street. It is customary amongst some Ashkenazi Jews to have a separate menorah for each family member (customs vary), whereas most Sephardi Jews light one for the whole household. Only when there was danger of antisemitic persecution were lamps supposed to be hidden from public view, as was the case in Persia under the rule of the Zoroastrians, or in parts of Europe before and during World War II. However, most Hasidic groups light lamps near an inside doorway, not necessarily in public view. According to this tradition, the lamps are placed on the opposite side from the mezuzah, so that when one passes through the door s/he is surrounded by the holiness of mitzvot (the commandments). Generally, women are exempt in Jewish law from time-bound positive commandments, although the Talmud requires that women engage in the mitzvah of lighting Hanukkah candles "for they too were involved in the miracle." Candle-lighting time Hanukkah lights should usually burn for at least half an hour after it gets dark. The custom of many is to light at sundown, although most Hasidim light later. Many Hasidic Rebbes light much later to fulfill the obligation of publicizing the miracle by the presence of their Hasidim when they kindle the lights. Inexpensive small wax candles sold for Hanukkah burn for approximately half an hour so should be lit no earlier than nightfall. Friday night presents a problem, however. Since candles may not be lit on Shabbat itself, the candles must be lit before sunset. However, they must remain lit through the lighting of the Shabbat candles. Therefore, the Hanukkah menorah is lit first with larger candles than usual, followed by the Shabbat candles. At the end of the Shabbat, there are those who light the Hanukkah lights before Havdalah and those who make Havdalah before the lighting Hanukkah lights. If for whatever reason one didn't light at sunset or nightfall, the lights should be kindled later, as long as there are people in the streets. Later than that, the lights should still be kindled, but the blessings should be recited only if there is at least somebody else awake in the house and present at the lighting of the Hannukah lights. Blessings over the candles Typically two blessings (brachot; singular: brachah) are recited during this eight-day festival when lighting the candles. On the first night only, the shehecheyanu blessing is added, making a total of three blessings. The blessings are said before or after the candles are lit depending on tradition. On the first night of Hanukkah one light (candle or oil) is lit on the right side of the menorah, on the following night a second light is placed to the left of the first but it is lit first, and so on, proceeding from placing candles right to left but lighting them from left to right over the eight nights. 
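The placement and lighting convention just described can be sketched in a few lines of code; the following is only an illustration of the ordering, with positions numbered 1 (rightmost) to 8 (leftmost) as an arbitrary labelling introduced here, not a statement of religious practice:

```python
# Illustrative sketch of the common convention described above: candles are
# placed from right to left, but lit from left to right (newest candle first).
def hanukkah_candle_order(night: int):
    """Return (placement, lighting) position orders for a given night (1-8)."""
    placement = list(range(1, night + 1))   # fill positions 1, 2, ..., night (right to left)
    lighting = placement[::-1]              # light the newest (leftmost) candle first
    return placement, lighting

for night in range(1, 9):
    placed, lit = hanukkah_candle_order(night)
    print(f"Night {night}: place at positions {placed}, light in order {lit}")
```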
Blessing for lighting the candles Transliteration: Translation: "Blessed are You, our God, King of the universe, Who has sanctified us with His commandments and commanded us to kindle the Hanukkah light[s]." Blessing for the miracles of Hanukkah Transliteration: Translation: "Blessed are You, LORD our God, King of the universe, Who performed miracles for our ancestors in those days at this time..." Hanerot Halalu After the lights are kindled the hymn Hanerot Halalu is recited. There are several different versions; the version presented here is recited in many Ashkenazic communities: Maoz Tzur In the Ashkenazi tradition, each night after the lighting of the candles, the hymn Ma'oz Tzur is sung. The song contains six stanzas. The first and last deal with general themes of divine salvation, and the middle four deal with events of persecution in Jewish history, and praises God for survival despite these tragedies (the exodus from Egypt, the Babylonian captivity, the miracle of the holiday of Purim, the Hasmonean victory), and a longing for the days when Judea will finally triumph over Rome. The song was composed in the thirteenth century by a poet only known through the acrostic found in the first letters of the original five stanzas of the song: Mordechai. The familiar tune is most probably a derivation of a German Protestant church hymn or a popular folk song. Other customs After lighting the candles and Ma'oz Tzur, singing other Hanukkah songs is customary in many Jewish homes. Some Hasidic and Sephardi Jews recite Psalms, such as Psalm 30, Psalm 67, and Psalm 91. In North America and in Israel it is common to exchange presents or give children presents at this time. In addition, many families encourage their children to give tzedakah (charity) in lieu of presents for themselves. Special additions to daily prayers An addition is made to the "hoda'ah" (thanksgiving) benediction in the Amidah (thrice-daily prayers), called Al HaNissim ("On/about the Miracles"). This addition refers to the victory achieved over the Syrians by the Hasmonean Mattathias and his sons. The same prayer is added to the grace after meals. In addition, the Hallel (praise) Psalms are sung during each morning service and the Tachanun penitential prayers are omitted. The Torah is read every day in the shacharit morning services in synagogue, on the first day beginning from Numbers 6:22 (according to some customs, Numbers 7:1), and the last day ending with Numbers 8:4. Since Hanukkah lasts eight days it includes at least one, and sometimes two, Jewish Sabbaths (Saturdays). The weekly Torah portion for the first Sabbath is almost always Miketz, telling of Joseph's dream and his enslavement in Egypt. The Haftarah reading for the first Sabbath Hanukkah is Zechariah 2:14 – Zechariah 4:7. When there is a second Sabbath on Hanukkah, the Haftarah reading is from 1 Kings 7:40 – 1 Kings 7:50. The Hanukkah menorah is also kindled daily in the synagogue, at night with the blessings and in the morning without the blessings. The menorah is not lit during Shabbat, but rather prior to the beginning of Shabbat as described above and not at all during the day. During the Middle Ages "Megillat Antiochus" was read in the Italian synagogues on Hanukkah just as the Book of Esther is read on Purim. It still forms part of the liturgy of the Yemenite Jews. 
Zot Hanukkah The last day of Hanukkah is known by some as Zot Hanukkah and by others as Chanukat HaMizbeach, from the verse read on this day in the synagogue Numbers 7:84, Zot Hanukkat Hamizbe'ach: "This was the dedication of the altar". According to the teachings of Kabbalah and Hasidism, this day is the final "seal" of the High Holiday season of Yom Kippur and is considered a time to repent out of love for God. In this spirit, many Hasidic Jews wish each other Gmar chatimah tovah ("may you be sealed totally for good"), a traditional greeting for the Yom Kippur season. It is taught in Hasidic and Kabbalistic literature that this day is particularly auspicious for the fulfillment of prayers. Other related laws and customs It is customary for women not to work for at least the first half-hour of the candles' burning, and some have the custom not to work for the entire time of burning. It is also forbidden to fast or to eulogize during Hanukkah. Hanukkah as the end of the High Holy Days Some Hasidic scholars teach that the Hanukkah is in fact the final conclusion of God's judgement extending High Holy Days of Rosh Hashana when humanity is judged and Yom Kippur when the judgment is sealed: Hassidic masters quote from Kabbalistic sources that the God’s mercy extends even further, giving the Children of Israel till the final day of Chanukah (known as "Zot Chanukah" based on words which appear in the Torah reading of that day), to return to Him and receive a favorable judgment. They see several hints to this in different verses. One is Isaiah 27:9: “Through this (zot) will Jacob’s sin be forgiven” – i.e., on account of the holiness of Zot Chanukah. Customs Music A large number of songs have been written on Hanukkah themes, perhaps more so than for any other Jewish holiday. Some of the best known are "Ma'oz Tzur" (Rock of Ages), "Latke'le Latke'le" (Yiddish song about cooking Latkes), "Hanukkiah Li Yesh" ("I Have a Hanukkah Menorah"), "Ocho Kandelikas" ("Eight Little Candles"), "Kad Katan" ("A Small Jug"), "S'vivon Sov Sov Sov" ("Dreidel, Spin and Spin"), "Haneirot Halolu" ("These Candles which we light"), "Mi Yimalel" ("Who can Retell") and "Ner Li, Ner Li" ("I have a Candle"). Among the most well known songs in English-speaking countries are "Dreidel, Dreidel, Dreidel" and "Oh Chanukah". Among the Rebbes of the Nadvorna Hasidic dynasty, it is customary for the Rebbes to play violin after the menorah is lit. Penina Moise's Hannukah Hymn published in the 1842 Hymns Written for the Use of Hebrew Congregations was instrumental in the beginning of Americanization of Hanukkah. Foods There is a custom of eating foods fried or baked in oil (preferably olive oil) to commemorate the miracle of a small flask of oil keeping the Second Temple's Menorah alight for eight days. Traditional foods include latkes, a fried potato fritter, especially among Ashkenazi families. Sephardi, Polish, and Israeli families eat jam-filled doughnuts called sufganiyot, which were ( pontshkes) by Ashkenazi Jews living in Eastern and Central Europe prior to the Holocaust, bimuelos (spherical doughnuts) and sufganiyot which are deep-fried in oil. Italkim and Hungarian Jews traditionally eat cheese pancakes known as "cassola" or "cheese latkes". Latkes are not popular in Israel, as they are more commonly made at home and are an Ashkenazi Jewish dish. The Sephardi Jews eat fritas de prasa, a similar fried dish made with mashed potato and leek. 
As the majority of the population in Israel is of Sephardi and Mizrahi Jewish descent, and these groups have their own Hanukkah dishes such as fritas de prasa, sfinj, cassola, and shamlias, among others. Latkes have also been largely replaced by sufganiyot due to local economic factors, convenience and the influence of trade unions. Bakeries in Israel have popularized many new types of fillings for sufganiyot besides the traditional strawberry jelly filling, including chocolate cream, vanilla cream, caramel, cappuccino and others. In recent years, downsized, "mini" sufganiyot containing half the calories of the regular, 400-to-600-calorie version, have become popular. Rabbinic literature also records a tradition of eating cheese and other dairy products during Hanukkah. This custom, as mentioned above, commemorates the heroism of Judith during the Babylonian captivity of the Jews and reminds us that women also played an important role in the events of Hanukkah. The deuterocanonical book of Judith (Yehudit or Yehudis in Hebrew), which is not part of the Tanakh, records that Holofernes, an Assyrian general, had surrounded the village of Bethulia as part of his campaign to conquer Judea. After intense fighting, the water supply of the Jews was cut off and the situation became desperate. Judith, a pious widow, told the city leaders that she had a plan to save the city. Judith went to the Assyrian camps and pretended to surrender. She met Holofernes, who was smitten by her beauty. She went back to his tent with him, where she plied him with cheese and wine. When he fell into a drunken sleep, Judith beheaded him and escaped from the camp, taking the severed head with her (the beheading of Holofernes by Judith has historically been a popular theme in art). When Holofernes' soldiers found his corpse, they were overcome with fear; the Jews, on the other hand, were emboldened and launched a successful counterattack. The town was saved, and the Assyrians defeated. Roast goose has historically been a traditional Hanukkah food among Eastern European and American Jews, although the custom has declined in recent decades. Indian Jews traditionally consume gulab jamun, fried dough balls soaked in a sweet syrup, similar to teiglach or bimuelos, as part of their Hanukkah celebrations. Italian Jews eat fried chicken, cassola (a ricotta cheese latke almost similar to a cheesecake), and Fritelle de riso par Hanukkah (a fried sweet rice pancake). Romanian Jews eat pasta latkes as a traditional Hanukkah dish, and Syrian Jews consume Kibbet Yatkeen, a dish made with pumpkin and bulgur wheat similar to latkes, as well as their own version of keftes de prasa spiced with allspice and cinnamon. Dreidel After lighting the candles, it is customary to play (or spin) the dreidel. The dreidel, or sevivon in Hebrew, is a four-sided spinning top that children play with during Hanukkah. Each side is imprinted with a Hebrew letter which is an abbreviation for the Hebrew words (, "A great miracle happened there"), referring to the miracle of the oil that took place in the Beit Hamikdash. The fourth side of some dreidels sold in Israel are inscribed with the letter (Pe), rendering the acronym (, "A great miracle happened here"), referring to the fact that the miracle occurred in the land of Israel, although this is a relatively recent innovation. 
Stores in Haredi neighborhoods sell the traditional Shin dreidels as well, because they understand "there" to refer to the Temple and not the entire Land of Israel, and because the Hasidic Masters ascribe significance to the traditional letters. Hanukkah gelt Chanukkah gelt (Yiddish for "Chanukkah money") known in Israel by the Hebrew translation dmei Hanukkah, is often distributed to children during the festival of Hanukkah. The giving of Hanukkah gelt also adds to the holiday excitement. The amount is usually in small coins, although grandparents or relatives may give larger sums. The tradition of giving Chanukah gelt dates back to a long-standing East European custom of children presenting their teachers with a small sum of money at this time of year as a token of gratitude. One minhag favors the fifth night of Hanukkah for giving Hanukkah gelt. Unlike the other nights of Hanukkah, the fifth does not ever fall on the Shabbat, hence never conflicting with the Halachic injunction against handling money on the Shabbat. Hanukkah in the White House The United States has a history of recognizing and celebrating Hanukkah in a number of ways. The earliest Hanukkah link with the White House occurred in 1951 when Israeli Prime Minister David Ben-Gurion presented United States President Harry Truman with a Hanukkah Menorah. In 1979 president Jimmy Carter took part in the first public Hanukkah candle-lighting ceremony of the National Menorah held across the White House lawn. In 1989, President George H.W. Bush displayed a menorah in the White House. In 1993, President Bill Clinton invited a group of schoolchildren to the Oval Office for a small ceremony. The United States Postal Service has released several Hanukkah-themed postage stamps. In 1996, the United States Postal Service (USPS) issued a 32 cent Hanukkah stamp as a joint issue with Israel. In 2004, after eight years of reissuing the menorah design, the USPS issued a dreidel design for the Hanukkah stamp. The dreidel design was used through 2008. In 2009 a Hanukkah stamp was issued with a design featured a photograph of a menorah with nine lit candles. In 2008, President George W. Bush held an official Hanukkah reception in the White House where he linked the occasion to the 1951 gift by using that menorah for the ceremony, with a grandson of Ben-Gurion and a grandson of Truman lighting the candles. In December 2014, two Hanukkah celebrations were held at the White House. The White House commissioned a menorah made by students at the Max Rayne school in Israel and invited two of its students to join U.S. President Barack Obama and First Lady Michelle Obama as they welcomed over 500 guests to the celebration. The students' school in Israel had been subjected to arson by extremists. President Obama said these "students teach us an important lesson for this time in our history. The light of hope must outlast the fires of hate. That's what the Hanukkah story teaches us. It's what our young people can teach us – that one act of faith can make a miracle, that love is stronger than hate, that peace can triumph over conflict." Rabbi Angela Warnick Buchdahl, in leading prayers at the ceremony commented on the how special the scene was, asking the President if he believed America's founding fathers could possibly have pictured that a female Asian-American rabbi would one day be at the White House leading Jewish prayers in front of the African-American president. Dates The dates of Hanukkah are determined by the Hebrew calendar. 
Hanukkah begins at the 25th day of Kislev and concludes on the second or third day of Tevet (Kislev can have 29 or 30 days). The Jewish day begins at sunset. Hanukkah dates for recent and upcoming: In 2013, on 28 November, the American holiday of Thanksgiving fell during Hanukkah for only the third time since Thanksgiving was declared a national holiday by President Abraham Lincoln. The last time was 1899; and due to the Gregorian and Jewish calendars being slightly out of sync with each other, it will not happen again in the foreseeable future. This convergence prompted the creation of the neologism Thanksgivukkah. Symbolic importance Major Jewish holidays are those when all forms of work are forbidden, and that feature traditional holiday meals, kiddush, holiday candle-lighting, etc. Only biblical holidays fit these criteria, and Chanukah was instituted some two centuries after the Hebrew Bible was completed. Nevertheless, though Chanukah is of rabbinic origin, it is traditionally celebrated in a major and very public fashion. The requirement to position the menorah, or Chanukiah, at the door or window, symbolizes the desire to give the Chanukah miracle a high-profile. Some Jewish historians suggest a different explanation for the rabbinic reluctance to laud the militarism. First, the rabbis wrote after Hasmonean leaders had led Judea into Rome's grip and so may not have wanted to offer the family much praise. Second, they clearly wanted to promote a sense of dependence on God, urging Jews to look toward the divine for protection. They likely feared inciting Jews to another revolt that might end in disaster, as the Bar Kochba revolt did. With the advent of Zionism and the state of Israel, however, these themes were reconsidered. In modern Israel, the national and military aspects of Hanukkah became, once again, more dominant. While Hanukkah is a relatively minor Jewish holiday, as indicated by the lack of religious restrictions on work other than a few minutes after lighting the candles, in North America, Hanukkah in the 21st century has taken a place equal to Passover as a symbol of Jewish identity. Both the Israeli and North American versions of Hanukkah emphasize resistance, focusing on some combination of national liberation and religious freedom as the defining meaning of the holiday. Some Jews in North America and Israel have taken up environmental concerns in relation to Hanukkah's "miracle of the oil", emphasizing reflection on energy conservation and energy independence. An example of this is the Coalition on the Environment and Jewish Life's renewable energy campaign. Relationship to Christmas In North America, Hanukkah became increasingly important to many Jewish individuals and families during the latter part of the 20th century, including a large number of secular Jews, who wanted to celebrate a Jewish alternative to the Christmas celebrations which frequently overlap with Hanukkah. Diane Ashton argues that Jewish immigrants to America raised the profile of Hanukkah as a kid-centered alternative to Christmas as early as the 1800s. This in parts mirrors the ascendancy of Christmas, which like Hanukkah increased in importance in the 1800s. During this time period, Jewish leaders (especially Reform) like Max Lilienthal and Isaac Mayer Wise made an effort to rebrand Hanukkah and started creating Hanukkah celebration for kids at their synagogues, which included candy and singing songs. 
By the 1900s, it started to become a commercial holiday like Christmas, with Hanukkah gifts and decorations appearing in stores and Jewish women's magazines printing articles on holiday decorations, children's celebrations, and gift giving. Ashton says that Jewish families did this in order to maintain a Jewish identity distinct from mainline Christian culture; on the other hand, the mirroring of Hanukkah and Christmas made Jewish families and children feel that they were American. Though it was traditional for Ashkenazi Jews to give "gelt" or money to children during Hanukkah, in many families this tradition has been supplemented with the giving of other gifts so that Jewish children can enjoy receiving gifts just like their Christmas-celebrating peers do. Children play a big role in Hanukkah, and Jewish families with children are more likely to celebrate it than childless Jewish families; sociologists hypothesize that this is because Jewish parents do not want their children to be alienated from their non-Jewish peers who celebrate Christmas. Recent celebrations have also seen the presence of the Hanukkah bush, which is considered a Jewish counterpart to the Christmas tree. The presence of Hanukkah bushes is generally discouraged by most rabbis today, but some Reform, Reconstructionist, and more liberal Conservative rabbis do not object to them, just as they do not object to Christmas trees.
eaten to commemorate the importance of oil during the celebration of Hanukkah. Some also have a custom of eating dairy products to remember Judith and how she overcame Holofernes by feeding him cheese, which made him thirsty, and giving him wine to drink. When Holofernes became very drunk, Judith cut off his head. Kindling the Hanukkah lights Each night throughout the eight-day holiday, a candle or oil-based light is lit. As a universally practiced "beautification" (hiddur mitzvah) of the mitzvah, the number of lights lit is increased by one each night. An extra light called a shammash, meaning "attendant" or "sexton," is also lit each night, and is given a distinct location, usually higher, lower, or to the side of the others. Among Ashkenazim the tendency is for every male member of the household (and in many families, girls as well) to light a full set of lights each night, while among Sephardim the prevalent custom is to have one set of lights for the entire household. The purpose of the shammash is to adhere to the prohibition, specified in the Talmud, against using the Hanukkah lights for anything other than publicizing and meditating on the Hanukkah miracle. This differs from Sabbath candles which are meant to be used for illumination and lighting. Hence, if one were to need extra illumination on Hanukkah, the shammash candle would be available, and one would avoid using the prohibited lights. Some, especially Ashkenazim, light the shammash candle first and then use it to light the others. So altogether, including the shammash, two lights are lit on the first night, three on the second and so on, ending with nine on the last night, for a total of 44 (36, excluding the shammash). It is Sephardic custom not to light the shammash first and use it to light the rest. Instead, the shammash candle is the last to be lit, and a different candle or a match is used to light all the candles. Some Hasidic Jews follow this Sephardic custom as well. The lights can be candles or oil lamps. Electric lights are sometimes used and are acceptable in places where open flame is not permitted, such as a hospital room, or for the very elderly and infirm; however, those who permit reciting a blessing over electric lamps only allow it if it is incandescent and battery operated (an incandescent flashlight would be acceptable for this purpose), while a blessing may not be recited over a plug-in menorah or lamp. Most Jewish homes have a special candelabrum referred to as either a Chanukiah (the modern Israeli term) or a menorah (the traditional name, simply Hebrew for 'lamp'). Many families use an oil lamp (traditionally filled with olive oil) for Hanukkah. Like the candle Chanukiah, it has eight wicks to light plus the additional shammash light. In the United States, Hanukkah became a more visible festival in the public sphere from the 1970s when Rabbi Menachem M. Schneerson called for public awareness and observance of the festival and encouraged the lighting of public menorahs. Diane Ashton attributed the increased visibility and reinvention of Hanukkah by some of the American Jewish community as a way to adapt to American life, re-inventing the festival in "the language of individualism and personal conscience derived from both Protestantism and the Enlightenment".
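The candle totals quoted above (44 lights over the eight nights including the shammash, 36 without it) follow from simple arithmetic; as an illustrative check only, not part of the traditional sources:

```python
# Arithmetic check of the totals quoted above: on night n (1-8), n Hanukkah
# lights are lit, plus one shammash each night.
without_shammash = sum(range(1, 9))               # 1 + 2 + ... + 8 = 36
with_shammash = sum(n + 1 for n in range(1, 9))   # 2 + 3 + ... + 9 = 44
print(without_shammash, with_shammash)            # prints: 36 44
```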
society viewing wives as the chattel of husbands. The descriptions of the Bible suggest that she would be expected to perform tasks such as spinning, sewing, weaving, manufacture of clothing, fetching of water, baking of bread, and animal husbandry. However, wives were usually looked after with care, and bigamous men were expected to ensure that they give their first wife food, clothing, and sexual activity. Since a wife was regarded as property, her husband was originally free to divorce her with little restriction, at any time. A divorced couple could get back together unless the wife had married someone else after her divorce. Jesus on marriage, divorce, and remarriage The Bible clearly addresses marriage and divorce. Those in troubled marriages are encouraged to seek counseling and restoration because, according to some advocates of traditional marriage ethics, most divorces are neither necessary nor unavoidable. In both Matthew and Mark, Jesus appealed to God's will in creation. He built upon the narratives in Genesis where male and female are created together and for one another. Thus Jesus took a firm stand on the permanence of marriage in the original will of God. This corresponds closely with the position of the Pharisee school of thought led by Shammai at the start of the first millennium, with which Jesus would have been familiar. By contrast, Rabbinic Judaism subsequently took the opposite view, espoused by Hillel, the leader of the other major Pharisee school of thought at the time; in Hillel's view, men were allowed to divorce their wives for any reason. Some hold that marriage vows are unbreakable, so that even in the distressing circumstances in which a couple separates, they are still married from God's point of view. This is the Roman Catholic Church's position, although occasionally the church will declare a marriage to be "null" (in other words, it never really was a marriage). William Barclay (1907-1978) has written: Jesus brought together two passages from Genesis, reinforcing the basic position on marriage found in Jewish scripture. Thus, he implicitly emphasized that it is God-made ("God has joined together"), "male and female," lifelong ("let no one separate"), and monogamous ("a man...his wife"). Jesus used the image of marriage and the family to teach the basics about the Kingdom of God. He inaugurated his ministry by blessing the wedding at Cana. In the Sermon on the Mount he set forth a new commandment concerning marriage, teaching that lustful looking constitutes adultery. He also superseded a Mosaic Law allowing divorce with his teaching that "...anyone who divorces his wife, except for sexual immorality (Gk. porneia), causes her to become an adulteress, and anyone who marries the divorced woman commits adultery". Similar Pauline teachings are found in 1 Corinthians 7. The exception clause—"except for..."—uses the Greek word porneia, which is variously translated "fornication" (KJV), "marital unfaithfulness" (NIV 1984), "sexual immorality" (NIV 2011), "unchastity" (RSV), et al. The KJV New Testament Greek Lexicon says porneia includes a variety of sexual "deviations", including "illicit sexual intercourse, adultery, fornication, homosexuality, lesbianism, intercourse with animals, etc., sexual intercourse with close relatives...." Theologian Frank Stagg says that manuscripts disagree as to the presence in the original text of the phrase "except for fornication". Stagg writes: "Divorce always represents failure...a deviation from God's will....
There is grace and redemption where there is contrition and repentance.... There is no clear authorization in the New Testament for remarriage after divorce." Stagg interprets the chief concern of Matthew 5 as being "to condemn the criminal act of the man who divorces an innocent wife.... Jesus was rebuking the husband who victimizes an innocent wife and thinks that he makes it right with her by giving her a divorce". He points out that Jesus refused to be trapped by the Pharisees into choosing between the strict and liberal positions on divorce as held at the time in Judaism. When they asked him, "Is it lawful for a man to divorce his wife for any cause?" he answered by reaffirming God's will as stated in Genesis, that in marriage husband and wife are made "one flesh", and what God has united man must not separate. There is no evidence that Jesus himself ever married, and considerable evidence that he remained single. In contrast to Judaism and many other traditions, he taught that there is a place for voluntary singleness in Christian service. He believed marriage could be a distraction from an urgent mission, for he was living in a time of crisis and urgency in which the Kingdom of God would be established, where there would be neither marrying nor giving in marriage. In Matthew 22, Jesus is asked about the continuing state of marriage after death and he affirms that at the resurrection "people will neither marry nor be given in marriage; they will be like the angels in heaven." New Testament beyond the Gospels The Apostle Paul quoted passages from Genesis almost verbatim in two of his New Testament books. He used marriage not only to describe the kingdom of God, as Jesus had done, but also to define the nature of the 1st-century Christian church. His theological view was a Christian development of the Old Testament parallel between marriage and the relationship between God and Israel. He analogized the church as a bride and Christ as the bridegroom, drawing parallels between Christian marriage and the relationship between Christ and the Church. There is no hint in the New Testament that Jesus was ever married, and no clear evidence that Paul was ever married. However, both Jesus and Paul seem to view marriage as a legitimate calling from God for Christians. Paul elevates singleness to the preferable position, but does offer a caveat suggesting this is "because of the impending crisis", which could itself extend to present times (see also Pauline privilege). Paul's primary issue was that marriage adds concerns to one's life that detract from one's ability to serve God without distraction. Some scholars have speculated that Paul may have been a widower, since prior to his conversion to Christianity he was a Pharisee and a member of the Sanhedrin, positions in which the social norm of the day required men to be married. But it is just as likely that he never married at all. Yet Paul acknowledges the mutuality of marital relations, and recognizes that his own singleness is "a particular gift from God" that others may not necessarily have. He writes: "Now to the unmarried and the widows I say: It is good for them to stay unmarried, as I am. But if they cannot control themselves, they should marry, for it is better to marry than to burn with passion." Paul indicates that bishops, deacons, and elders must be "husbands of one wife", and that women must have one husband.
This is usually understood to legislate against polygamy rather than to require marriage: in the Roman era, widows who did not remarry were considered more pure than those who did. Such widows were known as a "one-man woman" (enos andros gune) in the epistles of Paul. Paul allowed widows to remarry. He says that only one-man women older than 60 years can be placed on the list of Christian widows who performed special tasks in the community, but that younger widows should remarry to avoid falling into sin. Marriage and early Church Fathers Building on what they saw as the example of Jesus and Paul, some early Church Fathers placed less value on the family and saw celibacy and freedom from family ties as a preferable state. Nicene Fathers such as Augustine believed that marriage was a sacrament because it was a symbol used by Paul to express Christ's love of the Church. However, there was also an apocalyptic dimension in his teaching, and he was clear that if everybody stopped marrying and having children that would be an admirable thing; it would mean that the Kingdom of God would return all the sooner and the world would come to an end. Such a view reflects the Manichaean past of Augustine. While upholding the New Testament teaching that marriage is "honourable in all and the bed undefiled," Augustine believed that "yet, whenever it comes to the actual process of generation, the very embrace which is lawful and honourable cannot be effected without the ardour of lust...This is the carnal concupiscence, which, while it is no longer accounted sin in the regenerate, yet in no case happens to nature except from sin." Both Tertullian and Gregory of Nyssa were church fathers who were married. They each stressed that the happiness of marriage was ultimately rooted in misery. They saw marriage as a state of bondage that could only be cured by celibacy. They wrote that at the very least, the virgin woman could expect release from the "governance of a husband and the chains of children." Tertullian argued that second marriage, having been freed from the first by death, "will have to be termed no other than a species of fornication," partly based on the reasoning that this involves desiring to marry a woman out of sexual ardor, which a Christian convert is to avoid. Also advocating celibacy and virginity as preferable alternatives to marriage, Jerome wrote: "It is not disparaging wedlock to prefer virginity. No one can make a comparison between two things if one is good and the other evil." On First Corinthians 7:1 he reasons, "It is good, he says, for a man not to touch a woman. If it is good not to touch a woman, it is bad to touch one: for there is no opposite to goodness but badness. But if it be bad and the evil is pardoned, the reason for the concession is to prevent worse evil." St. John Chrysostom wrote: "...virginity is better than marriage, however good.... Celibacy is...an imitation of the angels. Therefore, virginity is as much more honorable than marriage, as the angel is higher than man. But why do I say angel? Christ, Himself, is the glory of virginity." Cyprian, Bishop of Carthage, said that the first commandment given to men was to increase and multiply, but now that the earth was full there was no need to continue this process of multiplication. This view of marriage was reflected in the lack of any formal liturgy formulated for marriage in the early Church.
No special ceremonial was devised to celebrate Christian marriage—despite the fact that the Church had produced liturgies to celebrate the Eucharist, Baptism and Confirmation. It was not important for a couple to have their nuptials blessed by a priest. People could marry by mutual agreement in the presence of witnesses. At first, the old Roman pagan rite was used by Christians, although modified superficially. The first detailed account of a Christian wedding in the West dates from the 9th century. This system, known as Spousals, persisted after the Reformation. Denominational beliefs and practice Roman Catholicism Today all Christian denominations regard marriage as a sacred institution, a covenant. Roman Catholics consider it to be a sacrament. Marriage was officially recognized as a sacrament at the 1184 Council of Verona. Before then, no specific ritual was prescribed for celebrating a marriage: "Marriage vows did not have to be exchanged in a church, nor was a priest's presence required. A couple could exchange consent anywhere, anytime." In the decrees on marriage of the Council of Trent (twenty-fourth session from 1563), the validity of marriage was made dependent upon the wedding taking place before a priest and two witnesses, although the lack of a requirement for parental consent ended a debate that had gone on since the 12th century. In the case of a divorce, the right of the innocent party to marry again was denied so long as the other party was alive, even if the other party had committed adultery. The Catholic Church allowed marriages to take place inside churches only starting in the 16th century; before then, religious marriages were performed on the porch of the church. The Roman Catholic Church teaches that God himself is the author of the sacred institution of marriage, which is His way of showing love for those He created. Marriage is a divine institution that can never be broken, even if the husband or wife legally divorce in the civil courts; as long as they are both alive, the Church considers them bound together by God. Holy Matrimony is another name for sacramental marriage. Marriage is intended to be a faithful, exclusive, lifelong union of a man and a woman. Committing themselves completely to each other, a Catholic husband and wife strive to sanctify each other, bring children into the world, and educate them in the Catholic way of life. Man and woman, although created differently from each other, complement each other. This complementarity draws them together in a mutually loving union. The valid marriage of baptized Christians is one of the seven Roman Catholic sacraments. The sacrament of marriage is the only sacrament that a priest does not administer directly; a priest, however, is the chief witness of the husband and wife's administration of the sacrament to each other at the wedding ceremony in a Catholic church. The Roman Catholic Church views that Christ himself established the sacrament of marriage at the wedding feast of Cana; therefore, since it is a divine institution, neither the Church nor state can alter the basic meaning and structure of marriage. Husband and wife give themselves totally to each other in a union that lasts until death. Priests are instructed that marriage is part of God's natural law and to support the couple if they do choose to marry. Today it is common for Roman Catholics to enter into a "mixed marriage" between a Catholic and a baptized non-Catholic.
Couples entering into a mixed marriage are usually allowed to marry in a Catholic church provided their decision is of their own accord and they intend to remain together for life, to be faithful to each other, and to have children which are brought up in the Catholic faith. In Roman Catholic teaching, marriage has two objectives: the good of the spouses themselves, and the procreation and education of children (1983 code of canon law, c.1055; 1994 catechism, par.2363). Hence "entering marriage with the intention of never having children is a grave wrong and more than likely grounds for an annulment." It is normal procedure for a priest to ask the prospective bride and groom about their plans to have children before officiating at their wedding. The Roman Catholic Church may refuse to marry anyone unwilling to have children, since procreation by "the marriage act" is a fundamental part of marriage. Thus usage of any form of contraception, in vitro fertilization, or birth control besides Natural Family Planning is a grave offense against the sanctity of marriage and ultimately against God. Protestantism Purposes Essentially all Protestant denominations hold marriage to be ordained by God for the union between a man and a woman. They see the primary purposes of this union as intimate companionship, rearing children and mutual support for both husband and wife to fulfill their life callings. Protestant Christian denominations consider marital sexual pleasure to be a gift of God, though they vary on their position on birth control, ranging from the acceptance of the use of contraception to only allowing natural family planning to teaching Quiverfull doctrine—that birth control is sinful and Christians should have large families. Conservative Protestants consider marriage a solemn covenant between wife, husband and God. Most view sexual relations as appropriate only within a marriage. Protestant Churches discourage divorce though the way it is addressed varies by denomination; for example, the Reformed Church in America permits divorce and remarriage, while connexions such as the Evangelical Methodist Church Conference forbid divorce except in the case of fornication and do not allow for remarriage in any circumstance. Many Methodist Christians teach that marriage is "God's gift and covenant intended to imitate God's covenant with humankind" that "Christians enter in their baptism." For example, the rite used in the Free Methodist Church proclaims that marriage is "more than a legal contract, being a bond of union made in heaven, into which you enter discreetly and reverently." Roles and responsibilities Roles and responsibilities of husband and wives now vary considerably on a continuum between the long-held male dominant/female submission view and a shift toward equality (without sameness) of the woman and the man. There is considerable debate among many Christians today—not just Protestants—whether equality of husband and wife or male headship is the biblically ordained view, and even if it is biblically permissible. The divergent opinions fall into two main groups: Complementarians (who call for husband-headship and wife-submission) and Christian Egalitarians (who believe in full partnership equality in which couples can discover and negotiate roles and responsibilities in marriage). There is no debate that Ephesians 5 presents a historically benevolent husband-headship/wife-submission model for marriage. 
The questions are (a) how these New Testament household codes are to be reconciled with the calls earlier in Chapter 5 (cf. verses 1, 18, 21) for mutual submission among all believers, and (b) the meaning of "head" in v.23. It is important to note that verse 22 contains no verb in the original manuscripts, which were also not divided into verses: Ephesians 5 (NIV) 1 Follow God’s example, therefore, as dearly loved children 2 and walk in the way of love.... 18 be filled with the Spirit.... 21 Submit to one another out of reverence for Christ. 22 Wives, [submit yourselves] to your own husbands as you do to the Lord. 23 For the husband is the head of the wife as Christ is the head of the church, his body, of which he is the Savior. 24 Now as the church submits to Christ, so also wives should submit to their husbands in everything. 25 Husbands, love your wives, just as Christ loved the church and gave himself up for her 26 to make her holy, cleansing her by the washing with water through the word, 27 and to present her to himself as a radiant church, without stain or wrinkle or any other blemish, but holy and blameless. 28 In this same way, husbands ought to love their wives as their own bodies. He who loves his wife loves himself. 29 After all, no one ever hated their own body, but they feed and care for their body, just as Christ does the church— 30 for we are members of his body. 31 "For this reason a man will leave his father and mother and be united to his wife, and the two will become one flesh." 32 This is a profound mystery—but I am talking about Christ and the church. 33 However, each one of you also must love his wife as he loves himself, and the wife must respect her husband. Eastern Orthodoxy In the Eastern Orthodox Church, marriage is treated as a Sacred Mystery (sacrament), and as an ordination. It serves to unite a woman and a man in eternal union before God. It refers to the 1st centuries of the church, where spiritual union of spouses in the first sacramental marriage was eternal. Therefore, it is considered a martyrdom as each spouse learns to die to self for the sake of the other. Like all Mysteries, Orthodox marriage is more than just a celebration of something which already exists: it is the creation of something new, the imparting to the couple of the grace which transforms them from a 'couple' into husband and wife within the Body of Christ. Marriage is an icon (image) of the relationship between Jesus and the Church. This is somewhat akin to the Old Testament prophets' use of marriage as an analogy to describe the relationship between God and Israel. Marriage is the simplest, most basic unity of the church: a congregation where "two or three are gathered together in Jesus' name." The home is considered a consecrated space (the ritual for the Blessing of a House is based upon that of the Consecration of a Church), and the husband and wife are considered the ministers of that congregation. However, they do not "perform" the Sacraments in the house church; they "live" the Sacrament of Marriage. Because marriage is considered to be a pilgrimage wherein the couple walk side by side toward the Kingdom of Heaven, marriage to a non-Orthodox partner is discouraged, though it may be permitted. Unlike Western Christianity, Eastern Christians do not consider the sacramental aspect of the marriage to be conferred by the couple themselves. Rather, the marriage is conferred by the action of the Holy Spirit acting through the priest. 
Furthermore, no one besides a bishop or priest—not even a deacon—may perform the Sacred Mystery. The external sign of the marriage is the placing of wedding crowns upon the heads of the couple, and their sharing in a "Common Cup" of wine. Once crowned, the couple walk a circle three times in a ceremonial "dance" in the middle of the church, while the choir intones a joyous three-part antiphonal hymn, "Dance, Isaiah" The sharing of the Common Cup symbolizes the transformation of their union from a common marriage into a sacred union. The wedding is usually performed after the Divine Liturgy at which the couple receives Holy Communion. Traditionally, the wedding couple would wear their wedding crowns for eight days, and there is a special prayer said by the priest at the removal of the crowns. Divorce is discouraged. Sometimes out of economia (mercy) a marriage may be dissolved if there is no hope whatever for a marriage to fulfill even a semblance of its intended sacramental character. The standard formula for remarriage is that the Orthodox Church joyfully blesses the first marriage, merely performs the second, barely tolerates the third, and invariably forbids the fourth. "On the basis of the ideal of the first marriage as an image of the glory of God the question is which significance such a second marriage has and whether it can be regarded as Mysterion. Even though there are opinions (particularly in the west) which deny the sacramental character to the second marriage, in the orthodox literature almost consistently either a reduced or even a full sacramentality is attributed to it. The investigation of the second marriage rite shows that both positions affirming the sacramentality to a second marriage can be justified." Early church texts forbid marriage between an Orthodox Christian and a heretic or schismatic (which would include all non-Orthodox Christians). Traditional Orthodox Christians forbid mixed marriages with other denominations. More liberal ones perform them, provided that the couple formally commit themselves to rearing their children in the Orthodox faith. All people are called to celibacy—human beings are all born into virginity, and Orthodox Christians are expected by Sacred Tradition to remain in that state unless they are called into marriage and that call is sanctified. The church blesses two paths on the journey to salvation: monasticism and marriage. Mere celibacy, without the sanctification of monasticism, can fall into selfishness and tends to be regarded with disfavour by the Church. Orthodox priests who serve in parishes are usually married. They must marry prior to their ordination. If they marry after they are ordained they are not permitted to continue performing sacraments. If their wife dies, they are forbidden to remarry; if they do, they may no longer serve as a priest. A married man may be ordained as a priest or deacon. However, a priest or deacon is not permitted to enter into matrimony after ordination. Bishops must always be monks and are thus celibate. However, if a married priest is widowed, he may receive monastic tonsure and thus become eligible for the episcopate. The Eastern Orthodox Church believes that marriage is an eternal union of spouses, but in Heaven there will not be a procreative bond of marriage. Oriental Orthodoxy The Non-Chalcedonian Churches of Oriental Orthodoxy hold views almost identical to those of the (Chalcedonian) Eastern Orthodox Church. 
The Coptic Orthodox Church allows second marriages only in cases of adultery or death of spouse. Non-Trinitarian denominations The Church of Jesus Christ of Latter-day Saints In the teachings of The Church of Jesus Christ of Latter-day Saints (LDS Church), celestial (or eternal) marriage is a covenant between a man, a woman, and God performed by a priesthood authority in a temple of the church. Celestial marriage is intended to continue forever into the afterlife if the man and woman do not break their covenants. Thus, eternally married couples are often referred to as being "sealed" to each other. Sealed couples who keep their covenants are also promised to have their posterity sealed to them in the afterlife. (Thus, "families are forever" is a common phrase in the LDS Church.) A celestial marriage is considered a requirement for exaltation. In some countries, celestial marriages can be recognized as civil marriages; in other cases, couples are civilly married outside of the temple and are later sealed in a celestial marriage. (The church will no longer perform a celestial marriage for a couple unless they are first or simultaneously legally married.) The church encourages its members to be in good standing with it so that they may marry or be sealed in the temple. A celestial marriage is not annulled by a civil divorce: a "cancellation of a sealing" may be granted, but only by the First Presidency, the highest authority in the church. Civil divorce and marriage outside the temple carries somewhat of a stigma in the Mormon culture; the church teaches that the "gospel of Jesus Christ—including repentance, forgiveness, integrity, and love—provides the remedy for conflict in marriage." Regarding marriage and divorce, the church instructs its leaders: "No priesthood officer is to counsel a person whom to marry. Nor should he counsel a person to divorce his or her spouse. Those decisions must originate and remain with the individual. When a marriage ends in divorce, or if a husband and wife separate, they should always receive counseling from Church leaders." In church temples, members of the LDS Church perform vicarious celestial marriages for deceased couples who were legally married. New Church (or Swedenborgian Church) The New Church teaches that marital love (or "conjugial love") is "the precious jewel of human life and the repository of the Christian religion" because the love shared between a husband and a wife is the source of all peace and joy. Emanuel Swedenborg coined the term "conjugial" (rather than the more usual adjective in reference to marital union, "conjugal") to describe the special love experienced by married partners. When a husband and wife work together to build their marriage on earth, that marriage continues after the deaths of their bodies and they live as angels in heaven into eternity. Swedenborg claimed to have spoken with angelic couples who had been married for thousands of years. Those who never married in the natural world will, if they wish, find a spouse in heaven. Jehovah's Witnesses The Jehovah's Witnesses view marriage to be a permanent arrangement with the only possible exception being adultery. Divorce is strongly discouraged even when adultery is committed since the wronged spouse is free to forgive the unfaithful one. There are provisions for a domestic separation in the event of "failure to provide for one's household" and domestic violence, or spiritual resistance on the part of a partner. 
Even in such situations, though, divorce would be considered grounds for loss of privileges in the congregation. Remarrying after the death of a spouse or a proper divorce is permitted. Marriage is the only situation where any type of sexual interaction is acceptable, and even then certain restrictions apply to acts such as oral and anal sex. Married persons who are known to commit such acts may in fact lose privileges in
the biblically ordained relationship between husband and wife. These views range from Christian egalitarianism that interprets the New Testament as teaching complete equality of authority and responsibility between the man and woman in marriage, all the way to Patriarchy that calls for a "return to complete patriarchy" in which relationships are based on male-dominant power and authority in marriage: 1. Christian Egalitarians believe in an equal partnership of the wife and husband with neither being designated as the leader in the marriage or family. Instead, the wife and husband share a fully equal partnership in both their marriage and in the family. Its proponents teach "the fundamental biblical principle of the equality of all human beings before God". "There is neither Jew nor Gentile, neither slave nor free, nor is there male and female, for you are all one in Christ Jesus." According to this principle, there can be no moral or theological justification for permanently granting or denying status, privilege, or prerogative solely on the basis of a person's race, class, or gender. 2. Christian Complementarians prescribe husband-headship—a male-led hierarchy. This view's core beliefs call for a husband's "loving, humble headship" and the wife's "intelligent, willing submission" to his headship. They believe women have "different but complementary roles and responsibilities in marriage". 3. Biblical patriarchy, though not at all popular among mainstream Christians, prescribes a strict male-dominant hierarchy. This view makes the husband the ruler over his wife and his household. Their organization's first tenet is that "God reveals Himself as masculine, not feminine. God is the eternal Father and the eternal Son, the Holy Spirit is also addressed as He, and Jesus Christ is a male". They consider the husband-father to be sovereign over his household—the family leader, provider, and protector. They call for a wife to be obedient to her head (her husband). Some Christian authorities permit the practice polygamy (specifically polygyny), but this practice, besides being illegal in Western cultures, is now considered to be out of the Christian mainstream in most parts of the globe; the Lutheran World Federation hosted a regional conference in Africa, in which the acceptance of polygamists and their wives into full membership by the Lutheran Church in Liberia was defended as being permissible. While the Lutheran Church in Liberia permits men to retain their wives if they married them prior to being received into the Church, it does not permit polygamists who have become Christians to marry more wives after they have received the sacrament of Holy Baptism. Family authority and responsibilities Much of the dispute hinges on how one interprets the New Testament household code (Haustafel), a term coined by Martin Luther, which has as its main focus hierarchical relationships between three pairs of social classes that were controlled by Roman law: husbands/wives, parents/children, and masters/slaves. The apostolic teachings, with variations, that constitute what has been termed the "household code" occurs in four epistles (letters) by the Apostle Paul and in 1 Peter. In the early Roman Republic, long before the time of Christ, the law of manus along with the concept of patria potestas (rule of the fathers), gave the husband nearly absolute autocratic power over his wife, children, and slaves, including the power of life and death. 
In practice, the extreme form of this right was seldom exercised, and it was eventually limited by law. Theologian Frank Stagg finds the basic tenets of the code in Aristotle's discussion of the household in Book 1 of Politics and in Philo's Hypothetica 7.14. Serious study of the New Testament Household Code (Haustafel) began with Martin Dibelius in 1913, with a wide range of studies since then. In a Tübingen dissertation, James E. Crouch concludes that the early Christians found in Hellenistic Judaism a code which they adapted and Christianized. The Staggs believe the several occurrences of the New Testament household code in the Bible were intended to meet the needs for order within the churches and in the society of the day. They maintain that the New Testament household code is an attempt by Paul and Peter to Christianize the concept of family relationships for Roman citizens who had become followers of Christ. The Staggs write that there is some suggestion in scripture that because Paul had taught that they had newly found freedom "in Christ", wives, children, and slaves were taking improper advantage of the Haustafel both in the home and the church. "The form of the code stressing reciprocal social duties is traced to Judaism's own Oriental background, with its strong moral/ethical demand but also with a low view of woman.... At bottom is probably to be seen the perennial tension between freedom and order.... What mattered to (Paul) was 'a new creation' and 'in Christ' there is 'not any Jew nor Greek, not any slave nor free, not any male and female'." Two of these Christianized codes are found in Ephesians 5 (which contains the phrases "husband is the head of the wife" and "wives, submit to your husband") and in Colossians 3, which instructs wives to subordinate themselves to their husbands. The importance of the meaning of "head" as used by the Apostle Paul is pivotal in the conflict between the Complementarian position and the Egalitarian view. The word Paul used for "head", transliterated from Greek, is kephalē. Today's English word "cephalic" stems from the Greek kephalē and means "of or relating to the head; or located on, in, or near the head." A thorough concordance search by Catherine Kroeger shows that the most frequent use of "head" (kephalē) in the New Testament is to refer to "the anatomical head of a body". She found that its second most frequent use in the New Testament was to convey the metaphorical sense of "source". Other Egalitarian authors such as Margaret Howe agree with Kroeger, writing that "The word 'head' must be understood not as 'ruler' but as 'source'." Wayne Grudem criticizes the common rendering of kephalē in those same passages as meaning only "source", and argues that it denotes "authoritative head" in such texts as 1 Corinthians 11. Complementarians interpret that verse to mean that God the Father is the authoritative head over the Son, and in turn Jesus is the authoritative head over the church, not simply its source. By extension, they then conclude that in marriage and in the church, the man is the authoritative head over the woman. Another potential way to define the word "head", and hence the relationship between husband and wife as found in the Bible, is through the example given in the surrounding context in which the word is found. In that context the husband and wife are compared to Christ and his church.
The context seems to imply an authority structure based on a man sacrificing himself for his wife, as Christ did for the church; a love-based authority structure, where submission is not required but freely given based on the care given to the wife. Some biblical references on this subject are debated depending on one's school of theology. The historical grammatical method is a hermeneutic technique that strives to uncover the meaning of the text by taking into account not just the grammatical words, but also the syntactical aspects, the cultural and historical background, and the literary genre. Thus references to a patriarchal Biblical culture may or may not be relevant to other societies. What is believed to be a timeless truth to one person or denomination may be considered a cultural norm or minor opinion to another. Egalitarian view Christian Egalitarians (from the French word "égal" meaning "equal") believe that Christian marriage is intended to be a marriage without any hierarchy—a full and equal partnership between the wife and husband. They emphasize that nowhere in the New Testament is there a requirement for a wife to obey her husband. While "obey" was introduced into marriage vows for much of the church during the Middle Ages, its only New Testament support is found in Peter 3, with that only being by implication from Sarah's obedience to Abraham. Scriptures such as Galatians 3:28 state that in Christ, right relationships are restored and in him, "there is neither Jew nor Greek, slave nor free, male nor female." Christian Egalitarians interpret scripture to mean that God intended spouses to practice mutual submission, each in equality with the other. The phrase "mutual submission" comes from a verse in Ephesians 5 which precedes advice for the three domestic relationships of the day, including slavery. It reads, "Submit to one another ('mutual submission') out of reverence for Christ", wives to husbands, children to parents, and slaves to their master. Christian Egalitarians believe that full partnership in marriage is the most biblical view, producing the most intimate, wholesome, and reciprocally fulfilling marriages. The Christian Egalitarian view of marriage asserts that gender, in and of itself, neither privileges nor curtails a believer's gifting or calling to any ministry in the church or home. It does not imply that women and men are identical or undifferentiated, but affirms that God designed men and women to complement and benefit one another. A foundational belief of Christian Egalitarians is that the husband and wife are created equally and are ordained of God to "become one", a biblical principle first ordained by God in Genesis 2, reaffirmed by Jesus in Matthew 19 and Mark 10, and by the Apostle Paul in Ephesians 5. Therefore, they see that "oneness" as pointing to gender equality in marriage. They believe the biblical model for Christian marriages is therefore for the spouses to share equal responsibility within the family—not one over the other nor one under the other. David Dykes, theologian, author, and pastor of a 15,000-member Baptist church, sermonized that "When you are in Christ, you have full equality with all other believers". In a sermon he entitled "The Ground Is Level at the Foot of the Cross", he said that some theologians have called one particular Bible verse the Christian Magna Carta. The Bible verse reads: "There is neither Jew nor Gentile, neither slave nor free, nor is there male and female, for you are all one in Christ Jesus." 
Acknowledging the differences between men and women, Dykes writes that "in Christ, these differences don't define who we are. The only category that really matters in the world is whether you are in Christ. At the cross, Jesus destroyed all the man-made barriers of hostility:" ethnicity, social status, and gender. Those of the egalitarian persuasion point to the biblical instruction that all Christian believers, irrespective of gender, are to submit or be subject "to one another in the fear of God" or "out of reverence for Christ". Gilbert Bilezikian writes that in the highly debated Ephesians 5 passage, the verb "to be subject" or "to be submitted" appears in verse 21, which he describes as serving as a "hinge" between two different sections. The first section consists of verses 18–20, verse 21 is the connection between the two, and the second section consists of verses 22–33. When discussion begins at verse 22 in Ephesians 5, Paul appears to be reaffirming a chain of command principle within the family. However, advocates of Christian egalitarianism believe that the egalitarian model has firm biblical support: The word translated "help" or "helper" in Genesis 2 until quite recently was generally understood to subordinate a wife to her husband. The KJV translates it as God saying, "I will make a help meet for him". The first distortion was extrabiblical: the noun "help" and the adjective "meet" traditionally have been combined into a new noun, "helpmate". Thus, a wife was often referred to as her husband's "helpmate". Next, from the word "help" were drawn inferences of authority/subjection distinctions between men and women. "Helper" was taken to mean that the husband was the boss and the wife his domestic. It is now realized that of the 21 times the Hebrew word 'ezer is used in the Old Testament, in eight of those instances the term clearly means "savior"—another word for Jehovah God. For example, Psalm 33 says "the Lord...is our help ('ezer) and shield". Psalm 121 reads "I lift up my eyes to the mountains—where does my help ('ezer) come from? My help ('ezer) comes from the Lord, the Maker of heaven and earth." That Hebrew word is not used in the Bible with reference to any subordinate person such as a servant. Thus, forms of 'ezer in the Hebrew Bible can mean either "to save" or "to be strong" or have the idea of power and strength. The "two becoming one" concept, first cited in Genesis 2, was quoted by Jesus in his teachings on marriage and recorded almost identically in the gospels of both Matthew and Mark. In those passages Jesus reemphasized the concept by adding a divine postscript to the Genesis passage: "So, they are no longer two, but one" (NIV). The Apostle Paul also quoted the Genesis 2:24 passage in Ephesians 5. Describing it as a "profound mystery", he analogizes it to "Christ and the church". Then Paul states that every husband must love his wife as he loves himself. Jesus actually forbids any hierarchy in Christian relationships. All three synoptic gospels record virtually the same teaching of Jesus, adding to its apparent significance. The Apostle Paul calls on husbands and wives to be subject to each other out of reverence for Christ—mutual submission. As persons, husband and wife are of equal value. There is no priority of one spouse over the other. In truth, they are one. Bible scholar Frank Stagg and classicist Evelyn Stagg write that husband-wife equality produces the most intimate, wholesome and mutually fulfilling marriages.
They conclude that the Apostle Paul's statement, sometimes called the "Magna Carta of Humanity" and recorded in Galatians 3, applies to all Christian relationships, including Christian marriage: "There is neither Jew nor Greek, there is neither bond nor free, there is neither male nor female: for you are all one in Christ Jesus." The Apostle Peter calls husbands and wives "joint heirs of the grace of life" and cautions a husband who is not considerate to his wife and does not treat her with respect that his prayers will be hindered. Each of the six times Aquila and his wife Priscilla are mentioned by name in the New Testament, they are listed together. Their order of appearance alternates, with Aquila mentioned first in the first, third and fifth mentions, and Priscilla (Prisca) first in the other three. Some revisions of the Bible put Priscilla first, instead of Aquila, in Acts 18:26, following the Vulgate and a few Greek texts. Some scholars suggest that Priscilla was the head of the family unit. Among spouses, it is possible to submit without love, but it is impossible to love without submitting mutually to each other. The egalitarian paradigm leaves it up to the couple to decide who is responsible for what task or function in the home. Such decisions should be made rationally and wisely, not based on gender or tradition. Examples of a couple's decision logic might include: which spouse is more competent for a particular task or function; which has better access to it; or if they decide both are similarly competent and have comparable access, they might make the decision based on who prefers that function or task, or conversely, which of them dislikes it less than the other. The egalitarian view holds that decisions about managing family responsibilities are made rationally through cooperation and negotiation, not on the basis of tradition (e.g., "man's work" or "woman's" work), nor any other irrelevant or irrational basis. Complementarian view Complementarians hold to a hierarchical structure between husband and wife. They believe men and women have different gender-specific roles that allow each to complement the other, hence the designation "Complementarians". The Complementarian view of marriage holds that while the husband and wife are of equal worth before God, husbands and wives are given different functions and responsibilities by God that are based on gender, and that male leadership is biblically ordained so that the husband is always the senior authority figure. They state they "observe with deep concern" "accompanying distortions or neglect of the glad harmony portrayed in Scripture between the intelligent, humble leadership of redeemed husbands and the loving, willing support of that leadership by redeemed wives". They believe "the Bible presents a clear chain of authority—above all authority and power is God; God is the head of Christ.
invisible by explicitly declaring fully abstract classes that represent the interfaces of the class. Some languages feature other accessibility schemes: Instance vs. class accessibility: Ruby supports instance-private and instance-protected access specifiers in lieu of class-private and class-protected, respectively. They differ in that they restrict access based on the instance itself, rather than the instance's class. Friend: C++ supports a mechanism where a function explicitly declared as a friend function of the class may access the members designated as private or protected. Path-based: Java supports restricting access to a member within a Java package, which is the logical path of the file. However, it is a common practice when extending a Java framework to implement classes in the same package as a framework class in order to access protected members. The source file may exist in a completely different location, and may be deployed to a different .jar file, yet still be in the same logical path as far as the JVM is concerned. Inter-class relationships In addition to the design of standalone classes, programming languages may support more advanced class design based upon relationships between classes. The inter-class relationship design capabilities commonly provided are compositional and hierarchical. Compositional Classes can be composed of other classes, thereby establishing a compositional relationship between the enclosing class and its embedded classes. A compositional relationship between classes is also commonly known as a has-a relationship. For example, a class "Car" could be composed of and contain a class "Engine". Therefore, a Car has an Engine. One aspect of composition is containment, which is the enclosure of component instances by the instance that has them. If an enclosing object contains component instances by value, the components and their enclosing object have a similar lifetime. If the components are contained by reference, they may not have a similar lifetime. For example, in Objective-C 2.0:

@interface Car : NSObject
@property NSString *name;
@property Engine *engine;
@property NSArray *tires;
@end

This class has an instance of NSString (a string object) for the name, an instance of Engine, and an instance of NSArray (an array object) for the tires. Hierarchical Classes can be derived from one or more existing classes, thereby establishing a hierarchical relationship between the derived-from classes (base classes, parent classes or superclasses) and the derived class (child class or subclass). The relationship of the derived class to the derived-from classes is commonly known as an is-a relationship. For example, a class 'Button' could be derived from a class 'Control'. Therefore, a Button is a Control. Structural and behavioral members of the parent classes are inherited by the child class. Derived classes can define additional structural members (data fields) and behavioral members (methods) in addition to those that they inherit and are therefore specializations of their superclasses. Also, derived classes can override inherited methods if the language allows. Not all languages support multiple inheritance. For example, Java allows a class to implement multiple interfaces, but only inherit from one class. If multiple inheritance is allowed, the hierarchy is a directed acyclic graph (or DAG for short), otherwise it is a tree. The hierarchy has classes as nodes and inheritance relationships as links. Classes in the same level are more likely to be associated than classes in different levels.
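To make the has-a and is-a distinction concrete, here is a minimal Java sketch; the class names Car, Engine, Control, and Button simply echo the illustrations above and are not taken from any real library. Composition is expressed as a field holding a component instance, and derivation is expressed with extends:

// Composition (has-a): a Car holds an Engine as a component.
class Engine {
    void start() { System.out.println("engine started"); }
}

class Car {
    private final Engine engine = new Engine(); // component held by reference

    void drive() {
        engine.start();
        System.out.println("car is driving");
    }
}

// Derivation (is-a): a Button is a Control and inherits its members.
class Control {
    void draw() { System.out.println("drawing a control"); }
}

class Button extends Control {
    @Override
    void draw() { System.out.println("drawing a button"); } // overrides the inherited method
}

public class RelationshipDemo {
    public static void main(String[] args) {
        new Car().drive();
        Control widget = new Button(); // a Button can stand in wherever a Control is expected
        widget.draw();
    }
}

Because Java permits a class to extend only one class, Button derives from Control alone, though it could additionally implement any number of interfaces.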
The levels of this hierarchy are called layers or levels of abstraction. Example (simplified Objective-C 2.0 code, from the iPhone SDK):

@interface UIResponder : NSObject //...
@interface UIView : UIResponder //...
@interface UIScrollView : UIView //...
@interface UITableView : UIScrollView //...

In this example, a UITableView is a UIScrollView is a UIView is a UIResponder is an NSObject. Definitions of subclass Conceptually, a superclass is a superset of its subclasses. For example, a common class hierarchy would involve Shape as a superclass of Rectangle, while Square would be a subclass of Rectangle. These are all subset relations in set theory as well, i.e., all squares are rectangles but not all rectangles are squares. A common conceptual error is to mistake a part-of relation for a subclass relation. For example, a car and a truck are both kinds of vehicles and it would be appropriate to model them as subclasses of a vehicle class. However, it would be an error to model the component parts of the car as subclass relations. For example, a car is composed of an engine and body, but it would not be appropriate to model engine or body as a subclass of car. In object-oriented modeling these kinds of relations are typically modeled as object properties. In this example, the Car class would have a property called parts; parts would be typed to hold a collection of objects, such as instances of Engine, Body, etc. Object modeling languages such as UML include capabilities to model various aspects of "part of" and other kinds of relations – data such as the cardinality of the objects, constraints on input and output values, etc. This information can be utilized by developer tools to generate additional code beyond the basic data definitions for the objects, such as error checking on get and set methods. One important question when modeling and implementing a system of object classes is whether a class can have one or more superclasses. In the real world with actual sets it would be rare to find sets that didn't intersect with more than one other set. However, while some systems such as Flavors and CLOS provide a capability for more than one parent, doing so at run time introduces complexity that many in the object-oriented community consider antithetical to the goals of using object classes in the first place. Understanding which class will be responsible for handling a message can get complex when dealing with more than one superclass. If used carelessly this feature can introduce some of the same system complexity and ambiguity classes were designed to avoid. Most modern object-oriented languages such as Smalltalk and Java require single inheritance at run time. For these languages, multiple inheritance may be useful for modeling but not for an implementation. However, semantic web application objects do have multiple superclasses. The volatility of the Internet requires this level of flexibility and the technology standards such as the Web Ontology Language (OWL) are designed to support it. A similar issue is whether or not the class hierarchy can be modified at run time. Languages such as Flavors, CLOS, and Smalltalk all support this feature as part of their meta-object protocols. Since classes are themselves first-class objects, it is possible to have them dynamically alter their structure by sending them the appropriate messages. Other languages that focus more on strong typing such as Java and C++ do not allow the class hierarchy to be modified at run time. Semantic web objects have the capability for run time changes to classes.
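The subclass-versus-part-of distinction above can also be sketched in Java as a separate, standalone example; Vehicle, Car, Truck, Engine, Body, and wheelCount are hypothetical names chosen to mirror the text. Car and Truck are modeled as subclasses of Vehicle (is-a), while Engine and Body appear as fields of Car (part-of), not as subclasses:

class Engine { }
class Body { }

// is-a: cars and trucks are kinds of vehicles, so they derive from Vehicle.
abstract class Vehicle {
    abstract int wheelCount();
}

class Car extends Vehicle {
    // part-of: an engine and a body are components of a car, so they are properties, not subclasses.
    private final Engine engine = new Engine();
    private final Body body = new Body();

    @Override
    int wheelCount() { return 4; }
}

class Truck extends Vehicle {
    @Override
    int wheelCount() { return 6; }
}

public class PartsDemo {
    public static void main(String[] args) {
        Vehicle v = new Car();              // is-a: a Car can be used as a Vehicle
        System.out.println(v.wheelCount()); // prints 4
    }
}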
The rationale is similar to the justification for allowing multiple superclasses: the Internet is so dynamic and flexible that dynamic changes to the hierarchy are required to manage this volatility. Orthogonality of the class concept and inheritance Although class-based languages are commonly assumed to support inheritance, inheritance is not an intrinsic aspect of the concept of classes. Some languages, often referred to as "object-based languages", support classes yet do not support inheritance. Examples of object-based languages include earlier versions of Visual Basic. Within object-oriented analysis In object-oriented analysis and in UML, an association between two classes represents a collaboration between the classes or their corresponding instances. Associations have direction; for example, a bi-directional association between two classes indicates that both of the classes are aware of their relationship. Associations may be labeled according to their name or purpose. An association role is given to each end of an association and describes the role of the corresponding class. For example, a "subscriber" role describes the way instances of the class "Person" participate in a "subscribes-to" association with the class "Magazine". Also, a "Magazine" has the "subscribed magazine" role in the same association. Association role multiplicity describes how many instances correspond to each instance of the other class of the association. Common multiplicities are "0..1", "1..1", "1..*" and "0..*", where the "*" specifies any number of instances. Taxonomy of classes There are many categories of classes, some of which overlap. Abstract and concrete In a language that supports inheritance, an abstract class, or abstract base class (ABC), is a class that cannot be instantiated because it is either labeled as abstract or it simply specifies abstract methods (or virtual methods). An abstract class may provide implementations of some methods, and may also specify virtual methods via signatures that are to be implemented by direct or indirect descendants of the abstract class. Before a class derived from an abstract class can be instantiated, all abstract methods of its parent classes must be implemented by some class in the derivation chain. Most object-oriented programming languages allow the programmer to specify which classes are considered abstract and will not allow these to be instantiated. For example, in Java, C# and PHP, the keyword abstract is used. In C++, an abstract class is a class having at least one abstract method given by the appropriate syntax in that language (a pure virtual function in C++ parlance). A class consisting of only pure virtual methods is called a Pure Abstract Base Class (or Pure ABC) in C++ and is also known as an interface by users of the language. Other languages, notably Java and C#, support a variant of abstract classes called an interface via a keyword in the language. In these languages, multiple inheritance is not allowed, but a class can implement multiple interfaces. Such a class can only contain abstract publicly accessible methods. A concrete class is a class that can be instantiated, as opposed to abstract classes, which cannot. Local and inner In some languages, classes can be declared in scopes other than the global scope. There are various types of such classes. An inner class is a class defined within another class. The relationship between an inner class and its containing class can also be treated as another type of class association.
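Before turning to inner and local classes, the abstract and concrete distinction described above can be illustrated with a small Java sketch; Drawable, Shape, and Circle are hypothetical names invented for the example. The interface declares only abstract methods, the abstract class cannot be instantiated, and the concrete class implements every inherited abstract method and therefore can be:

// An interface: a variant of an abstract class containing only abstract, publicly accessible methods.
interface Drawable {
    void draw();
}

// An abstract class: cannot be instantiated; may mix provided implementations with abstract methods.
abstract class Shape implements Drawable {
    double area() { return 0.0; } // implementation provided to descendants
    // draw() is inherited from Drawable and left unimplemented here.
}

// A concrete class: implements all inherited abstract methods, so it can be instantiated.
class Circle extends Shape {
    private final double radius;

    Circle(double radius) { this.radius = radius; }

    @Override
    double area() { return Math.PI * radius * radius; }

    @Override
    public void draw() { System.out.println("circle with area " + area()); }
}

Here new Shape() would be rejected by the compiler, while new Circle(2.0).draw() is allowed.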
An inner class is typically neither associated with instances of the enclosing class nor instantiated along with its enclosing class. Depending on the language, it may or may not be possible to refer to the class from outside the enclosing class. A related concept is inner types, also known as inner data type or nested type, which is a generalization of the concept of inner classes. C++ is an example of a language that supports both inner classes and inner types (via typedef declarations). Another type is a local class, which is a class defined within a procedure or function. This limits references to the class name to within the scope where the class is declared. Depending on the semantic rules of the language, there may be additional restrictions on local classes compared to non-local ones. One common restriction is to disallow local class methods from accessing local variables of the enclosing function.
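As a rough Java illustration of both kinds, with Outer, Counter, Greeter, and makeGreeter as hypothetical names: Counter is an inner class declared inside Outer, and Greeter is a local class whose name is visible only inside the method that declares it. Note that in Java specifically a non-static inner class is tied to an instance of its enclosing class, and a local class may capture only effectively final local variables, one form of the restriction just mentioned:

class Outer {
    private int count = 0;

    // Inner class: defined within another class; here it can read Outer's private state.
    class Counter {
        int next() { return ++count; }
    }

    // Local class: Greeter is defined within a method, so its name is not visible outside makeGreeter.
    Runnable makeGreeter(String name) {          // name is effectively final and may be captured
        class Greeter implements Runnable {
            @Override
            public void run() { System.out.println("hello, " + name); }
        }
        return new Greeter();
    }

    public static void main(String[] args) {
        Outer outer = new Outer();
        Outer.Counter counter = outer.new Counter(); // a non-static inner class needs an enclosing instance
        System.out.println(counter.next());          // prints 1
        outer.makeGreeter("world").run();            // prints "hello, world"
    }
}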
designated as private or protected. Path-based: Java supports restricting access to a member within a Java package, which is the logical path of the file. However, it is a common practice when extending a Java framework to implement classes in the same package as a framework class in order to access protected members. The source file may exist in a completely different location, and may be deployed to a different .jar file, yet still be in the same logical path as far as the JVM is concerned. Inter-class relationships In addition to the design of standalone classes, programming languages may support more advanced class design based upon relationships between classes. The inter-class relationship design capabilities commonly provided are compositional and hierarchical. Compositional Classes can be composed of other classes, thereby establishing a compositional relationship between the enclosing class and its embedded classes. Compositional relationship between classes is also commonly known as a has-a relationship. For example, a class "Car" could be composed of and contain a class "Engine". Therefore, a Car has an Engine. One aspect of composition is containment, which is the enclosure of component instances by the instance that has them. If an enclosing object contains component instances by value, the components and their enclosing object have a similar lifetime. If the components are contained by reference, they may not have a similar lifetime. For example, in Objective-C 2.0: @interface Car : NSObject @property NSString *name; @property Engine *engine @property NSArray *tires; @end This class has an instance of (a string object), , and (an array object). Hierarchical Classes can be derived from one or more existing classes, thereby establishing a hierarchical relationship between the derived-from classes (base classes, parent classes or ) and the derived class (child class or subclass) . The relationship of the derived class to the derived-from classes is commonly known as an is-a relationship. For example, a class 'Button' could be derived from a class 'Control'. Therefore, a Button is a Control. Structural and behavioral members of the parent classes are inherited by the child class. Derived classes can define additional structural members (data fields) and behavioral members (methods) in addition to those that they inherit and are therefore specializations of their superclasses. Also, derived classes can override inherited methods if the language allows. Not all languages support multiple inheritance. For example, Java allows a class to implement multiple interfaces, but only inherit from one class. If multiple inheritance is allowed, the hierarchy is a directed acyclic graph (or DAG for short), otherwise it is a tree. The hierarchy has classes as nodes and inheritance relationships as links. Classes in the same level are more likely to be associated than classes in different levels. The levels of this hierarchy are called layers or levels of abstraction. Example (Simplified Objective-C 2.0 code, from iPhone SDK): @interface UIResponder : NSObject //... @interface UIView : UIResponder //... @interface UIScrollView : UIView //... @interface UITableView : UIScrollView //... In this example, a UITableView is a UIScrollView is a UIView is a UIResponder is an NSObject. Definitions of subclass Conceptually, a superclass is a superset of its subclasses. For example, a common class hierarchy would involve as a superclass of and , while would be a subclass of . 
These are all subset relations in set theory as well, i.e., all squares are rectangles but not all rectangles are squares. A common conceptual error is to mistake a part-of relation for a subclass relation. For example, a car and a truck are both kinds of vehicles and it would be appropriate to model them as subclasses of a vehicle class. However, it would be an error to model the component parts of the car as subclass relations. For example, a car is composed of an engine and body, but it would not be appropriate to model engine or body as a subclass of car. In object-oriented modeling these kinds of relations are typically modeled as object properties. In this example, the Car class would have a property called parts; parts would be typed to hold a collection of objects, such as instances of Body, Engine, Tires, etc. Object modeling languages such as UML include capabilities to model various aspects of "part of" and other kinds of relations – data such as the cardinality of the objects, constraints on input and output values, etc. This information can be utilized by developer tools to generate additional code beside the basic data definitions for the objects, such as error checking on get and set methods. One important question when modeling and implementing a system of object classes is whether a class can have one or more superclasses. In the real world with actual sets it would be rare to find sets that didn't intersect with more than one other set. However, while some systems such as Flavors and CLOS provide a capability for more than one parent, doing so at run time introduces complexity that many in the object-oriented community consider antithetical to the goals of using object classes in the first place. Understanding which class will be responsible for handling a message can get complex when dealing with more than one superclass. If used carelessly this feature can introduce some of the same system complexity and ambiguity classes were designed to avoid. Most modern object-oriented languages such as Smalltalk and Java require single inheritance at run time. For these languages, multiple inheritance may be useful for modeling but not for an implementation. However, semantic web application objects do have multiple superclasses. The volatility of the Internet requires this level of flexibility and technology standards such as the Web Ontology Language (OWL) are designed to support it. A similar issue is whether or not the class hierarchy can be modified at run time. Languages such as Flavors, CLOS, and Smalltalk all support this feature as part of their meta-object protocols. Since classes are themselves first-class objects, it is possible to have them dynamically alter their structure by sending them the appropriate messages. Other languages that focus more on strong typing such as Java and C++ do not allow the class hierarchy to be modified at run time. Semantic web objects have the capability for run time changes to classes. The rationale is similar to the justification for allowing multiple superclasses, that the Internet is so dynamic and flexible that dynamic changes to the hierarchy are required to manage this volatility. Orthogonality of the class concept and inheritance Although class-based languages are commonly assumed to support inheritance, inheritance is not an intrinsic aspect of the concept of classes. Some languages, often referred to as "object-based languages", support classes yet do not support inheritance. Examples of object-based languages include earlier versions of Visual Basic. 
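The part-of versus is-a distinction discussed above can be sketched in a few lines of Python (hypothetical class and property names, not taken from any particular source): the car inherits from a general vehicle class, while its engine is held as a component in a parts property rather than being modeled as a subclass.
class Vehicle:
    """Superclass: every Car *is a* Vehicle (subclass relation)."""
    def __init__(self, wheels):
        self.wheels = wheels

class Engine:
    """Component: an Engine is *part of* a Car, not a kind of Car."""
    def __init__(self, horsepower):
        self.horsepower = horsepower

class Car(Vehicle):
    """Car specializes Vehicle (is-a) and is composed of parts (has-a)."""
    def __init__(self, horsepower):
        super().__init__(wheels=4)
        # Composition modeled as an object property holding component instances.
        self.parts = [Engine(horsepower)]

car = Car(horsepower=120)
assert isinstance(car, Vehicle)           # is-a: inheritance
assert not isinstance(car.parts[0], Car)  # part-of: Engine is not a Car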
Within object-oriented analysis In object-oriented analysis and in UML, an association between two classes represents a collaboration between the classes or their corresponding instances. Associations have direction; for example, a bi-directional association between two classes indicates that both of the classes are aware of their relationship. Associations may be labeled according to their name or purpose. An association role is given to an end of an association and describes the role of the corresponding class. For example, a "subscriber" role describes the way instances of the class "Person" participate in a "subscribes-to" association with the class "Magazine". Also, a "Magazine" has the "subscribed magazine" role in the same association. Association role multiplicity describes how many instances correspond to each instance of the other class of the association. Common multiplicities are "0..1", "1..1", "1..*" and "0..*", where the "*" specifies any number of instances. Taxonomy of classes There are many categories of classes, some of which overlap. Abstract and concrete In a language that supports inheritance, an abstract class, or abstract base class (ABC), is a class that cannot be instantiated because it is either labeled as abstract or it simply specifies abstract methods (or virtual methods). An abstract class may provide implementations of some methods, and may also specify virtual methods via signatures that are to be implemented by direct or indirect descendants of the abstract class. Before a class derived from an abstract class can be instantiated, all abstract methods of its parent classes must be implemented by some class in the derivation chain. Most object-oriented programming languages allow the programmer to specify which classes are considered abstract and will not allow these to be instantiated. For example, in Java, C# and PHP, the keyword abstract is used. In C++, an abstract class is a class having at least one abstract method given by the appropriate syntax in that language (a pure virtual function in C++ parlance). A class consisting of only pure virtual methods is called a Pure Abstract Base Class (or Pure ABC) in C++ and is also known as an interface by users of the language. Other languages, notably Java and C#, support a variant of abstract classes called an interface via a keyword in the language. In these languages, multiple inheritance is not allowed, but a class can implement multiple interfaces. Such a class can only contain abstract publicly accessible methods. A concrete class is a class that can be instantiated, as opposed to abstract classes, which cannot. Local and inner In some languages, classes can be declared in scopes other than the global scope. There are various types of such classes. An inner class is a class defined within another class. The relationship between an inner class and its containing class can also be treated as another type of class association. An inner class is typically neither associated with instances of the enclosing class nor instantiated along with its enclosing class. Depending on language, it may or may not be possible to refer to the class from outside the enclosing class. A related concept is inner types, also known as inner data type or nested type, which is a generalization of the concept of inner classes. C++ is an example of a language that supports both inner classes and inner types (via typedef declarations). Another type is a local class, which is a class defined within a procedure or function. 
This limits references to the class name to within the scope where the class is declared. Depending on the semantic rules of the language, there may be additional restrictions on local classes compared to non-local ones. One common restriction is to disallow local class methods from accessing local variables of the enclosing function. For example, in C++, a local class may refer to static variables declared within its enclosing function, but may not access the function's automatic variables. Metaclasses Metaclasses are classes whose instances are classes. A metaclass describes a common structure of a collection of classes and can implement a design pattern or describe particular kinds of classes. Metaclasses are often used to describe frameworks. In some languages, such as Python, Ruby or Smalltalk, a class is also an object; thus each class is an instance of a unique metaclass that is built into the language. The Common Lisp Object System (CLOS) provides metaobject protocols (MOPs) to implement those classes and metaclasses. Non-subclassable Non-subclassable classes allow programmers to design classes and hierarchies of classes where at some level in the hierarchy, further derivation is prohibited (a stand-alone class may also be designated as non-subclassable, preventing the formation of any hierarchy). Contrast this to abstract classes, which imply, encourage, and require derivation in order to be used at all. A non-subclassable class is implicitly concrete. A non-subclassable class is created by declaring the class as sealed in C# or as final in Java or PHP. For example, Java's String class is designated as final. Non-subclassable classes may allow a compiler (in compiled languages) to perform optimizations that are not available for subclassable classes. Open Class An open class is one that can be changed. Typically, an executable program cannot be changed by customers. Developers can often change some classes, but typically cannot change standard or built-in ones. In Ruby, all classes are open. In Python, classes can be created at runtime, and all can be modified afterwards. Objective-C categories permit the programmer to add methods to an existing class without the need to recompile that class or even have access to its source code. Mixins Some languages have special support for mixins, though in any language with multiple inheritance a mixin is simply a class that does not represent an is-a-type-of relationship. Mixins are typically used to add the same methods to multiple classes; for example, a class might provide a method called when included in classes
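As a rough illustration of the mixin idea above, the following Python sketch (hypothetical names) adds the same serialization method to two otherwise unrelated classes; the mixin does not stand in an is-a-type-of relationship with either of them.
import json

class JsonMixin:
    """Mixin: supplies a shared method; not meant to be instantiated on its own."""
    def to_json(self):
        return json.dumps(self.__dict__)

class Point(JsonMixin):
    def __init__(self, x, y):
        self.x, self.y = x, y

class User(JsonMixin):
    def __init__(self, name):
        self.name = name

print(Point(1, 2).to_json())   # {"x": 1, "y": 2}
print(User("Ada").to_json())   # {"name": "Ada"}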
of New Zealand Canterbury Plains Canterbury Bight, a stretch of coastline United Kingdom Canterbury (UK Parliament constituency) City of Canterbury, the local government district in Kent Province of Canterbury, one of two ecclesiastical provinces which constitute the Church of England Diocese of Canterbury, a Church of England diocese Oriel Square, formerly Canterbury Square, Oxford United States Canterbury, Connecticut, a town Canterbury, Delaware, an unincorporated community Canterbury, New Hampshire, a town Canterbury, West Virginia, an unincorporated community Elsewhere Canterbury, Jamaica, a squatter suburb of Montego Bay Canterbury Spur, Marie Byrd Land, Antarctica 3563 Canterbury, an asteroid Schools Canterbury Christ Church University, Kent, England University of Canterbury, Christchurch, New Zealand Canterbury College (disambiguation) Canterbury High School (disambiguation) Canterbury School (disambiguation) Canterbury University (Seychelles), an unaccredited institution Music Canterbury scene, a style of progressive rock that originated in Canterbury, England Canterbury (album), a 1983 album by Diamond Head Canterbury (band), an English alternative rock band Ships Canterbury (ship), the ship which transported William Penn and James Logan from England to Philadelphia in 1699 HMS Canterbury, several ships of the British Royal Navy HMNZS Canterbury (F421), a decommissioned New Zealand Navy frigate HMNZS Canterbury (L421), a multi-role vessel in the New Zealand Navy , a South Eastern and Chatham Railway ferry Sports Canterbury (women's field hockey team), an amateur team in New Zealand Canterbury Golf Club, a golf club in Ohio, US Canterbury Open, a darts tournament in Christchurch, New Zealand Canterbury Park, a horse racing facility in Minnesota, US Canterbury Rugby Football Union, or Canterbury, the governing body for rugby union in a portion of the Canterbury Region of New Zealand Canterbury Stakes, an Australian Thoroughbred horse race Canterbury United Dragons,
a men's football team in the New Zealand Football Championship Canterbury United Pride, a football team in the New Zealand National Women's League People and fictional characters
between red, orange, yellow, and green. (Orange and yellow are different combinations of red and green light.) Colors in this range, which appear very different to a normal viewer, appear to a dichromat to be the same or a similar color. The terms protanopia, deuteranopia, and tritanopia come from Greek, and respectively mean "inability to see (anopia) with the first (prot-), second (deuter-), or third (trit-) [cone]". Anomalous trichromacy is the least serious type of color deficiency. People with protanomaly, deuteranomaly, or tritanomaly are trichromats, but the color matches they make differ from the normal. They are called anomalous trichromats. In order to match a given spectral yellow light, protanomalous observers need more red light in a red/green mixture than a normal observer, and deuteranomalous observers need more green. From a practical standpoint though, many protanomalous and deuteranomalous people have very little difficulty carrying out tasks that require normal color vision. Some may not even be aware that their color perception is in any way different from normal. Protanomaly and deuteranomaly can be diagnosed using an instrument called an anomaloscope, which mixes spectral red and green lights in variable proportions, for comparison with a fixed spectral yellow. If this is done in front of a large audience of males, as the proportion of red is increased from a low value, first a small proportion of the audience will declare a match, while most will see the mixed light as greenish; these are the deuteranomalous observers. Next, as more red is added the majority will say that a match has been achieved. Finally, as yet more red is added, the remaining, protanomalous, observers will declare a match at a point where normal observers will see the mixed light as definitely reddish. Red–green color blindness Protanopia, deuteranopia, protanomaly, and deuteranomaly are commonly inherited forms of red–green color blindness which affect a substantial portion of the human population. Those affected have difficulty with discriminating red and green hues due to the absence or mutation of the red or green retinal photoreceptors. It is sex-linked: genetic red–green color blindness affects males much more often than females, because the genes for the red and green color receptors are located on the X chromosome, of which males have only one and females have two. Females (XX) are red–green color blind only if both their X chromosomes are defective with a similar deficiency, whereas males (XY) are color blind if their single X chromosome is defective. The gene for red–green color blindness is transmitted from a color blind male to all his daughters, who are usually heterozygote carriers and are thus unaffected. In turn, a carrier woman has a 50% chance of passing on a mutated X chromosome region to each of her male offspring. The sons of an affected male will not inherit the trait from him, since they receive his Y chromosome and not his (defective) X chromosome. Should an affected male have children with a carrier or colorblind woman, their daughters may be colorblind by inheriting an affected X chromosome from each parent. Because one X chromosome is inactivated at random in each cell during a woman's development, deuteranomalous heterozygotes (i.e. 
female carriers of deuteranomaly) may be tetrachromats, because they will have the normal long wave (red) receptors, the normal medium wave (green) receptors, the abnormal medium wave (deuteranomalous) receptors and the normal autosomal short wave (blue) receptors in their retinas. The same applies to the carriers of protanomaly (who have two types of long wave receptors, normal medium wave receptors, and normal autosomal short wave receptors in their retinas). If, by rare chance, a woman is heterozygous for both protanomaly and deuteranomaly, she could be pentachromatic. This situation could arise if, for instance, she inherited the X chromosome with the abnormal long wave gene (but normal medium wave gene) from her mother who is a carrier of protanomaly, and her other X chromosome from a deuteranomalous father. Such a woman would have a normal and an abnormal long wave receptor, a normal and abnormal medium wave receptor, and a normal autosomal short wave receptor—5 different types of color receptors in all. The degree to which women who are carriers of either protanomaly or deuteranomaly are demonstrably tetrachromatic and require a mixture of four spectral lights to match an arbitrary light is very variable. In many cases it is almost unnoticeable, but in a minority the tetrachromacy is very pronounced. However, Jameson et al. have shown that with appropriate and sufficiently sensitive equipment it can be demonstrated that any female carrier of red–green color blindness (i.e. heterozygous protanomaly, or heterozygous deuteranomaly) is a tetrachromat to a greater or lesser extent. People in whom deuteranopia or deuteranomaly manifest are sometimes referred to as deutans, those with protanopia or protanomaly as protans. Since deuteranomaly is by far the most common form of red–green blindness among men of northwestern European descent (with an incidence of 8%) it follows that the proportion of carriers (and of potential deuteranomalous tetrachromats) among the females of that genetic stock is 14.7% (i.e. 92% × 8% × 2), based on the Hardy–Weinberg principle. Protanopia (1% of males): Lacking the long-wavelength sensitive retinal cones (red cones), those with this condition are unable to distinguish between colors in the green–yellow–red section of the spectrum. They have a neutral point at a cyan-like wavelength around 492 nm (see spectral color for comparison)—that is, they cannot discriminate light of this wavelength from white. For a protanope, the brightness of red, orange, and yellow is much reduced compared to normal. This dimming can be so pronounced that reds may be confused with black or dark gray, and red traffic lights may appear to be extinguished. They may learn to distinguish reds from yellows primarily on the basis of their apparent brightness or lightness, not on any perceptible hue difference. Violet, lavender, and purple are indistinguishable from various shades of blue because their reddish components are so dimmed as to be invisible. For example, pink flowers, reflecting both red light and blue light, may appear just blue to the protanope. A very few people have been found who have one normal eye and one protanopic eye. These unilateral dichromats report that with only their protanopic eye open, they see wavelengths shorter than the neutral point as blue and those longer than it as yellow. This is a rare form of color blindness. 
Deuteranopia (1% of males): Lacking the medium-wavelength sensitive retinal cones (green cones), those affected are again unable to distinguish between colors in the green–yellow–red section of the spectrum. Their neutral point is at a slightly longer wavelength, 498 nm, a more greenish hue of cyan. A deuteranope suffers the same hue discrimination problems as protanopes, but without the abnormal dimming. Purple colors are not perceived as something opposite to spectral colors; all these appear similarly. This form of colorblindness is sometimes referred to as daltonism after John Dalton, whose diagnosis was confirmed as deuteranopia in 1995, some 150 years after his death, by DNA analysis of his preserved eyeball. In some languages, daltonism is still used to describe color blindness in a broad sense or deuteranopia in a more restricted sense. Deuteranopic unilateral dichromats report that with only their deuteranopic eye open, they see wavelengths shorter than the neutral point as blue and longer than it as yellow. Protanomaly (1% of males, 0.01% of females): Having a mutated form of the long-wavelength (red) cone, whose peak sensitivity is at a shorter wavelength than in the normal retina, protanomalous individuals are less sensitive to red light than normal. This means that they are less able to discriminate between colors, and they do not see mixed lights as having the same colors as normal observers. They also perceive a darkening of colors on the red end of the spectrum, which causes reds to reduce in intensity. Protanomaly is a fairly rare form of color blindness, making up about 1% of the male population. The gene for protanomaly is carried on the X chromosome. Deuteranomaly (most common—6% of males, 0.4% of females): Individuals with deuteranomaly have a mutated form of the medium-wavelength (green) pigment. The spectral sensitivity of this pigment is shifted towards the red end of the spectrum, resulting in a reduction in sensitivity to the green area of the spectrum. Unlike in protanomaly, the intensity of colors is unchanged. The deuteranomalous person is considered "green weak". For example, in the evening, dark green cars can appear black to deuteranomalous people. As with protanomaly, people with deuteranomaly are less able to discriminate between small differences in hues in the red, orange, yellow, green region of the spectrum. They may incorrectly name hues in this region because the hues appear somewhat shifted toward green. However, unlike those with protanomaly, people with deuteranomaly do not see a loss of brightness in the affected hues. People with deuteranomaly may be better at distinguishing shades of khaki than people with normal vision and may be at an advantage when looking for predators, food, or camouflaged objects hidden among foliage. As is the case for protanomaly, the gene for deuteranomaly is carried on the X chromosome. Blue–yellow color blindness Those with tritanopia and tritanomaly have difficulty discerning between bluish and greenish hues. Blue and yellow appear as white and gray. Color blindness involving the inactivation of the short-wavelength sensitive cone system (whose absorption spectrum peaks in the bluish-violet) is called tritanopia or, loosely, blue–yellow color blindness. The tritanope's neutral point occurs near a yellowish 570 nm; green is perceived at shorter wavelengths and red at longer wavelengths. Mutation of the short-wavelength sensitive cones is called tritanomaly. Tritanopia is equally distributed among males and females. Jeremy H. 
Nathans (with the Howard Hughes Medical Institute) demonstrated that the gene coding for the blue receptor lies on chromosome 7, which is shared equally by males and females. Therefore, it is not sex-linked. This gene does not have any neighbor whose DNA sequence is similar. Blue color blindness is caused by a simple mutation in this gene. Tritanopia (less than 1% of males and females): Lacking the short-wavelength cones, those affected see short-wavelength colors (blue, indigo and spectral violet) as greenish and drastically dimmed, some of these colors even as black. Yellow and orange are indistinguishable from white and pink respectively, and purple colors are perceived as various shades of red. This form of color blindness is not sex-linked. Tritanomaly (equally rare for males and females [0.01% for both]): Having a mutated form of the short-wavelength (blue) pigment. The short-wavelength pigment is shifted towards the green area of the spectrum. This is the rarest form of anomalous trichromacy color blindness. Unlike the other anomalous trichromacy color deficiencies, the mutation for this color blindness is carried on chromosome 7. Therefore, it is equally prevalent in both male and female populations. The OMIM gene code for this mutation is 304000 "Colorblindness, Partial Tritanomaly". Total color blindness Total color blindness is defined as the inability to see color. Although the term may refer to acquired disorders such as cerebral achromatopsia also known as color agnosia, it typically refers to congenital color vision disorders (i.e. more frequently rod monochromacy and less frequently cone monochromacy). In cerebral achromatopsia, a person cannot perceive colors even though the eyes are capable of distinguishing them. Some sources do not consider these to be true color blindness, because the failure is of perception, not of vision. They are forms of visual agnosia. Monochromacy is the condition of possessing only a single channel for conveying information about color. Monochromats possess a complete inability to distinguish any colors and perceive only variations in brightness. It occurs in two primary forms: Rod monochromacy, frequently called achromatopsia, where the retina contains no cone cells, so that in addition to the absence of color discrimination, vision in lights of normal intensity is difficult. While normally rare, achromatopsia is very common on the island of Pingelap, a part of the Pohnpei state, Federated States of Micronesia, where it is called maskun: about 10% of the population there has it, and 30% are unaffected carriers. The island was devastated by a storm in the 18th century (an example of a genetic bottleneck) and one of the few male survivors carried a gene for achromatopsia. The population grew to several thousand before the 1940s. Cone monochromacy is the condition of having both rods and cones, but only a single kind of cone. A cone monochromat can have good pattern vision at normal daylight levels, but will not be able to distinguish hues. Blue cone monochromacy (X chromosome) is caused by lack of functionality of L and M cones (red and green). It is encoded at the same place as red–green color blindness on the X chromosome. Peak spectral sensitivities are in the blue region of the visible spectrum (near 440 nm). People with this condition generally show nystagmus ("jiggling eyes"), photophobia (light sensitivity), reduced visual acuity, and myopia (nearsightedness). Visual acuity usually falls to the 20/50 to 20/400 range. 
Management There is no cure for color deficiencies. The American Optometric Association reports that a contact lens on one eye can increase the ability to differentiate between colors, though nothing can cause a person to actually perceive the deficient color. Lenses Optometrists can supply colored spectacle lenses or a single red-tint contact lens to wear on the non-dominant eye, but although this may improve discrimination of some colors, it can make other colors more difficult to distinguish. A 1981 review of various studies to evaluate the effect of the X-chrom contact lens concluded that, while the lens may allow the wearer to achieve a better score on certain color vision tests, it did not correct color vision in the natural environment. A case history using the X-Chrom lens for a rod monochromat is reported and an X-Chrom manual is online. Lenses that filter certain wavelengths of light can allow people with a cone anomaly, but not dichromacy, to see better separation of colors, especially those with classic "red/green" color blindness. They work by notching out wavelengths that strongly stimulate both red and green cones in a
deuter- or protanomalous person, improving the distinction between the two cones' signals. As of 2012, eyeglasses that notch out color wavelengths are available commercially. Apps Many mobile and computer applications have been developed to help color blind individuals to better differentiate between colors. Some applications launch a simulation of colorblindness to allow people with typical vision to understand how people with color blindness see the world, which can improve inclusive design for both groups. This is achieved using an LMS color space. After analyzing what colors are confusing, daltonization algorithms can be used to create a color filter for people with color blindness to notice some color differences more easily. Epidemiology Color blindness affects a large number of individuals, with protans and deutans being the most common types. In individuals with Northern European ancestry, as many as 8 percent of men and 0.4 percent of women experience congenital color deficiency. The number affected varies among groups. Isolated communities with a restricted gene pool sometimes produce high proportions of color blindness, including the less usual types. Examples include rural Finland, Hungary, and some of the Scottish islands. In the United States, about 7 percent of the male population—or about 10.5 million men—and 0.4 percent of the female population either cannot distinguish red from green, or see red and green differently from how others do (Howard Hughes Medical Institute, 2006). More than 95 percent of all variations in human color vision involve the red and green receptors in male eyes. It is very rare for males or females to be "blind" to the blue end of the spectrum. History In 1798, English chemist John Dalton published one of the earliest scientific papers on the subject of color blindness, Extraordinary facts relating to the vision of colours, after the realization of his own color blindness. Society and culture Design implications Color codes present particular problems for those with color deficiencies as they are often difficult or impossible for them to perceive. Good graphic design avoids using color coding or using color contrasts alone to express information; this not only helps color blind people, but also aids understanding by normally sighted people by providing them with multiple reinforcing cues. Designers need to take into account that color-blindness is highly sensitive to differences in material. For example, a red–green colorblind person who is incapable of distinguishing colors on a map printed on paper may have no such difficulty when viewing the map on a computer screen or television. In addition, some color blind people find it easier to distinguish problem colors on artificial materials, such as plastic or in acrylic paints, than on natural materials, such as paper or wood. 
Third, for some color blind people, color can only be distinguished if there is a sufficient "mass" of color: thin lines might appear black, while a thicker line of the same color can be perceived as having color. Designers should also note that red–blue and yellow–blue color combinations are generally safe. So instead of the ever-popular "red means bad and green means good" system, using these combinations can lead to a much higher ability to use color coding effectively. This will still cause problems for those with monochromatic color blindness, but it is still something worth considering. When the need to process visual information as rapidly as possible arises, for example in an emergency situation, the visual system may operate only in shades of gray, with the extra information load in adding color being dropped. This is an important possibility to consider when designing, for example, emergency brake handles or emergency phones. Occupations Color blindness may make it difficult or impossible for a person to engage in certain occupations. Persons with color blindness may be legally or practically barred from occupations in which color perception is an essential part of the job (e.g., mixing paint colors), or in which color perception is important for safety (e.g., operating vehicles in response to color-coded signals). This occupational safety principle originates from the Lagerlunda train crash of 1875 in Sweden. Following the crash, Professor Alarik Frithiof Holmgren, a
dongles is to use them for accessing web-based content such as cloud software or Virtual Private Networks (VPNs). In addition, a USB dongle can be configured to lock or unlock a computer. Trusted platform modules (TPMs) secure devices by integrating cryptographic capabilities onto access devices, through the use of microprocessors, or so-called computers-on-a-chip. TPMs used in conjunction with server-side software offer a way to detect and authenticate hardware devices, preventing unauthorized network and data access. Computer case intrusion detection refers to a device, typically a push-button switch, which detects when a computer case is opened. The firmware or BIOS is programmed to show an alert to the operator when the computer is booted up the next time. Drive locks are essentially software tools to encrypt hard drives, making them inaccessible to thieves. Tools exist specifically for encrypting external drives as well. Disabling USB ports is a security option for preventing unauthorized and malicious access to an otherwise secure computer. Infected USB dongles connected to a network from a computer inside the firewall are considered by the magazine Network World as the most common hardware threat facing computer networks. Disconnecting or disabling peripheral devices (like cameras, GPS, removable storage, etc.) that are not in use. Mobile-enabled access devices are growing in popularity due to the ubiquitous nature of cell phones. Built-in capabilities such as Bluetooth, the newer Bluetooth low energy (LE), Near field communication (NFC) on non-iOS devices and biometric validation such as thumb print readers, as well as QR code reader software designed for mobile devices, offer new, secure ways for mobile phones to connect to access control systems. These control systems provide computer security and can also be used for controlling access to secure buildings. Secure operating systems One use of the term "computer security" refers to technology that is used to implement secure operating systems. In the 1980s, the United States Department of Defense (DoD) used the "Orange Book" standards, but the current international standard ISO/IEC 15408, "Common Criteria" defines a number of progressively more stringent Evaluation Assurance Levels. Many common operating systems meet the EAL4 standard of being "Methodically Designed, Tested and Reviewed", but the formal verification required for the highest levels means that they are uncommon. An example of an EAL6 ("Semiformally Verified Design and Tested") system is INTEGRITY-178B, which is used in the Airbus A380 and several military jets. Secure coding In software engineering, secure coding aims to guard against the accidental introduction of security vulnerabilities. It is also possible to create software designed from the ground up to be secure. Such systems are "secure by design". Beyond this, formal verification aims to prove the correctness of the algorithms underlying a system; this is important for cryptographic protocols, for example. Capabilities and access control lists Within computer systems, two of the main security models capable of enforcing privilege separation are access control lists (ACLs) and role-based access control (RBAC). An access-control list (ACL), with respect to a computer file system, is a list of permissions associated with an object. An ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. 
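A minimal Python sketch of the access-control-list idea just described (hypothetical object names and permissions, not any particular system's API): each object carries a list of which principals may perform which operations on it, and that list is consulted before an operation is allowed.
# Hypothetical ACLs: each object maps principals to the operations they may perform.
acls = {
    "/payroll.db": {"alice": {"read", "write"}, "backup-svc": {"read"}},
    "/public/readme.txt": {"*": {"read"}},  # "*" stands for any principal
}

def is_allowed(principal, obj, operation):
    entries = acls.get(obj, {})
    allowed = entries.get(principal, set()) | entries.get("*", set())
    return operation in allowed

print(is_allowed("alice", "/payroll.db", "write"))        # True
print(is_allowed("backup-svc", "/payroll.db", "write"))   # False
print(is_allowed("guest", "/public/readme.txt", "read"))  # True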
Role-based access control is an approach to restricting system access to authorized users, used by the majority of enterprises with more than 500 employees, and can implement mandatory access control (MAC) or discretionary access control (DAC). A further approach, capability-based security has been mostly restricted to research operating systems. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open-source project in the area is the E language. End user security training The end-user is widely recognized as the weakest link in the security chain and it is estimated that more than 90% of security incidents and breaches involve some kind of human error. Among the most commonly recorded forms of errors and misjudgment are poor password management, sending emails containing sensitive data and attachments to the wrong recipient, the inability to recognize misleading URLs and to identify fake websites and dangerous email attachments. A common mistake that users make is saving their user id/password in their browsers to make it easier to log in to banking sites. This is a gift to attackers who have obtained access to a machine by some means. The risk may be mitigated by the use of two-factor authentication. As the human component of cyber risk is particularly relevant in determining the global cyber risk an organization is facing, security awareness training, at all levels, not only provides formal compliance with regulatory and industry mandates but is considered essential in reducing cyber risk and protecting individuals and companies from the great majority of cyber threats. The focus on the end-user represents a profound cultural change for many security practitioners, who have traditionally approached cybersecurity exclusively from a technical perspective, and moves along the lines suggested by major security centers to develop a culture of cyber awareness within the organization, recognizing that a security-aware user provides an important line of defense against cyber attacks. Digital hygiene Related to end-user training, digital hygiene or cyber hygiene is a fundamental principle relating to information security and, as the analogy with personal hygiene shows, is the equivalent of establishing simple routine measures to minimize the risks from cyber threats. The assumption is that good cyber hygiene practices can give networked users another layer of protection, reducing the risk that one vulnerable node will be used to either mount attacks or compromise another node or network, especially from common cyberattacks. Cyber hygiene should also not be mistaken for proactive cyber defence, a military term. As opposed to a purely technology-based defense against threats, cyber hygiene mostly regards routine measures that are technically simple to implement and mostly dependent on discipline or education. It can be thought of as an abstract list of tips or measures that have been demonstrated as having a positive effect on personal and/or collective digital security. As such, these measures can be performed by laypeople, not just security experts. Cyber hygiene relates to personal hygiene as computer viruses relate to biological viruses (or pathogens). 
However, while the term computer virus was coined almost simultaneously with the creation of the first working computer viruses, the term cyber hygiene is a much later invention, perhaps as late as 2000 by Internet pioneer Vint Cerf. It has since been adopted by the Congress and Senate of the United States, the FBI, EU institutions and heads of state. Response to breaches Responding to attempted security breaches is often very difficult for a variety of reasons, including: Identifying attackers is difficult, as they may operate through proxies, temporary anonymous dial-up accounts, wireless connections, and other anonymizing procedures which make back-tracing difficult - and are often located in another jurisdiction. If they successfully breach security, they have also often gained enough administrative access to enable them to delete logs to cover their tracks. The sheer number of attempted attacks, often by automated vulnerability scanners and computer worms, is so large that organizations cannot spend time pursuing each. Law enforcement officers often lack the skills, interest or budget to pursue attackers. In addition, the identification of attackers across a network may require logs from various points in the network and in many countries, which may be difficult or time-consuming to obtain. Where an attack succeeds and a breach occurs, many jurisdictions now have in place mandatory security breach notification laws. Types of security and privacy Access control Anti-keyloggers Anti-malware Anti-spyware Anti-subversion software Anti-tamper software Anti-theft Antivirus software Cryptographic software Computer-aided dispatch (CAD) Firewall Intrusion detection system (IDS) Intrusion prevention system (IPS) Log management software Parental control Records management Sandbox Security information management Security information and event management (SIEM) Software and operating system updating Vulnerability Management Incident response planning Incident response is an organized approach to addressing and managing the aftermath of a computer security incident or compromise with the goal of preventing a breach or thwarting a cyberattack. An incident that is not identified and managed at the time of intrusion typically escalates to a more damaging event such as a data breach or system failure. The intended outcome of a computer security incident response plan is to contain the incident, limit damage and assist recovery to business as usual. Responding to compromises quickly can mitigate exploited vulnerabilities, restore services and processes and minimize losses. Incident response planning allows an organization to establish a series of best practices to stop an intrusion before it causes damage. Typical incident response plans contain a set of written instructions that outline the organization's response to a cyberattack. Without a documented plan in place, an organization may not successfully detect an intrusion or compromise and stakeholders may not understand their roles, processes and procedures during an escalation, slowing the organization's response and resolution. 
There are four key components of a computer security incident response plan: Preparation: Preparing stakeholders on the procedures for handling computer security incidents or compromises Detection and analysis: Identifying and investigating suspicious activity to confirm a security incident, prioritizing the response based on impact and coordinating notification of the incident Containment, eradication and recovery: Isolating affected systems to prevent escalation and limit impact, pinpointing the genesis of the incident, removing malware, affected systems and bad actors from the environment and restoring systems and data when a threat no longer remains Post incident activity: Post mortem analysis of the incident, its root cause and the organization's response with the intent of improving the incident response plan and future response efforts. Notable attacks and breaches Some illustrative examples of different types of computer security breaches are given below. Robert Morris and the first computer worm In 1988, 60,000 computers were connected to the Internet, and most were mainframes, minicomputers and professional workstations. On 2 November 1988, many started to slow down, because they were running malicious code that demanded processor time and that spread itself to other computers – the first internet "computer worm". The software was traced back to 23-year-old Cornell University graduate student Robert Tappan Morris who said "he wanted to count how many machines were connected to the Internet". Rome Laboratory In 1994, over a hundred intrusions were made by unidentified crackers into the Rome Laboratory, the US Air Force's main command and research facility. Using trojan horses, hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities. The intruders were able to obtain classified files, such as air tasking order systems data, and were furthermore able to penetrate connected networks of National Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some Defense contractors, and other private sector organizations, by posing as a trusted Rome center user. TJX customer credit card details In early 2007, American apparel and home goods company TJX announced that it was the victim of an unauthorized computer systems intrusion and that the hackers had accessed a system that stored data on credit card, debit card, check, and merchandise return transactions. Stuxnet attack In 2010, the computer worm known as Stuxnet reportedly ruined almost one-fifth of Iran's nuclear centrifuges. It did so by disrupting industrial programmable logic controllers (PLCs) in a targeted attack. This is generally believed to have been launched by Israel and the United States to disrupt Iran's nuclear program – although neither has publicly admitted this. Global surveillance disclosures In early 2013, documents provided by Edward Snowden were published by The Washington Post and The Guardian exposing the massive scale of NSA global surveillance. There were also indications that the NSA may have inserted a backdoor in a NIST standard for encryption. This standard was later withdrawn due to widespread criticism. The NSA was additionally revealed to have tapped the links between Google's data centers. 
Target and Home Depot breaches In 2013 and 2014, a Ukrainian hacker known as Rescator broke into Target Corporation computers in 2013, stealing roughly 40 million credit card numbers, and then Home Depot computers in 2014, stealing between 53 and 56 million credit card numbers. Warnings were delivered to both corporations but were ignored; physical security breaches using self-checkout machines are believed to have played a large role. "The malware utilized is absolutely unsophisticated and uninteresting," says Jim Walter, director of threat intelligence operations at security technology company McAfee – meaning that the heists could have easily been stopped by existing antivirus software had administrators responded to the warnings. The size of the thefts has resulted in major attention from state and federal United States authorities, and the investigation is ongoing. Office of Personnel Management data breach In April 2015, the Office of Personnel Management discovered it had been hacked more than a year earlier in a data breach, resulting in the theft of approximately 21.5 million personnel records handled by the office. The Office of Personnel Management hack has been described by federal officials as among the largest breaches of government data in the history of the United States. Data targeted in the breach included personally identifiable information such as Social Security numbers, names, dates and places of birth, addresses, and fingerprints of current and former government employees as well as anyone who had undergone a government background check. It is believed the hack was perpetrated by Chinese hackers. Ashley Madison breach In July 2015, a hacker group known as "The Impact Team" successfully breached the extramarital relationship website Ashley Madison, created by Avid Life Media. The group claimed that they had taken not only company data but user data as well. After the breach, The Impact Team dumped emails from the company's CEO, to prove their point, and threatened to dump customer data unless the website was taken down permanently. When Avid Life Media did not take the site offline, the group released two more compressed files, one of 9.7 GB and the second of 20 GB. After the second data dump, Avid Life Media CEO Noel Biderman resigned, but the website remained functioning. Colonial Pipeline ransomware attack In May 2021, a ransomware attack took down the largest fuel pipeline in the U.S. and led to shortages across the East Coast. Legal issues and global regulation International legal issues of cyber attacks are complicated in nature. There is no global base of common rules to judge, and eventually punish, cybercrimes and cybercriminals – and where security firms or agencies do locate the cybercriminal behind the creation of a particular piece of malware or form of cyber attack, often the local authorities cannot take action due to lack of laws under which to prosecute. Proving attribution for cybercrimes and cyberattacks is also a major problem for all law enforcement agencies. "Computer viruses switch from one country to another, from one jurisdiction to another – moving around the world, using the fact that we don't have the capability to globally police operations like this. So the Internet is as if someone [had] given free plane tickets to all the online criminals of the world." The use of techniques such as dynamic DNS, fast flux and bulletproof servers adds to the difficulty of investigation and enforcement. 
Role of government The role of the government is to make regulations to force companies and organizations to protect their systems, infrastructure and information from any cyberattacks, but also to protect its own national infrastructure such as the national power-grid. The government's regulatory role in cyberspace is complicated. For some, cyberspace was seen as a virtual space that was to remain free of government intervention, as can be seen in many of today's libertarian blockchain and bitcoin discussions. Many government officials and experts think that the government should do more and that there is a crucial need for improved regulation, mainly due to the failure of the private sector to solve efficiently the cybersecurity problem. R. Clarke said during a panel discussion at the RSA Security Conference in San Francisco, he believes that the "industry only responds when you threaten regulation. If the industry doesn't respond (to the threat), you have to follow through." On the other hand, executives from the private sector agree that improvements are necessary, but think that government intervention would affect their ability to innovate efficiently. Daniel R. McCarthy analyzed this public-private partnership in cybersecurity and reflected on the role of cybersecurity in the broader constitution of political order. On 22 May 2020, the UN Security Council held its second ever informal meeting on cybersecurity to focus on cyber challenges to international peace. According to UN Secretary-General António Guterres, new technologies are too often used to violate rights. International actions Many different teams and organizations exist, including: The Forum of Incident Response and Security Teams (FIRST) is the global association of CSIRTs. The US-CERT, AT&T, Apple, Cisco, McAfee, Microsoft are all members of this international team. The Council of Europe helps protect societies worldwide from the threat of cybercrime through the Convention on Cybercrime. The purpose of the Messaging Anti-Abuse Working Group (MAAWG) is to bring the messaging industry together to work collaboratively and to successfully address the various forms of messaging abuse, such as spam, viruses, denial-of-service attacks and other messaging exploitations. France Telecom, Facebook, AT&T, Apple, Cisco, Sprint are some of the members of the MAAWG. ENISA : The European Network and Information Security Agency (ENISA) is an agency of the European Union with the objective to improve network and information security in the European Union. Europe On 14 April 2016 the European Parliament and Council of the European Union adopted The General Data Protection Regulation (GDPR) (EU) 2016/679. GDPR, which became enforceable beginning 25 May 2018, provides for data protection and privacy for all individuals within the European Union (EU) and the European Economic Area (EEA). GDPR requires that business processes that handle personal data be built with data protection by design and by default. GDPR also requires that certain organizations appoint a Data Protection Officer (DPO). National actions Computer emergency response teams Most countries have their own computer emergency response team to protect network security. Canada Since 2010, Canada has had a cybersecurity strategy. This functions as a counterpart document to the National Strategy and Action Plan for Critical Infrastructure. 
The strategy has three main pillars: securing government systems, securing vital private cyber systems, and helping Canadians to be secure online. There is also a Cyber Incident Management Framework to provide a coordinated response in the event of a cyber incident. The Canadian Cyber Incident Response Centre (CCIRC) is responsible for mitigating and responding to threats to Canada's critical infrastructure and cyber systems. It provides support to mitigate cyber threats, technical support to respond & recover from targeted cyber attacks, and provides online tools for members of Canada's critical infrastructure sectors. It posts regular cybersecurity bulletins & operates an online reporting tool where individuals and organizations can report a cyber incident. To inform the general public on how to protect themselves online, Public Safety Canada has partnered with STOP.THINK.CONNECT, a coalition of non-profit, private sector, and government organizations, and launched the Cyber Security Cooperation Program. They also run the GetCyberSafe portal for Canadian citizens, and Cyber Security Awareness Month during October. Public Safety Canada aims to begin an evaluation of Canada's cybersecurity strategy in early 2015. China China's Central Leading Group for Internet Security and Informatization () was established on 27 February 2014. This Leading Small Group (LSG) of the Chinese Communist Party is headed by General Secretary Xi Jinping himself and is staffed with relevant Party and state decision-makers. The LSG was created to overcome the incoherent policies and overlapping responsibilities that characterized China's former cyberspace decision-making mechanisms. The LSG oversees policy-making in the economic, political, cultural, social and military fields as they relate to network security and IT strategy. This LSG also coordinates major policy initiatives in the international arena that promote norms and standards favored by the Chinese government and that emphasizes the principle of national sovereignty in cyberspace. Germany Berlin starts National Cyber Defense Initiative: On 16 June 2011, the German Minister for Home Affairs, officially opened the new German NCAZ (National Center for Cyber Defense) Nationales Cyber-Abwehrzentrum located in Bonn. The NCAZ closely cooperates with BSI (Federal Office for Information Security) Bundesamt für Sicherheit in der Informationstechnik, BKA (Federal Police Organisation) Bundeskriminalamt (Deutschland), BND (Federal Intelligence Service) Bundesnachrichtendienst, MAD (Military Intelligence Service) Amt für den Militärischen Abschirmdienst and other national organizations in Germany taking care of national security aspects. According to the Minister, the primary task of the new organization founded on 23 February 2011, is to detect and prevent attacks against the national infrastructure and mentioned incidents like Stuxnet. Germany has also established the largest research institution for IT security in Europe, the Center for Research in Security and Privacy (CRISP) in Darmstadt. India Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000. The National Cyber Security Policy 2013 is a policy framework by Ministry of Electronics and Information Technology (MeitY) which aims to protect the public and private infrastructure from cyberattacks, and safeguard "information, such as personal information (of web users), financial and banking information and sovereign data". 
CERT-In is the nodal agency which monitors cyber threats in the country. The post of National Cyber Security Coordinator has also been created in the Prime Minister's Office (PMO). The Indian Companies Act 2013 has also introduced cyber law and cybersecurity obligations on the part of Indian directors. Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000, as updated in 2013. South Korea Following cyber attacks in the first half of 2013, when government, news media, television station, and bank websites were compromised, the national government committed to the training of 5,000 new cybersecurity experts by 2017. The South Korean government blamed its northern counterpart for these attacks, as well as incidents that occurred in 2009, 2011, and 2012, but Pyongyang denies the accusations. United States Legislation The Computer Fraud and Abuse Act of 1986 is the key legislation. It prohibits unauthorized access to or damage of "protected computers" as defined in the Act. Although various other measures have been proposed, none has succeeded. In 2013, Executive Order 13636, Improving Critical Infrastructure Cybersecurity, was signed, which prompted the creation of the NIST Cybersecurity Framework. In response to the Colonial Pipeline ransomware attack, President Joe Biden signed Executive Order 14028 on May 12, 2021, to increase software security standards for sales to the government, tighten detection and security on existing systems, improve information sharing and training, establish a Cyber Safety Review Board, and improve incident response. Standardized government testing services The General Services Administration (GSA) has standardized the "penetration test" service as a pre-vetted support service to rapidly address potential vulnerabilities and stop adversaries before they impact US federal, state and local governments. These services are commonly referred to as Highly Adaptive Cybersecurity Services (HACS). Agencies The Department of Homeland Security has a dedicated division, the National Cyber Security Division, responsible for the response system, risk management program and requirements for cybersecurity in the United States. The division is home to US-CERT operations and the National Cyber Alert System. The National Cybersecurity and Communications Integration Center brings together government organizations responsible for protecting computer networks and networked infrastructure. The third priority of the Federal Bureau of Investigation (FBI) is to "Protect the United States against cyber-based attacks and high-technology crimes", and the FBI, along with the National White Collar Crime Center (NW3C) and the Bureau of Justice Assistance (BJA), is part of the multi-agency task force known as the Internet Crime Complaint Center (IC3). In addition to its own specific duties, the FBI participates alongside non-profit organizations such as InfraGard. The Computer Crime and Intellectual Property Section (CCIPS) operates in the United States Department of Justice Criminal Division. The CCIPS is in charge of investigating computer crime and intellectual property crime and specializes in the search and seizure of digital evidence in computers and networks. 
In 2017, CCIPS published A Framework for a Vulnerability Disclosure Program for Online Systems to help organizations "clearly describe authorized vulnerability disclosure and discovery conduct, thereby substantially reducing the likelihood that such described activities will result in a civil or criminal violation of law under the Computer Fraud and Abuse Act (18 U.S.C. § 1030)." The United States Cyber Command, also known as USCYBERCOM, "has the mission to direct, synchronize, and coordinate cyberspace planning and operations to defend and advance national interests in collaboration with domestic and international partners." It has no role in the protection of civilian networks. The U.S. Federal Communications Commission's role in cybersecurity is to strengthen the protection of critical communications infrastructure, to assist in maintaining the reliability of networks during disasters, to aid in swift recovery afterward, and to ensure that first responders have access to effective communications services. The Food and Drug Administration has issued guidance for medical devices, and the National Highway Traffic Safety Administration is concerned with automotive cybersecurity. After being criticized by the Government Accountability Office, and following successful attacks on airports and claimed attacks on airplanes, the Federal Aviation Administration has devoted funding to securing systems on board the planes of private manufacturers, and the Aircraft Communications Addressing and Reporting System. Concerns have also been raised about the future Next Generation Air Transportation System. Computer emergency readiness team "Computer emergency response team" is a name given to expert groups that handle computer security incidents. In the US, two distinct organizations exist, although they do work closely together. US-CERT: part of the National Cyber Security Division of the United States Department of Homeland Security. CERT/CC: created by the Defense Advanced Research Projects Agency (DARPA) and run by the Software Engineering Institute (SEI). Modern warfare There is growing concern that cyberspace will become the next theater of warfare. As Mark Clayton from The Christian Science Monitor wrote in a 2015 article titled "The New Cyber Arms Race": In the future, wars will not just be fought by soldiers with guns or with planes that drop bombs. They will also be fought with the click of a mouse half a world away that unleashes carefully weaponized computer programs that disrupt or destroy critical industries like utilities, transportation, communications, and energy. Such attacks could also disable military networks that control the movement of troops, the path of jet fighters, the command and control of warships. This has led to new terms such as cyberwarfare and cyberterrorism. The United States Cyber Command was created in 2009 and many other countries have similar forces. There are a few critical voices that question whether cybersecurity is as significant a threat as it is made out to be. Careers Cybersecurity is a fast-growing field of IT concerned with reducing organizations' risk of hacks or data breaches. According to research from the Enterprise Strategy Group, 46% of organizations say that they have a "problematic shortage" of cybersecurity skills in 2016, up from 28% in 2015. Commercial, government and non-governmental organizations all employ cybersecurity professionals. 
The fastest increases in demand for cybersecurity workers are in industries managing increasing volumes of consumer data such as finance, health care, and retail. However, the use of the term "cybersecurity" is more prevalent in government job descriptions. Typical cybersecurity job titles and descriptions include: Security analyst Analyzes and assesses vulnerabilities in the infrastructure (software, hardware, networks), investigates using available tools and countermeasures to remedy the detected vulnerabilities and recommends solutions and best practices. Analyzes and assesses damage to the data/infrastructure as a result of security incidents, examines available recovery tools and processes, and recommends solutions. Tests for compliance with security policies and procedures. May assist in the creation, implementation, or management of security solutions. Security engineer Performs security monitoring, security and data/logs analysis, and forensic analysis, to detect security incidents, and mounts the incident response. Investigates and utilizes new technologies and processes to enhance security capabilities and implement improvements. May also review code or perform other security engineering methodologies.
A USB dongle can also be used to access web-based content such as cloud software or Virtual Private Networks (VPNs). In addition, a USB dongle can be configured to lock or unlock a computer. Trusted platform modules (TPMs) secure devices by integrating cryptographic capabilities onto access devices, through the use of microprocessors, or so-called computers-on-a-chip. TPMs used in conjunction with server-side software offer a way to detect and authenticate hardware devices, preventing unauthorized network and data access. Computer case intrusion detection refers to a device, typically a push-button switch, which detects when a computer case is opened. The firmware or BIOS is programmed to show an alert to the operator when the computer is booted up the next time. Drive locks are essentially software tools to encrypt hard drives, making them inaccessible to thieves. Tools exist specifically for encrypting external drives as well. Disabling USB ports is a security option for preventing unauthorized and malicious access to an otherwise secure computer. Infected USB dongles connected to a network from a computer inside the firewall are considered by the magazine Network World as the most common hardware threat facing computer networks. Disconnecting or disabling peripheral devices (such as cameras, GPS receivers, or removable storage) that are not in use is a further precaution. Mobile-enabled access devices are growing in popularity due to the ubiquitous nature of cell phones. Built-in capabilities such as Bluetooth, the newer Bluetooth low energy (LE), Near field communication (NFC) on non-iOS devices and biometric validation such as thumbprint readers, as well as QR code reader software designed for mobile devices, offer new, secure ways for mobile phones to connect to access control systems. These control systems provide computer security and can also be used for controlling access to secure buildings. Secure operating systems One use of the term "computer security" refers to technology that is used to implement secure operating systems. In the 1980s, the United States Department of Defense (DoD) used the "Orange Book" standards, but the current international standard ISO/IEC 15408, "Common Criteria", defines a number of progressively more stringent Evaluation Assurance Levels. Many common operating systems meet the EAL4 standard of being "Methodically Designed, Tested and Reviewed", but the formal verification required for the highest levels means that they are uncommon. An example of an EAL6 ("Semiformally Verified Design and Tested") system is INTEGRITY-178B, which is used in the Airbus A380 and several military jets. Secure coding In software engineering, secure coding aims to guard against the accidental introduction of security vulnerabilities. It is also possible to create software designed from the ground up to be secure. Such systems are "secure by design". Beyond this, formal verification aims to prove the correctness of the algorithms underlying a system; this is important for cryptographic protocols, for example. Capabilities and access control lists Within computer systems, two of the main security models capable of enforcing privilege separation are access control lists (ACLs) and role-based access control (RBAC). An access-control list (ACL), with respect to a computer file system, is a list of permissions associated with an object. An ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. 
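As a rough illustration of the ACL model just described, the sketch below (in Python, with made-up object paths and user names rather than any real system's API) stores, for each object, the operations each user is allowed to perform, and answers an access-check query against that list.

# Illustrative access-control list (ACL): for each object, record which
# operations each user or process may perform. All names are hypothetical.
acl = {
    "/payroll/report.xlsx": {"alice": {"read", "write"}, "bob": {"read"}},
    "/web/index.html": {"bob": {"read"}},
}

def is_allowed(user: str, obj: str, operation: str) -> bool:
    """Return True only if the ACL grants `user` the given operation on `obj`."""
    return operation in acl.get(obj, {}).get(user, set())

print(is_allowed("alice", "/payroll/report.xlsx", "write"))  # True
print(is_allowed("bob", "/payroll/report.xlsx", "write"))    # False

Real file systems store such lists as object metadata and enforce them in the operating system kernel; the sketch only shows the shape of the underlying data structure.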
Role-based access control is an approach to restricting system access to authorized users, used by the majority of enterprises with more than 500 employees, and can implement mandatory access control (MAC) or discretionary access control (DAC). A further approach, capability-based security has been mostly restricted to research operating systems. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open-source project in the area is the E language. End user security training The end-user is widely recognized as the weakest link in the security chain and it is estimated that more than 90% of security incidents and breaches involve some kind of human error. Among the most commonly recorded forms of errors and misjudgment are poor password management, sending emails containing sensitive data and attachments to the wrong recipient, the inability to recognize misleading URLs and to identify fake websites and dangerous email attachments. A common mistake that users make is saving their user id/password in their browsers to make it easier to log in to banking sites. This is a gift to attackers who have obtained access to a machine by some means. The risk may be mitigated by the use of two-factor authentication. As the human component of cyber risk is particularly relevant in determining the global cyber risk an organization is facing, security awareness training, at all levels, not only provides formal compliance with regulatory and industry mandates but is considered essential in reducing cyber risk and protecting individuals and companies from the great majority of cyber threats. The focus on the end-user represents a profound cultural change for many security practitioners, who have traditionally approached cybersecurity exclusively from a technical perspective, and moves along the lines suggested by major security centers to develop a culture of cyber awareness within the organization, recognizing that a security-aware user provides an important line of defense against cyber attacks. Digital hygiene Related to end-user training, digital hygiene or cyber hygiene is a fundamental principle relating to information security and, as the analogy with personal hygiene shows, is the equivalent of establishing simple routine measures to minimize the risks from cyber threats. The assumption is that good cyber hygiene practices can give networked users another layer of protection, reducing the risk that one vulnerable node will be used to either mount attacks or compromise another node or network, especially from common cyberattacks. Cyber hygiene should also not be mistaken for proactive cyber defence, a military term. As opposed to a purely technology-based defense against threats, cyber hygiene mostly regards routine measures that are technically simple to implement and mostly dependent on discipline or education. It can be thought of as an abstract list of tips or measures that have been demonstrated as having a positive effect on personal and/or collective digital security. As such, these measures can be performed by laypeople, not just security experts. Cyber hygiene relates to personal hygiene as computer viruses relate to biological viruses (or pathogens). 
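Returning to the role-based access control model described at the start of this passage, the following minimal Python sketch (with hypothetical roles, permissions, and users) shows the indirection that distinguishes RBAC from a per-user ACL: permissions attach to roles, users are assigned roles, and an access check goes through the role.

# Illustrative RBAC sketch: permissions attach to roles, users are given
# roles, and checks go through the role. All names are hypothetical.
role_permissions = {
    "auditor": {"read_logs"},
    "admin": {"read_logs", "manage_users"},
}
user_roles = {
    "carol": {"auditor"},
    "dave": {"admin"},
}

def has_permission(user: str, permission: str) -> bool:
    """True if any role assigned to the user grants the permission."""
    return any(
        permission in role_permissions.get(role, set())
        for role in user_roles.get(user, set())
    )

print(has_permission("carol", "manage_users"))  # False
print(has_permission("dave", "manage_users"))   # True

Whether such a model enforces mandatory or discretionary access control depends on who is allowed to change the role and permission assignments, not on the lookup itself.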
In 2007, an excerpt from Flex was shown in the Barbican's exhibition Seduced: Art and Sex from Antiquity to Now, curated by Martin Kemp, Marina Wallace and Joanne Bernstein, alongside other pieces by Bacon, Klimt, Rembrandt, Rodin and Picasso. Short films In 2005, Cunningham released the short film Rubber Johnny as a DVD accompanied by a book of photographs and drawings. Rubber Johnny, a six-minute experimental short film cut to a soundtrack by Aphex Twin remixed by Cunningham, was shot between 2001 and 2004. Shot on DV night-vision, it was made in Cunningham's own time as a home movie of sorts, and took three and a half years of weekends to complete. The Telegraph called it "like a Looney Tunes short for a generation raised on video nasties and rave music". During this period Cunningham also made another short film for Warp Films, Spectral Musicians, which remains unreleased. The short film was set to Squarepusher's "My Fucking Sound" from his album Go Plastic; and to a piece called "Mutilation Colony" which was written especially for the short, and was released on the EP Do You Know Squarepusher. Commercials Cunningham has directed a handful of commercials for companies and brands, including Gucci, Sony (PlayStation), Levi's, Telecom Italia, Nissan, and Orange. Music production In 2004/2005, Cunningham took a sabbatical from filmmaking to learn about music production and recording and to develop his own music projects. In December 2007 Cunningham produced two tracks, "Three Decades" and "Primary Colours", for Primary Colours, the second album by the Horrors. In the summer of 2008, due to scheduling conflicts with his feature film script writing, he could not work on the rest of the album, which was subsequently recorded by Geoff Barrow from Portishead. In 2008, he produced and arranged a new version of "I Feel Love" for the Gucci commercial that he also directed. He travelled to Nashville to work with Donna Summer to record a brand new vocal for it. Chris Cunningham Live In 2005, Cunningham played a 45-minute audiovisual piece performed live in Tokyo and Osaka in front of more than 30,000 fans over two nights at a Japanese electronic music festival. These performances evolved into Chris Cunningham Live, a 55-minute-long performance piece combining original and remixed music and film. It features remixed, unreleased and brand new videos and music dynamically edited together into a new live piece spread over three screens. The sound accompanying these images includes Cunningham's first publicly performed compositions interspersed with his remixes of other artists' work. Chris Cunningham Live debuted as one of the headline attractions at Warp 20 in Paris on 8 May 2009, with other performances scheduled at festivals in the UK and a number of European cities later in the year. Chris Cunningham Live continued in June 2011, with performances in London, Barcelona, and Sydney, Australia. Photography Cunningham has created photography and cover artwork for various releases, including Björk's "All Is Full of Love", Aphex Twin's "Windowlicker" and "Come to Daddy". In 2008, Cunningham produced a fashion shoot for Dazed & Confused using Grace Jones as a model to create "Nubian versions" of Rubber Johnny. In an interview for BBC's "The Culture Show", it was suggested that the collaboration might expand into a video project. Regarding the collaboration, Cunningham stated "For me, Grace has the strongest iconography of any artist in music. She’s definitely the most inspiring person I’ve worked with so far". 
In November 2008, Cunningham followed up with another photoshoot for Vice Magazine. Neuromancer In an August 1999 Spike Magazine interview, cyberpunk author William Gibson stated "He (Chris) was brought to my attention by someone else. We were told, third-hand, that he was extremely wary of the Hollywood process, and wouldn't return calls. But someone else told us that Neuromancer had been his Wind in the Willows, that he'd read it when he was a kid. I went to London and we met." Gibson is also quoted in the article as saying "Chris is my own 100 per cent personal choice... My only choice. The only person I've met who I thought might have a hope in hell of doing it right. I went back to see him in London just after he'd finished the Bjork video, and I sat on a couch beside this dead sexy little Bjork robot, except it was wearing Aphex Twin's head. We talked." In 2000, Cunningham and William Gibson began work on the script for Gibson's 1984 novel Neuromancer. However, because Neuromancer was due to be a big-budget studio film, it is rumoured that Cunningham pulled out due to being a first-time director without final cut approval. He also felt that too many of the original book's ideas had been cannibalised by other recent films. On 18 November 2004, in the FAQ on the William Gibson Board, Gibson was asked about the project. Personal life Cunningham was married to Warpaint's bassist Jenny Lee Lindberg. They are no longer together. Videography "Second Bad Vilbel" (1996) video for Autechre "Back with the Killer Again" (1996) video for the Auteurs "Light Aircraft on Fire" (1996) video for the Auteurs "Fighting Fit" (1996) video for Gene "Another Day" (1996) video for Lodestar "Space Junkie" (1996) video for Holy Barbarians "36 Degrees" (1996) video for Placebo "Personally" (1997) video for 12 Rounds "Jesus Coming in for the Kill" (1997) video for Life's Addiction "The
Artist/Curator Paul Murnaghan, In 2007, an excerpt from Flex was shown in the Barbican's exhibition Seduced: Art and Sex from Antiquity to Now curated by Martin Kemp, Marina Wallace and Joanne Bernstein. alongside other pieces by Bacon, Klimt, Rembrandt, Rodin and Picasso. Short films In 2005, Cunningham released the short film Rubber Johnny as a DVD accompanied by a book of photographs and drawings. Rubber Johnny, a six-minute experimental short film cut to a soundtrack by Aphex Twin remixed by Cunningham, was shot between 2001 and 2004. Shot on DV night-vision, it was made in Cunningham's own time as a home movie of sorts, and took three and half years of weekends to complete. The Telegraph called it "like a Looney Tunes short for a generation raised on video nasties and rave music". During this period Cunningham also made another short film for Warp Films, Spectral Musicians, which remains unreleased. The short film was set to Squarepusher's "My Fucking Sound" from his album Go Plastic; and to a piece called "Mutilation Colony" which was written especially for the short, and was released on the EP Do You Know Squarepusher. Commercials Cunningham has directed a handful of commercials for companies and brands, including Gucci, Sony (PlayStation), Levi's, Telecom Italia, Nissan, and Orange. Music production In 2004/2005, Cunningham took a sabbatical from filmmaking to learn about music production and recording and to develop his own music projects. In December 2007 Cunningham produced two tracks, "Three Decades" and "Primary Colours", for Primary Colours, the second album by the Horrors. In the summer of 2008, due to scheduling conflicts with his feature film script writing he could not work on the rest of the album which was subsequently recorded by Geoff Barrow from Portishead. In 2008, he produced and arranged a new version of 'I Feel Love' for the Gucci commercial that he also directed. He travelled to Nashville to work with Donna Summer to record a brand new vocal for it. Chris Cunningham Live In 2005, Cunningham played a 45-minute audio visual piece performed live in Tokyo and Osaka in front of 30,000+ fans over the two nights at the Japanese electronic music festival . These performances evolved into Chris Cunningham Live, a 55-minute long performance piece combining original and remixed music and film. It features remixed, unreleased and brand new videos and music dynamically edited together into a new live piece spread over three screens. The sound accompanying these images includes Cunningham's first publicly performed compositions interspersed with his remixes of other artist's work. Chris Cunningham Live debuted as one of the headline attractions at Warp 20 in Paris on 8 May 2009 with other performances scheduled at festivals in UK, and a number of European cities later in the year. Chris Cunningham Live continued in June 2011, with performances in London, Barcelona, and Sydney, Australia. Photography Cunningham has created photography and cover artwork for various people including Björk's "All Is Full of Love", Aphex Twin's "Windowlicker" and "Come to Daddy". In 2008, Cunningham produced a fashion shoot for Dazed & Confused using Grace Jones as a model to create "Nubian versions" of Rubber Johnny. In an interview for BBC's "The Culture Show", it was suggested that the collaboration may expand into a video project. In regards to the collaboration, Cunningham stated "For me, Grace has the strongest iconography of any artist in music. 
She’s definitely the most inspiring person I’ve worked with so far". In November 2008, Cunningham followed on with another photoshoot for Vice Magazine.
and fought against the Lapiths. Origin of the myth The most common theory holds that the idea of centaurs came from the first reaction of a non-riding culture, as in the Minoan Aegean world, to nomads who were mounted on horses. The theory suggests that such riders would appear as half-man, half-animal. Bernal Díaz del Castillo reported that the Aztecs also had this misapprehension about Spanish cavalrymen. The Lapith tribe of Thessaly, who were the kinsmen of the Centaurs in myth, were described as the inventors of horse-riding by Greek writers. The Thessalian tribes also claimed their horse breeds were descended from the centaurs. Robert Graves (relying on the work of Georges Dumézil, who argued for tracing the centaurs back to the Indian Gandharva), speculated that the centaurs were a dimly remembered, pre-Hellenic fraternal earth cult who had the horse as a totem. A similar theory was incorporated into Mary Renault's The Bull from the Sea. Variations Female centaurs Though female centaurs, called centaurides or centauresses, are not mentioned in early Greek literature and art, they do appear occasionally in later antiquity. A Macedonian mosaic of the 4th century BC is one of the earliest examples of the centauress in art. Ovid also mentions a centauress named Hylonome who committed suicide when her husband Cyllarus was killed in the war with the Lapiths. India The Kalibangan cylinder seal, dated to be around 2600-1900 BC, found at the site of Indus-Valley civilization shows a battle between men in the presence of centaur-like creatures. Other sources claim the creatures represented are actually half human and half tigers, later evolving into the Hindu Goddess of War. These seals are also evidence of Indus-Mesopotamia relations in the 3rd millennium BC. In a popular legend associated with Pazhaya Sreekanteswaram Temple in Thiruvananthapuram, the curse of a saintly Brahmin transformed a handsome Yadava prince into a creature having a horse's body and the prince's head, arms, and torso in place of the head and neck of the horse. Kinnaras, another half-man, half-horse mythical creature from Indian mythology, appeared in various ancient texts, arts, and sculptures from all around India. It is shown as a horse with the torso of a man where the horse's head would be, and is similar to a Greek centaur. Russia A centaur-like half-human, half-equine creature called Polkan appeared in Russian folk art and lubok prints of the 17th–19th centuries. Polkan is originally based on Pulicane, a half-dog from Andrea da Barberino's poem I Reali di Francia, which was once popular in the Slavonic world in prosaic translations. Artistic representations Classical art The extensive Mycenaean pottery found at Ugarit included two fragmentary Mycenaean terracotta figures which have been tentatively identified as centaurs. This finding suggests a Bronze Age origin for these creatures of myth. A painted terracotta centaur was found in the "Hero's tomb" at Lefkandi, and by the Geometric period, centaurs figure among the first representational figures painted on Greek pottery. An often-published Geometric period bronze of a warrior face-to-face with a centaur is at the Metropolitan Museum of Art. In Greek art of the Archaic period, centaurs are depicted in three different forms. Some centaurs are depicted with a human torso attached to the body of a horse at the withers, where the horse's neck would be; this form, designated "Class A" by Professor Paul Baur, later became standard. 
"Class B" centaurs are depicted with a human body and legs joined at the waist to the hindquarters of a horse; in some cases centaurs of both Class A and Class B appear together. A third type, designated "Class C", depicts centaurs with human forelegs terminating in hooves. Baur describes this as an apparent development of Aeolic art, which never became particularly widespread. At a later period, paintings on some amphorae depict winged centaurs. Centaurs were also frequently depicted in Roman art. One example is the pair of centaurs drawing the chariot of Constantine the Great and his family in the Great Cameo of Constantine (circa AD 314–16), which embodies wholly pagan imagery, and contrasts sharply with the popular image of Constantine as the patron of early Christianity. Medieval art Centaurs preserved a Dionysian connection in the 12th-century Romanesque carved capitals of Mozac Abbey in the Auvergne. Other similar capitals depict harvesters, boys riding goats (a further Dionysiac theme), and griffins guarding the chalice that held the wine. Centaurs are also shown on a number of Pictish carved stones from north-east Scotland erected in the 8th–9th centuries AD (e.g., at Meigle, Perthshire). Though outside the limits of the Roman Empire, these depictions appear to be derived from Classical prototypes. Modern art The John C. Hodges library at The University of Tennessee hosts a permanent exhibit of a "Centaur from Volos" in its library. The exhibit, made by sculptor Bill Willers by combining a study human skeleton with the skeleton of a Shetland pony, is entitled "Do you believe in Centaurs?". According to the exhibitors, it was meant to mislead students in order to make them more critically aware. In heraldry Centaurs are common in European heraldry, although more frequent in continental than in British arms. A centaur holding a bow is referred to as a sagittarius. Literature Classical literature Jerome's version of the Life of St Anthony the Great, written by Athanasius of Alexandria about the hermit monk of Egypt, was widely disseminated in the Middle Ages; it relates Anthony's encounter with a centaur who challenged the saint, but was forced to admit that the old gods had been overthrown. The episode was often depicted in The Meeting of St Anthony Abbot and St Paul the Hermit by the painter Stefano di Giovanni, who was known as "Sassetta". Of the two episodic depictions of the hermit Anthony's travel to greet the hermit Paul, one is his encounter with the demonic figure of a centaur along the pathway in a wood. Lucretius, in his first-century BC philosophical poem On the Nature of Things, denied the existence of centaurs based on their differing rate of growth. He states that at the age of three years, horses are in the prime of their life while humans at the same age are still little more than babies, making hybrid animals impossible. Medieval literature Centaurs are among the creatures which 14th-century Italian poet Dante placed as guardians in his Inferno. In Canto XII, Dante and his guide Virgil meet a band led by Chiron and Pholus, guarding the bank of Phlegethon in the seventh circle of Hell, a river of boiling blood in which the violent against their neighbours are immersed, shooting arrows into any who move to a shallower spot than their allotted station. The two poets are treated with courtesy, and Nessus guides them to a ford. 
In Canto XXIV, in the eighth circle, in Bolgia 7, a ditch where thieves are confined, they meet but do not converse with Cacus (who is a giant in the ancient sources), wreathed in serpents and with a fire-breathing dragon on his shoulders, arriving to punish a sinner who has just cursed God. In his Purgatorio, an unseen spirit on the sixth terrace cites the centaurs ("the drunken double-breasted ones who fought Theseus") as examples of the sin of gluttony. Modern day literature C.S. Lewis' The Chronicles of Narnia series depicts centaurs as the wisest and noblest of creatures. Narnian Centaurs are gifted at stargazing, prophecy, healing, and warfare; a fierce and valiant race always faithful to the High King Aslan the Lion. In J.K. Rowling's Harry Potter series, centaurs live in the Forbidden Forest close to Hogwarts, preferring to avoid contact with humans. They live in societies called herds and are skilled at archery, healing, and astrology, but as in the original myths, they are known to have some wild and barbarous tendencies. With the exception of Chiron, the centaurs in Rick Riordan's Percy Jackson & the Olympians are seen as wild party-goers who use a lot of American slang. Chiron retains his mythological role as a trainer of heroes and is skilled in archery. In Riordan's subsequent series, Heroes of Olympus, another group of centaurs is depicted with more animalistic features (such as horns); they appear as villains, serving the Gigantes. Philip José Farmer's World of Tiers series (1965) includes centaurs, called Half-Horses or Hoi Kentauroi. His creations address several of the metabolic problems of such creatures: how the human mouth and nose could take in enough air to sustain both the human and the horse body, and, similarly, how the human could ingest enough food to sustain both parts. Brandon Mull's Fablehaven series features centaurs that live in an area called Grunhold. The centaurs are portrayed as a proud, elitist group of beings that consider themselves superior to all other creatures. The fourth book also has a variation on the species called an Alcetaur, which is part man, part moose. The myth of the centaur appears in John Updike's novel The Centaur. The author depicts a rural Pennsylvanian town as seen through the lens of the myth of the centaur. A little-known and marginalized local schoolteacher, just as the mythological Chiron did for Prometheus, gave up his life for the future of his son, who had chosen to become an independent artist in New York. Celebration A number of dates have been suggested over the years by centaur enthusiasts to celebrate the myth of the centaur, the most recent claim being that May 14 is International Centaur Appreciation Day; it appears to be celebrated by some, though no official recognition exists. Gallery See also Other hybrid creatures appear in Greek mythology, always with some liminal connection that links Hellenic culture with archaic or non-Hellenic cultures: Furietti Centaurs Hippocamp Hybrid (mythology) Ipotane Legendary creature Lists of legendary creatures Minotaur Onocentaur Ichthyocentaur Sagittarius Satyr Also: the Hindu Kamadhenu; the Indian Kinnara, a half-horse, half-man creature; the Islamic Buraq, a heavenly steed often portrayed as an equine being with a human face; the Philippine Tikbalang; the Roman Faun, and the Hippopodes of Pomponius Mela, Pliny the Elder, and later authors; 
the Scottish Each uisge and Nuckelavee; and the Welsh Ceffyl Dŵr. Additionally, Bucentaur, the name of several historically important Venetian vessels, was linked to a posited ox-centaur or βουκένταυρος (boukentauros) by fanciful and likely spurious folk-etymology.
wedding and was killed by Peleus. Isoples, killed by Heracles when he tried to steal the wine of Pholus. Latreus, fought against the Lapiths at Pirithous' wedding and was killed by Caeneus. Lycabas, attended Pirithous' wedding, fought against the Lapiths and fled. Lycidas, fought against the Lapiths at Pirithous' wedding and was killed by Dryas. Lycopes, fought against the Lapiths at Pirithous' wedding and was killed by Theseus. Lycus, fought against the Lapiths at Pirithous' wedding and was killed by Pirithous. Medon, attended Pirithous' wedding, fought against the Lapiths and fled. Melanchaetes, killed by Heracles when he tried to steal the wine of Pholus. Melaneus, attended Pirithous' wedding, fought against the Lapiths and fled. Mermerus, wounded by the Lapiths at Pirithous' wedding and fled. Mimas, attended Pirithous' wedding and fought against the Lapiths. Monychus, attended Pirithous' wedding and fought in the battle against the Lapiths. He was conquered by Nestor, mounted on his unwilling back. Nedymnus, fought against the Lapiths at Pirithous' wedding. Killed by Theseus. Nessus, a black Centaur. Fled during the fight with the Lapiths at Pirithous' wedding. Later he attempted to rape Deianira and was killed by Heracles, but before dying he gave her a charm which ultimately caused Heracles' death. Ophion, father of Amycus. Orius, killed by Heracles when he tried to steal the wine of Pholus. Orneus, attended Pirithous' wedding, fought against the Lapiths and fled. Perimedes, son of Peuceus, attended Pirithous' wedding and fought against the Lapiths. Petraeus, fought against the Lapiths at Pirithous' wedding and was killed by Pirithous. Peuceus, father of Perimedes and Dryalus. Phaecomes, fought against the Lapiths at Pirithous' wedding and was killed by Nestor. Phlegraeus, fought against the Lapiths at Pirithous' wedding and was killed by Peleus. Pholus. Phrixus, killed by Heracles when he tried to steal the wine of Pholus. Pisenor, attended Pirithous' wedding, fought against the Lapiths and fled. Pylenor, having been wounded by Heracles, washed himself in the river Anigrus, thus providing the river with a peculiar odor. Pyracmus, fought against the Lapiths at Pirithous' wedding and was killed by Caeneus. Pyraethus, fought against the Lapiths at Pirithous' wedding and was killed by Periphas. Rhoecus, who also tried to rape Atalanta and was killed by her. Rhoetus, fought against the Lapiths at Pirithous' wedding and was killed by Dryas. Ripheus, fought against the Lapiths at Pirithous' wedding and was killed by Theseus. Styphelus, fought against the Lapiths at Pirithous' wedding and was killed by Caeneus. Teleboas, fought against the Lapiths at Pirithous' wedding and was killed by Nestor. Thaumas, attended Pirithous' wedding, fought against the Lapiths and fled. Thereus, this Centaur used to catch bears and carry them home alive and struggling. Attended Pirithous' wedding and fought in the battle against the Lapiths. Killed by Theseus. Thereus, killed by Heracles when he tried to steal the wine of Pholus. Ureus, attended Pirithous' wedding and fought against the Lapiths.
Receptor activation results in autophosphorylation of the histidine kinase CheA at a single highly conserved histidine residue. CheA, in turn, transfers phosphoryl groups to conserved aspartate residues in the response regulators CheB and CheY; CheA is a histidine kinase and does not actively transfer the phosphoryl group; rather, the response regulator CheB takes the phosphoryl group from CheA. This mechanism of signal transduction is called a two-component system, and it is a common form of signal transduction in bacteria. CheY induces tumbling by interacting with the flagellar switch protein FliM, inducing a change from counter-clockwise to clockwise rotation of the flagellum. A change in the rotation state of a single flagellum can disrupt the entire flagella bundle and cause a tumble. Receptor regulation CheB, when activated by CheA, acts as a methylesterase, removing methyl groups from glutamate residues on the cytosolic side of the receptor; it works antagonistically with CheR, a methyltransferase, which adds methyl residues to the same glutamate residues. If the level of an attractant remains high, the level of phosphorylation of CheA (and, therefore, of CheY and CheB) will remain low, the cell will swim smoothly, and the level of methylation of the MCPs will increase (because CheB-P is not present to demethylate them). The MCPs no longer respond to the attractant when they are fully methylated; therefore, even though the level of attractant might remain high, the level of CheA-P (and CheB-P) increases and the cell begins to tumble. The MCPs can be demethylated by CheB-P, and, when this happens, the receptors can once again respond to attractants. The situation is the opposite with regard to repellents: fully methylated MCPs respond best to repellents, while least-methylated MCPs respond worst to repellents. This regulation allows the bacterium to 'remember' chemical concentrations from the recent past, a few seconds, and compare them to those it is currently experiencing, and thus 'know' whether it is traveling up or down a gradient. In addition to the sensitivity that bacteria have to chemical gradients, other mechanisms are involved in increasing the absolute value of the sensitivity on a given background. Well-established examples are the ultra-sensitive response of the motor to the CheY-P signal, and the clustering of chemoreceptors. Chemoattractants and chemorepellents Chemoattractants and chemorepellents are inorganic or organic substances that induce chemotaxis in motile cells. These chemotactic ligands create chemical concentration gradients that organisms, prokaryotic and eukaryotic, move toward or away from, respectively. Effects of chemoattractants are elicited via chemoreceptors such as methyl-accepting chemotaxis proteins (MCPs). MCPs in E. coli include Tar, Tsr, Trg and Tap. Chemoattractants for Trg include ribose and galactose, with phenol as a chemorepellent. Tap and Tsr recognize dipeptides and serine as chemoattractants, respectively. Chemoattractants or chemorepellents bind MCPs at their extracellular domains; an intracellular signaling domain relays the changes in concentration of these chemotactic ligands to downstream proteins such as CheA, which then relays the signal to the flagellar motors via phosphorylated CheY (CheY-P). CheY-P can then control flagellar rotation, influencing the direction of cell motility. For E. coli, S. meliloti, and R. sphaeroides, the binding of chemoattractants to MCPs inhibits CheA and therefore CheY-P activity, resulting in smooth runs, but for B. subtilis, CheA activity increases. Methylation events in E. coli cause MCPs to have a lower affinity for chemoattractants, which increases the activity of CheA and CheY-P, resulting in tumbles. In this way cells are able to adapt to the immediate chemoattractant concentration and detect further changes to modulate cell motility. Chemoattractants in eukaryotes are well characterized for immune cells. Formyl peptides, such as fMLF, attract leukocytes such as neutrophils and macrophages, causing movement toward infection sites. Non-acylated methioninyl peptides do not act as chemoattractants to neutrophils and macrophages. Leukocytes also move toward chemoattractants such as C5a, a complement component, and pathogen-specific ligands on bacteria. Mechanisms concerning chemorepellents are less well understood than those of chemoattractants. Although chemorepellents work to confer an avoidance response in organisms, Tetrahymena thermophila adapts to the chemorepellent Netrin-1 peptide within 10 minutes of exposure; however, exposure to chemorepellents such as GTP, PACAP-38, and nociceptin shows no such adaptation. GTP and ATP are chemorepellents in micro-molar concentrations to both Tetrahymena and Paramecium. These organisms avoid these molecules by producing avoiding reactions to re-orient themselves away from the gradient. Eukaryotic chemotaxis The mechanism of chemotaxis that eukaryotic cells employ is quite different from that in the bacterium E. coli; however, sensing of chemical gradients is still a crucial step in the process. Due to their small size and other biophysical constraints, E. coli cannot directly detect a concentration gradient. Instead, they employ temporal gradient sensing, moving over distances several times their own width and measuring the rate at which the perceived chemical concentration changes.
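The temporal-sensing, run-and-tumble strategy described above lends itself to a simple simulation. The sketch below is illustrative only and is not taken from the sources discussed here: the attractant field, the exponential tumble-bias rule and all rate constants are assumptions, and the low-pass-filtered "memory" stands in loosely for methylation-based adaptation.

```python
# Minimal run-and-tumble sketch (illustrative assumptions throughout).
import math
import random

def attractant(x, y):
    """Assumed static attractant field with a single peak at the origin."""
    return math.exp(-(x * x + y * y) / 200.0)

def simulate(steps=2000, dt=0.1, speed=2.0, memory_tau=1.0,
             base_tumble=0.5, bias=5.0, seed=1):
    random.seed(seed)
    x, y = 20.0, 0.0                      # start away from the peak
    heading = random.uniform(0, 2 * math.pi)
    memory = attractant(x, y)             # low-pass filtered "recent past" concentration
    for _ in range(steps):
        c = attractant(x, y)
        dc = c - memory                   # positive when swimming up the gradient
        memory += (c - memory) * dt / memory_tau   # adaptation: memory relaxes toward c
        # tumble probability drops when dc > 0, so runs up the gradient are extended
        p_tumble = base_tumble * dt * math.exp(-bias * dc)
        if random.random() < min(1.0, p_tumble):
            heading = random.uniform(0, 2 * math.pi)   # tumble: pick a new random direction
        x += speed * math.cos(heading) * dt            # run: straight swimming
        y += speed * math.sin(heading) * dt
    return x, y

if __name__ == "__main__":
    final = simulate()
    print("final position:", final, "distance from peak:", math.hypot(*final))
```

Run repeatedly with different seeds, such a biased random walk drifts, on average, toward the attractant peak even though each individual run direction is chosen at random, which is the essential point of the temporal comparison.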
Eukaryotic cells are much larger than prokaryotes and have receptors embedded uniformly throughout the cell membrane. Eukaryotic chemotaxis involves detecting a concentration gradient spatially by comparing the asymmetric activation of these receptors at the different ends of the cell. Activation of these receptors results in migration towards chemoattractants, or away from chemorepellents. In mating yeast, which are non-motile, patches of polarity proteins on the cell cortex can relocate in a chemotactic fashion up pheromone gradients. It has also been shown that both prokaryotic and eukaryotic cells are capable of chemotactic memory. In prokaryotes, this mechanism involves the methylation of receptors called methyl-accepting chemotaxis proteins (MCPs). This results in their desensitization and allows prokaryotes to "remember" and adapt to a chemical gradient. In contrast, chemotactic memory in eukaryotes can be explained by the Local Excitation Global Inhibition (LEGI) model. LEGI involves the balance between a fast excitation and a delayed inhibition, which together control downstream signaling such as Ras activation and PIP3 production. Levels of receptors, intracellular signalling pathways and the effector mechanisms all represent diverse, eukaryotic-type components. In unicellular eukaryotes, amoeboid movement and the cilium or the eukaryotic flagellum are the main effectors (e.g., Amoeba or Tetrahymena). Some eukaryotic cells of higher vertebrate origin, such as immune cells, also move to where they need to be. Besides immune-competent cells (granulocytes, monocytes, lymphocytes), a large group of cells previously considered to be fixed in tissues are also motile in special physiological (e.g., mast cells, fibroblasts, endothelial cells) or pathological conditions (e.g., metastases). Chemotaxis has high significance in the early phases of embryogenesis, as the development of germ layers is guided by gradients of signal molecules. Motility Unlike motility in bacterial chemotaxis, the mechanism by which eukaryotic cells physically move is unclear. There appear to be mechanisms by which an external chemotactic gradient is sensed and turned into an intracellular PIP3 gradient, which results in the activation of a signaling pathway culminating in the polymerisation of actin filaments. The growing distal end of actin filaments develops connections with the internal surface of the plasma membrane via different sets of peptides, resulting in the formation of anterior pseudopods and posterior uropods. Cilia of eukaryotic cells can also produce chemotaxis; in this case, it is mainly a Ca2+-dependent induction of the microtubular system of the basal body and the beating of the 9 + 2 microtubules within cilia. The orchestrated beating of hundreds of cilia is synchronized by a submembranous system built between basal bodies. The details of the signaling pathways are still not totally clear. Chemotaxis-related migratory responses Chemotaxis refers to the directional migration of cells in response to chemical gradients; several variations of chemically induced migration exist, as listed below. Chemokinesis refers to an increase in cellular motility in response to chemicals in the surrounding environment. Unlike chemotaxis, the migration stimulated by chemokinesis lacks directionality, and instead increases environmental scanning behaviors.
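To make the LEGI idea described above concrete, here is a minimal two-compartment sketch; it is not from the source, and the compartment geometry, the rate constants and the "response = excitation minus inhibition" readout are simplifying assumptions. The point it illustrates is that a steady spatial gradient leaves a persistent front-to-back difference, while a uniform stimulus is eventually cancelled by the shared global inhibitor.

```python
# Minimal Local Excitation, Global Inhibition (LEGI) toy model (assumed parameters).
def legi(front_signal, back_signal, steps=4000, dt=0.01, k_e=5.0, k_i=0.5):
    e_front = e_back = 0.0   # fast, local excitation at each end of the cell
    i = 0.0                  # slow, global inhibition shared by the whole cell
    for _ in range(steps):
        e_front += dt * k_e * (front_signal - e_front)
        e_back += dt * k_e * (back_signal - e_back)
        i += dt * k_i * (0.5 * (front_signal + back_signal) - i)
    # downstream response (a stand-in for Ras/PIP3 activity) = excitation - inhibition
    return e_front - i, e_back - i

if __name__ == "__main__":
    print("gradient (1.1 vs 0.9):", legi(1.1, 0.9))    # persistent front/back asymmetry
    print("uniform step (1.0 vs 1.0):", legi(1.0, 1.0))  # response adapts back toward zero
```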
In haptotaxis the gradient of the chemoattractant is expressed or bound on a surface, in contrast to the classical model of chemotaxis, in which the gradient develops in a soluble fluid. The most common biologically active haptotactic surface is the extracellular matrix (ECM); the presence of bound ligands is responsible for induction of transendothelial migration and angiogenesis. Necrotaxis embodies a special type of chemotaxis in which the chemoattractant molecules are released from necrotic or apoptotic cells. Depending on the chemical character of the released substances, necrotaxis can accumulate or repel cells, which underlines the pathophysiological significance of this phenomenon. Receptors In general, eukaryotic cells sense the presence of chemotactic stimuli through the use of 7-transmembrane (or serpentine) heterotrimeric G-protein-coupled receptors, a class representing a significant portion of the genome. Some members of this gene superfamily are used in eyesight (rhodopsins) as well as in olfaction (smelling). The main classes of chemotaxis receptors are triggered by: Formyl peptides - formyl peptide receptors (FPR), Chemokines - chemokine receptors (CCR or CXCR), and Leukotrienes - leukotriene receptors (BLT). However, induction of a wide set of membrane receptors (e.g., for cyclic nucleotides, amino acids, insulin, and vasoactive peptides) also elicits migration of the cell. Chemotactic selection While some chemotaxis receptors are expressed in the surface membrane with long-term characteristics, as they are determined genetically, others have short-term dynamics, as they are assembled ad hoc in the presence of the ligand. The diverse features of chemotaxis receptors and ligands allow for the possibility of selecting chemotactic responder cells with a simple chemotaxis assay. By chemotactic selection, we can determine whether a still-uncharacterized molecule acts via the long- or the short-term receptor pathway. The term chemotactic selection is also used to designate a technique that separates eukaryotic or prokaryotic cells according to their chemotactic responsiveness to selector ligands. Chemotactic ligands The number of molecules capable of eliciting chemotactic responses is relatively high, and we can distinguish primary and secondary chemotactic molecules. The main groups of the primary ligands are as follows: Formyl peptides are di-, tri-, and tetrapeptides of bacterial origin, formylated on the N-terminus of the peptide. They are released from bacteria in vivo or after decomposition of the cell; a typical member of this group is N-formylmethionyl-leucyl-phenylalanine (abbreviated fMLF or fMLP). Bacterial fMLF is a key component
10,994 (2%) in ethnic groups other than white. Of the 2% in non-white ethnic groups: 3,717 (34%) belonged to mixed ethnic groups 3,336 (30%) were Asian or Asian British 1,076 (10%) were black or black British 1,826 (17%) were of Chinese ethnic groups 1,039 (9%) were of other ethnic groups. Religion In the 2001 Census, 81% of the population (542,413) identified themselves as Christian; 124,677 (19%) did not identify with any religion or did not answer the question; 5,665 (1%) identified themselves as belonging to other major world religions; and 1,033 belonged to other religions. The boundary of the Church of England Diocese of Chester follows most closely the pre-1974 county boundary of Cheshire, so it includes all of Wirral, Stockport, and the Cheshire panhandle that included Tintwistle Rural District council area. In terms of Roman Catholic church administration, most of Cheshire falls into the Roman Catholic Diocese of Shrewsbury. Economy Cheshire has a diverse economy with significant sectors including agriculture, automotive, bio-technology, chemical, financial services, food and drink, ICT, and tourism. The county is famous for the production of Cheshire cheese, salt and silk. The county has seen a number of inventions and firsts in its history. A mainly rural county, Cheshire has a high concentration of villages. Agriculture is generally based on the dairy trade, and cattle are the predominant livestock. Land use given to agriculture has fluctuated somewhat, and in 2005 totalled 1558 km2 over 4,609 holdings. Based on holdings by EC farm type in 2005, 8.51 km2 was allocated to dairy farming, with another 11.78 km2 allocated to cattle and sheep. The chemical industry in Cheshire was founded in Roman times, with the mining of salt in Middlewich and Northwich. Salt is still mined in the area by British Salt. The salt mining has led to a continued chemical industry around Northwich, with Brunner Mond based in the town. Other chemical companies, including Ineos (formerly ICI), have plants at Runcorn. The Essar Refinery (formerly Shell Stanlow Refinery) is at Ellesmere Port. The oil refinery has operated since 1924 and has a capacity of 12 million tonnes per year. Crewe was once the centre of the British railway industry, and remains a major railway junction. The Crewe railway works, built in 1840, employed 20,000 people at its peak, although the workforce is now less than 1,000. Crewe is also the home of Bentley cars. Also within Cheshire are manufacturing plants for Jaguar and Vauxhall Motors in Ellesmere Port. The county also has an aircraft industry, with the BAE Systems facility at Woodford Aerodrome, part of BAE System's Military Air Solutions division. The facility designed and constructed Avro Lancaster and Avro Vulcan bombers and the Hawker-Siddeley Nimrod. On the Cheshire border with Flintshire is the Broughton aircraft factory, more recently associated with Airbus. Tourism in Cheshire from within the UK and overseas continues to perform strongly. Over 8 million nights of accommodation (both UK and overseas) and over 2.8 million visits to Cheshire were recorded during 2003. At the start of 2003, there were 22,020 VAT-registered enterprises in Cheshire, an increase of 7% since 1998, many in the business services (31.9%) and wholesale/retail (21.7%) sectors. Between 2002 and 2003 the number of businesses grew in four sectors: public administration and other services (6.0%), hotels and restaurants (5.1%), construction (1.7%), and business services (1.0%). 
The county saw the largest proportional reduction between 2001 and 2002 in employment in the energy and water sector and there was also a significant reduction in the manufacturing sector. The largest growth during this period was in the other services and distribution, hotels and retail sectors. Cheshire is considered to be an affluent county. However, towns such as Crewe and Winsford have significant deprivation. The county's proximity to the cities of Manchester and Liverpool means counter-urbanisation is common. Cheshire West has a fairly large proportion of residents who work in Liverpool and Manchester, while the town of Northwich and the area of Cheshire East fall more within Manchester's sphere of influence. Education All four local education authorities in Cheshire operate only comprehensive state school systems. When Altrincham, Sale and Bebington were moved from Cheshire to Trafford and Merseyside in 1974, they took some former Cheshire selective schools. There are two universities based in the county, the University of Chester and the Chester campus of The University of Law. The Crewe campus of Manchester Metropolitan University was scheduled to close in 2019. Culture Arts and entertainment Cheshire has produced notable musicians such as Rick Astley, Joy Division members Ian Curtis and Stephen Morris, One Direction member Harry Styles, the members of The 1975, Take That member Gary Barlow, The Cult member Ian Astbury, Catfish and the Bottlemen member Van McCann, Girls Aloud member Nicola Roberts, Stephen Hough, John Mayall, The Charlatans member Tim Burgess, and Nigel Stonier. Actors from Cheshire include Russ Abbot, Warren Brown, Julia Chan, Ray Coulthard, Daniel Craig, Tim Curry, Wendy Hiller, Tom Hughes, Tim McInnerny, Ben Miller, Pete Postlethwaite, Adam Rickitt, John Steiner, and Ann Todd. The most famous author from the county is Lewis Carroll, who wrote Alice's Adventures in Wonderland and named the Cheshire Cat character after the county. Other notable Cheshire writers include Hall Caine, Alan Garner, and Elizabeth Gaskell. Artists from Cheshire include ceramic artist Emma Bossons and sculptor/photographer Andy Goldsworthy. Local radio stations in the county include Dee 106.3, Capital, Smooth Radio, Silk FM, Signal 1, Wire FM, and Wish FM. It is one of only four counties in the country (along with County Durham, Dorset, and Rutland) that does not have its own designated BBC radio station; the south and parts of the east are covered by BBC Radio Stoke, while BBC Radio Merseyside tends to cover the west, and BBC Radio Manchester covers the north and parts of the east. The BBC directs readers to Stoke and Staffordshire when Cheshire is selected on its website. There were plans to launch BBC Radio Cheshire, but those were shelved in 2007 after the BBC licence fee settlement was lower than expected. The Royal Cheshire Show, an annual agricultural show, has taken place since the 1800s. Sports Cheshire has been home to numerous athletes. Many Premier League footballers have relocated there over the years upon joining teams such as Manchester United FC, Manchester City FC, Everton FC, and Liverpool FC, which are all located nearby. These include Dean Ashton, Seth Johnson, Michael Owen, Jesse Lingard, and Wayne Rooney. The "Golden Triangle" is the collective name for a group of adjacent Cheshire villages where the number of footballers, actors, and entrepreneurs moving in over the years has made average house prices some of the highest in the UK. 
Other local athletes include rock climber Shauna Coxsey (currently the most successful competitive climber in the UK), cricketer Ian Botham, marathon runner Paula Radcliffe, oarsman Matt Langridge, hurdler Shirley Strong, sailor Ben Ainslie, cyclist Sarah Storey, and mountaineer George Mallory. Cheshire has also produced a military hero in Norman Cyril Jones, a World War I flying ace who won the Distinguished Flying Cross. Cheshire has one Football League team, Crewe Alexandra FC, which plays in League One. Chester FC, a phoenix club formed in 2010 after ex-Football League club Chester City FC was dissolved, competes in the National League North. Northwich Victoria FC, another ex-League team which was a founding member of the Football League Division Two in 1892/1893, now represents Cheshire in the Northern Premier League along with Nantwich Town FC, Warrington Town FC, and Witton Albion FC. Macclesfield Town FC formerly played in the National League, but went into liquidation in 2020. The Warrington Wolves and Widnes Vikings are the premier rugby league teams in Cheshire; the former plays in the Super League, while the latter plays in the Championship. There are also numerous junior clubs in the county, including Chester Gladiators. Cheshire County Cricket Club is one of the clubs that make up the minor counties of English and Welsh cricket. Cheshire also is represented in the highest level basketball league in the UK, the BBL, by Cheshire Phoenix (formerly Cheshire Jets). Europe's largest motorcycle event, the Thundersprint, is held in Northwich every May. Modern county emblem As part of a 2002 marketing campaign, the plant conservation charity Plantlife chose the cuckooflower as the county flower. Previously, a sheaf of golden wheat was the county emblem, a reference to the Earl of Chester's arms in use from the 12th century. Landmarks Prehistoric burial grounds have been discovered at The Bridestones near Congleton (Neolithic) and Robin Hood's Tump near Alpraham (Bronze Age). The remains of Iron Age hill forts are found on sandstone ridges at several locations in Cheshire. Examples include Maiden Castle on Bickerton Hill, Helsby Hillfort and Woodhouse Hillfort at Frodsham. The Roman fortress and walls of Chester, perhaps the earliest building works in Cheshire remaining above ground, are constructed from purple-grey sandstone. The distinctive local red sandstone has been used for many monumental and ecclesiastical buildings throughout the county: for example, the medieval Beeston Castle, Chester Cathedral and numerous parish churches. Occasional residential and industrial buildings, such as Helsby railway station (1849), are also in this sandstone. Many surviving buildings from the 15th to 17th centuries are timbered, particularly in the southern part of the county. Notable examples include the moated manor house Little Moreton Hall, dating from around 1450, and many commercial and residential buildings in Chester, Nantwich and surrounding villages. Early brick buildings include Peover Hall near Macclesfield (1585), Tattenhall Hall (pre-1622), and the Pied Bull Hotel in Chester (17th-century). From the 18th century, orange, red or brown brick became the predominant building material used in Cheshire, although earlier buildings are often faced or dressed with stone. Examples from the Victorian period onwards often employ distinctive brick detailing, such as brick patterning and ornate chimney stacks and gables. 
Notable examples include Arley Hall near Northwich, Willington Hall near Chester (both by Nantwich architect George Latham) and Overleigh Lodge, Chester. From the Victorian era, brick buildings often incorporate timberwork in a mock Tudor style, and this hybrid style has been used in some modern residential developments in the county. Industrial buildings, such as the Macclesfield silk mills (for example, Waters Green New Mill), are also usually in brick. Settlements The county is home to some of the most affluent areas of
against the English populace was enough to end all future resistance. Examples were made of major landowners such as Earl Edwin of Mercia, their properties confiscated and redistributed amongst Norman barons. The earldom was sufficiently independent from the kingdom of England that the 13th-century Magna Carta did not apply to the shire of Chester, so the earl wrote up his own Chester Charter at the petition of his barons. County Palatine William I made Cheshire a county palatine and gave Gerbod the Fleming the new title of Earl of Chester. When Gerbod returned to Normandy in about 1070, the king used his absence to declare the earldom forfeit and gave the title to Hugh d'Avranches (nicknamed Hugh Lupus, or "wolf"). Because of Cheshire's strategic location on the Welsh Marches, the Earl had complete autonomy to rule on behalf of the king in the county palatine. Hundreds Cheshire in the Domesday Book (1086) is recorded as a much larger county than it is today. It included two hundreds, Atiscross and Exestan, that later became part of North Wales. At the time of the Domesday Book, it also included as part of Duddestan Hundred the area of land later known as English Maelor (which used to be a detached part of Flintshire) in Wales. The area between the Mersey and Ribble (referred to in the Domesday Book as "Inter Ripam et Mersam") formed part of the returns for Cheshire. Although this has been interpreted to mean that at that time south Lancashire was part of Cheshire, more exhaustive research indicates that the boundary between Cheshire and what was to become Lancashire remained the River Mersey. With minor variations in spelling across sources, the complete list of hundreds of Cheshire at this time is: Atiscross, Bochelau, Chester, Dudestan, Exestan, Hamestan, Middlewich, Riseton, Roelau, Tunendune, Warmundestrou and Wilaveston. Feudal baronies There were eight feudal baronies in Chester, including the barons of Kinderton, Halton, Malbank, Mold, Shipbrook and Dunham-Massey, and the honour of Chester itself. Feudal baronies or baronies by tenure were granted by the Earl as forms of feudal land tenure within the palatinate in a similar way to which the king granted English feudal baronies within England proper. An example is the barony of Halton. One of Hugh d'Avranches' barons has been identified as Robert Nicholls, Baron of Halton and Montebourg. North Mersey to Lancashire In 1182, the land north of the Mersey came to be administered as part of the new county of Lancashire, resolving any uncertainty about the county in which the land "Inter Ripam et Mersam" lay. Over the years, the ten hundreds consolidated and changed names to leave just seven: Broxton, Bucklow, Eddisbury, Macclesfield, Nantwich, Northwich and Wirral. Principality: Merging of Palatine and Earldom In 1397 the county had lands in the march of Wales added to its territory, and was promoted to the rank of principality. This was because of the support the men of the county had given to King Richard II, in particular through his standing armed force of about 500 men called the "Cheshire Guard". As a result, the King's title was changed to "King of England and France, Lord of Ireland, and Prince of Chester". No other English county has been honoured in this way, although it lost the distinction on Richard's fall in 1399. Lieutenancy: North split-off District Through the Local Government Act 1972, which came into effect on 1 April 1974, some areas in the north became part of the metropolitan counties of Greater Manchester and Merseyside. 
Stockport (previously a county borough), Altrincham, Hyde, Dukinfield and Stalybridge in the north-east became part of Greater Manchester. Much of the Wirral Peninsula in the north-west, including the county boroughs of Birkenhead and Wallasey, joined Merseyside as the Metropolitan Borough of Wirral. At the same time the Tintwistle Rural District was transferred to Derbyshire. The area of south Lancashire not included within either the Merseyside or Greater Manchester counties, including Widnes and the county borough of Warrington, was added to the new non-metropolitan county of Cheshire. District and Unitary Halton and Warrington became unitary authorities independent of Cheshire County Council on 1 April 1998, but remain part of Cheshire for ceremonial purposes and also for fire and policing. A referendum on further local government reform connected with an elected regional assembly was planned for 2004, but was abandoned. Unitary As part of the local government restructuring in April 2009, Cheshire County Council and the Cheshire districts were abolished and replaced by two new unitary authorities, Cheshire East and Cheshire West and Chester. The existing unitary authorities of Halton and Warrington were not affected by the change. Governance Current Cheshire has no county-wide elected local council, but it does have a Lord Lieutenant under the Lieutenancies Act 1997 and a High Sheriff under the Sheriffs Act 1887. Local government functions apart from the Police and Fire/Rescue services are carried out by four smaller unitary authorities: Cheshire East, Cheshire West and Chester, Halton, and Warrington. All four unitary authority areas have borough status. Policing and fire and rescue services are still provided across the county as a whole. The Cheshire Fire Authority consists of members of the four councils, while governance of Cheshire Constabulary is performed by the elected Cheshire Police and Crime Commissioner. Winsford is a major administrative hub for Cheshire, with the Police and Fire & Rescue headquarters based in the town, as well as a majority of Cheshire West and Chester Council. It was also home to the former Vale Royal Borough Council and Cheshire County Council. Transition into a lieutenancy From 1 April 1974 the area under the control of the county council was divided into eight local government districts: Chester, Congleton, Crewe and Nantwich, Ellesmere Port and Neston, Halton, Macclesfield, Vale Royal and Warrington. Halton (which includes the towns of Runcorn and Widnes) and Warrington became unitary authorities in 1998. The remaining districts and the county were abolished as part of local government restructuring on 1 April 2009. The Halton and Warrington boroughs were not affected by the 2009 restructuring. On 25 July 2007, the Secretary of State Hazel Blears announced she was 'minded' to split Cheshire into two new unitary authorities, Cheshire West and Chester, and Cheshire East. She confirmed on 19 December 2007 that she had not changed her mind, and the proposal to split two-tier Cheshire into two would therefore proceed. Cheshire County Council leader Paul Findlow, who attempted High Court legal action against the proposal, claimed that splitting Cheshire would only disrupt excellent services while increasing living costs for all. A widespread sentiment that this decision was taken by the European Union long ago has often been portrayed via angry letters from Cheshire residents to local papers. 
On 31 January 2008 The Standard, Cheshire and district's newspaper, announced that the legal action had been dropped. Members against the proposal were advised that they may be unable to persuade the court that the decision of Hazel Blears was "manifestly absurd". The Cheshire West and Chester unitary authority covers the area formerly occupied by the City of Chester and the boroughs of Ellesmere Port and Neston and Vale Royal; Cheshire East now covers the area formerly occupied by the boroughs of Congleton, Crewe and Nantwich, and Macclesfield. The changes were implemented on 1 April 2009. Congleton Borough Council pursued an appeal against the judicial review it lost in October 2007. The appeal was dismissed on 4 March 2008. Geography Physical Cheshire covers a boulder clay plain separating the hills of North Wales and the Peak District (the area is also known as the Cheshire Gap). This was formed following the retreat of ice age glaciers which left the area dotted with kettle holes, locally referred to as meres. The bedrock of this region is almost entirely Triassic sandstone, outcrops of which have long been quarried, notably at Runcorn, providing the distinctive red stone for Liverpool Cathedral and Chester Cathedral. The eastern half of the county is Upper Triassic Mercia Mudstone laid down with large salt deposits which were mined for hundreds of years around Winsford. Separating this area from Lower Triassic Sherwood Sandstone to the west is a prominent sandstone ridge known as the Mid Cheshire Ridge. A footpath, the Sandstone Trail, follows this ridge from Frodsham to Whitchurch passing Delamere Forest, Beeston Castle and earlier Iron Age forts. The highest point (county top) in the historic county of Cheshire was Black Hill near Crowden in the Cheshire Panhandle, a long eastern projection of the county which formerly stretched along the northern side of Longdendale and on the border with the West Riding of Yorkshire. Black Hill is now the highest point in the ceremonial county of West Yorkshire. Within the current ceremonial county and the unitary authority of Cheshire East the highest point is Shining Tor on the Derbyshire/Cheshire border between Macclesfield and Buxton, at above sea level. After Shining Tor, the next highest point in Cheshire is Shutlingsloe, at above sea level. Shutlingsloe lies just to the south of Macclesfield Forest and is sometimes humorously referred to as the "Matterhorn of Cheshire" thanks to its distinctive steep profile. Human Green belt Cheshire contains portions of two green belt areas surrounding the large conurbations of Merseyside and Greater Manchester (North Cheshire Green Belt, part of the North West Green Belt) and Stoke-on-Trent (South Cheshire Green Belt, part of the Stoke-on-Trent Green Belt); these were first
the county town of each county. However, the concept of a county town pre-dates the establishment of these councils. The concept of a county town is ill-defined and unofficial. Some counties have their administrative bodies located elsewhere. For example, Lancaster is the county town of Lancashire, but the county council is located in Preston. Some county towns are no longer situated within the administrative county because of changes in the county's boundaries. For example, Nottingham is administered by a unitary authority separate from the rest of Nottinghamshire. UK county towns, pre-19th-century reforms Historic counties of England This list shows county towns prior to the reforms of 1889. Historic counties of Scotland Historic counties of Wales This list shows county towns prior to the reforms of 1889. Historic counties of Northern Ireland Note – Despite the fact that Belfast is the capital of Northern Ireland, it is not the county town of any county. Greater Belfast straddles two counties (Antrim and Down). UK county towns post 19th-century reforms With the creation of elected county councils in 1889 the location of administrative headquarters in some cases moved away from the traditional county town. Furthermore, in 1965 and 1974 there were major boundary changes in England and Wales and administrative counties were replaced with new metropolitan and non-metropolitan counties. The boundaries underwent further alterations between 1995 and 1998 to create unitary authorities and some of the ancient counties and county towns were restored. (Note: not all headquarters
Act 1982 which is in force in Britain is in English only, but the version of the Act in force in Canada is bilingual, English and French. In addition to enacting the Constitution Act, 1982, the Canada Act 1982 provides that no further British acts of Parliament will apply to Canada as part of its law, finalizing Canada's legislative independence. Canadian Charter of Rights and Freedoms As noted above, this is Part I of the Constitution Act, 1982. The Charter is the constitutional guarantee of the civil rights and liberties of every citizen in Canada, such as freedom of expression, of religion, and of mobility. Part II addresses the rights of Aboriginal peoples in Canada. It is written in plain language to ensure accessibility to the average citizen. It applies only to government and government actions to prevent the government from creating unconstitutional laws. Amending formula Instead of the usual parliamentary procedure, which includes the monarch's formal royal assent for enacting legislation, amendments to the Constitution Act, 1982, must be done in accordance with Part V of the Constitution Act, 1982, which provides for five different amending formulae. Amendments can be brought forward under section 46(1) by any province or the federal legislature. The general formula set out in section 38(1), known as the "7/50 formula", requires: (a) assent from both the House of Commons and the Senate; (b) the approval of two-thirds of the provincial legislatures (at least seven provinces) representing at least 50 per cent of the population (effectively, this would include at least Quebec or Ontario, as they are the most populous provinces). This formula specifically applies to amendments related to the proportionate representation in Parliament, powers, selection, and composition of the Senate, the Supreme Court and the addition of provinces or territories. The other amendment formulae are for particular cases as provided by the act. An amendment related to the Office of the Queen, the use of either official language (subject to section 43), the amending formula itself, or the composition of the Supreme Court, must be adopted by unanimous consent of all the provinces in accordance with section 41. In the case of an amendment related to provincial boundaries or the use of an official language within a province alone, the amendment must be passed by the legislatures affected by the amendment (section 43). In the case of an amendment that affects the federal government only, the amendment does not need the approval of the provinces (section 44). The same applies to amendments affecting the provincial government alone (section 45). Sources of the constitution Canada's constitution has roots going back to the thirteenth century, including England's Magna Carta and the first English Parliament of 1275. Canada's constitution is composed of several individual statutes. There are three general methods by which a statute becomes entrenched in the Constitution: Specific mention as a constitutional document in section 52(2) of the Constitution Act, 1982 (e.g., the Constitution Act, 1867). Constitutional entrenchment of an otherwise statutory English, British, or Canadian document because its (still in force) subject matter provisions are explicitly assigned to one of the methods of the amending formula (per the Constitution Act, 1982)—e.g., provisions with regard to the monarchy in the English Bill of Rights 1689 or the Act of Settlement 1701. 
English and British statutes are part of Canadian law because of the Colonial Laws Validity Act 1865; section 129 of the Constitution Act, 1867; and the Statute of Westminster 1931. If still at least partially unrepealed, those laws then became entrenched when the amending formula was made part of the constitution. Reference by an entrenched document—e.g., the Preamble of the Constitution Act, 1867's entrenchment of written and unwritten principles from the constitution of the United Kingdom or the Constitution Act, 1982's reference to the Proclamation of 1763. Crucially, this includes Aboriginal rights and Crown treaties with particular First Nations (e.g., historic "numbered" treaties; modern land-claims agreements). Unwritten or uncodified sources The existence of unwritten constitutional components was reaffirmed in 1998 by the Supreme Court in Reference re Secession of Quebec. The Constitution is more than a written text. It embraces the entire global system of rules and principles which govern the exercise of constitutional authority. A superficial reading of selected provisions of the written constitutional enactment, without more, may be misleading. In practice, there have been three sources of unwritten constitutional law: Conventions Constitutional conventions form part of the constitution, but they are not judicially enforceable. They include the existence of the office of prime minister and the Cabinet, the practice that the Crown in most circumstances is required to grant royal assent to bills adopted by both houses of Parliament, and the requirement that the prime minister either resign or request a dissolution and general election upon losing a vote of confidence in the House of Commons. Royal prerogative Reserve powers of the Canadian Crown, being remnants of the powers once held by the British Crown, reduced over time by the parliamentary system. Primarily, these are the orders in Council, which give the government the authority to declare war, conclude treaties, issue passports, make appointments, make regulations, incorporate, and receive lands that escheat to the Crown. Unwritten principles Principles that are incorporated into the Canadian constitution by the preamble of the Constitution Act, 1867, including a statement that the constitution is "similar in Principle to that of the United Kingdom", much of which is unwritten. Unlike conventions, they are justiciable. Amongst those principles most recognized as constitutional to date are federalism, liberal democracy, constitutionalism, the rule of law, and respect for minorities. Others include responsible government, representation by population, judicial independence, parliamentary supremacy, and an implied bill of rights. In one case, the Provincial Judges Reference (1997), a law was held invalid for contradicting an unwritten principle (in this case judicial independence). Provincial constitutions Unlike in most federations, Canadian provinces do not have written provincial constitutions. Provincial constitutions are instead a combination of uncodified constitution, provisions of the Constitution of Canada, and provincial statutes. Overall structures of provincial governments (like the legislature and cabinet) are described in parts of the Constitution of Canada. The governmental structures of the original four provinces are described in Part V of the Constitution Act, 1867. 
The three colonies that joined Canada after Confederation had existing UK legislation which described their governmental structure, and this was affirmed in each colony's Terms of Union, which now form part of Canada's Constitution. The remaining three provinces were created by federal statute. Their constitutional structures are described in those statutes, which now form part of Canada's Constitution. Section 45 of the Constitution Act, 1982 allows each province to amend its own constitution. However, if the desired change would require an amendment to any documents that form part of the Constitution of Canada, it would require the consent of the federal government under section 43. This was done, for example, by the Constitution Amendment, 1998, when Newfoundland asked the federal government to amend the Terms of Union of Newfoundland to allow it to end denominational quotas for religion classes. All provinces have enacted legislation that establishes other rules for the structure of government. For example, every province (and territory) has an act governing elections to the legislature, and another governing procedure in the legislature. Two provinces have explicitly listed such acts as being part of their provincial constitution; see Constitution of Quebec and Constitution Act (British Columbia). However, these acts do not, generally, supersede other legislation and do not require special procedures to amend, and so they function as regular statutes rather than constitutional statutes. There is, however, some provincial legislation that does supersede all other provincial legislation, as a constitution would. This is referred to as quasi-constitutionality. Quasi-constitutionality is often applied to human rights laws, allowing those laws to act as a de facto constitutional charter of rights. There are also a small number of statutes that cannot
be amended by a simple majority of the legislative assembly. For example, section 7 of the Constitution of Alberta Amendment Act, 1990 requires plebiscites of Metis settlement members before that Act can be amended. Courts have not yet ruled about whether this kind of language really would bind future legislatures, but it might do so if the higher bar was met when creating the law. Vandalism of the proclamation paper In 1983, Peter Greyson, an art student, entered Ottawa's National Archives (known today as Library and Archives Canada) and poured red paint mixed with glue over a copy of the proclamation of the 1982 constitutional amendment. 
He said he was displeased with the federal government's decision to allow United States missile testing in Canada and had wanted to "graphically illustrate to Canadians" how wrong he believed the government to be. Greyson was charged with public mischief and sentenced to 89 days in jail, 100 hours of community work, and two years of probation. A grapefruit-sized stain remains on the original document; restoration specialists opted to leave most of the paint intact, fearing that removal attempts would only cause further damage. See also Canadian Bill of Rights Constitution Act (British Columbia) Constitution of Quebec Constitutionalism References Further reading External links Full text of the Constitution Canada in
fabric emerges in Europe during the 19th century. Earlier work identified as crochet was commonly made by nålebinding, a different looped yarn technique. The first known published instructions for crochet explicitly using that term to describe the craft in its present sense appeared in the Dutch magazine Penélopé in 1823. This includes a colour plate showing five styles of purse, of which three were intended to be crocheted with silk thread. The first is "simple open crochet" (crochet simple ajour), a mesh of chain-stitch arches. The second (illustrated here) starts in a semi-open form (demi jour), where chain-stitch arches alternate with equally long segments of slip-stitch crochet, and closes with a star made with "double-crochet stitches" (dubbelde hekelsteek: double-crochet in British terminology; single-crochet in US). The third purse is made entirely in double-crochet. The instructions prescribe the use of a tambour needle (as illustrated below) and introduce a number of decorative techniques. The earliest dated reference in English to garments made of cloth produced by looping yarn with a hook—shepherd's knitting—is in The Memoirs of a Highland Lady by Elizabeth Grant (1797–1830). The journal entry, itself, is dated 1812 but was not recorded in its subsequently published form until some time between 1845 and 1867, and the actual date of publication was first in 1898. Nonetheless, the 1833 volume of Penélopé describes and illustrates a shepherd's hook, and recommends its use for crochet with coarser yarn. In 1844, one of the numerous books discussing crochet that began to appear in the 1840s states: Two years later, the same author writes: An instruction book from 1846 describes Shepherd or single crochet as what in current British usage is either called single crochet or slip-stitch crochet, with U.S. American terminology always using the latter (reserving single crochet for use as noted above). It similarly equates "Double" and "French crochet". Notwithstanding the categorical assertion of a purely British origin, there is solid evidence of a connection between French tambour embroidery and crochet. French tambour embroidery was illustrated in detail in 1763 in Diderot's Encyclopedia. The tip of the needle shown there is indistinguishable from that of a present-day inline crochet hook and the chain stitch separated from a cloth support is a fundamental element of the latter technique. The 1823 Penélopé instructions unequivocally state that the tambour tool was used for crochet and the first of the 1840s instruction books uses the terms tambour and crochet as synonyms. This equivalence is retained in the 4th edition of that work, 1847. The strong taper of the shepherd's hook eases the production of slip-stitch crochet but is less amenable to stitches that require multiple loops on the hook at the same time. Early yarn hooks were also continuously tapered but gradually enough to accommodate multiple loops. The design with a cylindrical shaft that is commonplace today was largely reserved for tambour-style steel needles. Both types gradually merged into the modern form that appeared toward the end of the 19th century, including both tapered and cylindrical segments, and the continuously tapered bone hook remained in industrial production until World War II. The early instruction books make frequent reference to the alternative use of 'ivory, bone, or wooden hooks' and 'steel needles in a handle', as appropriate to the stitch being made. 
Taken with the synonymous labeling of shepherd's- and single crochet, and the similar equivalence of French- and double crochet, there is a strong suggestion that crochet is rooted both in tambour embroidery and shepherd's knitting, leading to thread and yarn crochet respectively; a distinction that is still made. The locus of the fusion of all these elements—the "invention" noted above—has yet to be determined, as does the origin of shepherd's knitting. Shepherd's hooks are still being made for local slip-stitch crochet traditions. The form in the accompanying photograph is typical for contemporary production. A longer continuously tapering design intermediate between it and the 19th-century tapered hook was also in earlier production, commonly being made from the handles of forks and spoons. Irish crochet In the 19th century, as Ireland was facing the Great Irish Famine (1845–1849), crochet lace work was introduced as a form of famine relief (the production of crocheted lace being an alternative way of making money for impoverished Irish workers). Men, women, and children joined cooperatives in order to crochet and produce products to help with famine relief. Schools to teach crocheting were started. Teachers were trained and sent across Ireland to teach this craft. When the Irish emigrated to the Americas, they took crochet with them. Mademoiselle Riego de la Branchardiere is generally credited with the invention of Irish Crochet, publishing the first book of patterns in 1846. Irish lace became popular in Europe and America, and was made in quantity until the First World War. Modern practice and culture Fashions in crochet changed with the end of the Victorian era in the 1890s. Crocheted laces in the new Edwardian era, peaking between 1910 and 1920, became even more elaborate in texture and complicated stitching. The strong Victorian colours disappeared, though, and new publications called for white or pale threads, except for fancy purses, which were often crocheted of brightly colored silk and elaborately beaded. After World War I, far fewer crochet patterns were published, and most of them were simplified versions of the early 20th-century patterns. After World War II, from the late 1940s until the early 1960s, there was a resurgence in interest in home crafts, particularly in the United States, with many new and imaginative crochet designs published for colorful doilies, potholders, and other home items, along with updates of earlier publications. These patterns called for thicker threads and yarns than in earlier patterns and included wonderful variegated colors. The craft remained primarily a homemaker's art until the late 1960s and early 1970s, when the new generation picked up on crochet and popularized granny squares, a motif worked in the round and incorporating bright colors. Although crochet underwent a subsequent decline in popularity, the early 21st century has seen a revival of interest in handcrafts and DIY, as well as great strides in improvement of the quality and varieties of yarn. There are many more new pattern books with modern patterns being printed, and most yarn stores now offer crochet lessons in addition to the traditional knitting lessons. Many instructional books are available from local bookshops for those teaching themselves to crochet, whether at beginner or intermediate level, and there are also many aimed at children and teenagers hoping to take up the hobby. 
Filet crochet, Tunisian crochet, tapestry crochet, broomstick lace, hairpin lace, cro-hooking, and Irish crochet are all variants of the basic crochet method. Crochet has experienced a revival on the catwalk as well. Christopher Kane's Fall 2011 Ready-to-Wear collection makes intensive use of the granny square, one of the most basic of crochet motifs. In addition, crochet has been utilized many times by designers on the popular reality show Project Runway. Websites such as Etsy and Ravelry have made it easier for individual hobbyists to sell and distribute their patterns or projects across the internet. Laneya Wiles released a music video titled "Straight Hookin'" which makes a play on the word "hookers," which has a double meaning for both "one who crochets" and "a prostitute." Materials Basic materials required for crochet are a hook and some type of material that will be crocheted, most commonly yarn or thread. Yarn, one of the most commonly used materials for crocheting, has varying weights which need to be taken into consideration when following patterns. Additional tools are convenient for keeping stitches counted, measuring crocheted fabric, or making related accessories. Examples include cardboard cutouts, which can be used to make tassels, fringe, and many other items; a pom-pom circle, used to make pom-poms; a tape measure and a gauge measure, both used for measuring crocheted work and counting stitches; a row counter; and occasionally plastic rings, which are used for special projects. In recent years, yarn selections have moved beyond synthetic and plant and animal-based fibers to include bamboo, qiviut, hemp, and banana stalks, to name a few. Many advanced crocheters have also incorporated recycled materials into their work in an effort to "go green" and experiment with new textures by using items such as plastic bags, old t-shirts or sheets, VCR or Cassette tape, and ribbon. Crochet hook The crochet hook comes in many sizes and materials, such as bone, bamboo, aluminium, plastic, and steel. Because sizing is categorized by the diameter of the hook's shaft, a crafter aims to create stitches of a certain size in order to reach a particular gauge specified in a given pattern. If gauge is not reached with one hook, another is used until the stitches made are the needed size. Crafters may have a preference for one type of hook material over another due to aesthetic appeal, yarn glide, or hand disorders such as arthritis, where bamboo or wood hooks are favored over metal for the perceived warmth and flexibility during use. Hook grips and ergonomic hook handles are also available to assist crafters. Steel crochet hooks range in size from 0.4 to 3.5 millimeters, or from 00 to 16 in American sizing. These hooks are used for fine crochet work such as doilies and lace. Aluminium, bamboo, and plastic crochet hooks are available from 2.5 to 19 millimeters in size, or from B to S in American sizing. Artisan-made hooks are often made of hand-turned woods, sometimes decorated with semi-precious stones or beads. Crochet hooks used for Tunisian crochet are elongated and have a stopper at the end of the handle, while double-ended crochet hooks have a hook on both ends of the handle. There is also a double hooked apparatus called a Cro-hook that has become popular. A hairpin loom is often used to create lacy and long stitches, known as hairpin lace. While this is not in itself a hook, it is a device used in conjunction with a crochet hook to produce stitches. 
See : List of United States standard crochet hook and knitting needle sizes Yarn Yarn for crochet is usually sold as balls, or skeins (hanks), although it may also be wound on spools or cones. Skeins and balls are generally sold with a yarn band, a label that describes the yarn's weight, length, dye lot, fiber content, washing instructions, suggested needle size, likely gauge, etc. It is a common practice to save the yarn band for future reference, especially if additional skeins must be purchased. Crocheters generally ensure that the yarn for a project comes from a single dye lot. The dye lot specifies a group of skeins that were dyed together and thus have precisely the same color; skeins from different dye lots, even if very similar in color, are usually slightly different and may produce a visible stripe when added onto existing work. If insufficient yarn of a single dye lot is bought to complete a project, additional skeins of the same dye lot can sometimes be obtained from other yarn stores
or online. The thickness or weight of the yarn is a significant factor in determining how many stitches and rows are required to cover a given area for a given stitch pattern. This is also termed the gauge. 
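As a rough illustration of how gauge is used in practice, a crocheter who has measured a swatch can scale it to a target size. The sketch below uses assumed example figures only (the 10 cm swatch convention, the counts, and the function name are illustrative, not taken from any particular pattern or standard):

```python
# Illustrative sketch: scaling a measured gauge swatch to a target size.
# All figures here are assumed example values, not from any real pattern.

def stitches_and_rows(target_width_cm, target_height_cm,
                      swatch_sts, swatch_rows,
                      swatch_width_cm=10.0, swatch_height_cm=10.0):
    """Return (stitches, rows) needed to cover the target area,
    given a swatch of swatch_sts x swatch_rows over the stated swatch size."""
    sts_per_cm = swatch_sts / swatch_width_cm
    rows_per_cm = swatch_rows / swatch_height_cm
    return (round(target_width_cm * sts_per_cm),
            round(target_height_cm * rows_per_cm))

# Example: a swatch of 16 stitches and 18 rows per 10 cm square,
# scaled to a 50 cm x 60 cm panel.
print(stitches_and_rows(50, 60, 16, 18))  # -> (80, 108)
```

If a swatch comes out with more stitches per centimetre than the pattern expects, a larger hook brings the count back down, and vice versa.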
Thicker yarns generally require large-diameter crochet hooks, whereas thinner yarns may be crocheted with thick or thin hooks. Hence, thicker yarns generally require fewer stitches, and therefore less time, to work up a given project. The recommended gauge for a given ball of yarn can be found on the label that surrounds the skein when buying in stores. Patterns and motifs are coarser with thicker yarns and produce bold visual effects, whereas thinner yarns are best for refined or delicate pattern-work. Yarns are standardly grouped by thickness into six categories: superfine, fine, light, medium, bulky and superbulky. Quantitatively, thickness is measured by the number of wraps per inch (WPI). The related weight per unit length is usually measured in tex or denier. Before use, hanks are wound into balls in which the yarn emerges from the center, making crocheting easier by preventing the yarn from becoming easily tangled. The winding process may be performed by hand or done with a ballwinder and swift. A yarn's usefulness is judged by several factors, such as its loft (its ability to trap air), its resilience (elasticity under tension), its washability and colorfastness, its hand (its feel, particularly softness vs. scratchiness), its durability against abrasion, its resistance to pilling, its hairiness (fuzziness), its tendency to twist or untwist, its overall weight and drape, its blocking and felting qualities, its comfort (breathability, moisture absorption, wicking properties) and its appearance, which includes its color, sheen, smoothness and ornamental features. Other factors include allergenicity, speed of drying, resistance to chemicals, moths, and mildew, melting point and flammability, retention of static electricity, and the propensity to accept dyes. Desirable properties may vary for different projects, so there is no one "best" yarn. Although crochet may be done with ribbons, metal wire or more exotic filaments, most yarns are made by spinning fibers. In spinning, the fibers are twisted so that the yarn resists breaking under tension; the twisting may be done in either direction, resulting in a Z-twist or S-twist yarn. If the fibers are first aligned by combing them and the spinner uses a worsted type drafting method such as the short forward draw, the yarn is smoother and called a worsted; by contrast, if the fibers are carded but not combed and the spinner uses a woolen drafting method such as the long backward draw, the yarn is fuzzier and called woolen-spun. The fibers making up a yarn may be continuous filament fibers such as silk and many synthetics, or they may be staples (fibers of an average length, typically a few inches); naturally filament fibers are sometimes cut up into staples before spinning. The strength of the spun yarn against breaking is determined by the amount of twist, the length of the fibers and the thickness of the yarn. In general, yarns become stronger with more twist (also called worst), longer fibers and thicker yarns (more fibers); for example, thinner yarns require more twist than do thicker yarns to resist breaking under tension. The thickness of the yarn may vary along its length; a slub is a much thicker section in which a mass of fibers is incorporated into the yarn. The spun fibers are generally divided into animal fibers, plant and synthetic fibers. These fiber types are chemically different, corresponding to proteins, carbohydrates and synthetic polymers, respectively. 
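For the weight-per-length units mentioned earlier in this section, the definitions are fixed: tex is the mass in grams of 1,000 metres of yarn, and denier is the mass in grams of 9,000 metres, so the two always differ by a factor of nine. A minimal conversion sketch (the sample values are arbitrary assumptions):

```python
# tex    = grams per 1,000 m of yarn
# denier = grams per 9,000 m of yarn, i.e. always nine times the tex value

def tex_to_denier(tex: float) -> float:
    return tex * 9.0

def denier_to_tex(denier: float) -> float:
    return denier / 9.0

print(tex_to_denier(25.0))   # 225.0 denier for an assumed 25 tex yarn
print(denier_to_tex(70.0))   # ~7.8 tex for an assumed 70 denier thread
```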
Animal fibers include silk, but generally are long hairs of animals such as sheep (wool), goat (angora, or cashmere goat), rabbit (angora), llama, alpaca, dog, cat, camel, yak, and muskox (qiviut). Plants used for fibers include cotton, flax (for linen), bamboo, ramie, hemp, jute, nettle, raffia, yucca, coconut husk, banana trees, soy and corn. Rayon and acetate fibers are also produced from cellulose mainly derived from trees. Common synthetic fibers include acrylics, polyesters such as dacron and ingeo, nylon and other polyamides, and olefins such as polypropylene. Of these types, wool is generally favored for crochet, chiefly owing to its superior elasticity, warmth and (sometimes) felting; however, wool is generally less convenient to clean and some people are allergic to it. It is also common to blend different fibers in the yarn, e.g., 85% alpaca and 15% silk. Even within a type of fiber, there can be great variety in the length and thickness of the fibers; for example, Merino wool and Egyptian cotton are favored because they produce exceptionally long, thin (fine) fibers for their type. A single spun yarn may be crocheted as is, or braided or plied with another. In plying, two or more yarns are spun together, almost always in the opposite sense from which they were spun individually; for example, two Z-twist yarns are usually plied with an S-twist. The opposing twist relieves some of the yarns' tendency to curl up and produces a thicker, balanced yarn. Plied yarns may themselves be plied together, producing cabled yarns or multi-stranded yarns. Sometimes, the yarns being plied are fed at different rates, so that one yarn loops around the other, as in bouclé. The single yarns may be dyed separately before plying, or afterwards to give the yarn a uniform look. The dyeing of yarns is a complex art. Yarns need not be dyed; or they may be dyed one color, or a great variety of colors. Dyeing may be done industrially, by hand or even hand-painted onto the yarn. A great variety of synthetic dyes have been developed since the synthesis of indigo dye in the mid-19th century; however, natural dyes are also possible, although they are generally less brilliant. The color-scheme of a yarn is sometimes called its colorway. Variegated yarns can produce interesting visual effects, such as diagonal stripes. Process Crocheted fabric is begun by placing a slip-knot loop on the hook (though other methods, such as a magic ring or simple folding over of the yarn may be used), pulling another loop through the first loop, and repeating this process to create a chain of a suitable length. The chain is either turned and worked in rows, or joined to the beginning of the row with a slip stitch and worked in rounds. Rounds can also be created by working many stitches into a single loop. Stitches are made by pulling one or more loops through each loop of the chain. At any one time at the end of a stitch, there is only one loop left on the hook. Tunisian crochet, however, draws all of the loops for an entire row onto a long hook before working them off one at a time. Like knitting, crochet can be worked either flat (back and forth in rows) or in the round (in spirals, such as when making tubular pieces). Types of stitches There are six main types of basic stitches (the following description uses US crochet terminology which differs from the terminology used in the UK and Europe). Chain stitch – the most basic of all stitches and used to begin most projects. Slip stitch – used to join chain stitch to form a ring. 
Single crochet stitch (called double crochet stitch in the UK) – easiest stitch to master (see single crochet stitch tutorial) Half-double crochet stitch (called half treble stitch in the UK) – the 'in-between' stitch (see half-double crochet tutorial) Double crochet stitch (called treble stitch in the UK) (yarn over once) – many uses for this unlimited use stitch (see double crochet stitch tutorial) Treble (or triple) crochet stitch (called double treble stitch in the UK) (yarn over twice) While the horizontal distance covered by these basic stitches is the same, they differ in height and thickness. The more advanced stitches are often combinations of these basic stitches, or are made by inserting the hook into the work in unusual locations. More advanced stitches include the shell stitch, V stitch, spike stitch, Afghan stitch, butterfly stitch, popcorn stitch, cluster stitch, and crocodile stitch. International crochet terms and notations In the English-speaking crochet world, basic stitches have different names that vary by country. The differences are usually referred to as UK/US or British/American. Crochet is traditionally worked off a written pattern in which stitches and placement are communicated using textual abbreviations. To help counter confusion when reading patterns, a diagramming system using a standard international notation has come into use (illustration, left). In the United States, crochet terminology and sizing guidelines, as well as standards for yarn and hook labeling, are primarily regulated by the Craft Yarn Council. Another terminological difference is known as tension (UK) and gauge (US). Individual crocheters work yarn with a loose or a tight hold and, if unmeasured, these differences can lead to significant size changes in finished garments that have the same number of stitches. In order to control for this inconsistency, printed crochet instructions include a standard for the number of stitches across a standard swatch of fabric. An individual crocheter begins work by producing a test swatch and compensating for any discrepancy by changing to a smaller or larger hook. North Americans call this gauge, referring to the result of these adjustments; British crocheters speak of tension, which refers to the crafter's grip on the yarn while producing stitches. Differences from and similarities to knitting One of the more obvious differences is that crochet uses one hook while much knitting uses two needles. In most crochet, the artisan usually has only one live stitch on the hook (with the exception being Tunisian crochet), while a knitter keeps an entire row of stitches active simultaneously. Dropped stitches, which can unravel a knitted fabric, rarely interfere with crochet work, due to a second structural difference between knitting and crochet. In knitting, each stitch is supported by the corresponding stitch in the row above and it supports the corresponding stitch in the row below, whereas crochet stitches are only supported by and support the stitches on either side of it. If a stitch in a finished crocheted item breaks, the stitches above and below remain intact, and because of the complex looping of each stitch, the stitches on either side are unlikely to come loose unless heavily stressed. Round or cylindrical patterns are simple to produce with a regular crochet hook, but cylindrical knitting requires either a set of circular needles or three to five special double-ended needles. 
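Because the US and UK naming differences listed above are a frequent source of confusion, a pattern's terminology should be identified before working from it. Below is a minimal lookup sketch of the US-to-UK correspondence given in this section (the dictionary and function names are illustrative, not part of any published standard):

```python
# US-to-UK correspondence for the basic stitches described above.
US_TO_UK = {
    "single crochet": "double crochet",
    "half double crochet": "half treble",
    "double crochet": "treble",
    "treble crochet": "double treble",
}

def to_uk(us_term: str) -> str:
    """Translate a US stitch name to its UK equivalent, if known."""
    return US_TO_UK.get(us_term.lower(), us_term)

print(to_uk("double crochet"))  # -> "treble"
```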
Many crocheted items are composed of individual motifs which are then joined, either by sewing or crocheting, whereas knitting is usually composed of one fabric, such as entrelac. Freeform crochet is a technique that can create interesting shapes in three dimensions because new stitches can be made independently of previous stitches almost anywhere in the crocheted piece. It is generally accomplished by building shapes or structural elements onto existing crocheted fabric at any place the crafter desires. Knitting can be accomplished by machine, while many crochet stitches can only be crafted by hand. The height of knitted and crocheted stitches is also different: a single crochet stitch is twice the height of a knit stitch in the same yarn size and comparable diameter tools, and a double crochet stitch is about four times the height of a knit stitch. While most crochet is made with a hook, there is also a method of crocheting with a knitting loom. This is called loomchet. Slip stitch crochet is very similar to knitting. Each stitch in slip stitch crochet is formed the same way as a knit or purl stitch which is then bound off. A person working in slip stitch crochet can follow a knitted pattern with knits, purls, and cables, and get a similar result. It is a common perception that crochet produces a thicker fabric than knitting, tends to have less "give" than knitted fabric, and uses approximately a third more yarn for a comparable project than knitted items. Although this is true when comparing a single crochet swatch with a stockinette swatch, both made with the same size yarn and needle/hook, it is not necessarily true for crochet in general. Most crochet uses far less than 1/3 more yarn than knitting for comparable pieces, and a crocheter can get similar feel and drape to knitting by using a larger hook or thinner yarn. Tunisian crochet and slip stitch crochet can in some cases use less yarn than knitting for comparable pieces. According to sources claiming to have tested the
one winding, insulated electrically from each other. When there are two or more windings around a common magnetic axis, the windings are said to be inductively coupled or magnetically coupled. A time-varying current through one winding will create a time-varying magnetic field that passes through the other winding, which will induce a time-varying voltage in the other windings. This is called a transformer. The winding to which current is applied, which creates the magnetic field, is called the primary winding. The other windings are called secondary windings. Magnetic core Many electromagnetic coils have a magnetic core, a piece of ferromagnetic material like iron in the center to increase the magnetic field. The current through the coil magnetizes the iron, and the field of the magnetized material adds to the field produced by the wire. This is called a ferromagnetic-core or iron-core coil. A ferromagnetic core can increase the magnetic field and inductance of a coil by hundreds or thousands of times over what it would be without the core. A ferrite core coil is a variety of coil with a core made of ferrite, a ferrimagnetic ceramic compound. Ferrite coils have lower core losses at high frequencies. A coil with a core which forms a closed loop, possibly with some narrow air gaps, is called a closed-core coil. By providing a closed path for the magnetic field lines, this geometry minimizes the magnetic reluctance and produces the strongest magnetic field. It is often used in transformers. A common form for closed-core coils is a toroidal core coil, in which the core has the shape of a torus or doughnut, with either a circular or rectangular cross section. This geometry has minimum leakage flux and radiates minimum electromagnetic interference (EMI). A coil with a core which is a straight bar or other non-loop shape is called an open-core coil. This has lower magnetic field and inductance than a closed core, but is often used to prevent magnetic saturation of the core. A coil without a ferromagnetic core is called an air-core coil. This includes coils wound on plastic or other nonmagnetic forms, as well as coils which actually have empty air space inside their windings. Types of coils Coils can be classified by the frequency of the current they are designed to operate with: Direct current or DC coils or electromagnets operate with a steady direct current in their windings Audio-frequency or AF coils, inductors or transformers operate with alternating currents in the audio frequency range, less than 20 kHz Radio-frequency or RF coils, inductors or transformers operate with alternating currents in the radio frequency range, above 20 kHz Coils can be classified by their function: Electromagnets Electromagnets are coils that generate a magnetic field for some external use, often to exert a mechanical force on something. A few specific types: Solenoid - an electromagnet in the form of a straight hollow helix of wire Motor and generator windings - iron core electromagnets on the rotor or stator of electric motors and generators which act on each other to either turn the shaft (motor) or generate an electric current (generator) Field winding - an iron-core coil which generates a steady magnetic field to act on the armature winding. 
Armature winding - an iron-core coil which is acted on by the magnetic field of the field winding to either create torque (motor) or induce a voltage to produce power (generator) Helmholtz coil, Maxwell coil - air-core coils which serve to cancel an external magnetic field Degaussing coil - a coil used to demagnetize parts Voice coil - a coil used in a moving-coil loudspeaker, suspended between the poles of a magnet. When the audio signal is passed through the coil, it vibrates, moving the attached speaker cone to create sound waves. The reverse is used in a dynamic microphone, where sound vibrations intercepted by something like a diaphragm physically transfer to a voice coil immersed in a magnetic field, and the coil's terminal ends then provide an electric analog of those vibrations. Inductors Inductors or reactors are coils which generate a magnetic field which interacts with the coil itself, to induce a back EMF which opposes changes in current through the coil. Inductors are used as circuit elements in electrical circuits, to temporarily store energy or resist changes in current. A few types: Tank coil - an inductor used in a tuned circuit Choke - an inductor used to block high frequency AC while allowing through low frequency AC. Loading coil - an inductor used to add inductance to an antenna, to make it resonant, or to a cable to prevent distortion of signals. Variometer - an adjustable inductor consisting of two coils in series, an outer stationary coil and a second one inside it which can be rotated so their magnetic axes are in the same direction or opposed. Flyback transformer - Although called a transformer, this is actually an inductor which serves to store energy in switching power supplies and horizontal deflection circuits for CRT televisions and monitors Saturable reactor - an iron-core inductor used to control AC power by varying the saturation of the core using a DC control voltage in an auxiliary winding. Inductive ballast - an inductor used in gas-discharge lamp circuits, such as fluorescent lamps, to limit the current through the lamp. Transformers A transformer is a device with two or more magnetically coupled windings (or sections of a single winding). A time varying current in one coil (called the primary winding) generates a magnetic field which induces a voltage in the other coil (called the secondary winding). A few types: Distribution transformer - A transformer in an electric power grid which transforms the high voltage from the electric power line to the lower voltage used by utility customers. Autotransformer - a transformer with only one winding. Different portions of the winding, accessed with taps, act as primary and secondary windings of the transformer. Toroidal transformer - the core is in the shape of a toroid. This is a commonly used shape as it decreases the leakage flux, resulting in less electromagnetic interference. Induction coil or trembler coil - an early transformer which uses a vibrating interrupter mechanism to break the primary current so it can operate off of DC current. Ignition coil - an induction coil used in internal combustion engines to create a pulse of high voltage to fire
the spark plug which initiates the fuel burning. Balun - a transformer which matches a balanced transmission line to an unbalanced one. Bifilar coil - a coil wound with two parallel, closely spaced strands. If AC currents are passed through it in the same direction, the magnetic fluxes will add, but if equal currents in opposite directions pass through the windings the opposite fluxes will cancel, resulting in zero flux in the core. So no voltage will be induced in a third winding on the core. These are used in instruments and in devices like Ground Fault Interrupters. 
Electric machines

Electric machines such as motors and generators have one or more windings which interact with moving magnetic fields to convert electrical energy to mechanical energy, or mechanical energy to electrical energy. Often a machine has one winding through which most of its power passes (the "armature"), and a second winding which provides the magnetic field of the rotating element (the "field winding"), which may be connected by brushes or slip rings to an external source of electric current. In an induction motor, the "field" winding of the rotor is energized by the slow relative motion between the rotating winding and the rotating magnetic field produced by the stator winding, which induces the necessary exciting current in the rotor.

Transducer coils

These are coils used to translate time-varying magnetic fields into electric signals, and vice versa. A few types:
Sensor or pickup coils - coils used to detect external time-varying magnetic fields
Inductive sensor - a coil which senses when a magnet or iron object passes near it
Recording head - a coil used to create a magnetic field that writes data to a magnetic storage medium, such as magnetic tape or a hard disk; conversely, it is also used to read the data back as changing magnetic fields in the medium
Induction heating coil - an AC coil used to heat an object by inducing eddy currents in it, a process called induction heating
Loop antenna - a coil which serves as a radio antenna, converting radio waves to electric currents
Rogowski coil - a toroidal coil used as an AC current-measuring device
Musical instrument pickup - a coil used to produce the output audio signal in an electric guitar or electric bass
Flux gate - a sensor coil used in a magnetometer
Magnetic phonograph cartridge - a sensor in a record player that uses a coil to translate the vibration of the needle into an audio signal when playing vinyl records
There are also types of coil which do not fit into these categories.
Winding technology
See also
Hanna curve
References
Further reading
Querfurth, William, "Coil winding; a description of coil winding procedures, winding machines and associated equipment for the electronic industry" (2d ed.). Chicago, G. Stevens Mfg. Co., 1958.
Weymouth, F. Marten, "Drum armatures and commutators (theory and practice)": a complete treatise on the
With the encouragement of his Protestant advisers, James summoned the English Parliament in 1624 to request subsidies for a war. Charles and Buckingham supported the impeachment of the Lord Treasurer, Lionel Cranfield, 1st Earl of Middlesex, who opposed war on grounds of cost and quickly fell in much the same manner Bacon had. James told Buckingham he was a fool, and presciently warned Charles that he would live to regret the revival of impeachment as a parliamentary tool. An underfunded makeshift army under Ernst von Mansfeld set off to recover the Palatinate, but it was so poorly provisioned that it never advanced beyond the Dutch coast. By 1624, the increasingly ill James was finding it difficult to control Parliament. By the time of his death in March 1625, Charles and the Duke of Buckingham had already assumed de facto control of the kingdom. Early reign With the failure of the Spanish match, Charles and Buckingham turned their attention to France. On 1 May 1625 Charles was married by proxy to the 15-year-old French princess Henrietta Maria in front of the doors of Notre Dame de Paris. He had seen her in Paris while en route to Spain. The married couple met in person on 13 June 1625 in Canterbury. Charles delayed the opening of his first Parliament until after the marriage was consummated, to forestall any opposition. Many members of the Commons opposed his marriage to a Roman Catholic, fearing that he would lift restrictions on Catholic recusants and undermine the official establishment of the reformed Church of England. Charles told Parliament that he would not relax religious restrictions, but promised to do exactly that in a secret marriage treaty with his brother-in-law Louis XIII of France. Moreover, the treaty loaned to the French seven English naval ships that were used to suppress the Protestant Huguenots at La Rochelle in September 1625. Charles was crowned on 2 February 1626 at Westminster Abbey, but without his wife at his side, because she refused to participate in a Protestant religious ceremony. Distrust of Charles's religious policies increased with his support of a controversial anti-Calvinist ecclesiastic, Richard Montagu, who was in disrepute among the Puritans. In his pamphlet A New Gag for an Old Goose (1624), a reply to the Catholic pamphlet A New Gag for the New Gospel, Montagu argued against Calvinist predestination, the doctrine that God preordained salvation and damnation. Anti-Calvinists, known as Arminians, believed that people could influence their fates by exercising free will. Arminian divines had been one of the few sources of support for Charles's proposed Spanish marriage. With King James's support, Montagu produced another pamphlet, Appello Caesarem, in 1625, shortly after the old king's death and Charles's accession. To protect Montagu from the stricture of Puritan members of Parliament, Charles made him a royal chaplain, heightening many Puritans' suspicions that Charles favoured Arminianism as a clandestine attempt to aid Catholicism's resurgence. Rather than direct involvement in the European land war, the English Parliament preferred a relatively inexpensive naval attack on Spanish colonies in the New World, hoping for the capture of the Spanish treasure fleets. Parliament voted to grant a subsidy of £140,000, an insufficient sum for Charles's war plans.
Moreover, the House of Commons limited its authorisation for royal collection of tonnage and poundage (two varieties of customs duties) to a year, although previous sovereigns since Henry VI had been granted the right for life. In this manner, Parliament could delay approval of the rates until after a full-scale review of customs revenue. The bill made no progress in the House of Lords past its first reading. Although no Parliamentary Act for the levy of tonnage and poundage was obtained, Charles continued to collect the duties. A poorly conceived and executed naval expedition against Spain under Buckingham's leadership went badly, and the House of Commons began proceedings for the impeachment of the duke. In May 1626, Charles nominated Buckingham as Chancellor of Cambridge University in a show of support, and had two members who had spoken against Buckingham, Dudley Digges and Sir John Eliot, arrested at the door of the House. The Commons was outraged by the imprisonment of two of their members, and after about a week in custody, both were released. On 12 June 1626, the Commons launched a direct protestation attacking Buckingham, stating, "We protest before your Majesty and the whole world that until this great person be removed from intermeddling with the great affairs of state, we are out of hope of any good success; and do fear that any money we shall or can give will, through his misemployment, be turned rather to the hurt and prejudice of this your kingdom than otherwise, as by lamentable experience we have found those large supplies formerly and lately given." Despite the protests, Charles refused to dismiss his friend, dismissing Parliament instead. Meanwhile, domestic quarrels between Charles and Henrietta Maria were souring the early years of their marriage. Disputes over her jointure, appointments to her household, and the practice of her religion culminated in the king expelling the vast majority of her French attendants in August 1626. Despite Charles's agreement to provide the French with English ships as a condition of marrying Henrietta Maria, in 1627 he launched an attack on the French coast to defend the Huguenots at La Rochelle. The action, led by Buckingham, was ultimately unsuccessful. Buckingham's failure to protect the Huguenots, and his retreat from Saint-Martin-de-Ré, spurred Louis XIII's siege of La Rochelle and furthered the English Parliament's and people's detestation of the duke. Charles provoked further unrest by trying to raise money for the war through a "forced loan": a tax levied without parliamentary consent. In November 1627, the test case in the King's Bench, the "Five Knights' Case", found that the king had a prerogative right to imprison without trial those who refused to pay the forced loan. Summoned again in March 1628, on 26 May Parliament adopted a Petition of Right, calling upon Charles to acknowledge that he could not levy taxes without Parliament's consent, impose martial law on civilians, imprison them without due process, or quarter troops in their homes. Charles assented to the petition on 7 June, but by the end of the month he had prorogued Parliament and reasserted his right to collect customs duties without authorisation from Parliament. On 23 August 1628, Buckingham was assassinated. Charles was deeply distressed. According to Edward Hyde, 1st Earl of Clarendon, he "threw himself upon his bed, lamenting with much passion and with abundance of tears". He remained grieving in his room for two days.
In contrast, the public rejoiced at Buckingham's death, accentuating the gulf between the court and the nation and between the Crown and the Commons. Buckingham's death effectively ended the war with Spain and eliminated his leadership as an issue, but it did not end the conflicts between Charles and Parliament. It did, however, coincide with an improvement in Charles's relationship with his wife, and by November 1628 their old quarrels were at an end. Perhaps Charles's emotional ties were transferred from Buckingham to Henrietta Maria. She became pregnant for the first time, and the bond between them grew stronger. Together, they embodied an image of virtue and family life, and their court became a model of formality and morality. Personal rule Parliament prorogued In January 1629, Charles opened the second session of the English Parliament, which had been prorogued in June 1628, with a moderate speech on the tonnage and poundage issue. Members of the House of Commons began to voice opposition to Charles's policies in light of the case of John Rolle, a Member of Parliament whose goods had been confiscated for failing to pay tonnage and poundage. Many MPs viewed the imposition of the tax as a breach of the Petition of Right. When Charles ordered a parliamentary adjournment on 2 March, members held the Speaker, Sir John Finch, down in his chair so that the session could be prolonged long enough for resolutions against Catholicism, Arminianism and tonnage and poundage to be read out and acclaimed by the chamber. The provocation was too much for Charles, who dissolved Parliament and had nine parliamentary leaders, including Sir John Eliot, imprisoned over the matter, thereby turning the men into martyrs and giving popular cause to their protest. Personal rule necessitated peace. Without the means in the foreseeable future to raise funds from Parliament for a European war, or Buckingham's help, Charles made peace with France and Spain. The next 11 years, during which Charles ruled England without a Parliament, are known as the personal rule or the "eleven years' tyranny". Ruling without Parliament was not exceptional, and was supported by precedent. But only Parliament could legally raise taxes, and without it Charles's capacity to acquire funds for his treasury was limited to his customary rights and prerogatives. Finances A large fiscal deficit had arisen during the reigns of Elizabeth I and James I. Notwithstanding Buckingham's short-lived campaigns against both Spain and France, Charles had little financial capacity to wage wars overseas. Throughout his reign, he was obliged to rely primarily on volunteer forces for defence and on diplomatic efforts to support his sister, Elizabeth, and his foreign policy objective for the restoration of the Palatinate. England was still the least taxed country in Europe, with no official excise and no regular direct taxation. To raise revenue without reconvening Parliament, Charles resurrected an all-but-forgotten law called the "Distraint of Knighthood", in abeyance for over a century, which required any man who earned £40 or more from land each year to present himself at the king's coronation to be knighted. Relying on this old statute, Charles fined those who had failed to attend his coronation in 1626. The chief tax Charles imposed was a feudal levy known as ship money, which proved even more unpopular, and lucrative, than tonnage and poundage before it. 
Previously, collection of ship money had been authorised only during wars, and only in coastal regions. But Charles argued that there was no legal bar to collecting the tax for defence during peacetime and throughout the whole of the kingdom. Ship money, paid directly to the Treasury of the Navy, provided between £150,000 and £200,000 annually between 1634 and 1638, after which yields declined. Opposition to ship money steadily grew, but England's 12 common law judges ruled the tax within the king's prerogative, though some of them had reservations. The prosecution of John Hampden for non-payment in 1637–38 provided a platform for popular protest, and the judges found against Hampden only by the narrow margin of 7–5. Charles also derived money by granting monopolies, despite a statute forbidding such action; though inefficient, these grants raised an estimated £100,000 a year in the late 1630s. One such monopoly was for soap, pejoratively referred to as "popish soap" because some of its backers were Catholics. Charles also raised funds from the Scottish nobility, at the price of considerable acrimony, by the Act of Revocation (1625), whereby all gifts of royal or church land made to the nobility since 1540 were revoked, with continued ownership being subject to an annual rent. In addition, the boundaries of the royal forests in England were restored to their ancient limits as part of a scheme to maximise income by exploiting the land and fining land users within the reasserted boundaries for encroachment. The programme's focus was disafforestation and sale of forest lands for conversion to pasture and arable farming, or in the case of the Forest of Dean, development for the iron industry. Disafforestation frequently caused riots and disturbances, including those known as the Western Rising. Against the background of this unrest, Charles faced bankruptcy in mid-1640. The City of London, preoccupied with its own grievances, refused to make any loans to him, as did foreign powers. In this extremity, in July Charles seized silver bullion worth £130,000 held in trust at the mint in the Tower of London, promising its later return at 8% interest to its owners. In August, after the East India Company refused to grant a loan, Lord Cottington seized the company's stock of pepper and spices and sold it for £60,000 (far below its market value), promising to refund the money with interest later. Religious conflicts Throughout Charles's reign, the English Reformation was in the forefront of political debate. Arminian theology emphasised clerical authority and the individual's ability to reject or accept salvation, which opponents viewed as heretical and a potential vehicle for the reintroduction of Roman Catholicism. Puritan reformers thought Charles too sympathetic to the teachings of Arminianism, which they considered irreligious, and opposed his desire to move the Church of England in a more traditional and sacramental direction. In addition, his Protestant subjects followed the European war closely and grew increasingly dismayed by Charles's diplomacy with Spain and his failure to support the Protestant cause abroad effectively. In 1633, Charles appointed William Laud Archbishop of Canterbury.
They initiated a series of reforms to promote religious uniformity by restricting non-conformist preachers, insisting the liturgy be celebrated as prescribed by the Book of Common Prayer, organising the internal architecture of English churches to emphasise the sacrament of the altar, and reissuing King James's Declaration of Sports, which permitted secular activities on the sabbath. The Feoffees for Impropriations, an organisation that bought benefices and advowsons so that Puritans could be appointed to them, was dissolved. Laud prosecuted those who opposed his reforms in the Court of High Commission and the Star Chamber, the two most powerful courts in the land. The courts became feared for their censorship of opposing religious views and unpopular among the propertied classes for inflicting degrading punishments on gentlemen. For example, in 1637 William Prynne, Henry Burton and John Bastwick were pilloried, whipped and mutilated by cropping and imprisoned indefinitely for publishing anti-episcopal pamphlets. When Charles attempted to impose his religious policies in Scotland he faced numerous difficulties. Although born in Scotland, Charles had become estranged from it; his first visit since early childhood was for his Scottish coronation in 1633. To the dismay of the Scots, who had removed many traditional rituals from their liturgical practice, Charles insisted that the coronation be conducted using the Anglican rite. In 1637, he ordered the use of a new prayer book in Scotland that was almost identical to the English Book of Common Prayer, without consulting either the Scottish Parliament or the Kirk. Although it had been written, under Charles's direction, by Scottish bishops, many Scots resisted it, seeing it as a vehicle to introduce Anglicanism to Scotland. On 23 July, riots erupted in Edinburgh upon the first Sunday of the prayer book's usage, and unrest spread throughout the Kirk. The public began to mobilise around a reaffirmation of the National Covenant, whose signatories pledged to uphold the reformed religion of Scotland and reject any innovations not authorised by Kirk and Parliament. When the General Assembly of the Church of Scotland met in November 1638, it condemned the new prayer book, abolished episcopal church government by bishops, and adopted presbyterian government by elders and deacons. Bishops' Wars Charles perceived the unrest in Scotland as a rebellion against his authority, precipitating the First Bishops' War in 1639. He did not seek subsidies from the English Parliament to wage war, instead raising an army without parliamentary aid and marching to Berwick-upon-Tweed, on the Scottish border. The army did not engage the Covenanters, as the king feared the defeat of his forces, whom he believed to be significantly outnumbered by the Scots. In the Treaty of Berwick, Charles regained custody of his Scottish fortresses and secured the dissolution of the Covenanters' interim government, albeit at the decisive concession that both the Scottish Parliament and General Assembly of the Scottish Church were called. The military failure in the First Bishops' War caused a financial and diplomatic crisis for Charles that deepened when his efforts to raise funds from Spain while simultaneously continuing his support for his Palatine relatives led to the public humiliation of the Battle of the Downs, where the Dutch destroyed a Spanish bullion fleet off the coast of Kent in sight of the impotent English navy. 
Charles continued peace negotiations with the Scots in a bid to gain time before launching a new military campaign. Because of his financial weakness, he was forced to call Parliament into session in an attempt to raise funds for such a venture. Both English and Irish parliaments were summoned in the early months of 1640. In March 1640, the Irish Parliament duly voted in a subsidy of £180,000 with the promise to raise an army 9,000 strong by the end of May. But in the English general election in March, court candidates fared badly, and Charles's dealings with the English Parliament in April quickly reached stalemate. The earls of Northumberland and Strafford attempted to broker a compromise whereby the king would agree to forfeit ship money in exchange for £650,000 (although the cost of the coming war was estimated at around £1 million). Nevertheless, this alone was insufficient to produce consensus in the Commons. The Parliamentarians' calls for further reforms were ignored by Charles, who still retained the support of the House of Lords. Despite the protests of Northumberland, the Short Parliament (as it came to be known) was dissolved in May 1640, less than a month after it assembled. By this stage Strafford, Lord Deputy of Ireland since 1632, had emerged as
November, capturing Brentford on the way while simultaneously continuing to negotiate with civic and parliamentary delegations. At Turnham Green on the outskirts of London, the royalist army met resistance from the city militia, and faced with a numerically superior force, Charles ordered a retreat. He overwintered in Oxford, strengthening the city's defences and preparing for the next season's campaign. Peace talks between the two sides collapsed in April. The war continued indecisively over the next couple of years, and Henrietta Maria returned to Britain for 17 months from February 1643. After Rupert captured Bristol in July 1643, Charles visited the port city and laid siege to Gloucester, further up the river Severn. His plan to undermine the city walls failed due to heavy rain, and on the approach of a parliamentary relief force, Charles lifted the siege and withdrew to Sudeley Castle. The parliamentary army turned back towards London, and Charles set off in pursuit. The two armies met at Newbury, Berkshire, on 20 September. Just as at Edgehill, the battle stalemated at nightfall, and the armies disengaged. In January 1644, Charles summoned a Parliament at Oxford, which was attended by about 40 peers and 118 members of the Commons; all told, the Oxford Parliament, which sat until March 1645, was supported by the majority of peers and about a third of the Commons. Charles became disillusioned by the assembly's ineffectiveness, calling it a "mongrel" in private letters to his wife. In 1644, Charles remained in the southern half of England while Rupert rode north to relieve Newark and York, which were under threat from parliamentary and Scottish Covenanter armies. Charles was victorious at the battle of Cropredy Bridge in late June, but the royalists in the north were defeated at the battle of Marston Moor just a few days later. The king continued his campaign in the south, encircling and disarming the parliamentary army of the Earl of Essex. Returning northwards to his base at Oxford, he fought at Newbury for a second time before the winter closed in; the battle ended indecisively. Attempts to negotiate a settlement over the winter, while both sides rearmed and reorganised, were again unsuccessful. At the battle of Naseby on 14 June 1645, Rupert's horsemen again mounted a successful charge against the flank of Parliament's New Model Army, but elsewhere on the field, opposing forces pushed Charles's troops back. Attempting to rally his men, Charles rode forward, but as he did so, Lord Carnwath seized his bridle and pulled him back, fearing for the king's safety. The royalist soldiers misinterpreted Carnwath's action as a signal to move back, leading to a collapse of their position. The military balance tipped decisively in favour of Parliament. There followed a series of defeats for the royalists, and then the siege of Oxford, from which Charles escaped (disguised as a servant) in April 1646. He put himself into the hands of the Scottish presbyterian army besieging Newark, and was taken northwards to Newcastle upon Tyne. After nine months of negotiations, the Scots finally arrived at an agreement with the English Parliament: in exchange for £100,000, and the promise of more money in the future, the Scots withdrew from Newcastle and delivered Charles to the parliamentary commissioners in January 1647. 
Captivity Parliament held Charles under house arrest at Holdenby House in Northamptonshire until Cornet George Joyce took him by threat of force from Holdenby on 3 June in the name of the New Model Army. By this time, mutual suspicion had developed between Parliament, which favoured army disbandment and presbyterianism, and the New Model Army, which was primarily officered by congregationalist Independents, who sought a greater political role. Charles was eager to exploit the widening divisions, and apparently viewed Joyce's actions as an opportunity rather than a threat. He was taken first to Newmarket, at his own suggestion, and then transferred to Oatlands and subsequently Hampton Court, while more fruitless negotiations took place. By November, he determined that it would be in his best interests to escape—perhaps to France, Southern England or Berwick-upon-Tweed, near the Scottish border. He fled Hampton Court on 11 November, and from the shores of Southampton Water made contact with Colonel Robert Hammond, Parliamentary Governor of the Isle of Wight, whom he apparently believed to be sympathetic. But Hammond confined Charles in Carisbrooke Castle and informed Parliament that Charles was in his custody. From Carisbrooke, Charles continued to try to bargain with the various parties. In direct contrast to his previous conflict with the Scottish Kirk, on 26 December 1647 he signed a secret treaty with the Scots. Under the agreement, called the "Engagement", the Scots undertook to invade England on Charles's behalf and restore him to the throne on condition that presbyterianism be established in England for three years. The royalists rose in May 1648, igniting the Second Civil War, and as agreed with Charles, the Scots invaded England. Uprisings in Kent, Essex, and Cumberland, and a rebellion in South Wales, were put down by the New Model Army, and with the defeat of the Scots at the Battle of Preston in August 1648, the royalists lost any chance of winning the war. Charles's only recourse was to return to negotiations, which were held at Newport on the Isle of Wight. On 5 December 1648, Parliament voted 129 to 83 to continue negotiating with the king, but Oliver Cromwell and the army opposed any further talks with someone they viewed as a bloody tyrant and were already taking action to consolidate their power. Hammond was replaced as Governor of the Isle of Wight on 27 November, and placed in the custody of the army the following day. In Pride's Purge on 6 and 7 December, the members of Parliament out of sympathy with the military were arrested or excluded by Colonel Thomas Pride, while others stayed away voluntarily. The remaining members formed the Rump Parliament. It was effectively a military coup. Trial Charles was moved to Hurst Castle at the end of 1648, and thereafter to Windsor Castle. In January 1649, the Rump House of Commons indicted him for treason; the House of Lords rejected the charge. The idea of trying a king was novel. The Chief Justices of the three common law courts of England—Henry Rolle, Oliver St John and John Wilde—all opposed the indictment as unlawful. The Rump Commons declared itself capable of legislating alone, passed a bill creating a separate court for Charles's trial, and declared the bill an act without the need for royal assent. The High Court of Justice established by the Act consisted of 135 commissioners, but many either refused to serve or chose to stay away. 
Only 68 (all firm Parliamentarians) attended Charles's trial on charges of high treason and "other high crimes" that began on 20 January 1649 in Westminster Hall. John Bradshaw acted as President of the Court, and the prosecution was led by Solicitor General John Cook. Charles was accused of treason against England by using his power to pursue his personal interest rather than the good of the country. The charge stated that he, "for accomplishment of such his designs, and for the protecting of himself and his adherents in his and their wicked practices, to the same ends hath traitorously and maliciously levied war against the present Parliament, and the people therein represented", and that the "wicked designs, wars, and evil practices of him, the said Charles Stuart, have been, and are carried on for the advancement and upholding of a personal interest of will, power, and pretended prerogative to himself and his family, against the public interest, common right, liberty, justice, and peace of the people of this nation." Presaging the modern concept of command responsibility, the indictment held him "guilty of all the treasons, murders, rapines, burnings, spoils, desolations, damages and mischiefs to this nation, acted and committed in the said wars, or occasioned thereby." An estimated 300,000 people, or 6% of the population, died during the war. Over the first three days of the trial, whenever Charles was asked to plead, he refused, stating his objection with the words: "I would know by what power I am called hither, by what lawful authority...?" He claimed that no court had jurisdiction over a monarch, that his own authority to rule had been given to him by God and by the traditional laws of England, and that the power wielded by those trying him was only that of force of arms; on those grounds he insisted that the trial was illegal. The court, by contrast, challenged the doctrine of sovereign immunity and proposed that "the King of England was not a person, but an office whose every occupant was entrusted with a limited power to govern 'by and according to the laws of the land and not otherwise'." At the end of the third day, Charles was removed from the court, which then heard over 30 witnesses against him in his absence over the next two days, and on 26 January condemned him to death. The next day, the king was brought before a public session of the commission, declared guilty, and sentenced. Fifty-nine of the commissioners signed Charles's death warrant. Execution Charles's beheading was scheduled for Tuesday, 30 January 1649. Two of his children remained in England under the control of the Parliamentarians: Elizabeth and Henry. They were permitted to visit him on 29 January, and he bade them a tearful farewell. The next morning, he called for two shirts to prevent the cold weather causing any noticeable shivers that the crowd could have mistaken for fear: "the season is so sharp as probably may make me shake, which some observers may imagine proceeds from fear. I would have no such imputation." He walked under guard from St James's Palace, where he had been confined, to the Palace of Whitehall, where an execution scaffold had been erected in front of the Banqueting House. Charles was separated from spectators by large ranks of soldiers, and his last speech reached only those with him on the scaffold.
He blamed his fate on his failure to prevent the execution of his loyal servant Strafford: "An unjust sentence that I suffered to take effect, is punished now by an unjust sentence on me." He declared that he had desired the liberty and freedom of the people as much as any, "but I must tell you that their liberty and freedom consists in having government ... It is not their having a share in the government; that is nothing appertaining unto them. A subject and a sovereign are clean different things." He continued, "I shall go from a corruptible to an incorruptible Crown, where no disturbance can be." At about 2:00 p.m., Charles put his head on the block after saying a prayer and signalled the executioner when he was ready by stretching out his hands; he was then beheaded in one clean stroke. According to observer Philip Henry, a moan "as I never heard before and desire I may never hear again" rose from the assembled crowd, some of whom then dipped their handkerchiefs in the king's blood as a memento. The executioner was masked and disguised, and there is debate over his identity. The commissioners approached Richard Brandon, the common hangman of London, but he refused, at least at first, despite being offered £200. It is possible he relented and undertook the commission after being threatened with death, but others have been named as potential candidates, including George Joyce, William Hulet and Hugh Peters. The clean strike, confirmed by an examination of the king's body at Windsor in 1813, suggests that the execution was carried out by an experienced headsman. It was common practice for the severed head of a traitor to be held up and exhibited to the crowd with the words "Behold the head of a traitor!" Charles's head was exhibited, but those words were not used, possibly because the executioner did not want his voice recognised. On the day after the execution, the king's head was sewn back onto his body, which was then embalmed and placed in a lead coffin. The commission refused to allow Charles's burial at Westminster Abbey, so his body was conveyed to Windsor on the night of 7 February. He was buried in private on 9 February 1649 in the Henry VIII vault in the quire of St George's Chapel, Windsor Castle, alongside the coffins of Henry VIII and Henry's third wife, Jane Seymour. The king's son, Charles II, later planned for an elaborate royal mausoleum to be erected in Hyde Park, London, but it was never built. Legacy Ten days after Charles's execution, on the day of his interment, a memoir purportedly written by him appeared for sale. This book, the Eikon Basilike (Greek for the "Royal Portrait"), contained an apologia for royal policies, and proved an effective piece of royalist propaganda. John Milton wrote a Parliamentary rejoinder, the Eikonoklastes ("The Iconoclast"), but the response made little headway against the pathos of the royalist book. Anglicans and royalists fashioned an image of martyrdom, and in the Convocations of Canterbury and York of 1660 King Charles the Martyr was added to the Church of England's liturgical calendar. High church Anglicans held special services on the anniversary of his death. Churches, such as those at Falmouth and Tunbridge Wells, and Anglican devotional societies such as the Society of King Charles the Martyr, were founded in his honour. With the monarchy overthrown, England became a republic or "Commonwealth". The House of Lords was abolished by the Rump Commons, and executive power was assumed by a Council of State.
All significant military opposition in Britain and Ireland was extinguished by the forces of Oliver Cromwell in the Third English Civil War and the Cromwellian conquest of Ireland. Cromwell forcibly disbanded the Rump Parliament in 1653, thereby establishing the Protectorate with himself as Lord Protector. Upon his death in 1658, he was briefly succeeded by his ineffective son, Richard. Parliament was reinstated, and the monarchy was restored to Charles I's eldest son, Charles II, in 1660. Art Partly inspired by his visit to the Spanish court in 1623, Charles became a passionate and knowledgeable art collector, amassing one of the finest art collections ever assembled. In Spain, he sat for a sketch by Velázquez, and acquired works by Titian and Correggio, among others. In England, his commissions included the ceiling of the Banqueting House, Whitehall, by Rubens and paintings by other artists from the Low Countries such as van Honthorst, Mytens, and van Dyck. His close associates, including the Duke of Buckingham and the Earl of Arundel, shared his interest and have been dubbed the Whitehall Group. In 1627 and 1628, Charles purchased the entire collection of the Duke of Mantua, which included work by Titian, Correggio, Raphael, Caravaggio, del Sarto and Mantegna. His collection grew further to encompass Bernini, Bruegel, da Vinci, Holbein, Hollar, Tintoretto and Veronese, and self-portraits by both Dürer and Rembrandt. By Charles's death, there were an estimated 1,760 paintings, most of which were sold and dispersed by Parliament. Assessments In the words of John Philipps Kenyon, "Charles Stuart is a man of contradictions and controversy". Revered by high Tories who considered him a saintly martyr, he was condemned by Whig historians, such as Samuel Rawson Gardiner, who thought him duplicitous and delusional. In recent decades, most historians have criticised him, the main exception being Kevin Sharpe, who offered a more sympathetic view that has not been widely adopted. Sharpe argued that the king was a dynamic man of conscience, but Professor Barry Coward thought Charles "the most incompetent monarch of England since Henry VI", a view shared by Ronald Hutton, who called him "the worst king we have had since the Middle Ages". Archbishop William Laud, whom Parliament beheaded during the war, described Charles as "A mild and gracious prince who knew not how to be, or how to be made, great." Charles was more sober and refined than his father, but he was intransigent. He deliberately pursued unpopular policies that brought ruin on himself. Both Charles and James were advocates of the divine right of kings, but while James's ambitions concerning absolute prerogative were tempered by compromise and consensus with his subjects, Charles believed he had no need to compromise or even to explain his actions. He thought he was answerable only to God. "Princes are not bound to give account of their actions," he wrote, "but to God alone".
Titles, styles, honours and arms
Titles and styles
23 December 1600 – 27 March 1625: Duke of Albany, Marquess of Ormonde, Earl of Ross and Lord Ardmannoch
6 January 1605 – 27 March 1625: Duke of York
6 November 1612 – 27 March 1625: Duke of Cornwall and Rothesay
4 November 1616 – 27 March 1625: Prince of Wales and Earl of Chester
27 March 1625 – 30 January 1649: His Majesty The King
The official style of Charles I as king in England was "Charles, by the Grace of God, King of England, Scotland, France and Ireland, Defender of the Faith, etc."
The style "of France" was only nominal, and was used by every English monarch from Edward III to George III, regardless of the amount of French territory actually controlled. The authors of his death warrant called him "Charles Stuart, King of England".
Honours
KB: Knight of the Bath, 6 January 1605
KG: Knight of the Garter, 24 April 1611
Arms
As Duke of York, Charles bore the royal arms of the kingdom differenced by a label Argent of three points, each bearing three torteaux Gules. As the Prince of Wales, he bore the royal arms differenced by a plain label Argent of three points. As king, Charles bore the royal arms undifferenced: Quarterly, I and IV Grandquarterly, Azure three fleurs-de-lis Or (for France) and Gules three lions passant guardant in pale Or (for England); II Or a lion rampant within a tressure flory-counter-flory Gules (for Scotland); III Azure a harp Or stringed Argent (for Ireland). In Scotland, the Scottish arms were placed in the first and fourth quarters with the English and French arms in the second quarter.
Issue
Charles had nine children, two of whom eventually succeeded as king, and two of whom died at or shortly after birth.
Further reading
Brotton, Jerry (2007), The Sale of the Late King's Goods: Charles I and His Art Collection, Pan Macmillan.
Gardiner, Samuel Rawson (1882), The Fall of the Monarchy of Charles I, 1637–1649: Volume I (1637–1640); Volume II (1640–1642).
External links
Official website of the British monarchy
The Society of King Charles the Martyr (United States)