11611098
Locally finite collection
A collection of subsets of a topological space $X$ is said to be locally finite if each point in the space has a neighbourhood that intersects only finitely many of the sets in the collection. In the mathematical field of topology, local finiteness is a property of collections of subsets of a topological space. It is fundamental in the study of paracompactness and topological dimension. Note that the term locally finite has different meanings in other mathematical fields. Examples and properties. A finite collection of subsets of a topological space is locally finite. Infinite collections can also be locally finite: for example, the collection of subsets of $\mathbb{R}$ of the form $(n, n+2)$ for an integer $n$. A countable collection of subsets need not be locally finite, as shown by the collection of all subsets of $\mathbb{R}$ of the form $(-n, n)$ for a natural number $n$. Every locally finite collection of sets is point finite, meaning that every point of the space belongs to only finitely many sets in the collection. Point finiteness is a strictly weaker notion, as illustrated by the collection of intervals $(0, 1/n)$ in $\mathbb{R}$, which is point finite but not locally finite at the point $0$. The two concepts are used in the definitions of paracompact space and metacompact space, and this is the reason why every paracompact space is metacompact. If a collection of sets is locally finite, the collection of the closures of these sets is also locally finite. The reason for this is that if an open set containing a point intersects the closure of a set, it necessarily intersects the set itself; hence a neighbourhood can intersect at most the same number of closures (it may intersect fewer, since two distinct, indeed disjoint, sets can have the same closure). The converse, however, can fail if the closures of the sets are not distinct. For example, in the finite complement topology on $\mathbb{R}$, the collection of all open sets is not locally finite, but the collection of all closures of these sets is locally finite (since the only closures are $\mathbb{R}$ and the empty set). An arbitrary union of closed sets is not closed in general. However, the union of a locally finite collection of closed sets is closed. To see this, note that if $x$ is a point outside the union of this locally finite collection of closed sets, we merely choose a neighbourhood $V$ of $x$ that intersects only finitely many of these sets. Define a bijective map from the collection of sets that $V$ intersects to $\{1, \dots, k\}$, thus giving an index to each of these sets. Then for each set, choose an open set $U_i$ containing $x$ that does not intersect it. The intersection of all such $U_i$ for $1 \leq i \leq k$, intersected with $V$, is a neighbourhood of $x$ that does not intersect the union of this collection of closed sets. In compact spaces. Every locally finite collection of sets in a compact space is finite. Indeed, let $G = \{G_a \mid a \in A\}$ be a locally finite family of subsets of a compact space $X$. For each point $x \in X$, choose an open neighbourhood $U_x$ that intersects only finitely many of the subsets in $G$. Clearly the family of sets $\{U_x \mid x \in X\}$ is an open cover of $X$, and therefore has a finite subcover $\{U_{x_1}, \dots, U_{x_k}\}$. Since each $U_{x_i}$ intersects only finitely many subsets in $G$, the union of all such $U_{x_i}$ intersects only finitely many subsets in $G$. Since this union is the whole space $X$, it follows that $X$ intersects only finitely many subsets in the collection $G$. And since $G$ is composed of subsets of $X$, every nonempty member of $G$ must intersect $X$; thus $G$ is finite. In Lindelöf spaces. Every locally finite collection of sets in a Lindelöf space, in particular in a second-countable space, is countable. This is proved by an argument similar to the one used above for compact spaces. Countably locally finite collections. A collection of subsets of a topological space is called σ-locally finite or countably locally finite if it is a countable union of locally finite collections. The σ-locally finite notion is a key ingredient in the Nagata–Smirnov metrization theorem, which states that a topological space is metrizable if and only if it is regular, Hausdorff, and has a σ-locally finite base. In a Lindelöf space, in particular in a second-countable space, every σ-locally finite collection of sets is countable.
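The contrast between point finiteness and local finiteness can be probed numerically for families of open intervals on the real line. Below is a minimal Python sketch (the helper name neighbourhood_hits is illustrative, not from the article): it counts how many members of a family meet a small neighbourhood of a point, for the locally finite family of intervals $(n, n+2)$ and for the family $(0, 1/n)$, which is point finite but not locally finite at $0$.

def neighbourhood_hits(intervals, point, eps):
    """Count open intervals (a, b) that meet the neighbourhood (point-eps, point+eps)."""
    lo, hi = point - eps, point + eps
    return sum(1 for a, b in intervals if a < hi and lo < b)

# {(n, n+2) : n an integer} is locally finite: a small ball around any point
# meets a bounded number of members, however many members we enumerate.
shifted = [(n, n + 2) for n in range(-1000, 1000)]
print(neighbourhood_hits(shifted, 0.5, 0.25))  # 2: only (-1, 1) and (0, 2)

# {(0, 1/n) : n a natural number} is point finite but NOT locally finite at 0:
# every neighbourhood of 0 meets all but finitely many members.
nested = [(0, 1 / n) for n in range(1, 1000)]
print(neighbourhood_hits(nested, 0.0, 0.01))   # 999: grows with the family

Enlarging either family leaves the first count unchanged but increases the second, mirroring the failure of local finiteness at $0$.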
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\mathbb{R}" }, { "math_id": 2, "text": "(n, n+2)" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "(-n, n)" }, { "math_id": 5, "text": "(0,1/n)" }, { "math_id": 6, "text": "\\mathbb R" }, { "math_id": 7, "text": "0" }, { "math_id": 8, "text": "x" }, { "math_id": 9, "text": "V" }, { "math_id": 10, "text": "{1,\\dots,k}" }, { "math_id": 11, "text": "U_i" }, { "math_id": 12, "text": "1\\leq i\\leq k" }, { "math_id": 13, "text": "G=\\{G_{a}|a\\in A\\}" }, { "math_id": 14, "text": "x\\in X" }, { "math_id": 15, "text": "U_{x}" }, { "math_id": 16, "text": "G" }, { "math_id": 17, "text": "\\{U_{x}|x\\in X\\}" }, { "math_id": 18, "text": "\\{U_{k_n}|n\\in 1\\dots n\\}" }, { "math_id": 19, "text": "U_{k_i}" }, { "math_id": 20, "text": "" } ]
https://en.wikipedia.org/wiki?curid=11611098
11611962
Trade-weighted US dollar index
The trade-weighted US dollar index, also known as the broad index, is a measure of the value of the United States dollar relative to other world currencies. It is a trade-weighted index that improves on the older U.S. Dollar Index by incorporating more currencies and yearly rebalancing. The base index value is 100 in January 1997. As the U.S. dollar gains value, the index increases. History. The trade-weighted dollar index was introduced in 1998 for two primary reasons. The first was the introduction of the euro, which eliminated several of the currencies in the standard dollar index; the second was to keep pace with new developments in US trade. Included currencies. In the older U.S. Dollar Index, a significant weight is given to the euro, because most U.S. trade in 1973 was with European countries. As U.S. trade expanded over time, the weights in that index went unchanged and became out of date. To more accurately reflect the strength of the dollar relative to other world currencies, the Federal Reserve created the trade-weighted US dollar index, which includes a bigger collection of currencies than the US dollar index; the currencies covered are listed below under Federal Reserve Bank of St. Louis data. Mathematical formulation. Based on nominal exchange rates. The index is computed as the geometric mean of the bilateral exchange rates of the included currencies. The weight assigned to the value of each currency in the calculation is based on trade data, and is updated annually (the value of the index itself is updated much more frequently than the weightings). The index value at time $t$ is given by the formula: $I_t = I_{t-1} \times \prod_{j = 1}^{N(t)} \left( \frac{e_{j,t}}{e_{j,t-1}} \right)^{w_{j,t}}$, where $I_t$ and $I_{t-1}$ are the index values at times $t$ and $t-1$, $N(t)$ is the number of currencies in the index at time $t$, $e_{j,t}$ and $e_{j,t-1}$ are the exchange rates of currency $j$ at times $t$ and $t-1$, and $w_{j,t}$ is the trade weight of currency $j$ at time $t$, with $\sum_{j=1}^{N(t)} w_{j,t} = 1$. Based on real exchange rates. The real exchange rate is a more informative measure of the dollar's worth, since it accounts for countries whose currencies experience differing rates of inflation from that of the United States. This is compensated for by adjusting the exchange rates in the formula using the consumer price index of the respective countries. In this more general case the index value is given by: $I_t = I_{t-1} \times \prod_{j = 1}^{N(t)} \left( \frac{e_{j,t} \cdot \frac{p_t}{p_{j,t}}}{e_{j,t-1}\cdot \frac{p_{t-1}}{p_{j,t-1}}} \right)^{w_{j,t}}$, where $p_t$ and $p_{t-1}$ are the US price levels at times $t$ and $t-1$, and $p_{j,t}$ and $p_{j,t-1}$ are the price levels of country $j$ at times $t$ and $t-1$. Federal Reserve Bank of St. Louis data. The Federal Reserve Bank of St. Louis provides "weighted averages of the foreign exchange value of the U.S. dollar against the currencies of a broad group of major U.S. trading partners" with detailed information. The "broad currency index includes the Euro Area, Canada, Japan, Mexico, China, United Kingdom, Taiwan, Korea, Singapore, Hong Kong, Malaysia, Brazil, Switzerland, Thailand, Philippines, Australia, Indonesia, India, Israel, Saudi Arabia, Russia, Sweden, Argentina, Venezuela, Chile and Colombia." This table shows some highs and lows of the Trade Weighted U.S. Dollar Index: Broad [TWEXB] from 2002 to April 2017.
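To make the chained geometric-mean formula concrete, here is a minimal Python sketch of one index update (the function name update_index and the three-currency weights are hypothetical, not Federal Reserve data). Rates are quoted as units of foreign currency per dollar, so rising rates mean a stronger dollar and a higher index, matching the convention stated above.

from math import prod

def update_index(prev_index, rates_now, rates_prev, weights):
    """One chained update: I_t = I_{t-1} * prod_j (e_{j,t}/e_{j,t-1})^{w_{j,t}}."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    growth = prod((rates_now[j] / rates_prev[j]) ** weights[j] for j in weights)
    return prev_index * growth

# Hypothetical three-currency example in which the dollar appreciates against all three.
weights = {"EUR": 0.5, "CAD": 0.3, "JPY": 0.2}
prev = {"EUR": 0.90, "CAD": 1.30, "JPY": 110.0}  # foreign currency per USD at t-1
now = {"EUR": 0.93, "CAD": 1.32, "JPY": 113.0}   # foreign currency per USD at t
print(update_index(100.0, now, prev, weights))   # about 102.7, above the base 100

Because the update is a weighted geometric mean of rate ratios, the weights act on growth rates rather than levels, which is why annual reweighting does not introduce jumps into the index itself.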
[ { "math_id": 0, "text": "t" }, { "math_id": 1, "text": "I_t = I_{t-1} \\times \\prod_{j = 1}^{N(t)} \\left( \\frac{e_{j,t}}{e_{j,t-1}} \\right)^{w_{j,t}}" }, { "math_id": 2, "text": "I_t" }, { "math_id": 3, "text": "I_{t-1}" }, { "math_id": 4, "text": "t-1" }, { "math_id": 5, "text": "N(t)" }, { "math_id": 6, "text": "e_{j,t}" }, { "math_id": 7, "text": "e_{j,t-1}" }, { "math_id": 8, "text": "j" }, { "math_id": 9, "text": "w_{j,t}" }, { "math_id": 10, "text": "\\sum_{j=1}^{N(t)} w_{j,t} = 1" }, { "math_id": 11, "text": "I_t = I_{t-1} \\times \\prod_{j = 1}^{N(t)} \\left( \\frac{e_{j,t} \\cdot \\frac{p_t}{p_{j,t}}}{e_{j,t-1}\\cdot \\frac{p_{t-1}}{p_{j,t-1}}} \\right)^{w_{j,t}}" }, { "math_id": 12, "text": "p_t" }, { "math_id": 13, "text": "p_{t-1}" }, { "math_id": 14, "text": "p_{j,t}" }, { "math_id": 15, "text": "p_{j,t-1}" } ]
https://en.wikipedia.org/wiki?curid=11611962
11613832
Euclidean relation
In mathematics, Euclidean relations are a class of binary relations that formalize the first common notion in Euclid's "Elements": "Magnitudes which are equal to the same are equal to each other." Definition. A binary relation "R" on a set "X" is Euclidean (sometimes called right Euclidean) if it satisfies the following: for every "a", "b", "c" in "X", if "a" is related to "b" and "c", then "b" is related to "c". To write this in predicate logic: $\forall a, b, c\in X\,(a\,R\, b \land a \,R\, c \to b \,R\, c)$. Dually, a relation "R" on "X" is left Euclidean if for every "a", "b", "c" in "X", if "b" is related to "a" and "c" is related to "a", then "b" is related to "c": $\forall a, b, c\in X\,(b\,R\, a \land c \,R\, a \to b \,R\, c)$.
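For a finite set, the defining condition quantifies over finitely many triples and can be checked exhaustively. Below is a minimal Python sketch (function names are illustrative) that tests both properties for a relation given as a set of ordered pairs; equality passes both tests, while the strict order < on {0, 1, 2} fails the right Euclidean test.

def is_right_euclidean(R, X):
    # For all a, b, c in X: a R b and a R c imply b R c.
    return all((b, c) in R
               for a in X for b in X for c in X
               if (a, b) in R and (a, c) in R)

def is_left_euclidean(R, X):
    # For all a, b, c in X: b R a and c R a imply b R c.
    return all((b, c) in R
               for a in X for b in X for c in X
               if (b, a) in R and (c, a) in R)

X = {0, 1, 2}
equality = {(x, x) for x in X}
print(is_right_euclidean(equality, X), is_left_euclidean(equality, X))  # True True
less_than = {(a, b) for a in X for b in X if a < b}
print(is_right_euclidean(less_than, X))  # False: 0 < 2 and 0 < 1, but not 2 < 1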
[ { "math_id": 0, "text": "\\forall a, b, c\\in X\\,(a\\,R\\, b \\land a \\,R\\, c \\to b \\,R\\, c)." }, { "math_id": 1, "text": "\\forall a, b, c\\in X\\,(b\\,R\\, a \\land c \\,R\\, a \\to b \\,R\\, c)." } ]
https://en.wikipedia.org/wiki?curid=11613832
1161411
Bad Kreuznach
Bad Kreuznach is a town in the Bad Kreuznach district in Rhineland-Palatinate, Germany. It is a spa town, best known for its medieval bridge dating from around 1300, the Alte Nahebrücke, which is one of the few remaining bridges in the world with buildings on it. The town is located in the Nahe River wine region, renowned both nationally and internationally for its wines, especially from the Riesling, Silvaner and Müller-Thurgau grape varieties. Bad Kreuznach does not itself lie within any Verbandsgemeinde, even though it is the seat of the Verbandsgemeinde of Bad Kreuznach. The town is the seat of several courts, as well as federal and state authorities. Bad Kreuznach is also officially a "große kreisangehörige Stadt" ("large town belonging to a district"), meaning that it does not have the district-level powers that "kreisfreie Städte" ("district-free towns/cities") enjoy. It is, nonetheless, the district seat, and also the seat of the state chamber of commerce for Rhineland-Palatinate. It is classed as a middle centre with some functions of an upper centre, making it the administrative, cultural and economic hub of a region with more than 150,000 inhabitants. Geography. Location. Bad Kreuznach lies between the Hunsrück, Rhenish Hesse and the North Palatine Uplands, south-southwest of Bingen am Rhein as the crow flies. It lies at the mouth of the Ellerbach, where it empties into the lower Nahe. Neighbouring municipalities. Clockwise from the north, Bad Kreuznach's neighbours are the municipalities of Bretzenheim, Langenlonsheim, Gensingen, Welgesheim, Zotzenheim, Sprendlingen, Badenheim (these last five lying in the neighbouring Mainz-Bingen district), Biebelsheim, Pfaffen-Schwabenheim, Volxheim, Hackenheim, Frei-Laubersheim, Altenbamberg, Traisen, Hüffelsheim, Rüdesheim an der Nahe, Roxheim, Hargesheim and Guldental. Constituent communities. Bad Kreuznach's outlying "Ortsbezirke" or "Stadtteile" are Bosenheim, Ippesheim, Planig, Winzenheim and Bad Münster am Stein-Ebernburg. Climate. Yearly precipitation in Bad Kreuznach amounts to 517 mm, which is very low, falling into the lowest third of the precipitation chart for all Germany. Only at 5% of the German Weather Service's weather stations are even lower figures recorded. The driest month is January. The most rainfall comes in June. In that month, precipitation is 1.8 times what it is in January. Precipitation varies only slightly over the year. At only 7% of the weather stations are lower seasonal swings recorded. History. Antiquity. As early as the 5th century BC, there is conclusive evidence that there was a Celtic settlement within what are now Bad Kreuznach's town limits. About 58 BC, the area became part of the Roman Empire and a Roman vicus came into being here, named, according to legend, after a Celt called Cruciniac, who transferred a part of his land to the Romans for them to build a supply station between Mainz (Mogontiacum) and Trier (Augusta Treverorum). Kreuznach lay on the Roman road that led from Metz (Divodurum), by way of the Saar crossing near Dillingen-Pachten (Contiomagus) and the Vicus Wareswald near Tholey, to Bingen am Rhein (Bingium). About AD 250, an enormous (measuring 81 × 71 m), luxurious palace, unique north of the Alps, was built in the style of a peristyle villa. It contained 50 rooms on the ground floor alone. 
Spolia found near the "Heidenmauer" ("Heathen Wall") have led to the conclusion that there were a temple to either Mercury or both Mercury and Maia and a Gallo-Roman provincial theatre. According to an inscription and tile plates that were found in Bad Kreuznach, a vexillatio of the Legio XXII Primigenia was stationed there. In the course of measures to shore up the Imperial border against the Germanic Alemannic tribes who kept making incursions across the limes into the Empire, an auxiliary castrum was built in 370 under Emperor Valentinian I. Middle Ages. After Rome's downfall, Kreuznach became in the year 500 a royal estate and an imperial village in the newly growing Frankish Empire. Then, the town's first church was built within the old castrum's walls; it was consecrated at first to Saint Martin, later to Saint Kilian, and was torn down in 1590. According to an 822 document from Louis the Pious, who was invoking an earlier document from Charlemagne, about 741, Saint Martin's Church in Kreuznach was supposedly donated to the Bishopric of Würzburg by his forebear Carloman. According to this indirect note, Kreuznach once again had a documentary mention in the "Annales regni Francorum" as Royal "Pfalz" (an imperial palace), where Louis the Pious stayed in 819 and 839. Kreuznach was mentioned in documents by Louis the Pious (in 823 as "villa Cruciniacus" and in 825 and 839, as "Cruciniacum castrum" or "Cruciniacum palatium regium"), Louis the German (in 845 as "villa Cruzinacha" and in 868 as "villa Cruciniacum"), Charles III, "the Fat" (in 882 as "C[h]rucinachum", "Crutcinacha", "Crucenachum"), Arnulf of Carinthia (in 889), Henry the Fowler (in 923), Otto I, Holy Roman Emperor (in 962 as "Cruciniacus") and Frederick I, Holy Roman Emperor (in 1179 as "Cruczennach"). On the other hand, the "Crucinaha" in Emperor Otto III's documents from 1000 (which granted the rights to hold a yearly market and to strike coins) is today thought to refer to Christnach, an outlying centre of Waldbillig, a town nowadays in Luxembourg. In mediaeval and early modern Latin sources, Kreuznach is named not only as "Crucenacum", "Crucin[i]acum" (adjective "Crucenacensis", "Crucin[i]acensis") and the like, but also as "Stauronesus, Stauronesum" (adjective "Staurone[n]s[i]us"; from σταυρός "cross" and νῆσος "island") or "Naviculacrucis" (from "navicula", a kind of small boat used on inland waterways, called a "Nachen" in German, and "crux" "cross"). Sometimes also encountered is the abbreviation "Xnach" (often with a Fraktur X, with a cross-stroke: $\mathfrak{X}$). About 1017, Henry II, Holy Roman Emperor enfeoffed his wife Cunigunde's grandnephew, Count Eberhard V of Nellenburg, with the noble estate of Kreuznach and the Villa Schwabenheim belonging thereto. After his death, King Henry IV supposedly donated the settlement of Kreuznach to the High Foundation of Speyer in 1065, who then transferred it shortly after 1105 – presumably as a fief – to the Counts of Sponheim. On Epiphany 1147, it is said that Bernard of Clairvaux performed a miraculous healing at Saint Kilian's Church. In 1183, half of the old Frankish village of Kreuznach at the former Roman castrum – the "Osterburg" – burnt down. Afterwards, of the 21 families there, 11 moved to what is now the Old Town ("Altstadt"). In the years 1206 to 1230, Counts Gottfried III of Sponheim (d. 1218) and Johann I of Sponheim (d. 1266) had the castle Kauzenburg built, even though King Philip of Swabia had forbidden them to do so. 
Along with the building of this castle came the rise of the New Town ("Neustadt") on the Nahe's north bank. In the years 1235 and 1270, Kreuznach was granted town rights, market rights, taxation rights and tolling rights under the rule of the comital House of Sponheim, which were acknowledged once again in 1290 by King Rudolf I of Habsburg. In 1279, in the Battle of Sprendlingen, the legend of Michel Mort arose. He is a local legendary hero, a butcher from Kreuznach who fought on the Sponheim side in the battle against the troops of the Archbishop of Mainz. When Count Johann I of Sponheim found himself in difficulties, Michel Mort drew the enemy's lances upon himself, sparing the Count by bringing about his own death. Early knowledge of the town of Kreuznach is documented in one line of a song by the minstrel Tannhäuser from the 13th century, which is preserved in handwriting by Hans Sachs: "vur creűczenach rint aűch die na". In Modern German, this would be "Vor Kreuznach rinnt auch die Nahe" ("Before Kreuznach, the Nahe also runs"). Records witness Jewish settlement in Kreuznach beginning in the late 13th century, while for a short time in the early 14th century, North Italian traders ("Lombards") lived in town. In the 13th century, Kreuznach was a fortified town and in 1320, it withstood a siege by Archbishop-Elector Baldwin of Trier (about 1270–1336). In 1361, Charles IV, Holy Roman Emperor granted Count Walram I of Sponheim (about 1305–1380) a yearly market privilege for Kreuznach. In 1375, the townsfolk rose up against the town council. Count Walram's response was to have four of the uprising's leaders beheaded at the marketplace. Through its long time as Kreuznach's lordly family, the House of Sponheim had seven heads: In 1417, however, the "Further" line of the House of Sponheim died out when Countess Elisabeth of Sponheim-Kreuznach (1365–1417) died. In her will, she divided the county between Electoral Palatinate and the County of Sponheim-Starkenburg, bequeathing to them one fifth and four-fifths respectively. In 1418, King Sigismund of Luxembourg enfeoffed Count Johann V of Sponheim-Starkenburg (about 1359–1437) with the yearly market, the mint, the Jews at Kreuznach and the right of escort, as far as Gensingen on the Trier-Mainz highway. In 1437, the lordship over Kreuznach was divided up between the Counts of Veldenz, the Margraves of Baden and Palatinate-Simmern. In 1457, at a time when a children's crusade movement was on the rise, 120 children left Kreuznach on their way to Mont-Saint-Michel by way of Wissembourg. In 1475, Electoral Palatinate issued a comprehensive police act for the "Amt" of Kreuznach, in which at this time, no Badish "Amtmann" resided. Elector Palatine Philip the Upright and John I, Count Palatine of Simmern granted the town leave to hold a second yearly market in 1490. In that same year, Elector Palatine Philip bestowed ownership of the "saltz- und badbronnen" ("salty and bathing springs") upon his cooks Conrad Brunn and Matthes von Nevendorf. The briny springs were likely discovered in 1478; nevertheless, a "Sulzer Hof" in what is today called the Salinental ("Saltworks Dale") had already been mentioned in the 13th or 14th century. On 24 August 1495, there was another uprising of the townsfolk, but this one was directed at Kreuznach's Palatine "Amtmann", Albrecht V Göler von Ravensburg, who had refused to release a prisoner against the posting of a bond. 
Nobody was beheaded this time, but Elector Palatine Philip did have a few of the leaders maimed, and then put into force a new town order. Town fortifications. The town wall, first mentioned in 1247, had a footprint that formed roughly a square in the Old Town, and was set back a few metres from what are today the streets Wilhelmstraße, Salinenstraße and Schloßstraße, with the fourth side skirting the millpond. Serving as town gates were, in the north, the "Kilianstor" or the "Mühlentor" ("Saint Kilian's Gate" or "Mill Gate"; torn down in 1877), in the southeast the "Hackenheimer Tor" (later the "Mannheimer Tor"; torn down in 1860) and in the south the "St.-Peter-Pförtchen", which lay at the end of Rossstraße, and which for security was often walled up. In the New Town, the town wall ran from the "Butterfass" ("Butterchurn"; later serving as the prison tower) on the Nahe riverbank up to the intersection of Wilhelmstraße and Brückes on "Bundesstraße" 48, where to the northwest the "Löhrpforte" (also called the "Lehrtor" or the "Binger Tor"; torn down about 1837) was found. It then ran in a bow between Hofgartenstraße and Hochstraße to the "Rüdesheimer Tor" in the southwest at the beginning of Gerbergasse, whose course it then followed down to the Ellerbach and along the Nahe as a riverbank wall. Along this section, the town wall contained the "Fischerpforte" or "Ellerpforte" as a watergate and in the south, the "Große Pforte" ("Great Gate") at the bridge across the Nahe. Belonging to the fortified complex of the Kauzenburg, across the Ellerbach from the New Town, were the "Klappertor" and a narrow, defensive ward ("zwinger"), from which the street known as "Zwingel" gets its name. On the bridge over to the ait (or the "Wörth" as it is called locally; the river island between the two parts of town) stood the "Brückentor" ("Bridge Gate"). To defend the town there was, besides the castle's Burgmannen, also a kind of townsmen's defence force or shooting guild (somewhat like a town militia). Preserved as an incunable print from 1487, printed in Mainz by Peter Schöffer (about 1425–1503), is an invitation from the mayor and town council to any and all who considered themselves good marksmen with the crossbow to come to a shooting contest on 23 September. Jewish population. On 31 March 1283 (2 Nisan 5043) in Kreuznach (קרויצנאך), Rabbi Ephraim bar Elieser ha-Levi – apparently as a result of a judicial sentence – was broken on the wheel. The execution was likely linked to the Mainz blood libel accusations, which in March and April 1283 also led to pogroms in Mellrichstadt, Mainz, Bacharach and Rockenhausen. In 1311, Aaron Judeus de Crucenaco (the last three words mean "the Jew from Kreuznach") was mentioned, as was a Jewish toll gatherer from Bingen am Rhein named Abraham von Kreuznach in 1328, 1342 and 1343. In 1336, Emperor Louis the Bavarian allowed Count Johann II of Sponheim-Kreuznach to permanently keep 60 house-owning freed Jews at Kreuznach or elsewhere on his lands ("… daß er zu Creützenach oder anderstwoh in seinen landen 60 haußgesäsß gefreyter juden ewiglich halten möge …"). After further persecution in the time of the Plague in 1348/1349, there is no further evidence of Jews in Kreuznach until 1375. By 1382 at the latest, the Jew Gottschalk (who died sometime between 1409 and 1421) from Katzenelnbogen was living in Kreuznach and owned the house at the corner of Lämmergasse and Mannheimerstraße 12 (later: Löwensteiner Hof) near the "Eiermarkt" ("Egg Market"). 
On a false charge of usury, Count Simon III of Sponheim (after 1330–1414) had him thrown in prison and only released him after payment of a hefty ransom. He was afterwards taken into protection by Ruprecht III of the Palatinate against a yearly payment of 10 Rhenish guilders. At Gottschalk's suggestion, Archbishop Johann of Nassau-Wiesbaden-Idstein lifted the "dice toll" for Jews crossing the border into the Archbishopric of Mainz. The special taxes for Jews ordered in 1418 and 1434 by King Sigismund of Luxembourg were also imposed in Kreuznach. In the Middle Ages, the eastern part of today's Poststraße in the New Town was the "Judengasse" ("Jews' Lane"). The "Kleine Judengasse" ran from the "Judengasse" to what is today called Magister-Faust-Gasse. In 1482, a "Jewish school" was mentioned, which might already have stood at Fährgasse 2 (lane formerly known as "Kleine Eselsgass" – "Little Ass's Lane"), where the Old Synagogue of Bad Kreuznach later stood (first mentioned here in 1715; new Baroque building in 1737; renovated in 1844; destroyed in 1938; torn down in 1953/1954; last wall remnant removed in 1975). In 1525, Louis V, Elector Palatine allowed Meïr Levi to settle for, at first, twelve years in Kreuznach, to organise the money market there, to receive visits, to lay out his own burial plot and to deal in medicines. In the earlier half of the 16th century, his son, the physician Isaak Levi, whose collection of medical works became well known as "Des Juden buch von kreuczenach" ("The Jew's Book of/from Kreuznach"), lived in Kreuznach. The work is preserved in a manuscript transcribed personally by Louis V, Elector Palatine. The oldest Jewish graveyard in Kreuznach lay in the area of today's "Rittergut Bangert" (knightly estate), having been mentioned in 1525 and 1636. The Jewish graveyard on Stromberger Straße was bought in 1661 (one preserved gravestone, however, dates from 1630) and expanded in 1919. It is said to be one of the best preserved in Rhineland-Palatinate. The Jewish family Creizenach, originally from Kreuznach, is known from records to have been in Mainz and Frankfurt am Main from 1733, and to have produced a number of important academics (Michael Creizenach, Theodor Creizenach, and Wilhelm Creizenach). The Yiddish name for Kreuznach was צלם־מקום (abbreviated צ״מ), variously rendered in Latin script as "Zelem-Mochum" or "Celemochum" (with the initial Z or C intended to transliterate the letter "צ", as they would be pronounced /ts/ in German), which literally meant "Image Place", for pious Jews wished to avoid the term "Kreuz" ("cross"). In 1828, 425 of the 7,896 inhabitants of the "Bürgermeisterei" ("Mayoralty") of Kreuznach (5.4%) adhered to the Jewish faith, as did 611 of the town's 18,143 inhabitants (3.4%) in 1890. Monasteries. Before the Thirty Years' War, Kreuznach had some 8,000 inhabitants and seven monasteries. In the Middle Ages and early modern times, the following monasteries were mentioned: Plague and leprosy. The Plague threatened Kreuznach several times throughout its history. Great epidemics are recorded as having broken out in 1348/1349 (Johannes Trithemius spoke of 1,600 victims), 1364, 1501/1502, 1608, 1635 (beginning in September) and 1666 (reportedly 1,300 victims). During the 1501 epidemic, the humanist and Palatine prince-raiser Adam Werner von Themar, one of Abbot Trithemius's friends, wrote a poem in Kreuznach about the plague saint, Sebastian. 
Outside the town, a sickhouse for lepers, the so-called "Gutleuthof", was founded on the Gräfenbach down from the village of Hargesheim and had its first documentary mention in 1487. Modern times. In the War of the Succession of Landshut against Elector Palatine Philip of the Rhine, both the town and the castle were unsuccessfully besieged for six days by Alexander, Count Palatine of Zweibrücken and William I, Landgrave of Lower Hesse, who then laid the surrounding countryside waste. The Sponheim abbot Johannes Trithemius had brought the monasterial belongings, the library and the archive to safety in Kreuznach. The besieged town was relieved by Electoral Palatinate Captain Hans III, "Landschad" of Steinach. In 1507, Master Faust assumed the rector's post at the Kreuznach Latin school, which had been secured for him by Franz von Sickingen. On the grounds of allegations of fornication, he fled the town only a short time afterwards, as witnessed by a letter from Johannes Trithemius to Johannes Virdung, in which Virdung was warned about Faust. Maximilian I, Holy Roman Emperor, who spent Whitsun 1508 in Boppard, stayed in Kreuznach in June 1508 and wrote from there to his daughter Duchess Margaret of Savoy. In 1557, the Reformation was introduced into Kreuznach. According to the 1601 "Verzeichnis aller Herrlich- und Gerechtigkeiten der Stätt und Dörffer der vorderen Grafschaft Sponheim im Ampt Creutznach" ("Directory of All Lordships and Justices of the Towns and Villages of the Further County of Sponheim in the "Amt" of Kreuznach"), compiled by Electoral Palatinate "Oberamtmann" Johann von Eltz-Blieskastel-Wecklingen, the town had 807 estates and was the seat of a "Hofgericht" (lordly court) to which the "free villages" of Waldböckelheim, Wöllstein, Volxheim, Braunweiler, Mandel and Roxheim, which were thus freed from the toll at Kreuznach, had to send "Schöffen" (roughly "lay jurists"). Thirty Years' War. During the Thirty Years' War, Kreuznach was overrun and captured many times by various factions fighting in that war: The town was thus heavily drawn into hardship and woe, and the population dwindled from some 8,000 at the war's outbreak to roughly 3,500. The expression "Er ist zu Kreuznach geboren" ("He was born at Kreuznach") became a byword in German for somebody who had to struggle with a great deal of hardship. On 19 August 1663, the town was stricken by an extraordinarily high flood on the river Nahe. Nine Years' War. In the Nine Years' War (known in Germany as the "Pfälzischer Erbfolgekrieg", or War of the Palatine Succession), the Kauzenburg (castle) was conquered on 5 October 1688 by Marshal Louis François, duc de Boufflers. The town fortifications and the castle were torn down and the town of Kreuznach largely destroyed in May 1689 by French troops under Brigadier Ezéchiel du Mas, Comte de Mélac (about 1630–1704) or Lieutenant General Marquis Nicolas du Blé d’Uxelles. On 18 October 1689, Kreuznach's churches were burnt down. 18th century. As of 1708, Kreuznach wholly belonged to Electoral Palatinate. Under Elector Palatine Karl III Philipp, the Karlshalle Saltworks were built in 1729. Built in 1743 by Prince-Elector, Count Palatine and Duke Karl Theodor were the Theodorshalle Saltworks. On 13 May 1725, after a cloudburst and hailstorm, Kreuznach was stricken by an extreme flood in which 31 people lost their lives, some 300 or 400 head of cattle drowned, two houses were utterly destroyed and many damaged and remaining parts of the town wall fell in. 
Taking part at the founding of the Masonic Lodge "Zum wiedererbauten Tempel der Bruderliebe" ("To the Rebuilt Temple of Brotherly Love") in Worms in 1781 were also Freemasons from Kreuznach. As early as 1775, the Grand Lodge of the Rhenish Masonic Lodges (8th Provincial Grand Lodge) of Strict Observance had already been given the name "Kreuznach". In the extreme winter of 1783/1784, the town was heavily damaged on 27–28 February 1784 by an icerun and flooding. A pharmacist named Daniel Riem was killed in his house "Zum weißen Schwan" ("At the White Swan") when it collapsed into the floodwaters. French Revolutionary and Napoleonic times. In the course of the Napoleonic Wars (1792–1814), French emigrants came to Kreuznach, among them Prince Louis Joseph of Condé. In October 1792, French Revolutionary troops under General Adam Philippe, Comte de Custine occupied the land around Kreuznach, remaining there until 28 March 1793. The town itself was briefly occupied by French troops under General François Séverin Marceau-Desgraviers on 4 January and then again on 16 October 1794. From 30 October until 1 December 1795, the town was held by Imperial troops under Rhinegrave Karl August von Salm-Grumbach, but they were at first driven out in bloody battles by Marshals Jean-Baptiste Jourdan and Jean-Baptiste Bernadotte. In this time, the town suffered greatly under sackings and involuntary contributions. After the French withdrew on 12 December, it was occupied by an Austrian battalion under Captain Alois Graf Gavasini, which withdrew again on 30 May 1796. On 9 June 1796, Kreuznach was once again occupied by the French. In 1797, Kreuznach, along with all lands on the Rhine's left bank, was annexed by the French First Republic, a deed confirmed under international law by the 1801 Treaty of Lunéville. The parts of town that lay north of the Nahe were assigned to the Arrondissement of Simmern in the Department of Rhin-et-Moselle, whereas those that lay to the south were assigned to the Department of Mont-Tonnerre (or Donnersberg in German). The subprefect in Simmern in 1800 was Andreas van Recum and in 1806 it was Ludwig von Closen. The "maire" of Kreuznach as of 1800 was Franz Joseph Potthoff (b. 1756; d. after 1806) and beginning in 1806 it was Karl Joseph Burret. On 20 September and 5 October 1804, the French Emperor, Napoleon Bonaparte visited Kreuznach. On the occasion of Napoleon's victory in the Battle of Austerlitz a celebratory Te Deum was held at the Catholic churches in January 1806 on Bishop of Aachen Marc-Antoine Berdolet's orders (Kreuznach was part of his diocese from 1801 to 1821). In 1808, Napoleon made a gift of Kreuznach's two saltworks to his favourite sister, Pauline. In 1809, the Kreuznach Masonic Lodge "Les amis réunis de la Nahe et du Rhin" was founded by van Reccum, which at first lasted only until 1814. It was, however, refounded in 1858. In Napoleon's honour, the timing of the Kreuznach yearly market was set by Mayor Burret on the Sunday after his birthday (15 August). Men from Kreuznach also took part in Napoleon's 1812 Russian Campaign on the French side, to whom a monument established at the Mannheimer Straße graveyard in 1842 still stands. The subsequent German campaign (called the "Befreiungskriege", or Wars of Liberation, in Germany) put an end to French rule. Congress of Vienna to First World War. Until a permanent new order could be imposed under the terms of the Congress of Vienna, the region lay under joint Bavarian-Austrian administration, whose seat was in Kreuznach. 
When these terms eventually came about, Kreuznach passed to the Kingdom of Prussia in 1815 and from 1816 it belonged to the "Regierungsbezirk" of Koblenz in the province of the Grand Duchy of the Lower Rhine (as of 1822 the Rhine Province) and was a border town with two neighbouring states, the Grand Duchy of Hesse to the east and the Bavarian exclave of the Palatinate to the south. The two saltworks, which had now apparently been taken away from Napoleon's sister, were from 1816 to 1897 Grand-Ducal-Hessian state property on Prussian territory. In 1817, Johann Erhard Prieger opened the first bathing parlour with briny water and thereby laid the groundwork for the fast-growing spa business. In 1843, Karl Marx married Jenny von Westphalen in Kreuznach, presumably at the "Wilhelmskirche" (William's Church), which had been built between 1698 and 1700 and was later, in 1968, all but torn down, leaving only the churchtower. In Kreuznach, Marx set down considerable portions of his manuscript "Critique of Hegel's Philosophy of Right" ("Zur Kritik der Hegelschen Rechtsphilosophie") in 1843. Clara Schumann, who was attending the spa in Kreuznach, and her half-sister Marie Wieck gave a concert at the spa house in 1860. With the building of the Nahe Valley Railway from Bingerbrück to Saarbrücken in 1858/1860, the groundwork was laid for the town's industrialisation. This, along with the ever-growing income from the spa, led after years of stagnation to an economic boost for the town's development. Nevertheless, the railway was not built for industry and spa-goers alone, but also as a logistical supply line for a war that was expected to break out with France. Before this, though, right at Kreuznach's town limits, Prussia and Bavaria once again stood at odds with each other in 1866. Thinking that was not influenced by this led to another railway line being built even before the First World War, the "strategic railway" from Bad Münster by way of Staudernheim, Meisenheim, Lauterecken and Kusel towards the west, making Kreuznach into an important contributor to transport towards the west. Only about 1950 were parts of this line torn up and abandoned. Today, between Staudernheim and Kusel, it serves as a tourist attraction for those who wish to ride draisines. In 1891, three members of the Franciscan Brothers of the Holy Cross came to live in Kreuznach. In 1893, they took over the hospital "Kiskys-Wörth", which as of 1905 bore the name "St. Marienwörth". Since 1948, they have run it together with the Sisters of the Congregation of Papal Law of the Maids of Mary of the Immaculate Conception, and today run it as a hospital bearing the classification "II. Regelversorgung" under Germany's "" hospital planning system. In 1901, the Second Rhenish "Diakonissen-Mutterhaus" ("Deaconess's Mother-House"), founded in 1889 in Sobernheim, moved under its abbot, the Reverend Hugo Reich, to Kreuznach. It is now a foundation known as the "kreuznacher diakonie" (always written with lowercase initials). In 1904, the pharmacist Karl Aschoff discovered the Kreuznach brine's radon content, and thereafter introduced "radon balneology", a therapy that had already been practised in the Austro-Hungarian town of Sankt Joachimsthal in the Bohemian Ore Mountains (now Jáchymov in the Czech Republic). Even though the Bad Kreuznach's radon content was much slighter than that found in the waters from Brambach or Bad Gastein, the town was quickly billed as a "radium healing spa" – the technical error in that billing notwithstanding. 
In 1912, a radon inhalatorium was brought into service, into which was piped the air from an old mining gallery at the Kauzenberg, which had a higher radon content than the springwater. The inhalatorium was destroyed in 1945. In 1974, however, the old mining gallery itself was converted into a therapy room. To this day, radon inhalation serves as a natural pain reliever for those suffering from rheumatism. In the First World War, both the Kreuznach spa house and other hotels and villas became as of 2 January 1917 the seat of the Great Headquarters of Kaiser Wilhelm II. The Kaiser actually lived in the spa house. Used as the General staff building was the Oranienhof. At the spa house on 19 December 1917, General Mustafa Kemal Pasha – better known as Atatürk ("Father of the Turks") and later president of a strictly secular Turkey – the Kaiser, Paul von Hindenburg and Erich Ludendorff all met for talks. Only an extreme wintertime flood on the Nahe in January 1918 led to the Oberste Heeresleitung being moved to Spa in Belgium. Weimar Republic and Third Reich. After the First World War, French troops occupied the Rhineland and along with it, Kreuznach, whose great hotels were thereafter mostly abandoned. In 1924, Kreuznach was granted the designation "Bad", literally "Bath", which is conferred on places that can be regarded as health resorts. Since this time, the town has been known as Bad Kreuznach. After Adolf Hitler and the Nazis seized power in 1933, some, among them the trade unionist Hugo Salzmann, organised resistance to National Socialism. Despite imprisonment, Salzmann survived the Third Reich, and after 1945 sat on town council for the Communist Party of Germany (KPD). The Jews who were still left in the district after the Second World War broke out were on the district leadership's orders taken in 1942 to the former "Kolpinghaus", whence, on 27 July, they were deported to Theresienstadt. Bad Kreuznach, whose spa facilities and remaining hotels once again, from 1939 to 1940, became the seat of the Army High Command, was time and again targeted by Allied air raids because of the Wehrmacht barracks on Bosenheimer Straße, Alzeyer Straße and Franziska-Puricelli-Straße as well as the strategically important Berlin-Paris railway line, which then led through the town. The last "Stadtkommandant" (town commander), Lieutenant Colonel Johann Kaup (d. 1945), kept Bad Kreuznach from even greater destruction when he offered advancing American troops no resistance, and yielded the town to them on 16 March 1945 with barely any fighting. Shortly before this, German troops had blown up yet another part of the old bridge across the Nahe, thus also destroying residential buildings near the bridge ends. After 1945. Bad Kreuznach was occupied by US troops in March 1945 and thus stood under American military authority. This even extended to one of the "Rheinwiesenlager" for disarmed German forces, which lay near Bad Kreuznach on the road to Bretzenheim, and whose former location is now marked by a memorial. It was commonly known as the "Field of Misery". Found in the Lohrer Wald (forest) is a graveyard of honour for wartime and camp victims. Under the Potsdam Protocols on the fixing of occupation zone boundaries, Bad Kreuznach found itself for a while in French zone of occupation, but in an exchange in the early 1950s, United States Armed Forces came back into the districts of Kreuznach, Birkenfeld and Kusel. 
Until the middle of 2001, the Americans maintained four barracks, a Redstone missile unit, a firing range, a small airfield and a drill ground in Bad Kreuznach. The last US forces in Bad Kreuznach were parts of the 1st Armored Division ("Old Ironsides"). In 1958, President of France Charles de Gaulle and Federal Chancellor Konrad Adenauer agreed in Bad Kreuznach to an institutionalisation of the special relations between the two countries, which in 1963 resulted in the Élysée Treaty. A monumental stone before the old spa house recalls this historic event. On 1 April 1960, the town of Bad Kreuznach was declared, after application to the state government, a "große kreisangehörige Stadt" ("large town belonging to a district"). In 2010 Bad Kreuznach launched a competition to replace the 1950s addition to the "Alte Nahebrücke" ("Old Nahe Bridge"). The bridge, designed by competition winner Dissing+Weitling architecture of Copenhagen, is scheduled for completion by 2012. Amalgamations. In the course of administrative restructuring in Rhineland-Palatinate, the hitherto self-administering municipalities of Bosenheim, Planig, Ippesheim (all three of which had belonged until then to the Bingen district) and Winzenheim were amalgamated on 7 June 1969 with Bad Kreuznach. Furthermore, Rüdesheim an der Nahe was also amalgamated, but fought the amalgamation in court, winning, and thereby regaining its autonomy a few months later. As part of the 2009 German federal election, a plebiscite was included on the ballot on the question of whether the towns of Bad Kreuznach and Bad Münster am Stein-Ebernburg should be merged, and 68.3% of the Bad Kreuznach voters favoured negotiations between the two towns. On 25 May 2009, the town received another special designation, this time from the Cabinet: "Ort der Vielfalt" – "Place of Diversity". Religion. As at 31 August 2013, there are 44,851 full-time residents in Bad Kreuznach, and of those, 15,431 are Protestant (34.405%), 13,355 are Catholic (29.776%), 4 belong to the Old Catholic Church (0.009%), 77 belong to the Greek Orthodox Church (0.172%), 68 belong to the Russian Orthodox Church (0.152%), 1 is United Methodist (0.002%), 16 belong to the Free Evangelical Church (0.036%), 41 are Lutheran (0.091%), 2 belong to the Palatinate State Free Religious Community (0.004%), 1 belongs to the Mainz Free Religious Community (0.002%), 4 are Reformed (0.009%), 9 belong to the Alzey Free Religious Community (0.02%), 2 form part of a membership group in a Jewish community (0.004%) (162 other Jews belong to the Bad Kreuznach-Koblenz worship community [0.361%] while a further one belongs to the State League of Jewish worship communities in Bavaria [0.002%]), 9 are Jehovah's Witnesses (0.02%), 1 belongs to yet another free religious community (0.002%), 5,088 (11.344%) belong to other religious groups and 10,579 (23.587%) either have no religion or will not reveal their religious affiliation. Politics. Town council. The council is made up of 44 council members, who were elected by proportional representation at the municipal election held on 7 June 2009, and the chief mayor as chairwoman. Since this election, the town has been run by a Jamaica coalition of the Christian Democratic Union of Germany, the Free Democratic Party and the Greens. The municipal election held on 7 June 2009 yielded the following results: Mayors. Bad Kreuznach's current mayor ("Oberbürgermeister") is Emanuel Letz, elected in March 2022. 
Listed here are Bad Kreuznach's mayors since Napoleonic times: Coat of arms. The town's arms might be described thus: On an escutcheon argent ensigned with a town wall with three towers all embattled Or, a fess countercompony Or and azure between three crosses pattée sable. Bad Kreuznach's right to bear arms comes from municipal law for the state of Rhineland-Palatinate. The three crosses pattée (that is, with the ends somewhat broader than the rest of the crosses' arms) are a canting charge, referring to the town's name, the German word for "cross" being "Kreuz". The crosses are sometimes wrongly taken to be Christian crosses. In fact, the name Kreuznach developed out of the Celtic-Latin word "Cruciniacum", which meant "Crucinius's Home", thus a man's name with the suffix "—acum" added, meaning "flowing water". The coat of arms first appeared with this composition on the keystone at Saint Nicholas's Church in the late 13th century. The mural crown on top of the escutcheon began appearing only about 1800 under French rule. The stylised stretch of town wall was originally rendered reddish-brown, but it usually appears gold nowadays. Twin towns – sister cities. Bad Kreuznach is twinned with: Culture and sightseeing. Buildings. The following are listed buildings or sites in Rhineland-Palatinate's Directory of Cultural Monuments: Tourist attractions. The town of Bad Kreuznach is home to the following tourist attractions: Town of Bad Kreuznach Cultural Prize. The is a promotional prize awarded by the town of Bad Kreuznach each year in the categories of music, visual arts and literature on a rotational basis. A full list of prizewinners since the award's introduction can be seen at the link. In 2013, the prize was not awarded owing to cost-cutting measures. Sport and leisure. Sport clubs. In Bad Kreuznach there are many clubs that can boast of successes at the national level. In trampolining and whitewater slalom, the town is a national stronghold, while it has also shown strength at the state level in shooting sports and bocce. The biggest club is "VfL 1848 Bad Kreuznach", within which the first basketball department in any sport club in Germany was founded in 1935. After the Second World War, too, the club produced many important personalities, among them several players at the national level. Moreover, the club's field hockey department is also of importance, having for a while been represented in the "Damen-Bundesliga" ("Ladies' National League"). The first field hockey department in a Bad Kreuznach sport club, however, was the "Kreuznacher HC", which made it to the semi-finals at the German Championship in 1960, and which to this day stages the Easter Hockey Tournament. In football, the town's most successful club is Eintracht Bad Kreuznach. The team played in, among other leagues, the Oberliga, when that was Germany's highest level in football, as well as, later, the Second "Bundesliga". The club that has won the most titles is MTV Bad Kreuznach, which in trampolining is among Germany's most successful clubs. Canoeing, in particular whitewater slalom, is practised by RKV Bad Kreuznach. Creuznacher RV has a long tradition in rowing. Also important are the shooting sport clubs SG Bad Kreuznach 1847 and BSC Bad Kreuznach. In disabled sports, the Sportfreunde Diakonie especially has been successful, particularly in bocce. Town of Bad Kreuznach Sport Badge. 
The "Sportplakette der Stadt Bad Kreuznach" is an honour awarded by the town once each year to individual sportsmen or sportswomen, whole teams, worthy promoters of sports and worthy people whose jobs are linked to sports. With this award, the town also hopes to underscore its image as a sporting town in Rhineland-Palatinate. The Sport Badge is conferred upon sportsmen or sportswomen at three levels: A promoter or person working in a sport-related field must be active in an unpaid capacity for at least 25 years to receive this award. Economy and infrastructure. Winegrowing. Bad Kreuznach is characterised to a considerable extent by winegrowing, and with 777 ha of vineyard planted – 77% white wine varieties and 23% red – it is the biggest winegrowing centre in the Nahe wine region and the seventh biggest in Rhineland-Palatinate. Industry and trade. Bad Kreuznach has roughly 1,600 businesses with at least one employee, thereby offering 28,000 jobs, of which half are filled by commuters who come into town from surrounding areas. The economic structure is thus characterised mainly by small and medium enterprises, but also some big businesses like the tire manufacturer Michelin, the machine builder KHS, the Meffert Farbwerke (dyes, lacquers, plasters, protective coatings) and the Jos. Schneider Optische Werke GmbH may be mentioned. In 2002, the tradition-rich Seitz-Filter-Werke was taken over by the US-based Pall Corporation. Thus producing businesses are of great importance, and are especially well represented by the chemical industry (tires, lacquers, dyes) and the optical industry as well as machine builders and automotive suppliers. Retail and wholesale dealers, as well as restaurants hold particular weight in the inner town, although in the last few years, the service sector, too, has been gaining in importance. The express road links to the Autobahn bring Bad Kreuznach closer to Frankfurt Airport. The town can also attract new investment with its economic conversion areas. Spa and tourism. The spa operations and the wellness tourism also hold a special place for the town as the world's oldest radon-brine spa and the Rhineland-Palatinate centre for rheumatic care. Available in town are 2,498* beds for guests, which out of 449,756* overnight stays have seen 270,306* stays by guests in rehabilitation clinics. All together, the town was visited by 92,700 overnight guests (*as of 31 December 2010). Also available to the spa operations are six spa clinics, spa sanatoria, the thermal brine movement bath "Crucenia Thermen" with a salt grotto, a radon gallery, graduation towers in the Salinental (dale), the brine-fogger in the "Kurpark" (spa park) set up as open-air inhalatoria and the "Crucenia Gesundheitszentrum" ("Crucenia Health Centre") for ambulatory spa treatment. The indications for these treatments are for rheumatic complaints, changes in joints due to gout, degenerative diseases of the spinal column and joints, women's complaints, illnesses of the respiratory system, paediatric illnesses, vascular illnesses, non-infectious skin diseases, endocrinological dysfunctions, psychosomatic illnesses and eye complaints. After the noticeable decline in the spa business in the mid 1990s, there was a remodelling of the healing spa. At the "Saunalandschaft" bathhouse rose a "wellness temple" with 12 great saunas on an area of 4 000 m2, which receives roughly 80,000 visitors every year. Hospitals and specialised clinics. In the hospital run by "kreuznacher diakonie" (397 beds) and the St. 
Marienwörth hospital (Franciscan brothers), Bad Kreuznach has at its disposal two general hospitals that have available the most modern specialised departments for heart and intestinal disorders, and also strokes. In the spa zone, there is also the "Sana" Rhineland-Palatinate Rheumatic Centre, made up of a rheumatic hospital and a rehabilitation clinic, the "Karl-Aschoff-Klinik". Another rehabilitation clinic under private sponsorship is the "Klinik Nahetal". Also, there are the psychosomatic specialised clinic "St.-Franziska-Stift" and the rehabilitation and preventive clinic for children and youth, "Viktoriastift". Transport. Given Bad Kreuznach's location in the narrow Nahe valley, all transport corridors run upstream parallel to the river. Moreover, the town is an important crossing point for all modes of transport. Rail. From 1896 to 1936, there were the "Kreuznacher Kleinbahnen" ("Kreuznach Narrow-Gauge Railways"), a rural narrow-gauge railway network. An original steam locomotive and its shed, which were moved from Winterburg, can be found today in nearby Bockenau. The "Kreuznacher Straßen- und Vorortbahnen" ("Kreuznach Tramways and Suburban Railways") ran not only a service within the town but also lines out into the surrounding area, to Bad Münster am Stein, Langenlonsheim and Sankt Johann. In 1953, the whole operation was shut down. Since the introduction of "Rhineland-Palatinate Timetabling" ("Rheinland-Pfalz-Takt") in the mid 1990s, the train services other than the ICE/EC/IC services have once again earned some importance. Besides the introduction of hourly timetabling, there has also been a marked expansion into the nighttime hours, with trains leaving for Mainz three hours later each day. Bad Kreuznach station is one of Rhineland-Palatinate's few V-shaped stations (called a "Keilbahnhof", or "wedge station", in the German terminology). Branching off the Nahe Valley Railway (Bingen–Saarbrücken) here is the railway line to Gau Algesheim. From Bingen am Rhein, Regionalbahn trains run by way of the Alsenz Valley Railway, which branches off the Nahe Valley Railway in Bad Münster am Stein, to Kaiserslautern, reaching it in roughly 65 minutes. Running on the line to Saarbrücken and by way of Gau Algesheim and the West Rhine Railway to Mainz are Regional-Express and Regionalbahn trains. The travel time to Mainz lies between 25 and 40 minutes, and to Saarbrücken between 1 hour and 40 minutes and 2 hours and 20 minutes. Road. Bad Kreuznach can be reached by car through the like-named interchange on the Autobahn A 61 as well as on "Bundesstraßen" 41, 48 and 428. Except for "Bundesstraße" 48, all these roads skirt the inner town, while the Autobahn is roughly 12 km from the town centre. Local public transport is provided by a town bus network with services running at 15- or 30-minute intervals. There are seven bus routes run by "Verkehrsgesellschaft Bad Kreuznach" (VGK), which is owned by the company Rhenus Veniro. Furthermore, there is a great number of regional bus routes serving the nearby area, run by VGK and "Omnibusverkehr Rhein-Nahe GmbH" (ORN). The routes run by the various carriers are all part of the "Rhein-Nahe-Nahverkehrsverbund" ("Rhine-Nahe Local Transport Association"). Education and research. 
Found in Bad Kreuznach are not only several primary schools, some of which offer "full-time school", but also secondary schools of all three types as well as vocational preparatory schools or combined vocational-academic schools such as "Berufsfachschulen", "Berufsoberfachschulen" and "Technikerschulen", which are housed at the vocational schools. Special schools. In 1950, the Max Planck Institute for Agricultural Labour and Agricultural Engineering was moved from Imbshausen to Bad Kreuznach, where it used spaces of the Bangert knightly estate. From 1956 until its closure in 1976, it bore the name "Max-Planck-Institut für Landarbeit und Landtechnik". From 1971 to 1987, the discipline of cultivation of the "Fachhochschule Rheinland-Pfalz", Bingen, was located in Bad Kreuznach. Since it moved away to Bingen, Bad Kreuznach has been offering college-like training for aspirant winemakers and agricultural technologists with the "DLR" ("Dienstleistungszentrum Ländlicher Raum"). This two-year "Technikerschule für Weinbau und Oenologie sowie Landbau" is a path within the agricultural economics college. It continues the tradition of the former, well known "Höheren Weinbauschule" ("Higher Winegrowing School") and the "Ingenieurschule für Landbau" ("Engineering School for Cultivation") and fills a gap in the training between Fachhochschule and one-year "Fachschule". The "Agentur für Qualitätssicherung, Evaluation und Selbstständigkeit von Schulen" ("Agency for Quality Assurance, Evaluation and Independence of Schools") and the "Pädagogisches Zentrum Rheinland-Pfalz" ("Rhineland-Palatinate Paedagogical Centre"), the latter of which supports the state's schools in their further paedagogical and didactic development, likewise have their seats in the town, as does the "Staatliche Studienseminar Bad Kreuznach" (a higher teachers' college). The Evangelical Church in the Rhineland maintained from 1960 to 2003 a seminary in Bad Kreuznach to train vicars. Notable people. Honorary citizens. Thus far, 15 persons have been named honorary citizens of the town of Bad Kreuznach. Three of those have been stripped of the honour: Adolf Hitler, Wilhelm Frick and Richard Walther Darré. The twelve remaining honorary citizens are listed with the date of the honour in parentheses.
[ { "math_id": 0, "text": "\\mathfrak{X} " } ]
https://en.wikipedia.org/wiki?curid=1161411
11615
Finite field
Algebraic structure In mathematics, a finite field or Galois field (so-named in honor of Évariste Galois) is a field that contains a finite number of elements. As with any field, a finite field is a set on which the operations of multiplication, addition, subtraction and division are defined and satisfy certain basic rules. The most common examples of finite fields are given by the integers mod "p" when "p" is a prime number. The "order" of a finite field is its number of elements, which is either a prime number or a prime power. For every prime number "p" and every positive integer "k" there are fields of order "p""k", all of which are isomorphic. Finite fields are fundamental in a number of areas of mathematics and computer science, including number theory, algebraic geometry, Galois theory, finite geometry, cryptography and coding theory. Properties. A finite field is a finite set that is a field; this means that multiplication, addition, subtraction and division (excluding division by zero) are defined and satisfy the rules of arithmetic known as the field axioms. The number of elements of a finite field is called its "order" or, sometimes, its "size". A finite field of order "q" exists if and only if "q" is a prime power "p""k" (where "p" is a prime number and "k" is a positive integer). In a field of order "p""k", adding "p" copies of any element always results in zero; that is, the characteristic of the field is "p". If "q" = "p""k", all fields of order "q" are isomorphic (see "Existence and uniqueness" below). Moreover, a field cannot contain two different finite subfields with the same order. One may therefore identify all finite fields with the same order, and they are unambiguously denoted formula_0, F"q" or GF("q"), where the letters GF stand for "Galois field". In a finite field of order "q", the polynomial "Xq" − "X" has all "q" elements of the finite field as roots. The non-zero elements of a finite field form a multiplicative group. This group is cyclic, so all non-zero elements can be expressed as powers of a single element called a primitive element of the field. (In general there will be several primitive elements for a given field.) The simplest examples of finite fields are the fields of prime order: for each prime number "p", the prime field of order "p" may be constructed as the integers modulo "p", formula_1. The elements of the prime field of order "p" may be represented by integers in the range 0, ..., "p" − 1. The sum, the difference and the product are the remainder of the division by "p" of the result of the corresponding integer operation. The multiplicative inverse of an element may be computed by using the extended Euclidean algorithm. Let "F" be a finite field. For any element "x" in "F" and any integer "n", denote by "n" ⋅ "x" the sum of "n" copies of "x". The least positive "n" such that "n" ⋅ 1 = 0 is the characteristic "p" of the field. This allows defining a multiplication ("k", "x") ↦ "k" ⋅ "x" of an element "k" of GF("p") by an element "x" of "F" by choosing an integer representative for "k". This multiplication makes "F" into a GF("p")-vector space. It follows that the number of elements of "F" is "p""n" for some integer "n". The identity formula_2 (sometimes called the freshman's dream) is true in a field of characteristic "p". This follows from the binomial theorem, as each binomial coefficient of the expansion of ("x" + "y")"p", except the first and the last, is a multiple of "p". 
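To make the prime-field arithmetic above concrete, here is a minimal Python sketch (illustrative only; the helper names are my own, not a standard API). It implements the modular operations, computes inverses with the extended Euclidean algorithm as just described, and checks the freshman's dream for "p" = 7:

# Minimal sketch of arithmetic in the prime field GF(p): elements are the
# integers 0, ..., p - 1, and every operation reduces its result modulo p.

def gf_add(a, b, p):
    return (a + b) % p

def gf_mul(a, b, p):
    return (a * b) % p

def gf_inv(a, p):
    # Extended Euclidean algorithm: find x with a*x + p*y = gcd(a, p) = 1,
    # so that a*x = 1 (mod p).
    if a % p == 0:
        raise ZeroDivisionError("0 has no multiplicative inverse")
    old_r, r = a % p, p
    old_x, x = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
    return old_x % p

p = 7
assert gf_mul(3, gf_inv(3, p), p) == 1
# Freshman's dream: (x + y)^p = x^p + y^p in characteristic p.
x, y = 2, 5
assert pow(x + y, p, p) == (pow(x, p, p) + pow(y, p, p)) % p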
By Fermat's little theorem, if "p" is a prime number and "x" is in the field GF("p") then "x""p" = "x". This implies the equality formula_3 for polynomials over GF("p"). More generally, every element in GF("p""n") satisfies the polynomial equation "x""p""n" − "x" = 0. Any finite field extension of a finite field is separable and simple. That is, if "E" is a finite field and "F" is a subfield of "E", then "E" is obtained from "F" by adjoining a single element whose minimal polynomial is separable. To use a piece of jargon, finite fields are perfect. A more general algebraic structure that satisfies all the other axioms of a field, but whose multiplication is not required to be commutative, is called a division ring (or sometimes "skew field"). By Wedderburn's little theorem, any finite division ring is commutative, and hence is a finite field. Existence and uniqueness. Let "q" = "pn" be a prime power, and "F" be the splitting field of the polynomial formula_4 over the prime field GF("p"). This means that "F" is a finite field of lowest order, in which "P" has "q" distinct roots (the formal derivative of "P" is "P"′ = −1, implying that gcd("P", "P"′) = 1, which in general implies that the splitting field is a separable extension of the original). The above identity shows that the sum and the product of two roots of "P" are roots of "P", as well as the multiplicative inverse of a root of "P". In other words, the roots of "P" form a field of order "q", which is equal to "F" by the minimality of the splitting field. The uniqueness up to isomorphism of splitting fields implies thus that all fields of order "q" are isomorphic. Also, if a field "F" has a field of order "q" = "p""k" as a subfield, its elements are the "q" roots of "X""q" − "X", and "F" cannot contain another subfield of order "q". In summary, we have the following classification theorem, first proved in 1893 by E. H. Moore: "The order of a finite field is a prime power. For every prime power" "q" "there are fields of order" "q", "and they are all isomorphic. In these fields, every element satisfies" formula_5 "and the polynomial" "Xq" − "X" "factors as" formula_6 It follows that GF("pn") contains a subfield isomorphic to GF("p""m") if and only if "m" is a divisor of "n"; in that case, this subfield is unique. In fact, the polynomial "X""p""m" − "X" divides "X""p""n" − "X" if and only if "m" is a divisor of "n". Explicit construction. Non-prime fields. Given a prime power "q" = "p""n" with "p" prime and "n" > 1, the field GF("q") may be explicitly constructed in the following way. One first chooses an irreducible polynomial "P" in GF("p")["X"] of degree "n" (such an irreducible polynomial always exists). Then the quotient ring formula_7 of the polynomial ring GF("p")["X"] by the ideal generated by "P" is a field of order "q". More explicitly, the elements of GF("q") are the polynomials over GF("p") whose degree is strictly less than "n". The addition and the subtraction are those of polynomials over GF("p"). The product of two elements is the remainder of the Euclidean division by "P" of the product in GF("p")["X"]. The multiplicative inverse of a non-zero element may be computed with the extended Euclidean algorithm. However, with this representation, elements of GF("q") may be difficult to distinguish from the corresponding polynomials. Therefore, it is common to give a name, commonly "α", to the element of GF("q") that corresponds to the polynomial "X". 
So, the elements of GF("q") become polynomials in "α", where "P"("α") = 0, and, when one encounters a polynomial in "α" of degree greater than or equal to "n" (for example after a multiplication), one knows that one has to use the relation "P"("α") = 0 to reduce its degree (this is what Euclidean division does). Except in the construction of GF(4), there are several possible choices for "P", which produce isomorphic results. To simplify the Euclidean division, one commonly chooses for "P" a polynomial of the form formula_8 which makes the needed Euclidean divisions very efficient. However, for some fields, typically in characteristic 2, irreducible polynomials of the form "Xn" + "aX" + "b" may not exist. In characteristic 2, if the polynomial "X""n" + "X" + 1 is reducible, it is recommended to choose "X""n" + "X""k" + 1 with the lowest possible "k" that makes the polynomial irreducible. If all these trinomials are reducible, one chooses "pentanomials" "X""n" + "X""a" + "X""b" + "X""c" + 1, as polynomials of degree greater than 1, with an even number of terms, are never irreducible in characteristic 2, having 1 as a root. A possible choice for such a polynomial is given by Conway polynomials. They ensure a certain compatibility between the representation of a field and the representations of its subfields. In the next sections, we will show how the general construction method outlined above works for small finite fields. Field with four elements. The smallest non-prime field is the field with four elements, which is commonly denoted GF(4) or formula_9 It consists of the four elements 0, 1, "α", 1 + "α" such that "α"2 = 1 + "α", 1 ⋅ "α" = "α" ⋅ 1 = "α", "x" + "x" = 0, and "x" ⋅ 0 = 0 ⋅ "x" = 0, for every "x" ∈ GF(4), the other operation results being easily deduced from the distributive law. See below for the complete operation tables. This may be deduced as follows from the results of the preceding section. Over GF(2), there is only one irreducible polynomial of degree 2: formula_10 Therefore, for GF(4) the construction of the preceding section must involve this polynomial, and formula_11 Let "α" denote a root of this polynomial in GF(4). This implies that <templatestyles src="Block indent/styles.css"/>"α"2 = 1 + "α", and that "α" and 1 + "α" are the elements of GF(4) that are not in GF(2). The tables of the operations in GF(4) result from this, and are as follows: A table for subtraction is not given, because subtraction is identical to addition, as is the case for every field of characteristic 2. In the third table, for the division of "x" by "y", the values of "x" must be read in the left column, and the values of "y" in the top row. (Because 0 ⋅ "z" = 0 for every "z" in every ring, division by 0 must remain undefined.) From the tables, it can be seen that the additive structure of GF(4) is isomorphic to the Klein four-group, while the non-zero multiplicative structure is isomorphic to the group Z3. The map formula_12 is the non-trivial field automorphism, called the Frobenius automorphism, which sends "α" into the second root 1 + "α" of the above-mentioned irreducible polynomial "X"2 + "X" + 1.
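Since the operation tables of GF(4) are mechanical to produce, the following minimal Python sketch (my own illustrative code) generates the multiplication table by computing with pairs ("a", "b") standing for "a" + "b""α" and reducing with "α"2 = 1 + "α":

# Sketch: GF(4) as pairs (a, b) standing for a + b*alpha over GF(2),
# reduced with alpha^2 = 1 + alpha (from the irreducible X^2 + X + 1).

def add(u, v):
    return (u[0] ^ v[0], u[1] ^ v[1])          # coefficientwise mod 2

def mul(u, v):
    a, b = u
    c, d = v
    # (a + b*alpha)(c + d*alpha) = ac + (ad + bc)*alpha + bd*alpha^2,
    # and alpha^2 = 1 + alpha folds the last term back in.
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

names = {(0, 0): "0", (1, 0): "1", (0, 1): "α", (1, 1): "1+α"}
for u in names:
    for v in names:
        print(names[u], "*", names[v], "=", names[mul(u, v)])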
GF("p"2) for an odd prime "p". For applying the above general construction of finite fields in the case of GF("p"2), one has to find an irreducible polynomial of degree 2. For "p" = 2, this has been done in the preceding section. If "p" is an odd prime, there are always irreducible polynomials of the form "X"2 − "r", with "r" in GF("p"). More precisely, the polynomial "X"2 − "r" is irreducible over GF("p") if and only if "r" is a quadratic non-residue modulo "p" (this is almost the definition of a quadratic non-residue). There are ("p" − 1)/2 quadratic non-residues modulo "p". For example, 2 is a quadratic non-residue for "p" = 3, 5, 11, 13, ..., and 3 is a quadratic non-residue for "p" = 5, 7, 17, ... If "p" ≡ 3 mod 4, that is "p" = 3, 7, 11, 19, ..., one may choose −1 ≡ "p" − 1 as a quadratic non-residue, which allows us to have a very simple irreducible polynomial "X"2 + 1. Having chosen a quadratic non-residue "r", let "α" be a symbolic square root of "r", that is, a symbol that has the property "α"2 = "r", in the same way that the complex number "i" is a symbolic square root of −1. Then, the elements of GF("p"2) are all the linear expressions formula_13 with "a" and "b" in GF("p"). The operations on GF("p"2) are defined as follows (the operations between elements of GF("p") represented by Latin letters are the operations in GF("p")): formula_14 GF(8) and GF(27). The polynomial formula_15 is irreducible over GF(2) and GF(3), that is, it is irreducible modulo 2 and 3 (to show this, it suffices to show that it has no root in GF(2) or in GF(3)). It follows that the elements of GF(8) and GF(27) may be represented by expressions formula_16 where "a", "b", "c" are elements of GF(2) or GF(3) (respectively), and "α" is a symbol such that formula_17 The addition, additive inverse and multiplication on GF(8) and GF(27) may thus be defined as follows; in the following formulas, the operations between elements of GF(2) or GF(3), represented by Latin letters, are the operations in GF(2) or GF(3), respectively: formula_18 GF(16). The polynomial formula_19 is irreducible over GF(2), that is, it is irreducible modulo 2. It follows that the elements of GF(16) may be represented by expressions formula_20 where "a", "b", "c", "d" are either 0 or 1 (elements of GF(2)), and "α" is a symbol such that formula_21 (that is, "α" is defined as a root of the given irreducible polynomial). As the characteristic of GF(2) is 2, each element is its additive inverse in GF(16). The addition and multiplication on GF(16) may be defined as follows; in the following formulas, the operations between elements of GF(2), represented by Latin letters, are the operations in GF(2). formula_22 The field GF(16) has eight primitive elements (the elements that have all nonzero elements of GF(16) as integer powers). These elements are the four roots of "X"4 + "X" + 1 and their multiplicative inverses. In particular, "α" is a primitive element, and the primitive elements are "α""m" with "m" less than and coprime with 15 (that is, 1, 2, 4, 7, 8, 11, 13, 14). Multiplicative structure. The set of non-zero elements in GF("q") is an abelian group under the multiplication, of order "q" – 1. By Lagrange's theorem, there exists a divisor "k" of "q" – 1 such that "xk" = 1 for every non-zero "x" in GF("q"). As the equation "xk" = 1 has at most "k" solutions in any field, "q" – 1 is the lowest possible value for "k". The structure theorem of finite abelian groups implies that this multiplicative group is cyclic, that is, all non-zero elements are powers of a single element. In summary: <templatestyles src="Block indent/styles.css"/>"The multiplicative group of the non-zero elements in" GF("q") "is cyclic, i.e., there exists an element" "a", "such that the" "q" – 1 "non-zero elements of" GF("q") "are" "a", "a"2, ..., "a""q"−2, "a""q"−1 = 1. Such an element "a" is called a primitive element of GF("q").
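The cyclic structure is easy to verify numerically for a small prime field; the sketch below (illustrative code, with my own helper names) lists the primitive elements of GF(13) and checks that their number is φ(12) = 4:

# Sketch: the nonzero elements of GF(13) form a cyclic group of order 12.
from math import gcd

p = 13

def order(x):
    # multiplicative order of x modulo p
    n, y = 1, x % p
    while y != 1:
        y = (y * x) % p
        n += 1
    return n

generators = [x for x in range(1, p) if order(x) == p - 1]
print(generators)                      # [2, 6, 7, 11]
# Their number is Euler's totient of q - 1 = 12, i.e. phi(12) = 4.
assert len(generators) == sum(1 for k in range(1, p) if gcd(k, p - 1) == 1)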
Unless "q" = 2 or 3, the primitive element is not unique. The number of primitive elements is "φ"("q" − 1) where "φ" is Euler's totient function. The result above implies that "xq" = "x" for every "x" in GF("q"). The particular case where "q" is prime is Fermat's little theorem. Discrete logarithm. If "a" is a primitive element in GF("q"), then for any non-zero element "x" in "F", there is a unique integer "n" with 0 ≤ "n" ≤ "q" − 2 such that <templatestyles src="Block indent/styles.css"/>"x" = "an". This integer "n" is called the discrete logarithm of "x" to the base "a". While "an" can be computed very quickly, for example using exponentiation by squaring, there is no known efficient algorithm for computing the inverse operation, the discrete logarithm. This has been used in various cryptographic protocols; see "Discrete logarithm" for details. When the nonzero elements of GF("q") are represented by their discrete logarithms, multiplication and division are easy, as they reduce to addition and subtraction modulo "q" – 1. However, addition amounts to computing the discrete logarithm of "a""m" + "a""n". The identity <templatestyles src="Block indent/styles.css"/>"a""m" + "a""n" = "a""n"("a""m"−"n" + 1) allows one to solve this problem by constructing the table of the discrete logarithms of "a""n" + 1, called Zech's logarithms, for "n" = 0, ..., "q" − 2 (it is convenient to define the discrete logarithm of zero as being −∞). Zech's logarithms are useful for large computations, such as linear algebra over medium-sized fields, that is, fields that are sufficiently large for making natural algorithms inefficient, but not too large, as one has to pre-compute a table of the same size as the order of the field. Roots of unity. Every nonzero element of a finite field is a root of unity, as "x""q"−1 = 1 for every nonzero element of GF("q"). If "n" is a positive integer, an "n"th primitive root of unity is a solution of the equation "xn" = 1 that is not a solution of the equation "xm" = 1 for any positive integer "m" < "n". If "a" is an "n"th primitive root of unity in a field "F", then "F" contains all "n" of the "n"th roots of unity, which are 1, "a", "a"2, ..., "a""n"−1. The field GF("q") contains an "n"th primitive root of unity if and only if "n" is a divisor of "q" − 1; if "n" is a divisor of "q" − 1, then the number of primitive "n"th roots of unity in GF("q") is "φ"("n") (Euler's totient function). The number of "n"th roots of unity in GF("q") is gcd("n", "q" − 1). In a field of characteristic "p", every ("np")th root of unity is also an "n"th root of unity. It follows that primitive ("np")th roots of unity never exist in a field of characteristic "p". On the other hand, if "n" is coprime to "p", the roots of the "n"th cyclotomic polynomial are distinct in every field of characteristic "p", as this polynomial is a divisor of "X""n" − 1, whose discriminant "n""n" is nonzero modulo "p". It follows that the "n"th cyclotomic polynomial factors over GF("p") into distinct irreducible polynomials that all have the same degree, say "d", and that GF("p""d") is the smallest field of characteristic "p" that contains the "n"th primitive roots of unity.
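Returning to the discrete-logarithm representation: the following sketch (an illustration, not a library API) builds the discrete-log and Zech-logarithm tables for GF(16), representing elements as 4-bit integers with "α" a root of "X"4 + "X" + 1, and checks that Zech-log addition agrees with the usual XOR addition:

# Sketch: discrete logs and Zech logarithms in GF(16) = GF(2)[X]/(X^4+X+1).
# Elements are 4-bit integers; the bits are the coefficients of 1, X, X^2, X^3.

def times_alpha(x):
    x <<= 1
    if x & 0b10000:          # degree reached 4: reduce via X^4 = X + 1
        x ^= 0b10011
    return x

# Table of powers alpha_pow[n] = alpha^n, and the discrete log of each element.
alpha_pow, log = [1], {1: 0}
for n in range(1, 15):
    alpha_pow.append(times_alpha(alpha_pow[-1]))
    log[alpha_pow[-1]] = n
assert len(log) == 15        # alpha is primitive: its powers hit every nonzero element

# Zech logarithms: zech[n] = log(alpha^n + 1); adding 1 is XOR with 1.
zech = {n: log.get(alpha_pow[n] ^ 1) for n in range(15)}   # None encodes -infinity

def mul(x, y):               # multiplication via ordinary logs
    return 0 if 0 in (x, y) else alpha_pow[(log[x] + log[y]) % 15]

def add(x, y):               # a^m + a^n = a^n (a^(m-n) + 1), via Zech logs
    if x == 0: return y
    if y == 0: return x
    m, n = log[x], log[y]
    z = zech[(m - n) % 15]
    return 0 if z is None else alpha_pow[(n + z) % 15]

assert all(add(x, y) == x ^ y for x in range(16) for y in range(16))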
Example: GF(64). The field GF(64) has several interesting properties that smaller fields do not share: it has two subfields such that neither is contained in the other; not all generators (elements with minimal polynomial of degree 6 over GF(2)) are primitive elements; and the primitive elements are not all conjugate under the Galois group. The order of this field being 26, and the divisors of 6 being 1, 2, 3, 6, the subfields of GF(64) are GF(2), GF(22) = GF(4), GF(23) = GF(8), and GF(64) itself. As 2 and 3 are coprime, the intersection of GF(4) and GF(8) in GF(64) is the prime field GF(2). The union of GF(4) and GF(8) has thus 10 elements. The remaining 54 elements of GF(64) generate GF(64) in the sense that no other subfield contains any of them. It follows that they are roots of irreducible polynomials of degree 6 over GF(2). This implies that, over GF(2), there are exactly 9 = 54/6 irreducible monic polynomials of degree 6. This may be verified by factoring "X"64 − "X" over GF(2). The elements of GF(64) are primitive "n"th roots of unity for some "n" dividing 63. As the 3rd and the 7th roots of unity belong to GF(4) and GF(8), respectively, the 54 generators are primitive "n"th roots of unity for some "n" in {9, 21, 63}. Euler's totient function shows that there are 6 primitive 9th roots of unity, 12 primitive 21st roots of unity, and 36 primitive 63rd roots of unity. Summing these numbers, one finds again 54 elements. By factoring the cyclotomic polynomials over GF(2), one finds the following: the ninth cyclotomic polynomial is the irreducible polynomial formula_23 the 21st cyclotomic polynomial factors into two irreducible polynomials of degree 6, formula_24 The 63rd cyclotomic polynomial factors into six irreducible polynomials of degree 6, formula_25 This shows that the best choice to construct GF(64) is to define it as GF(2)["X"] / ("X"6 + "X" + 1). In fact, the generator "α" (the class of "X") is a primitive element, and this polynomial is the irreducible polynomial that produces the easiest Euclidean division. Frobenius automorphism and Galois theory. In this section, "p" is a prime number, and "q" = "p""n" is a power of "p". In GF("q"), the identity ("x" + "y")"p" = "xp" + "yp" implies that the map formula_26 is a GF("p")-linear endomorphism and a field automorphism of GF("q"), which fixes every element of the subfield GF("p"). It is called the Frobenius automorphism, after Ferdinand Georg Frobenius. Denoting by "φk" the composition of "φ" with itself "k" times, we have formula_27 It has been shown in the preceding section that "φ""n" is the identity. For 0 < "k" < "n", the automorphism "φ""k" is not the identity, as, otherwise, the polynomial formula_28 would have more than "pk" roots. There are no other GF("p")-automorphisms of GF("q"). In other words, GF("pn") has exactly "n" GF("p")-automorphisms, which are formula_29 In terms of Galois theory, this means that GF("p""n") is a Galois extension of GF("p"), which has a cyclic Galois group. The fact that the Frobenius map is surjective implies that every finite field is perfect. Polynomial factorization. If "F" is a finite field, a non-constant monic polynomial with coefficients in "F" is irreducible over "F", if it is not the product of two non-constant monic polynomials, with coefficients in "F". As every polynomial ring over a field is a unique factorization domain, every monic polynomial over a finite field may be factored in a unique way (up to the order of the factors) into a product of irreducible monic polynomials. There are efficient algorithms for testing polynomial irreducibility and factoring polynomials over finite fields. They are a key step for factoring polynomials over the integers or the rational numbers. 
At least for this reason, every computer algebra system has functions for factoring polynomials over finite fields, or, at least, over finite prime fields. Irreducible polynomials of a given degree. The polynomial formula_30 factors into linear factors over a field of order "q". More precisely, this polynomial is the product of all monic polynomials of degree one over a field of order "q". This implies that, if "q" = "pn", then "Xq" − "X" is the product of all monic irreducible polynomials over GF("p") whose degree divides "n". In fact, if "P" is an irreducible factor over GF("p") of "Xq" − "X", its degree divides "n", as its splitting field is contained in GF("p""n"). Conversely, if "P" is an irreducible monic polynomial over GF("p") of degree "d" dividing "n", it defines a field extension of degree "d", which is contained in GF("p""n"), and all roots of "P" belong to GF("p""n"), and are roots of "Xq" − "X"; thus "P" divides "Xq" − "X". As "Xq" − "X" does not have any multiple factor, it is thus the product of all the irreducible monic polynomials that divide it. This property is used to compute the product of the irreducible factors of each degree of polynomials over GF("p"); see "Distinct degree factorization". Number of monic irreducible polynomials of a given degree over a finite field. The number "N"("q", "n") of monic irreducible polynomials of degree "n" over GF("q") is given by formula_31 where "μ" is the Möbius function. This formula is an immediate consequence of the property of "X""q" − "X" above and the Möbius inversion formula. By the above formula, the number of irreducible (not necessarily monic) polynomials of degree "n" over GF("q") is ("q" − 1)"N"("q", "n"). The exact formula implies the inequality formula_32 this is sharp if and only if "n" is a power of some prime. For every "q" and every "n", the right-hand side is positive, so there is at least one irreducible polynomial of degree "n" over GF("q").
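The Möbius-function formula is easy to cross-check for small cases. The sketch below (illustrative code; the brute-force test is naive trial division, not an efficient algorithm) computes "N"(2, "n") both ways and reproduces, for "n" = 6, the count of 9 irreducible polynomials found in the GF(64) example:

# Sketch: count monic irreducible polynomials of degree n over GF(2),
# once via the Moebius formula and once by brute-force trial division.
from itertools import product

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0           # squared prime factor
            result = -result
        d += 1
    return -result if n > 1 else result

def N(q, n):
    return sum(mobius(d) * q ** (n // d) for d in divisors(n)) // n

def polys(n):                      # all monic polynomials of degree exactly n,
    return [c + (1,) for c in product((0, 1), repeat=n)]   # low-to-high coefficients

def divides(f, g):                 # does f divide g over GF(2)?
    g = list(g)
    while len(g) >= len(f):
        if g[-1]:                  # eliminate the leading term of g
            for i, c in enumerate(f):
                g[len(g) - len(f) + i] ^= c
        g.pop()
    return not any(g)              # zero remainder

def irreducible(g):
    n = len(g) - 1                 # reducible iff some factor of degree <= n/2 divides it
    return not any(divides(f, g) for d in range(1, n // 2 + 1) for f in polys(d))

for n in range(1, 7):
    assert N(2, n) == sum(irreducible(g) for g in polys(n))
    print(n, N(2, n))              # 2, 1, 2, 3, 6, 9  (9 for n = 6, as in the text)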
Applications. In cryptography, the difficulty of the discrete logarithm problem in finite fields or in elliptic curves is the basis of several widely used protocols, such as the Diffie–Hellman protocol. For example, in 2014, a secure internet connection to Wikipedia involved the elliptic curve Diffie–Hellman protocol (ECDHE) over a large finite field. In coding theory, many codes are constructed as subspaces of vector spaces over finite fields. Finite fields are used by many error correction codes, such as the Reed–Solomon error correction code or the BCH code. The finite field almost always has characteristic 2, since computer data is stored in binary. For example, a byte of data can be interpreted as an element of GF(28). One exception is the PDF417 bar code, which uses GF(929). Some CPUs have special instructions that can be useful for finite fields of characteristic 2, generally variations of the carry-less product. Finite fields are widely used in number theory, as many problems over the integers may be solved by reducing them modulo one or several prime numbers. For example, the fastest known algorithms for polynomial factorization and linear algebra over the field of rational numbers proceed by reduction modulo one or several primes, and then reconstruction of the solution by using the Chinese remainder theorem, Hensel lifting or the LLL algorithm. Similarly, many theoretical problems in number theory can be solved by considering their reductions modulo some or all prime numbers. See, for example, "Hasse principle". Many recent developments of algebraic geometry were motivated by the need to enlarge the power of these modular methods. Wiles' proof of Fermat's Last Theorem is an example of a deep result involving many mathematical tools, including finite fields. The Weil conjectures concern the number of points on algebraic varieties over finite fields and the theory has many applications including exponential and character sum estimates. Finite fields have widespread application in combinatorics, two well-known examples being the definition of Paley graphs and the related construction for Hadamard matrices. In arithmetic combinatorics finite fields and finite field models are used extensively, such as in Szemerédi's theorem on arithmetic progressions. Extensions. Wedderburn's little theorem. A division ring is a generalization of a field. Division rings are not assumed to be commutative. There are no non-commutative finite division rings: Wedderburn's little theorem states that all finite division rings are commutative, and hence are finite fields. This result holds even if we relax the associativity axiom to alternativity, that is, all finite alternative division rings are finite fields, by the Artin–Zorn theorem. Algebraic closure. A finite field "F" is not algebraically closed: the polynomial formula_33 has no roots in "F", since "f" ("α") = 1 for all "α" in "F". Given a prime number p, let formula_34 be an algebraic closure of formula_35 It is not only unique up to isomorphism, as are all algebraic closures, but, contrary to the general case, all of its subfields are fixed by all of its automorphisms, and it is also the algebraic closure of all finite fields of the same characteristic p. This property results mainly from the fact that the elements of formula_36 are exactly the roots of formula_37 and this defines an inclusion formula_38 for formula_39 These inclusions allow writing informally formula_40 The formal validation of this notation results from the fact that the above field inclusions form a directed set of fields; its direct limit is formula_41 which may thus be considered as a "directed union". Primitive elements in the algebraic closure. Given a primitive element formula_42 of formula_43 formula_44 is a primitive element of formula_45 For explicit computations, it may be useful to have a coherent choice of the primitive elements for all finite fields; that is, to choose the primitive element formula_46 of formula_47 in order that, whenever formula_48 one has formula_49 where formula_50 is the primitive element already chosen for formula_51 Such a construction may be obtained by Conway polynomials. Quasi-algebraic closure. Although finite fields are not algebraically closed, they are quasi-algebraically closed, which means that every homogeneous polynomial over a finite field has a non-trivial zero whose components are in the field if the number of its variables is more than its degree. This was a conjecture of Artin and Dickson, proved by Chevalley (see "Chevalley–Warning theorem"). Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "\\mathbb{F}_{q}" }, { "math_id": 1, "text": "\\mathbb{Z}/p\\mathbb{Z}" }, { "math_id": 2, "text": "(x+y)^p=x^p+y^p" }, { "math_id": 3, "text": "X^p-X=\\prod_{a\\in \\mathrm{GF}(p)} (X-a)" }, { "math_id": 4, "text": "P = X^q-X" }, { "math_id": 5, "text": "x^q=x," }, { "math_id": 6, "text": "X^q-X= \\prod_{a\\in F} (X-a)." }, { "math_id": 7, "text": "\\mathrm{GF}(q) = \\mathrm{GF}(p)[X]/(P)" }, { "math_id": 8, "text": "X^n + aX + b," }, { "math_id": 9, "text": "\\mathbb{F}_4." }, { "math_id": 10, "text": "X^2+X+1" }, { "math_id": 11, "text": "\\mathrm{GF}(4) = \\mathrm{GF}(2)[X]/(X^2+X+1)." }, { "math_id": 12, "text": " \\varphi:x \\mapsto x^2" }, { "math_id": 13, "text": "a+b\\alpha," }, { "math_id": 14, "text": "\\begin{align}\n-(a+b\\alpha)&=-a+(-b)\\alpha\\\\\n(a+b\\alpha)+(c+d\\alpha)&=(a+c)+(b+d)\\alpha\\\\\n(a+b\\alpha)(c+d\\alpha)&=(ac + rbd)+ (ad+bc)\\alpha\\\\\n(a+b\\alpha)^{-1}&=a(a^2-rb^2)^{-1}+(-b)(a^2-rb^2)^{-1}\\alpha\n\\end{align}" }, { "math_id": 15, "text": "X^3-X-1" }, { "math_id": 16, "text": "a+b\\alpha+c\\alpha^2," }, { "math_id": 17, "text": "\\alpha^3=\\alpha+1." }, { "math_id": 18, "text": "\n\\begin{align}\n-(a+b\\alpha+c\\alpha^2)&=-a+(-b)\\alpha+(-c)\\alpha^2 \\qquad\\text{(for } \\mathrm{GF}(8), \\text{this operation is the identity)}\\\\\n(a+b\\alpha+c\\alpha^2)+(d+e\\alpha+f\\alpha^2)&=(a+d)+(b+e)\\alpha+(c+f)\\alpha^2\\\\\n(a+b\\alpha+c\\alpha^2)(d+e\\alpha+f\\alpha^2)&=(ad + bf+ce)+ (ae+bd+bf+ce+cf)\\alpha+(af+be+cd+cf)\\alpha^2\n\\end{align}\n" }, { "math_id": 19, "text": "X^4+X+1" }, { "math_id": 20, "text": "a+b\\alpha+c\\alpha^2+d\\alpha^3," }, { "math_id": 21, "text": "\\alpha^4=\\alpha+1" }, { "math_id": 22, "text": "\n\\begin{align}\n(a+b\\alpha+c\\alpha^2+d\\alpha^3)+(e+f\\alpha+g\\alpha^2+h\\alpha^3)&=(a+e)+(b+f)\\alpha+(c+g)\\alpha^2+(d+h)\\alpha^3\\\\\n(a+b\\alpha+c\\alpha^2+d\\alpha^3)(e+f\\alpha+g\\alpha^2+h\\alpha^3)&=(ae+bh+cg+df)\n+(af+be+bh+cg+df +ch+dg)\\alpha\\;+\\\\\n&\\quad\\;(ag+bf+ce +ch+dg+dh)\\alpha^2\n+(ah+bg+cf+de +dh)\\alpha^3\n\\end{align}\n" }, { "math_id": 23, "text": "X^6+X^3+1," }, { "math_id": 24, "text": "(X^6+X^4+X^2+X+1)(X^6+X^5+X^4+X^2+1)." }, { "math_id": 25, "text": "(X^6+X^4+X^3+X+1)(X^6+X+1)(X^6+X^5+1)(X^6+X^5+X^3+X^2+1)(X^6+X^5+X^2+X+1)(X^6+X^5+X^4+X+1)." }, { "math_id": 26, "text": " \\varphi:x \\mapsto x^p" }, { "math_id": 27, "text": " \\varphi^k:x \\mapsto x^{p^k}." }, { "math_id": 28, "text": "X^{p^k}-X" }, { "math_id": 29, "text": "\\mathrm{Id}=\\varphi^0, \\varphi, \\varphi^2, \\ldots, \\varphi^{n-1}." }, { "math_id": 30, "text": "X^q-X" }, { "math_id": 31, "text": "N(q,n)=\\frac{1}{n}\\sum_{d\\mid n} \\mu(d)q^{n/d}," }, { "math_id": 32, "text": "N(q,n)\\geq\\frac{1}{n} \\left(q^n-\\sum_{\\ell\\mid n, \\ \\ell \\text{ prime}} q^{n/\\ell}\\right);" }, { "math_id": 33, "text": "f(T) = 1+\\prod_{\\alpha \\in F} (T-\\alpha)," }, { "math_id": 34, "text": "\\overline{\\mathbb{F}}_p" }, { "math_id": 35, "text": "\\mathbb{F}_p." }, { "math_id": 36, "text": "\\mathbb F_{p^n}" }, { "math_id": 37, "text": "x^{p^n}-x," }, { "math_id": 38, "text": "\\mathbb \\mathbb F_{p^n}\\subset \\mathbb F_{p^{nm}} " }, { "math_id": 39, "text": "m>1." }, { "math_id": 40, "text": "\\overline{\\mathbb{F}}_p = \\bigcup_{n \\ge 1} \\mathbb{F}_{p^n}." }, { "math_id": 41, "text": "\\overline{\\mathbb{F}}_p," }, { "math_id": 42, "text": "g_{mn}" }, { "math_id": 43, "text": "\\mathbb{F}_{q^{mn}}," }, { "math_id": 44, "text": "g_{mn}^m" }, { "math_id": 45, "text": "\\mathbb{F}_{q^{n}}." 
}, { "math_id": 46, "text": "g_{n}" }, { "math_id": 47, "text": "\\mathbb{F}_{q^{n}}" }, { "math_id": 48, "text": "n=mh," }, { "math_id": 49, "text": "g_{m}=g_n^h," }, { "math_id": 50, "text": "g_{m}" }, { "math_id": 51, "text": "\\mathbb{F}_{q^{m}}." } ]
https://en.wikipedia.org/wiki?curid=11615
11617
Feynman diagram
Pictorial representation of the behavior of subatomic particles In theoretical physics, a Feynman diagram is a pictorial representation of the mathematical expressions describing the behavior and interaction of subatomic particles. The scheme is named after American physicist Richard Feynman, who introduced the diagrams in 1948. The interaction of subatomic particles can be complex and difficult to understand; Feynman diagrams give a simple visualization of what would otherwise be an arcane and abstract formula. According to David Kaiser, "Since the middle of the 20th century, theoretical physicists have increasingly turned to this tool to help them undertake critical calculations. Feynman diagrams have revolutionized nearly every aspect of theoretical physics." While the diagrams are applied primarily to quantum field theory, they can also be used in other areas of physics, such as solid-state theory. Frank Wilczek wrote that the calculations that won him the 2004 Nobel Prize in Physics "would have been literally unthinkable without Feynman diagrams, as would [Wilczek's] calculations that established a route to production and observation of the Higgs particle." Feynman used Ernst Stueckelberg's interpretation of the positron as if it were an electron moving backward in time. Thus, antiparticles are represented as moving backward along the time axis in Feynman diagrams. The calculation of probability amplitudes in theoretical particle physics requires the use of rather large and complicated integrals over a large number of variables. Feynman diagrams can represent these integrals graphically. A Feynman diagram is a graphical representation of a perturbative contribution to the transition amplitude or correlation function of a quantum mechanical or statistical field theory. Within the canonical formulation of quantum field theory, a Feynman diagram represents a term in the Wick's expansion of the perturbative S-matrix. Alternatively, the path integral formulation of quantum field theory represents the transition amplitude as a weighted sum of all possible histories of the system from the initial to the final state, in terms of either particles or fields. The transition amplitude is then given as the matrix element of the S-matrix between the initial and final states of the quantum system. <templatestyles src="Template:TOC limit/styles.css" /> Motivation and history. When calculating scattering cross-sections in particle physics, the interaction between particles can be described by starting from a free field that describes the incoming and outgoing particles, and including an interaction Hamiltonian to describe how the particles deflect one another. The amplitude for scattering is the sum of each possible interaction history over all possible intermediate particle states. The number of times the interaction Hamiltonian acts is the order of the perturbation expansion, and the time-dependent perturbation theory for fields is known as the Dyson series. When the intermediate states at intermediate times are energy eigenstates (collections of particles with a definite momentum) the series is called old-fashioned perturbation theory (or time-dependent/time-ordered perturbation theory). The Dyson series can be alternatively rewritten as a sum over Feynman diagrams, where at each vertex both the energy and momentum are conserved, but where the length of the energy-momentum four-vector is not necessarily equal to the mass, i.e. the intermediate particles are so-called off-shell. 
The Feynman diagrams are much easier to keep track of than "old-fashioned" terms, because the old-fashioned way treats the particle and antiparticle contributions as separate. Each Feynman diagram is the sum of exponentially many old-fashioned terms, because each internal line can separately represent either a particle or an antiparticle. In a non-relativistic theory, there are no antiparticles and there is no doubling, so each Feynman diagram includes only one term. Feynman gave a prescription for calculating the amplitude (the Feynman rules, below) for any given diagram from a field theory Lagrangian. Each internal line corresponds to a factor of the virtual particle's propagator; each vertex where lines meet gives a factor derived from an interaction term in the Lagrangian, and incoming and outgoing lines carry an energy, momentum, and spin. In addition to their value as a mathematical tool, Feynman diagrams provide deep physical insight into the nature of particle interactions. Particles interact in every way available; in fact, intermediate virtual particles are allowed to propagate faster than light. The probability of each final state is then obtained by summing over all such possibilities. This is closely tied to the functional integral formulation of quantum mechanics, also invented by Feynman—see path integral formulation. The naïve application of such calculations often produces diagrams whose amplitudes are infinite, because the short-distance particle interactions require a careful limiting procedure, to include particle self-interactions. The technique of renormalization, suggested by Ernst Stueckelberg and Hans Bethe and implemented by Dyson, Feynman, Schwinger, and Tomonaga compensates for this effect and eliminates the troublesome infinities. After renormalization, calculations using Feynman diagrams match experimental results with very high accuracy. Feynman diagram and path integral methods are also used in statistical mechanics and can even be applied to classical mechanics. Alternative names. Murray Gell-Mann always referred to Feynman diagrams as Stueckelberg diagrams, after a Swiss physicist, Ernst Stueckelberg, who devised a similar notation many years earlier. Stueckelberg was motivated by the need for a manifestly covariant formalism for quantum field theory, but did not provide as automated a way to handle symmetry factors and loops, although he was first to find the correct physical interpretation in terms of forward and backward in time particle paths, all without the path-integral. Historically, as a book-keeping device of covariant perturbation theory, the graphs were called Feynman–Dyson diagrams or Dyson graphs, because the path integral was unfamiliar when they were introduced, and Freeman Dyson's derivation from old-fashioned perturbation theory borrowed from the perturbative expansions in statistical mechanics was easier to follow for physicists trained in earlier methods. Feynman had to lobby hard for the diagrams, which confused the establishment physicists trained in equations and graphs. Representation of physical reality. In their presentations of fundamental interactions, written from the particle physics perspective, Gerard 't Hooft and Martinus Veltman gave good arguments for taking the original, non-regularized Feynman diagrams as the most succinct representation of our present knowledge about the physics of quantum scattering of fundamental particles. 
Their motivations are consistent with the convictions of James Daniel Bjorken and Sidney Drell: The Feynman graphs and rules of calculation summarize quantum field theory in a form in close contact with the experimental numbers one wants to understand. Although the statement of the theory in terms of graphs may imply perturbation theory, use of graphical methods in the many-body problem shows that this formalism is flexible enough to deal with phenomena of nonperturbative characters ... Some modification of the Feynman rules of calculation may well outlive the elaborate mathematical structure of local canonical quantum field theory ... In quantum field theories the Feynman diagrams are obtained from a Lagrangian by Feynman rules. Dimensional regularization is a method for regularizing integrals in the evaluation of Feynman diagrams; it assigns values to them that are meromorphic functions of an auxiliary complex parameter d, called the dimension. Dimensional regularization writes a Feynman integral as an integral depending on the spacetime dimension d and spacetime points. Particle-path interpretation. A Feynman diagram is a representation of quantum field theory processes in terms of particle interactions. The particles are represented by the lines of the diagram, which can be squiggly or straight, with an arrow or without, depending on the type of particle. A point where lines connect to other lines is a "vertex", and this is where the particles meet and interact: by emitting or absorbing new particles, deflecting one another, or changing type. There are three different types of lines: "internal lines" connect two vertices, "incoming lines" extend from "the past" to a vertex and represent an initial state, and "outgoing lines" extend from a vertex to "the future" and represent the final state (the latter two are also known as "external lines"). Traditionally, the bottom of the diagram is the past and the top the future; other times, the past is to the left and the future to the right. When calculating correlation functions instead of scattering amplitudes, there is no past and future and all the lines are internal. The particles then begin and end on little x's, which represent the positions of the operators whose correlation is being calculated. Feynman diagrams are a pictorial representation of a contribution to the total amplitude for a process that can happen in several different ways. When a group of incoming particles are to scatter off each other, the process can be thought of as one where the particles travel over all possible paths, including paths that go backward in time. Feynman diagrams are often confused with spacetime diagrams and bubble chamber images because they all describe particle scattering. Feynman diagrams are graphs that represent the interaction of particles rather than the physical position of the particle during a scattering process. Unlike a bubble chamber picture, only the sum of all the Feynman diagrams represent any given particle interaction; particles do not choose a particular diagram each time they interact. The law of summation is in accord with the principle of superposition—every diagram contributes to the total amplitude for the process. Description. A Feynman diagram represents a perturbative contribution to the amplitude of a quantum transition from some initial quantum state to some final quantum state. For example, in the process of electron-positron annihilation the initial state is one electron and one positron, the final state: two photons. 
The initial state is often assumed to be at the left of the diagram and the final state at the right (although other conventions are also used quite often). A Feynman diagram consists of points, called vertices, and lines attached to the vertices. The particles in the initial state are depicted by lines sticking out in the direction of the initial state (e.g., to the left); the particles in the final state are represented by lines sticking out in the direction of the final state (e.g., to the right). In QED there are two types of particles: matter particles such as electrons or positrons (called fermions) and exchange particles (called gauge bosons). They are represented in Feynman diagrams as follows: fermions are drawn as solid lines with an arrow, while the gauge bosons (in QED, photons) are drawn as wavy lines. In QED a vertex always has three lines attached to it: one bosonic line, one fermionic line with arrow toward the vertex, and one fermionic line with arrow away from the vertex. The vertices might be connected by a bosonic or fermionic propagator. A bosonic propagator is represented by a wavy line connecting two vertices (•~•). A fermionic propagator is represented by a solid line (with an arrow in one or another direction) connecting two vertices, (•←•). The number of vertices gives the order of the term in the perturbation series expansion of the transition amplitude. Electron–positron annihilation example. The electron–positron annihilation interaction: e+ + e− → 2γ has a contribution from the second order Feynman diagram shown adjacent: In the initial state (at the bottom; early time) there is one electron (e−) and one positron (e+) and in the final state (at the top; late time) there are two photons (γ). Canonical quantization formulation. The probability amplitude for a transition of a quantum system (between asymptotically free states) from the initial state to the final state is given by the matrix element formula_0 where S is the S-matrix. In terms of the time-evolution operator U, it is simply formula_1 In the interaction picture, this expands to formula_2 where HV is the interaction Hamiltonian and T signifies the time-ordered product of operators. Dyson's formula expands the time-ordered matrix exponential into a perturbation series in the powers of the interaction Hamiltonian density, formula_3 Equivalently, with the interaction Lagrangian LV, it is formula_4 A Feynman diagram is a graphical representation of a single summand in the Wick's expansion of the time-ordered product in the nth-order term "S"("n") of the Dyson series of the S-matrix, formula_5 where "N" signifies the normal-ordered product of the operators and (±) takes care of the possible sign change when commuting the fermionic operators to bring them together for a contraction (a propagator) and "A" represents all possible contractions. Feynman rules. The diagrams are drawn according to the Feynman rules, which depend upon the interaction Lagrangian. For the QED interaction Lagrangian formula_6 describing the interaction of a fermionic field ψ with a bosonic gauge field Aμ, the Feynman rules can be formulated in coordinate space as follows: Example: second order processes in QED. The second order perturbation term in the S-matrix is formula_8 Scattering of fermions. The Wick's expansion of the integrand gives (among others) the following term formula_9 where formula_10 is the electromagnetic contraction (propagator) in the Feynman gauge. This term is represented by the Feynman diagram at the right. 
This diagram gives contributions to the following processes: e− e− scattering (Møller scattering), e+ e+ scattering, and e− e+ scattering (Bhabha scattering). Compton scattering and annihilation/generation of e− e+ pairs. Another interesting term in the expansion is formula_11 where formula_12 is the fermionic contraction (propagator). Path integral formulation. In a path integral, the field Lagrangian, integrated over all possible field histories, defines the probability amplitude to go from one field configuration to another. In order to make sense, the field theory should have a well-defined ground state, and the integral should be performed a little bit rotated into imaginary time, i.e. a Wick rotation. The path integral formalism is completely equivalent to the canonical operator formalism above. Scalar field Lagrangian. A simple example is the free relativistic scalar field in d dimensions, whose action integral is: formula_13 The probability amplitude for a process is: formula_14 where A and B are space-like hypersurfaces that define the boundary conditions. The collection of all the "φ"("A") on the starting hypersurface gives the initial value of the field, analogous to the starting position for a point particle, and the field values "φ"("B") at each point of the final hypersurface define the final field value, which is allowed to vary, giving a different amplitude to end up at different values. This is the field-to-field transition amplitude. The path integral gives the expectation value of operators between the initial and final state: formula_15 and in the limit that A and B recede to the infinite past and the infinite future, the only contribution that matters is from the ground state (this is only rigorously true if the path-integral is defined slightly rotated into imaginary time). The path integral can be thought of as analogous to a probability distribution, and it is convenient to define it so that multiplying by a constant does not change anything: formula_16 The normalization factor on the bottom is called the "partition function" for the field, and it coincides with the statistical mechanical partition function at zero temperature when rotated into imaginary time. The initial-to-final amplitudes are ill-defined if one thinks of the continuum limit right from the beginning, because the fluctuations in the field can become unbounded. So the path-integral can be thought of as on a discrete square lattice, with lattice spacing "a", and the limit "a" → 0 should be taken carefully. If the final results do not depend on the shape of the lattice or the value of a, then the continuum limit exists. On a lattice. On a lattice, (i), the field can be expanded in Fourier modes: formula_17 Here the integration domain is over k restricted to a cube of side length 2π/"a", so that large values of k are not allowed. It is important to note that the k-measure contains the factors of 2π from Fourier transforms; this is the best standard convention for k-integrals in QFT. The lattice means that fluctuations at large k are not allowed to contribute right away; they only start to contribute in the limit "a" → 0. Sometimes, instead of a lattice, the field modes are simply cut off at high values of k. It is also convenient from time to time to consider the space-time volume to be finite, so that the k modes are also a lattice. This is not strictly as necessary as the space-lattice limit, because interactions in k are not localized, but it is convenient for keeping track of the factors in front of the k-integrals and the momentum-conserving delta functions that will arise. 
On a lattice, (ii), the action needs to be discretized: formula_18 where ⟨"x","y"⟩ is a pair of nearest lattice neighbors "x" and "y". The discretization should be thought of as defining what the derivative ∂"μ""φ" means. In terms of the lattice Fourier modes, the action can be written: formula_19 For k near zero this is: formula_20 Now we have the continuum Fourier transform of the original action. In finite volume, the quantity ddk is not infinitesimal, but becomes the volume of a box made by neighboring Fourier modes, or (2π/"L")"d". The field φ is real-valued, so the Fourier transform obeys: formula_21 In terms of real and imaginary parts, the real part of "φ"("k") is an even function of k, while the imaginary part is odd. The Fourier transform avoids double-counting, so that it can be written: formula_22 over an integration domain that integrates over each pair ("k",−"k") exactly once. For a complex scalar field with action formula_23 the Fourier transform is unconstrained: formula_24 and the integral is over all k. Integrating over all different values of "φ"("x") is equivalent to integrating over all Fourier modes, because taking a Fourier transform is a unitary linear transformation of field coordinates. When you change coordinates in a multidimensional integral by a linear transformation, the value of the new integral is given by the determinant of the transformation matrix. If formula_25 then formula_26 If A is a rotation, then formula_27 so that det "A" = ±1, and the sign depends on whether the rotation includes a reflection or not. The matrix that changes coordinates from "φ"("x") to "φ"("k") can be read off from the definition of a Fourier transform. formula_28 and the Fourier inversion theorem tells you the inverse: formula_29 which is the complex conjugate-transpose, up to factors of 2π. On a finite volume lattice, the determinant is nonzero and independent of the field values. formula_30 and the path integral is a separate factor at each value of k. formula_31 The factor ddk is the infinitesimal volume of a discrete cell in k-space, in a square lattice box formula_32 where L is the side-length of the box. Each separate factor is an oscillatory Gaussian, and the width of the Gaussian diverges as the volume goes to infinity. In imaginary time, the "Euclidean action" becomes positive definite, and can be interpreted as a probability distribution. The probability of a field having values φk is formula_33 The expectation value of the field is the statistical expectation value of the field when chosen according to the probability distribution: formula_34 Since the probability of φk is a product, the value of φk at each separate value of k is independently Gaussian distributed. The variance of the Gaussian is 1/("k"2 ddk), which is formally infinite, but that just means that the fluctuations are unbounded in infinite volume. In any finite volume, the integral is replaced by a discrete sum, and the variance of the integral is "V"/"k"2. Monte Carlo. The path integral defines a probabilistic algorithm to generate a Euclidean scalar field configuration. Randomly pick the real and imaginary parts of each Fourier mode at wavenumber k to be a Gaussian random variable with variance 1/"k"2. This generates a configuration "φC"("k") at random, and the Fourier transform gives "φC"("x"). For real scalar fields, the algorithm must generate only one of each pair "φ"("k"), "φ"(−"k"), and make the second the complex conjugate of the first. To find any correlation function, generate a field again and again by this procedure, and find the statistical average: formula_35 where |"C"| is the number of configurations, and the sum is of the product of the field values on each configuration. The Euclidean correlation function is just the same as the correlation function in statistics or statistical mechanics.
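The sampling procedure just described takes only a few lines of NumPy. The following sketch is illustrative, for a two-dimensional lattice, with the variance normalization ("L"2 per mode) fixed by the unnormalized FFT convention assumed here; it generates free-field configurations and checks that the measured two-point function follows the 1/"k"2 propagator:

# Sketch (illustrative): sample 2-d free massless scalar field configurations
# and check <|phi(k)|^2> ~ 1/k^2. The Hermitian symmetry phi(-k) = phi(k)*
# comes for free from Fourier-transforming real white noise.
import numpy as np

rng = np.random.default_rng(0)
L = 32                                     # lattice sites per side
k = 2 * np.pi * np.fft.fftfreq(L)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                             # placeholder; the zero mode is zeroed below

samples = np.zeros((L, L))
n_cfg = 2000
for _ in range(n_cfg):
    eta = rng.standard_normal((L, L))      # real Gaussian white noise
    phi_k = np.fft.fft2(eta) / np.sqrt(k2) # filter white noise into the free field
    phi_k[0, 0] = 0.0                      # drop the (divergent) zero mode
    samples += np.abs(phi_k) ** 2

measured = samples / n_cfg                 # estimate of <|phi(k)|^2>
expected = L * L / k2                      # the white-noise FFT has variance L*L per mode
print(np.allclose(measured[1:, 1:], expected[1:, 1:], rtol=0.25))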
The quantum mechanical correlation functions are an analytic continuation of the Euclidean correlation functions. For free fields with a quadratic action, the probability distribution is a high-dimensional Gaussian, and the statistical average is given by an explicit formula. But the Monte Carlo method also works well for bosonic interacting field theories where there is no closed form for the correlation functions. Scalar propagator. Each mode is independently Gaussian distributed. The expectation of field modes is easy to calculate: formula_36 for "k" ≠ "k"′, since then the two Gaussian random variables are independent and both have zero mean. formula_37 in finite volume V, when the two k-values coincide, since this is the variance of the Gaussian. In the infinite volume limit, formula_38 Strictly speaking, this is an approximation: the lattice propagator is: formula_39 But near "k" = 0, for field fluctuations long compared to the lattice spacing, the two forms coincide. The delta functions contain factors of 2π, so that they cancel out the 2π factors in the measure for k integrals. formula_40 where "δD"("k") is the ordinary one-dimensional Dirac delta function. This convention for delta-functions is not universal—some authors keep the factors of 2π in the delta functions (and in the k-integration) explicit. Equation of motion. The form of the propagator can be more easily found by using the equation of motion for the field. From the Lagrangian, the equation of motion is: formula_41 and in an expectation value, this says: formula_42 Where the derivatives act on x, and the identity is true everywhere except when x and y coincide, and the operator order matters. The form of the singularity can be understood from the canonical commutation relations to be a delta-function. Defining the (Euclidean) "Feynman propagator" Δ as the Fourier transform of the time-ordered two-point function (the one that comes from the path-integral): formula_43 So that: formula_44 If the equations of motion are linear, the propagator will always be the reciprocal of the quadratic-form matrix that defines the free Lagrangian, since this gives the equations of motion. This is also easy to see directly from the path integral. The factor of i disappears in the Euclidean theory. Wick theorem. Because each field mode is an independent Gaussian, the expectation values for the product of many field modes obey "Wick's theorem": formula_45 is zero unless the field modes coincide in pairs. This means that it is zero for an odd number of φ, and for an even number of φ, it is equal to a contribution from each pair separately, with a delta function. formula_46 where the sum is over each partition of the field modes into pairs, and the product is over the pairs. For example, formula_47 An interpretation of Wick's theorem is that each field insertion can be thought of as a dangling line, and the expectation value is calculated by linking up the lines in pairs, putting a delta function factor that ensures that the momentum of each partner in the pair is equal, and dividing by the propagator. Higher Gaussian moments — completing Wick's theorem. 
There is a subtle point left before Wick's theorem is proved—what if more than two of the formula_48s have the same momentum? If it's an odd number, the integral is zero; negative values cancel with the positive values. But if the number is even, the integral is positive. The previous demonstration assumed that the formula_48s would only match up in pairs. But the theorem is correct even when arbitrarily many of the formula_48 are equal, and this is a notable property of Gaussian integration: formula_49 formula_50 Dividing by I, formula_51 formula_52 If Wick's theorem were correct, the higher moments would be given by all possible pairings of a list of 2"n" different x: formula_53 where the x are all the same variable, the index is just to keep track of the number of ways to pair them. The first x can be paired with 2"n" − 1 others, leaving 2"n" − 2. The next unpaired x can be paired with 2"n" − 3 different x leaving 2"n" − 4, and so on. This means that Wick's theorem, uncorrected, says that the expectation value of "x"2"n" should be: formula_54 and this is in fact the correct answer. So Wick's theorem holds no matter how many of the momenta of the internal variables coincide. Interaction. Interactions are represented by higher order contributions, since quadratic contributions are always Gaussian. The simplest interaction is the quartic self-interaction, with an action: formula_55 The reason for the combinatorial factor 4! will be clear soon. Writing the action in terms of the lattice (or continuum) Fourier modes: formula_56 Where SF is the free action, whose correlation functions are given by Wick's theorem. The exponential of S in the path integral can be expanded in powers of λ, giving a series of corrections to the free action. formula_57 The path integral for the interacting action is then a power series of corrections to the free action. The term represented by X should be thought of as four half-lines, one for each factor of "φ"("k"). The half-lines meet at a vertex, which contributes a delta-function that ensures that the sum of the momenta are all equal. To compute a correlation function in the interacting theory, there is a contribution from the X terms now. For example, the path-integral for the four-field correlator: formula_58 which in the free field was only nonzero when the momenta k were equal in pairs, is now nonzero for all values of k. The momenta of the insertions "φ"("ki") can now match up with the momenta of the Xs in the expansion. The insertions should also be thought of as half-lines, four in this case, which carry a momentum k, but one that is not integrated. The lowest-order contribution comes from the first nontrivial term "e"−"SF""X" in the Taylor expansion of the action. Wick's theorem requires that the momenta in the X half-lines, the "φ"("k") factors in X, should match up with the momenta of the external half-lines in pairs. The new contribution is equal to: formula_59 The 4! inside X is canceled because there are exactly 4! ways to match the half-lines in X to the external half-lines. Each of these different ways of matching the half-lines together in pairs contributes exactly once, regardless of the values of "k"1,2,3,4, by Wick's theorem. Feynman diagrams. The expansion of the action in powers of X gives a series of terms with progressively higher number of Xs. The contribution from the term with exactly n Xs is called nth order. 
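Before counting the combinatorics of a general term, the expansion can be sanity-checked in a zero-dimensional toy model, where the path integral is an ordinary integral over a single variable x with action x2/2 + λx4/4!. The sketch below (illustrative; numerical quadrature stands in for the path integral) confirms that the order-λ coefficient is −⟨x4⟩/4! = −3/4! times the free partition function, the 3 counting the Wick pairings of the four half-lines:

# Sketch: zero-dimensional toy model. The first-order term of the expansion
# of Z(lam) = integral of exp(-x^2/2 - lam*x^4/4!) dx is
# -lam * <x^4> / 4! = -lam * 3 / 24 times the free partition function.
from math import exp, sqrt, pi

def Z(lam, n=200001, cutoff=10.0):
    # Trapezoid-rule approximation of the integral over [-cutoff, cutoff].
    h = 2 * cutoff / (n - 1)
    total = 0.0
    for i in range(n):
        x = -cutoff + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * exp(-x * x / 2 - lam * x**4 / 24)
    return total * h

lam = 1e-3
Z0 = sqrt(2 * pi)                      # free (lam = 0) partition function
first_order = (Z(lam) - Z0) / lam      # numerical slope of Z at lam = 0
print(first_order, -Z0 * 3 / 24)       # both approximately -0.313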
Interaction. Interactions are represented by higher order contributions, since quadratic contributions are always Gaussian. The simplest interaction is the quartic self-interaction, with an action: formula_55 The reason for the combinatorial factor 4! will be clear soon. Writing the action in terms of the lattice (or continuum) Fourier modes: formula_56 Where SF is the free action, whose correlation functions are given by Wick's theorem. The exponential of S in the path integral can be expanded in powers of λ, giving a series of corrections to the free action. formula_57 The path integral for the interacting action is then a power series of corrections to the free action. The term represented by X should be thought of as four half-lines, one for each factor of "φ"("k"). The half-lines meet at a vertex, which contributes a delta-function that ensures that the momenta at the vertex sum to zero. To compute a correlation function in the interacting theory, there is a contribution from the X terms now. For example, the path-integral for the four-field correlator: formula_58 which in the free field was only nonzero when the momenta k were equal in pairs, is now nonzero for all values of k. The momenta of the insertions "φ"("ki") can now match up with the momenta of the Xs in the expansion. The insertions should also be thought of as half-lines, four in this case, which carry a momentum k, but one that is not integrated. The lowest-order contribution comes from the first nontrivial term "e"−"SF""X" in the Taylor expansion of the action. Wick's theorem requires that the momenta in the X half-lines, the "φ"("k") factors in X, should match up with the momenta of the external half-lines in pairs. The new contribution is equal to: formula_59 The 4! inside X is canceled because there are exactly 4! ways to match the half-lines in X to the external half-lines. Each of these different ways of matching the half-lines together in pairs contributes exactly once, regardless of the values of "k"1,2,3,4, by Wick's theorem. Feynman diagrams. The expansion of the action in powers of X gives a series of terms with progressively higher numbers of Xs. The contribution from the term with exactly n Xs is called nth order. The "n"th order term has 4"n" internal half-lines, which are the factors of "φ"("k") from the Xs, each ending on a vertex and integrated over all possible "k", together with the external half-lines that come from the "φ"("k") insertions. By Wick's theorem, each pair of half-lines must be paired together to make a "line", and this line gives a factor of formula_60 which multiplies the contribution. This means that the two half-lines that make a line are forced to have equal and opposite momentum. The line itself should be labeled by an arrow, drawn parallel to the line, and labeled by the momentum in the line k. The half-line at the tail end of the arrow carries momentum k, while the half-line at the head-end carries momentum −"k". If one of the two half-lines is external, this kills the integral over the internal k, since it forces the internal k to be equal to the external k. If both are internal, the integral over k remains. The diagrams that are formed by linking the half-lines in the Xs with the external half-lines, representing insertions, are the Feynman diagrams of this theory. Each line carries a factor of 1/"k"2, the propagator, and either goes from vertex to vertex, or ends at an insertion. If it is internal, it is integrated over. At each vertex, the total incoming k is equal to the total outgoing k. The number of ways of making a diagram by joining half-lines into lines almost completely cancels the factorial factors coming from the Taylor series of the exponential and the 4! at each vertex. Loop order. A forest diagram is one where all the internal lines have momentum that is completely determined by the external lines and the condition that the incoming and outgoing momentum are equal at each vertex. The contribution of these diagrams is a product of propagators, without any integration. A tree diagram is a connected forest diagram. An example of a tree diagram is the one where each of four external lines ends on an X. Another is when three external lines end on an X, and the remaining half-line joins up with another X, and the remaining half-lines of this X run off to external lines. These are all also forest diagrams (as every tree is a forest); an example of a forest that is not a tree is when eight external lines end on two Xs. It is easy to verify that in all these cases, the momenta on all the internal lines are determined by the external momenta and the condition of momentum conservation at each vertex. A diagram that is not a forest diagram is called a "loop" diagram, and an example is one where two lines of an X are joined to external lines, while the remaining two lines are joined to each other. The two lines joined to each other can have any momentum at all, since they both enter and leave the same vertex. A more complicated example is one where two Xs are joined to each other by matching the legs one to the other. This diagram has no external lines at all. The reason loop diagrams are called loop diagrams is because the number of k-integrals that are left undetermined by momentum conservation is equal to the number of independent closed loops in the diagram, where independent loops are counted as in homology theory. The homology is real-valued (actually R"d"-valued); the value associated with each line is the momentum. The boundary operator takes each line to the sum of the end-vertices with a positive sign at the head and a negative sign at the tail. The condition that the momentum is conserved is exactly the condition that the boundary of the k-valued weighted graph is zero. A set of valid k-values can be arbitrarily redefined whenever there is a closed loop. A closed loop is a cyclical path of adjacent vertices that never revisits the same vertex. Such a cycle can be thought of as the boundary of a hypothetical 2-cell. The k-labellings of a graph that conserve momentum (i.e. which have zero boundary) up to redefinitions of k (i.e. up to boundaries of 2-cells) define the first homology of a graph. The number of independent momenta that are not determined is then equal to the number of independent homology loops. For many graphs, this is equal to the number of loops as counted in the most intuitive way.
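This loop count is easy to automate. As an added sketch (not part of the original text): the number of undetermined momenta is the first Betti number of the diagram — internal lines minus vertices plus connected components — computed here with a small union–find over the vertices; external lines carry fixed momenta and are not counted.

```python
def loop_count(n_vertices, internal_lines):
    """First Betti number: internal lines - vertices + connected components."""
    parent = list(range(n_vertices))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a, b in internal_lines:
        parent[find(a)] = find(b)
    components = len({find(i) for i in range(n_vertices)})
    return len(internal_lines) - n_vertices + components

print(loop_count(1, [(0, 0)]))           # 1: one X with a self-loop
print(loop_count(2, [(0, 1), (0, 1)]))   # 1: two Xs joined by two lines
print(loop_count(2, [(0, 1)] * 4))       # 3: the two-X vacuum diagram
```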
Symmetry factors. The number of ways to form a given Feynman diagram by joining together half-lines is large, and by Wick's theorem, each way of pairing up the half-lines contributes equally. Often, this completely cancels the factorials in the denominator of each term, but the cancellation is sometimes incomplete. The uncancelled denominator is called the "symmetry factor" of the diagram. The contribution of each diagram to the correlation function must be divided by its symmetry factor. For example, consider the Feynman diagram formed from two external lines joined to one X, and the remaining two half-lines in the X joined to each other. There are 4 × 3 ways to join the external half-lines to the X, and then there is only one way to join the two remaining lines to each other. The X comes divided by 4! = 4 × 3 × 2, but the number of ways to link up the X half-lines to make the diagram is only 4 × 3, so the contribution of this diagram is divided by two. For another example, consider the diagram formed by joining all the half-lines of one X to all the half-lines of another X. This diagram is called a "vacuum bubble", because it does not link up to any external lines. There are 4! ways to form this diagram, but the denominator includes a 2! (from the expansion of the exponential, there are two Xs) and two factors of 4!. The contribution is multiplied by 4!/(2! × 4! × 4!) = 1/48. Another example is the Feynman diagram formed from two Xs where each X links up to two external lines, and the remaining two half-lines of each X are joined to each other. The number of ways to link an X to two external lines is 4 × 3, and either X could link up to either pair, giving an additional factor of 2. The remaining two half-lines in the two Xs can be linked to each other in two ways, so that the total number of ways to form the diagram is 4 × 3 × 4 × 3 × 2 × 2, while the denominator is 4! × 4! × 2!. The total symmetry factor is 2, and the contribution of this diagram is divided by 2. The symmetry factor theorem gives the symmetry factor for a general diagram: the contribution of each Feynman diagram must be divided by the order of its group of automorphisms, the number of symmetries that it has. An automorphism of a Feynman graph is a permutation M of the lines and a permutation N of the vertices with the following properties: if a line goes from vertex "v" to vertex "v"′, then the image of the line under M goes from N("v") to N("v"′); if a line ends on an external line, it is left fixed by M; and if there are several types of line, M preserves the type. This theorem has an interpretation in terms of particle-paths: when identical particles are present, the integral over all intermediate particles must not double-count states that differ only by interchanging identical particles. Proof: To prove this theorem, label all the internal and external lines of a diagram with a unique name. Then form the diagram by linking a half-line to a name and then to the other half-line. Now count the number of ways to form the named diagram. Each permutation of the Xs gives a different pattern of linking names to half-lines, and this is a factor of "n"!. Each permutation of the half-lines in a single X gives a factor of 4!. So a named diagram can be formed in exactly as many ways as the denominator of the Feynman expansion. But the number of unnamed diagrams is smaller than the number of named diagrams by the order of the automorphism group of the graph.
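The half-line counting in this argument can be verified by brute force. An added sketch for the first example above — two external lines joined to one X, with the X's remaining half-lines closed into a self-loop:

```python
from math import factorial

# Enumerate all perfect matchings of six half-lines: two external ones and
# the four half-lines of a single X.  Keeping only matchings in which the
# externals do not pair with each other selects exactly this diagram.
def matchings(items):
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i in range(len(rest)):
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + m

half_lines = ["e1", "e2", "x0", "x1", "x2", "x3"]
ways = sum(1 for m in matchings(half_lines)
           if not any(a.startswith("e") and b.startswith("e") for a, b in m))
print(ways)                    # 12 = 4 x 3 ways to attach the externals
print(factorial(4) // ways)    # 2, the symmetry factor of the diagram
```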
Connected diagrams: "linked-cluster theorem". Roughly speaking, a Feynman diagram is called "connected" if all vertices and propagator lines are linked by a sequence of vertices and propagators of the diagram itself. Equivalently, if one views the diagram as an undirected graph, it is connected. The remarkable relevance of such diagrams in QFTs is due to the fact that they are sufficient to determine the quantum partition function "Z"["J"]. More precisely, connected Feynman diagrams determine formula_61 To see this, one should recall that formula_62 with Dk constructed from some (arbitrary) Feynman diagram that can be thought to consist of several connected components Ci. If one encounters ni (identical) copies of a component Ci within the Feynman diagram Dk one has to include a "symmetry factor" "ni"!. However, in the end each contribution of a Feynman diagram Dk to the partition function has the generic form formula_63 where i labels the (infinitely) many connected Feynman diagrams possible. A scheme to successively create such contributions from the Dk to "Z"["J"] is obtained by formula_64 and therefore yields formula_65 To establish the "normalization" "Z"0 = exp "W"[0] = 1 one simply calculates all connected "vacuum diagrams", i.e., the diagrams without any "sources" J (sometimes referred to as "external legs" of a Feynman diagram). The linked-cluster theorem was first proved to order four by Keith Brueckner in 1955, and for infinite orders by Jeffrey Goldstone in 1957. Vacuum bubbles. An immediate consequence of the linked-cluster theorem is that all vacuum bubbles, diagrams without external lines, cancel when calculating correlation functions. A correlation function is given by a ratio of path-integrals: formula_66 The numerator is the sum over all Feynman diagrams, including disconnected diagrams that do not link up to external lines at all. In terms of the connected diagrams, the numerator includes the same contributions of vacuum bubbles as the denominator: formula_67 Where the sum over E diagrams includes only those diagrams each of whose connected components end on at least one external line. The vacuum bubbles are the same whatever the external lines, and give an overall multiplicative factor. The denominator is the sum over all vacuum bubbles, and dividing gets rid of the second factor. The vacuum bubbles then are only useful for determining Z itself, which from the definition of the path integral is equal to: formula_68 where ρ is the energy density in the vacuum. Each vacuum bubble contains a factor of "δ"("k") zeroing the total k at each vertex, and when there are no external lines, this contains a factor of "δ"(0), because the momentum conservation is over-enforced. In finite volume, this factor can be identified as the total volume of spacetime. Dividing by the volume, the remaining integral for the vacuum bubble has an interpretation: it is a contribution to the energy density of the vacuum.
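The theorem can be checked explicitly in a "zero-dimensional" toy model (an illustrative addition, not in the original text), where the path integral is the ordinary integral Z = ∫ dx exp(−x2/2 − λx4/4!): the coefficients of W = log(Z/Z0) come out as sums of connected vacuum diagrams weighted by 1/|automorphisms|, with signs tracking (−λ)"n".

```python
from fractions import Fraction
from math import prod

# Z/Z0 = sum_n (-lambda)^n / (24^n n!) <x^{4n}>, with <x^{4n}> = (4n-1)!!
# for the unit Gaussian.  Taking the log of this series should give the
# connected vacuum diagrams with their symmetry factors.
def dfact(k):                       # double factorial, with (-1)!! = 1
    return prod(range(k, 0, -2)) if k > 0 else 1

N = 4
z = [Fraction((-1) ** n * dfact(4 * n - 1), 24 ** n * prod(range(1, n + 1)))
     for n in range(N)]
w = [Fraction(0)] * N               # w = log z as a formal power series
for n in range(1, N):
    w[n] = z[n] - sum(Fraction(k, n) * w[k] * z[n - k] for k in range(1, n))

print(w[1])     # -1/8: the figure-eight diagram, 1/|Aut| = 1/8
print(w[2])     # 1/12 = 1/48 + 1/16: the two connected two-vertex bubbles
```

The 1/48 piece is the vacuum bubble with all four half-lines of one X joined to the other, computed in the previous section, and the 1/16 piece is the diagram in which each X carries one self-loop and the two Xs are joined by a double line.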
Sources. Correlation functions are the sum of the connected Feynman diagrams, but the formalism treats the connected and disconnected diagrams differently. Internal lines end on vertices, while external lines go off to insertions. Introducing "sources" unifies the formalism, by making new vertices where one line can end. Sources are external fields, fields that contribute to the action, but are not dynamical variables. A scalar field source is another scalar field h that contributes a term to the (Lorentz) Lagrangian: formula_69 In the Feynman expansion, this contributes H terms with one half-line ending on a vertex. Lines in a Feynman diagram can now end either on an X vertex, or on an H vertex, and only one line enters an H vertex. The Feynman rule for an H vertex is that a line from an H with momentum k gets a factor of "h"("k"). The sum of the connected diagrams in the presence of sources includes a term for each connected diagram in the absence of sources, except now the diagrams can end on the source. Traditionally, a source is represented by a little "×" with one line extending out, exactly as an insertion. formula_70 where "C"("k"1...,"kn") is the connected diagram with n external lines carrying momentum as indicated. The sum is over all connected diagrams, as before. The field h is not dynamical, which means that there is no path integral over h: h is just a parameter in the Lagrangian, which varies from point to point. The path integral for the field is: formula_71 and it is a function of the values of h at every point. One way to interpret this expression is that it is taking the Fourier transform in field space. If there is a probability density on R"n", the Fourier transform of the probability density is: formula_72 The Fourier transform is the expectation of an oscillatory exponential. The path integral in the presence of a source "h"("x") is: formula_73 which, on a lattice, is the product of an oscillatory exponential for each field value: formula_74 The Fourier transform of a delta-function is a constant, which gives a formal expression for a delta function: formula_75 This tells you what a field delta function looks like in a path-integral. For two scalar fields φ and η, formula_76 which integrates over the Fourier transform coordinate, over h. This expression is useful for formally changing field coordinates in the path integral, much as a delta function is used to change coordinates in an ordinary multi-dimensional integral. The partition function is now a function of the field h, and the physical partition function is the value when h is the zero function. The correlation functions are derivatives of the path integral with respect to the source: formula_77 In Euclidean space, source contributions to the action can still appear with a factor of i, so that they still do a Fourier transform. Spin 1/2; "photons" and "ghosts". Spin 1/2: Grassmann integrals. The field path integral can be extended to the Fermi case, but only if the notion of integration is expanded. A Grassmann integral of a free Fermi field is a high-dimensional determinant or Pfaffian, which defines the new type of Gaussian integration appropriate for Fermi fields. The two fundamental formulas of Grassmann integration are: formula_78 where M is an arbitrary matrix and "ψ""i", "ψ̄""i" are independent Grassmann variables for each index i, and formula_79 where A is an antisymmetric matrix, ψ is a collection of Grassmann variables, and the factor of 1/2 is to prevent double-counting (since "ψiψj" = −"ψjψi"). In matrix notation, where "ψ̄" and "η̄" are Grassmann-valued row vectors, η and ψ are Grassmann-valued column vectors, and M is a real-valued matrix: formula_80 where the last equality is a consequence of the translation invariance of the Grassmann integral.
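The determinant formula can be checked by implementing a tiny Grassmann algebra directly. A sketch added for illustration (the generator ordering convention is an assumption of this toy code):

```python
import numpy as np

# Elements of the Grassmann algebra are dicts {sorted tuple of generator
# indices: coefficient}.  Generators anticommute, so multiplying monomials
# picks up the sign of the permutation that sorts the merged index list.
def gmul(a, b):
    out = {}
    for ka, va in a.items():
        for kb, vb in b.items():
            if set(ka) & set(kb):
                continue                          # psi^2 = 0
            merged = list(ka + kb)
            sign = 1
            for i in range(len(merged)):
                for j in range(i + 1, len(merged)):
                    if merged[i] > merged[j]:
                        sign = -sign
            key = tuple(sorted(merged))
            out[key] = out.get(key, 0.0) + sign * va * vb
    return out

def gexp(a, nmax):                                # exp of a nilpotent element
    result, term = {(): 1.0}, {(): 1.0}
    for n in range(1, nmax + 1):
        term = {k: v / n for k, v in gmul(term, a).items()}
        for k, v in term.items():
            result[k] = result.get(k, 0.0) + v
    return result

# generators: 0 = psibar_1, 1 = psi_1, 2 = psibar_2, 3 = psi_2
M = np.array([[1.5, 0.3], [-0.7, 2.0]])
S = {}
for i in range(2):
    for j in range(2):                            # S = sum_ij M_ij psibar_i psi_j
        for k, v in gmul({(2 * i,): M[i, j]}, {(2 * j + 1,): 1.0}).items():
            S[k] = S.get(k, 0.0) + v

eS = gexp(S, 4)
# The Berezin integral picks out the top monomial psibar_1 psi_1 psibar_2 psi_2
print(eS.get((0, 1, 2, 3), 0.0), np.linalg.det(M))   # both 3.21
```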
The Grassmann variables η are external sources for ψ, and differentiating with respect to η pulls down factors of "ψ̄". formula_81 again, in a schematic matrix notation. The meaning of the formula above is that the derivative with respect to the appropriate component of η and "η̄" gives the matrix element of "M"−1. This is exactly analogous to the bosonic path integration formula for a Gaussian integral of a complex bosonic field: formula_82 formula_83 So that the propagator is the inverse of the matrix in the quadratic part of the action in both the Bose and Fermi case. For real Grassmann fields, for Majorana fermions, the path integral is a Pfaffian times a source quadratic form, and the formulas give the square root of the determinant, just as they do for real Bosonic fields. The propagator is still the inverse of the quadratic part. The free Dirac Lagrangian: formula_84 formally gives the equations of motion and the anticommutation relations of the Dirac field, just as the Klein–Gordon Lagrangian in an ordinary path integral gives the equations of motion and commutation relations of the scalar field. By using the spatial Fourier transform of the Dirac field as a new basis for the Grassmann algebra, the quadratic part of the Dirac action becomes simple to invert: formula_85 The propagator is the inverse of the matrix M linking "ψ"("k") and "ψ̄"("k"), since different values of k do not mix together. formula_86 The analog of Wick's theorem matches ψ and "ψ̄" in pairs: formula_87 where S is the sign of the permutation that reorders the sequence of "ψ̄" and ψ to put the ones that are paired up to make the delta-functions next to each other, with the "ψ̄" coming right before the ψ. Since a "ψ̄", "ψ" pair is a commuting element of the Grassmann algebra, it does not matter what order the pairs are in. If more than one "ψ̄", "ψ" pair have the same k, the integral is zero, and it is easy to check that the sum over pairings gives zero in this case (there are always an even number of them). This is the Grassmann analog of the higher Gaussian moments that completed the Bosonic Wick's theorem earlier.
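This inversion is easy to confirm numerically. An added sketch, with an arbitrary test momentum as the only input, using the Dirac representation of the gamma matrices:

```python
import numpy as np

# Check 1/(gamma.k - m) = (gamma.k + m) / (k^2 - m^2) for one momentum.
I2 = np.eye(2)
Z2 = np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gs = [np.block([[Z2, s], [-s, Z2]]) for s in sig]

k0, kv, m = 1.3, np.array([0.2, -0.5, 0.7]), 0.4
kslash = k0 * g0 - sum(kv[i] * gs[i] for i in range(3))   # gamma^mu k_mu
ksq = k0 ** 2 - kv @ kv                                    # k.k, Lorentz metric
lhs = np.linalg.inv(kslash - m * np.eye(4))
rhs = (kslash + m * np.eye(4)) / (ksq - m ** 2)
print(np.allclose(lhs, rhs))                               # True
```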
The rules for spin-1/2 Dirac particles are as follows: The propagator is the inverse of the Dirac operator, the lines have arrows just as for a complex scalar field, and the diagram acquires an overall factor of −1 for each closed Fermi loop. If there are an odd number of Fermi loops, the diagram changes sign. Historically, the −1 rule was very difficult for Feynman to discover. He discovered it after a long process of trial and error, since he lacked a proper theory of Grassmann integration. The rule follows from the observation that the number of Fermi lines at a vertex is always even. Each term in the Lagrangian must always be Bosonic. A Fermi loop is counted by following Fermionic lines until one comes back to the starting point, then removing those lines from the diagram. Repeating this process eventually erases all the Fermionic lines: this is the Euler algorithm to 2-color a graph, which works whenever each vertex has even degree. The number of steps in the Euler algorithm is only equal to the number of independent Fermionic homology cycles in the common special case that all terms in the Lagrangian are exactly quadratic in the Fermi fields, so that each vertex has exactly two Fermionic lines. When there are four-Fermi interactions (like in the Fermi effective theory of the weak nuclear interactions) there are more k-integrals than Fermi loops. In this case, the counting rule should apply the Euler algorithm by grouping the Fermi lines at each vertex into pairs that together form a bosonic factor of the term in the Lagrangian, and when entering a vertex by one line, the algorithm should always leave with the partner line. To clarify and prove the rule, consider a Feynman diagram formed from vertices, terms in the Lagrangian, with Fermion fields. The full term is Bosonic; it is a commuting element of the Grassmann algebra, so the order in which the vertices appear is not important. The Fermi lines are linked into loops, and when traversing the loop, one can reorder the vertex terms one after the other as one goes around without any sign cost. The exception is when you return to the starting point, and the final half-line must be joined with the unlinked first half-line. This requires one permutation to move the last "ψ̄" to go in front of the first ψ, and this gives the sign. This rule is the only visible effect of the exclusion principle in internal lines. When there are external lines, the amplitudes are antisymmetric when two Fermi insertions for identical particles are interchanged. This is automatic in the source formalism, because the sources for Fermi fields are themselves Grassmann valued. Spin 1: photons. The naive propagator for photons is infinite, since the Lagrangian for the A-field is: formula_88 The quadratic form defining the propagator is non-invertible. The reason is the gauge invariance of the field; adding a gradient to A does not change the physics. To fix this problem, one needs to fix a gauge. The most convenient way is to demand that the divergence of A is some function f, whose value is random from point to point. It does no harm to integrate over the values of f, since it only determines the choice of gauge. This procedure inserts the following factor into the path integral for A: formula_89 The first factor, the delta function, fixes the gauge. The second factor sums over different values of f that are inequivalent gauge fixings. This is simply formula_90 The additional contribution from gauge-fixing cancels the second half of the free Lagrangian, giving the Feynman Lagrangian: formula_91 which is just like four independent free scalar fields, one for each component of A. The Feynman propagator is: formula_92 The one difference is that the sign of one propagator is wrong in the Lorentz case: the timelike component has an opposite sign propagator. This means that these particle states have negative norm—they are not physical states. In the case of photons, it is easy to show by diagram methods that these states are not physical—their contribution cancels with longitudinal photons to only leave two physical photon polarization contributions for any value of k. If the averaging over f is done with a coefficient different from 1/2, the two terms do not cancel completely. This gives a covariant Lagrangian with a coefficient formula_93, which does not affect anything: formula_94 and the covariant propagator for QED is: formula_95
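The non-invertibility, and its cure, can be seen in a few lines of linear algebra (an added sketch with an arbitrary test momentum):

```python
import numpy as np

# The un-gauge-fixed quadratic form K_{mu nu} = k^2 eta_{mu nu} - k_mu k_nu
# annihilates k itself (the gauge direction), so it has no inverse; the
# Feynman-gauge form k^2 eta_{mu nu} inverts to eta_{mu nu} / k^2.
eta = np.diag([1.0, -1.0, -1.0, -1.0])
k_up = np.array([1.0, 0.3, -0.4, 0.2])       # k^mu
k_dn = eta @ k_up                             # k_mu
ksq = k_up @ k_dn

K = ksq * eta - np.outer(k_dn, k_dn)
print(np.allclose(K @ k_up, 0))                          # True: null vector
print(np.linalg.matrix_rank(K))                          # 3: singular
K_feynman = ksq * eta                                    # after gauge fixing
print(np.allclose(np.linalg.inv(K_feynman), eta / ksq))  # True: propagator
```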
Spin 1: non-Abelian ghosts. To find the Feynman rules for non-Abelian gauge fields, the procedure that performs the gauge fixing must be carefully corrected to account for a change of variables in the path-integral. The gauge fixing factor has an extra determinant from popping the delta function: formula_96 To find the form of the determinant, consider first a simple two-dimensional integral of a function f that depends only on r, not on the angle θ. Inserting an integral over θ: formula_97 The derivative-factor ensures that popping the delta function in θ removes the integral. Exchanging the order of integration, formula_98 but now the delta-function can be popped in y, formula_99 The integral over θ just gives an overall factor of 2π, while the rate of change of y with a change in θ is just x, so this exercise reproduces the standard formula for polar integration of a radial function: formula_100 In the path-integral for a nonabelian gauge field, the analogous manipulation is: formula_101 The factor in front is the volume of the gauge group, and it contributes a constant, which can be discarded. The remaining integral is over the gauge fixed action. formula_102 To get a covariant gauge, the gauge fixing condition is the same as in the Abelian case: formula_103 whose variation under an infinitesimal gauge transformation is given by: formula_104 where α is the adjoint-valued element of the Lie algebra at every point that performs the infinitesimal gauge transformation. This adds the Faddeev–Popov determinant to the action: formula_105 which can be rewritten as a Grassmann integral by introducing ghost fields: formula_106 The determinant is independent of f, so the path-integral over f can give the Feynman propagator (or a covariant propagator) by choosing the measure for f as in the abelian case. The full gauge-fixed action is then the Yang–Mills action in Feynman gauge with an additional ghost action: formula_107 The diagrams are derived from this action. The propagator for the spin-1 fields has the usual Feynman form. There are vertices of degree 3 with momentum factors whose couplings are the structure constants, and vertices of degree 4 whose couplings are products of structure constants. There are additional ghost loops, which cancel out timelike and longitudinal states in A loops. In the Abelian case, the determinant for covariant gauges does not depend on A, so the ghosts do not contribute to the connected diagrams. Particle-path representation. Feynman diagrams were originally discovered by Feynman, by trial and error, as a way to represent the contribution to the S-matrix from different classes of particle trajectories. Schwinger representation. The Euclidean scalar propagator has a suggestive representation: formula_108 The meaning of this identity (which is an elementary integration) is made clearer by Fourier transforming to real space. formula_109 The contribution at any one value of τ to the propagator is a Gaussian of width √(2"τ"). The total propagation function from 0 to x is a weighted sum over all proper times τ of a normalized Gaussian, the probability of ending up at x after a random walk of time τ. The path-integral representation for the propagator is then: formula_110 which is a path-integral rewrite of the Schwinger representation. The Schwinger representation is both useful for making manifest the particle aspect of the propagator, and for symmetrizing denominators of loop diagrams.
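The identity is elementary to verify numerically (an added sketch; the sample values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

# Schwinger representation: 1/(p^2 + m^2) = int_0^inf dtau exp(-tau (p^2 + m^2)).
p2, m2 = 1.7, 0.9
val, err = quad(lambda tau: np.exp(-tau * (p2 + m2)), 0, np.inf)
print(val, 1 / (p2 + m2))        # both 0.3846...
```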
Combining denominators. The Schwinger representation has an immediate practical application to loop diagrams. For example, for the diagram in the "φ"4 theory formed by joining two Xs together by two half-lines, and making the remaining lines external, the integral over the internal propagators in the loop is: formula_111 Here one line carries momentum k and the other "k" + "p". The asymmetry can be fixed by putting everything in the Schwinger representation. formula_112 Now the exponent mostly depends on "t" + "t"′, formula_113 except for the asymmetrical little bit. Defining the variables "u" = "t" + "t"′ and "v" = "t"′/"u", the variable u goes from 0 to ∞, while v goes from 0 to 1. The variable u is the total proper time for the loop, while v parametrizes the fraction of the proper time on the top of the loop versus the bottom. The Jacobian for this transformation of variables is easy to work out from the identities: formula_114 and "wedging" gives formula_115. This allows the u integral to be evaluated explicitly: formula_116 leaving only the v-integral. This method, invented by Schwinger but usually attributed to Feynman, is called "combining denominators". Abstractly, it is the elementary identity: formula_117 But this form does not provide the physical motivation for introducing v; v is the proportion of proper time on one of the legs of the loop. Once the denominators are combined, a shift in k to "k"′ = "k" + "vp" symmetrizes everything: formula_118 This form shows that the moment that "p"2 is more negative than four times the squared mass of the particle in the loop, which happens in a physical region of Lorentz space, the integral has a cut. This is exactly when the external momentum can create physical particles. When the loop has more vertices, there are more denominators to combine: formula_119 The general rule follows from the Schwinger prescription for "n" + 1 denominators: formula_120 The integral over the Schwinger parameters ui can be split up as before into an integral over the total proper time "u" = "u"0 + "u"1 + ... + "un" and an integral over the fraction of the proper time in all but the first segment of the loop "vi" for "i" ∈ {1,2...,"n"}. The vi are positive and add up to less than 1, so that the v integral is over an n-dimensional simplex. The Jacobian for the coordinate transformation can be worked out as before: formula_121 formula_122 Wedging all these equations together, one obtains formula_123 This gives the integral: formula_124 where the simplex is the region defined by the conditions formula_125 as well as formula_126 Performing the u integral gives the general prescription for combining denominators: formula_127 Since the numerator of the integrand is not involved, the same prescription works for any loop, no matter what spins are carried by the legs. The interpretation of the parameters vi is that they are the fraction of the total proper time spent on each leg.
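Both the two-denominator identity and the general simplex prescription can be spot-checked numerically (a sketch with arbitrary positive denominators):

```python
from scipy.integrate import quad, dblquad

A, B, C = 1.3, 2.2, 0.8

# 1/(AB) = int_0^1 dv / (vA + (1-v)B)^2
two, _ = quad(lambda v: 1 / (v * A + (1 - v) * B) ** 2, 0, 1)
print(two, 1 / (A * B))

# 1/(ABC) = 2! * int over the simplex of dv1 dv2 / (v0 A + v1 B + v2 C)^3,
# with v0 = 1 - v1 - v2
three, _ = dblquad(
    lambda v2, v1: 2 / ((1 - v1 - v2) * A + v1 * B + v2 * C) ** 3,
    0, 1, 0, lambda v1: 1 - v1)
print(three, 1 / (A * B * C))
```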
Scattering. The correlation functions of a quantum field theory describe the scattering of particles. The definition of "particle" in relativistic field theory is not self-evident, because if you try to determine the position so that the uncertainty is less than the Compton wavelength, the uncertainty in energy is large enough to produce more particles and antiparticles of the same type from the vacuum. This means that the notion of a single-particle state is to some extent incompatible with the notion of an object localized in space. In the 1930s, Wigner gave a mathematical definition for single-particle states: they are a collection of states that form an irreducible representation of the Poincaré group. Single-particle states describe an object with a finite mass, a well-defined momentum, and a spin. This definition is fine for protons and neutrons, electrons and photons, but it excludes quarks, which are permanently confined, so the modern point of view is more accommodating: a particle is anything whose interaction can be described in terms of Feynman diagrams, which have an interpretation as a sum over particle trajectories. A field operator can act to produce a one-particle state from the vacuum, which means that the field operator "φ"("x") produces a superposition of Wigner particle states. In the free field theory, the field produces one-particle states only. But when there are interactions, the field operator can also produce 3-particle, 5-particle (if there is no +/− symmetry also 2, 4, 6 particle) states too. To compute the scattering amplitude for single-particle states only requires a careful limit, sending the fields to infinity and integrating over space to get rid of the higher-order corrections. The relation between scattering and correlation functions is the LSZ-theorem: the scattering amplitude for n particles to go to m particles in a scattering event is given by the sum of the Feynman diagrams that go into the correlation function for "n" + "m" field insertions, leaving out the propagators for the external legs. For example, for the "λφ"4 interaction of the previous section, the order λ contribution to the (Lorentz) correlation function is: formula_128 Stripping off the external propagators, that is, removing the factors of "i"/"k"2, gives the invariant scattering amplitude M: formula_129 which is a constant, independent of the incoming and outgoing momentum. The interpretation of the scattering amplitude is that the sum of |"M"|2 over all possible final states is the probability for the scattering event. The normalization of the single-particle states must be chosen carefully, however, to ensure that M is a relativistic invariant. Non-relativistic single-particle states are labeled by the momentum k, and they are chosen to have the same norm at every value of k. This is because the nonrelativistic unit operator on single-particle states is: formula_130 In relativity, the integral over the k-states for a particle of mass m integrates over a hyperbola in "E","k" space defined by the energy–momentum relation: formula_131 If the integral weighs each k point equally, the measure is not Lorentz-invariant. The invariant measure integrates over all values of k and E, restricting to the hyperbola with a Lorentz-invariant delta function: formula_132 So the normalized k-states are different from the relativistically normalized k-states by a factor of formula_133 The invariant amplitude M is then the probability amplitude for relativistically normalized incoming states to become relativistically normalized outgoing states. For nonrelativistic values of k, the relativistic normalization is the same as the nonrelativistic normalization (up to a constant factor of √"m"). In this limit, the "φ"4 invariant scattering amplitude is still constant. The particles created by the field φ scatter in all directions with equal amplitude. The nonrelativistic potential, which scatters in all directions with an equal amplitude (in the Born approximation), is one whose Fourier transform is constant—a delta-function potential. The lowest order scattering of the theory reveals the non-relativistic interpretation of this theory—it describes a collection of particles with a delta-function repulsion. Two such particles have an aversion to occupying the same point at the same time.
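The 1/2E weight produced by the invariant delta function can be checked with a nascent (narrow Gaussian) delta function. An added numerical sketch, where the width ε is an arbitrary small parameter:

```python
import numpy as np
from scipy.integrate import quad

# int_0^inf dE delta(E^2 - k^2 - m^2) = 1 / (2 E_k),  E_k = sqrt(k^2 + m^2)
k, m, eps = 0.8, 1.1, 1e-3
Ek = np.sqrt(k ** 2 + m ** 2)
delta = lambda x: np.exp(-x ** 2 / (2 * eps ** 2)) / np.sqrt(2 * np.pi) / eps
val, _ = quad(lambda E: delta(E ** 2 - k ** 2 - m ** 2), 0, 5, points=[Ek])
print(val, 1 / (2 * Ek))         # both approximately 0.3676
```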
Nonperturbative effects. Thinking of Feynman diagrams as a perturbation series, nonperturbative effects like tunneling do not show up, because any effect that goes to zero faster than any polynomial does not affect the Taylor series. Even bound states are absent, since at any finite order particles are only exchanged a finite number of times, and to make a bound state, the binding force must last forever. But this point of view is misleading, because the diagrams not only describe scattering, but they also are a representation of the short-distance field theory correlations. They encode not only asymptotic processes like particle scattering, they also describe the multiplication rules for fields, the operator product expansion. Nonperturbative tunneling processes involve field configurations that on average get big when the coupling constant gets small, but each configuration is a coherent superposition of particles whose local interactions are described by Feynman diagrams. When the coupling is small, these become collective processes that involve large numbers of particles, but where the interactions between each of the particles is simple. (The perturbation series of any interacting quantum field theory has zero radius of convergence, complicating the limit of the infinite series of diagrams needed (in the limit of vanishing coupling) to describe such field configurations.) This means that nonperturbative effects show up asymptotically in resummations of infinite classes of diagrams, and these diagrams can be locally simple. The graphs determine the local equations of motion, while the allowed large-scale configurations describe non-perturbative physics. But because Feynman propagators are nonlocal in time, translating a field process to a coherent particle language is not completely intuitive, and has only been explicitly worked out in certain special cases. In the case of nonrelativistic bound states, the Bethe–Salpeter equation describes the class of diagrams to include to describe a relativistic atom. For quantum chromodynamics, the Shifman–Vainshtein–Zakharov sum rules describe non-perturbatively excited long-wavelength field modes in particle language, but only in a phenomenological way. The number of Feynman diagrams at high orders of perturbation theory is very large, because there are as many diagrams as there are graphs with a given number of nodes. Nonperturbative effects leave a signature on the way in which the number of diagrams and resummations diverge at high order. It is only because non-perturbative effects appear in hidden form in diagrams that it was possible to analyze nonperturbative effects in string theory, where in many cases a Feynman description is the only one available.
[ { "math_id": 0, "text": "S_{\\rm fi}=\\langle \\mathrm{f}|S|\\mathrm{i}\\rangle\\;," }, { "math_id": 1, "text": "S=\\lim _{t_{2}\\rightarrow +\\infty }\\lim _{t_{1}\\rightarrow -\\infty }U(t_2, t_1)\\;." }, { "math_id": 2, "text": "S = \\mathcal{T}e^{-i\\int _{-\\infty}^{+\\infty}d\\tau H_V(\\tau )}." }, { "math_id": 3, "text": "S=\\sum_{n=0}^{\\infty}\\frac{(-i)^n}{n!} \\left(\\prod_{j=1}^n \\int d^4 x_j\\right) \\mathcal{T}\\left\\{\\prod_{j=1}^n \\mathcal{H}_V\\left(x_j\\right)\\right\\} \\equiv\\sum_{n=0}^{\\infty}S^{(n)}\\;." }, { "math_id": 4, "text": "S=\\sum_{n=0}^{\\infty}\\frac{i^n}{n!} \\left(\\prod_{j=1}^n \\int d^4 x_j\\right) \\mathcal{T}\\left\\{\\prod_{j=1}^n \\mathcal{L}_V\\left(x_j\\right)\\right\\} \\equiv\\sum_{n=0}^{\\infty}S^{(n)}\\;." }, { "math_id": 5, "text": "\\mathcal{T}\\prod_{j=1}^n\\mathcal{L}_V\\left(x_j\\right)=\\sum_{\\text{A}}(\\pm)\\mathcal{N}\\prod_{j=1}^n\\mathcal{L}_V\\left(x_j\\right)\\;," }, { "math_id": 6, "text": "L_v=-g\\bar\\psi\\gamma^\\mu\\psi A_\\mu" }, { "math_id": 7, "text": "A_\\mu(x_i)" }, { "math_id": 8, "text": "S^{(2)}=\\frac{(ie)^2}{2!}\\int d^4x\\, d^4x'\\, T\\bar\\psi(x)\\,\\gamma^\\mu\\,\\psi(x)\\,A_\\mu(x)\\,\\bar\\psi(x')\\,\\gamma^\\nu\\,\\psi(x')\\,A_\\nu(x').\\;" }, { "math_id": 9, "text": "N\\bar\\psi(x)\\gamma^\\mu\\psi(x)\\bar\\psi(x')\\gamma^\\nu\\psi(x')\\underline{A_\\mu(x)A_\\nu(x')}\\;," }, { "math_id": 10, "text": "\\underline{A_\\mu(x)A_\\nu(x')}=\\int\\frac{d^4k}{(2\\pi)^4}\\frac{-ig_{\\mu\\nu}}{k^2+i0}e^{-ik(x-x')}" }, { "math_id": 11, "text": "N\\bar\\psi(x)\\,\\gamma^\\mu\\,\\underline{\\psi(x)\\,\\bar\\psi(x')}\\,\\gamma^\\nu\\,\\psi(x')\\,A_\\mu(x)\\,A_\\nu(x')\\;," }, { "math_id": 12, "text": "\\underline{\\psi(x)\\bar\\psi(x')}=\\int\\frac{d^4p}{(2\\pi)^4}\\frac{i}{\\gamma p-m+i0}e^{-ip(x-x')}" }, { "math_id": 13, "text": " S = \\int \\tfrac12 \\partial_\\mu \\phi \\partial^\\mu \\phi\\, d^dx \\,." }, { "math_id": 14, "text": " \\int_A^B e^{iS}\\, D\\phi\\,, " }, { "math_id": 15, "text": " \\int_A^B e^{iS} \\phi(x_1) \\cdots \\phi(x_n) \\,D\\phi = \\left\\langle A\\left| \\phi(x_1) \\cdots \\phi(x_n) \\right|B \\right\\rangle\\,," }, { "math_id": 16, "text": " \\frac{\\displaystyle\\int e^{iS} \\phi(x_1) \\cdots \\phi(x_n) \\,D\\phi }{ \\displaystyle\\int e^{iS} \\,D\\phi } = \\left\\langle 0 \\left| \\phi(x_1) \\cdots \\phi(x_n) \\right|0\\right\\rangle \\,." }, { "math_id": 17, "text": "\\phi(x) = \\int \\frac{dk}{(2\\pi)^d} \\phi(k) e^{ik\\cdot x} = \\int_k \\phi(k) e^{ikx}\\,." }, { "math_id": 18, "text": " S= \\sum_{\\langle x,y\\rangle} \\tfrac12 \\big(\\phi(x) - \\phi(y) \\big)^2\\,," }, { "math_id": 19, "text": "S= \\int_k \\Big( \\big(1-\\cos(k_1)\\big) +\\big(1-\\cos(k_2)\\big) + \\cdots + \\big(1-\\cos(k_d)\\big) \\Big)\\phi^*_k \\phi^k\\,." }, { "math_id": 20, "text": "S = \\int_k \\tfrac12 k^2 \\left|\\phi(k)\\right|^2\\,." }, { "math_id": 21, "text": " \\phi(k)^* = \\phi(-k)\\,." }, { "math_id": 22, "text": " S = \\int_k \\tfrac12 k^2 \\phi(k) \\phi(-k)" }, { "math_id": 23, "text": " S = \\int \\tfrac12 \\partial_\\mu\\phi^* \\partial^\\mu\\phi \\,d^dx" }, { "math_id": 24, "text": " S = \\int_k \\tfrac12 k^2 \\left|\\phi(k)\\right|^2" }, { "math_id": 25, "text": " y_i = A_{ij} x_j\\,," }, { "math_id": 26, "text": "\\det(A) \\int dx_1\\, dx_2 \\cdots\\, dx_n = \\int dy_1\\, dy_2 \\cdots\\, dy_n\\,." 
}, { "math_id": 27, "text": "A^\\mathrm{T} A = I" }, { "math_id": 28, "text": " A_{kx} = e^{ikx} \\," }, { "math_id": 29, "text": " A^{-1}_{kx} = e^{-ikx} \\," }, { "math_id": 30, "text": " \\det A = 1 \\," }, { "math_id": 31, "text": " \\int \\exp \\left(\\frac{i}{2} \\sum_k k^2 \\phi^*(k) \\phi(k) \\right)\\, D\\phi = \\prod_k \\int_{\\phi_k} e^{\\frac{i}{2} k^2 \\left|\\phi_k \\right|^2\\, d^dk} \\," }, { "math_id": 32, "text": "d^dk = \\left(\\frac{1}{L}\\right)^d\\,," }, { "math_id": 33, "text": " e^{\\int_k - \\tfrac12 k^2 \\phi^*_k \\phi_k} = \\prod_k e^{- k^2 \\left|\\phi_k\\right|^2\\, d^dk}\\,. " }, { "math_id": 34, "text": "\\left\\langle \\phi(x_1) \\cdots \\phi(x_n) \\right\\rangle = \\frac{ \\displaystyle\\int e^{-S} \\phi(x_1) \\cdots \\phi(x_n)\\, D\\phi} {\\displaystyle\\int e^{-S}\\, D\\phi}" }, { "math_id": 35, "text": " \\left\\langle \\phi(x_1) \\cdots \\phi(x_n) \\right\\rangle = \\lim_{|C|\\rightarrow\\infty}\\frac{ \\displaystyle\\sum_C \\phi_C(x_1) \\cdots \\phi_C(x_n) }{|C| } " }, { "math_id": 36, "text": " \\left\\langle \\phi_k \\phi_{k'}\\right\\rangle = 0 \\," }, { "math_id": 37, "text": " \\left\\langle\\phi_k \\phi_k \\right\\rangle = \\frac{V}{k^2} " }, { "math_id": 38, "text": " \\left\\langle\\phi(k) \\phi(k')\\right\\rangle = \\delta(k-k') \\frac{1}{k^2} " }, { "math_id": 39, "text": "\\left\\langle\\phi(k) \\phi(k')\\right\\rangle = \\delta(k-k') \\frac{1}{2\\big(d - \\cos(k_1) + \\cos(k_2) \\cdots + \\cos(k_d)\\big) }" }, { "math_id": 40, "text": "\\delta(k) = (2\\pi)^d \\delta_D(k_1)\\delta_D(k_2) \\cdots \\delta_D(k_d) \\," }, { "math_id": 41, "text": " \\partial_\\mu \\partial^\\mu \\phi = 0\\," }, { "math_id": 42, "text": "\\partial_\\mu\\partial^\\mu \\left\\langle \\phi(x) \\phi(y)\\right\\rangle =0" }, { "math_id": 43, "text": " \\partial^2 \\Delta (x) = i\\delta(x)\\," }, { "math_id": 44, "text": " \\Delta(k) = \\frac{i}{k^2}" }, { "math_id": 45, "text": " \\left\\langle \\phi(k_1) \\phi(k_2) \\cdots \\phi(k_n)\\right\\rangle" }, { "math_id": 46, "text": "\\left\\langle \\phi(k_1) \\cdots \\phi(k_{2n})\\right\\rangle = \\sum \\prod_{i,j} \\frac{\\delta\\left(k_i - k_j\\right) }{k_i^2 } " }, { "math_id": 47, "text": " \\left\\langle \\phi(k_1) \\phi(k_2) \\phi(k_3) \\phi(k_4) \\right\\rangle = \\frac{\\delta(k_1 -k_2)}{k_1^2}\\frac{\\delta(k_3-k_4)}{k_3^2} + \\frac{\\delta(k_1-k_3)}{k_3^2}\\frac{\\delta(k_2-k_4)}{k_2^2} + \\frac{\\delta(k_1-k_4)}{k_1^2}\\frac{\\delta(k_2 -k_3)}{k_2^2}" }, { "math_id": 48, "text": "\\phi" }, { "math_id": 49, "text": " I = \\int e^{-ax^2/2}dx = \\sqrt\\frac{2\\pi}{a} " }, { "math_id": 50, "text": " \\frac{\\partial^n}{\\partial a^n } I = \\int \\frac{x^{2n}}{2^n} e^{-ax^2/2}dx = \\frac{1\\cdot 3 \\cdot 5 \\ldots \\cdot (2n-1) }{ 2 \\cdot 2 \\cdot 2 \\ldots \\;\\;\\;\\;\\;\\cdot 2\\;\\;\\;\\;\\;\\;} \\sqrt{2\\pi}\\, a^{-\\frac{2n+1}{2}}" }, { "math_id": 51, "text": " \\left\\langle x^{2n}\\right\\rangle=\\frac{\\displaystyle\\int x^{2n} e^{-a x^2/2} }{\\displaystyle \\int e^{-a x^2/2} } = 1 \\cdot 3 \\cdot 5 \\ldots \\cdot (2n-1) \\frac{1}{a^n} " }, { "math_id": 52, "text": " \\left\\langle x^2 \\right\\rangle = \\frac{1}{a} " }, { "math_id": 53, "text": " \\left\\langle x_1 x_2 x_3 \\cdots x_{2n} \\right\\rangle" }, { "math_id": 54, "text": " \\left\\langle x^{2n} \\right\\rangle = (2n-1)\\cdot(2n-3)\\ldots \\cdot5 \\cdot 3 \\cdot 1 \\left\\langle x^2\\right\\rangle^n " }, { "math_id": 55, "text": " S = \\int \\partial^\\mu \\phi \\partial_\\mu\\phi +\\frac {\\lambda}{ 4!} \\phi^4. 
" }, { "math_id": 56, "text": " S = \\int_k k^2 \\left|\\phi(k)\\right|^2 + \\frac{\\lambda}{4!}\\int_{k_1k_2k_3k_4} \\phi(k_1) \\phi(k_2) \\phi(k_3)\\phi(k_4) \\delta(k_1+k_2+k_3 + k_4) = S_F + X. " }, { "math_id": 57, "text": " e^{-S} = e^{-S_F} \\left( 1 + X + \\frac{1}{2!} X X + \\frac{1}{3!} X X X + \\cdots \\right) " }, { "math_id": 58, "text": "\\left\\langle \\phi(k_1) \\phi(k_2) \\phi(k_3) \\phi(k_4) \\right\\rangle = \\frac{\\displaystyle\\int e^{-S} \\phi(k_1)\\phi(k_2)\\phi(k_3)\\phi(k_4) D\\phi }{ Z}" }, { "math_id": 59, "text": " \\lambda \\frac{1}{ k_1^2} \\frac{1}{ k_2^2} \\frac{1}{ k_3^2} \\frac{1}{ k_4^2}\\,. " }, { "math_id": 60, "text": " \\frac{\\delta(k_1 + k_2)}{k_1^2} " }, { "math_id": 61, "text": "i W[J]\\equiv \\ln Z[J]." }, { "math_id": 62, "text": " Z[J]\\propto\\sum_k{D_k}" }, { "math_id": 63, "text": "\\prod_i \\frac{C_{i}^{n_i} }{ n_i!} " }, { "math_id": 64, "text": "\\left(\\frac{1}{0!}+\\frac{C_1}{1!}+\\frac{C^2_1}{2!}+\\cdots\\right)\\left(1+C_2+\\frac{1}{2}C^2_2+\\cdots\\right)\\cdots " }, { "math_id": 65, "text": "Z[J]\\propto\\prod_i{\\sum^\\infty_{n_i=0}{\\frac{C_i^{n_i}}{n_i!}}}=\\exp{\\sum_i{C_i}}\\propto \\exp{W[J]}\\,." }, { "math_id": 66, "text": " \\left\\langle \\phi_1(x_1) \\cdots \\phi_n(x_n)\\right\\rangle = \\frac{\\displaystyle\\int e^{-S} \\phi_1(x_1) \\cdots\\phi_n(x_n)\\, D\\phi }{\\displaystyle \\int e^{-S}\\, D\\phi}\\,." }, { "math_id": 67, "text": " \\int e^{-S}\\phi_1(x_1)\\cdots\\phi_n(x_n)\\, D\\phi = \\left(\\sum E_i\\right)\\left( \\exp\\left(\\sum_i C_i\\right) \\right)\\,." }, { "math_id": 68, "text": " Z= \\int e^{-S} D\\phi = e^{-HT} = e^{-\\rho V} " }, { "math_id": 69, "text": " \\int h(x) \\phi(x)\\, d^dx = \\int h(k) \\phi(k)\\, d^dk \\," }, { "math_id": 70, "text": " \\log\\big(Z[h]\\big) = \\sum_{n,C} h(k_1) h(k_2) \\cdots h(k_n) C(k_1,\\cdots,k_n)\\," }, { "math_id": 71, "text": " Z[h] = \\int e^{iS + i\\int h\\phi}\\, D\\phi \\," }, { "math_id": 72, "text": " \\int \\rho(y) e^{i k y}\\, d^n y = \\left\\langle e^{i k y} \\right\\rangle = \\left\\langle \\prod_{i=1}^{n} e^{ih_i y_i}\\right\\rangle \\," }, { "math_id": 73, "text": " Z[h] = \\int e^{iS} e^{i\\int_x h(x)\\phi(x)}\\, D\\phi = \\left\\langle e^{i h \\phi }\\right\\rangle" }, { "math_id": 74, "text": " \\left\\langle \\prod_x e^{i h_x \\phi_x}\\right\\rangle " }, { "math_id": 75, "text": " \\delta(x-y) = \\int e^{ik(x-y)}\\, dk " }, { "math_id": 76, "text": " \\delta(\\phi - \\eta) = \\int e^{ i h(x)\\big(\\phi(x) -\\eta(x)\\big)\\,d^dx}\\, Dh\\,, " }, { "math_id": 77, "text": " \\left\\langle\\phi(x)\\right\\rangle = \\frac{1}{Z} \\frac{\\partial}{\\partial h(x)} Z[h] = \\frac{\\partial}{\\partial h(x)} \\log\\big(Z[h]\\big)\\,." 
}, { "math_id": 78, "text": " \\int e^{M_{ij}{\\bar\\psi}^i \\psi^j}\\, D\\bar\\psi\\, D\\psi= \\mathrm{Det}(M)\\,, " }, { "math_id": 79, "text": " \\int e^{\\frac12 A_{ij} \\psi^i \\psi^j}\\, D\\psi = \\mathrm{Pfaff}(A)\\,," }, { "math_id": 80, "text": " Z = \\int e^{\\bar\\psi M \\psi + \\bar\\eta \\psi + \\bar\\psi \\eta}\\, D\\bar\\psi\\, D\\psi = \\int e^{\\left(\\bar\\psi+\\bar\\eta M^{-1}\\right)M \\left(\\psi+ M^{-1}\\eta\\right) - \\bar\\eta M^{-1}\\eta}\\, D\\bar\\psi\\, D\\psi = \\mathrm{Det}(M) e^{-\\bar\\eta M^{-1}\\eta}\\,," }, { "math_id": 81, "text": " \\left\\langle\\bar\\psi \\psi\\right\\rangle = \\frac{1}{Z} \\frac{\\partial}{\\partial \\eta} \\frac{\\partial}{\\partial \\bar\\eta} Z |_{\\eta=\\bar\\eta=0} = M^{-1}" }, { "math_id": 82, "text": " \\int e^{\\phi^* M \\phi + h^* \\phi + \\phi^* h } \\,D\\phi^*\\, D\\phi = \\frac{e^{h^* M^{-1} h} }{ \\mathrm{Det}(M)}" }, { "math_id": 83, "text": " \\left\\langle\\phi^* \\phi\\right\\rangle = \\frac{1}{Z} \\frac{\\partial}{\\partial h} \\frac{\\partial}{\\partial h^*}Z |_{h=h^*=0} = M^{-1} \\,." }, { "math_id": 84, "text": " \\int \\bar\\psi\\left(\\gamma^\\mu \\partial_{\\mu} - m \\right) \\psi " }, { "math_id": 85, "text": " S= \\int_k \\bar\\psi\\left( i\\gamma^\\mu k_\\mu - m \\right) \\psi\\,. " }, { "math_id": 86, "text": " \\left\\langle\\bar\\psi(k') \\psi (k) \\right\\rangle = \\delta (k+k')\\frac{1} {\\gamma\\cdot k - m} = \\delta(k+k')\\frac{\\gamma\\cdot k+m }{ k^2 - m^2} " }, { "math_id": 87, "text": " \\left\\langle\\bar\\psi(k_1) \\bar\\psi(k_2) \\cdots \\bar\\psi(k_n) \\psi(k'_1) \\cdots \\psi(k_n)\\right\\rangle = \\sum_{\\mathrm{pairings}} (-1)^S \\prod_{\\mathrm{pairs}\\; i,j} \\delta\\left(k_i -k_j\\right) \\frac{1}{\\gamma\\cdot k_i - m}" }, { "math_id": 88, "text": " S = \\int \\tfrac14 F^{\\mu\\nu} F_{\\mu\\nu} = \\int -\\tfrac12\\left(\\partial^\\mu A_\\nu \\partial_\\mu A^\\nu - \\partial^\\mu A_\\mu \\partial_\\nu A^\\nu \\right)\\,." }, { "math_id": 89, "text": " \\int \\delta\\left(\\partial_\\mu A^\\mu - f\\right) e^{-\\frac{f^2}{2} }\\, Df\\,. " }, { "math_id": 90, "text": " e^{- \\frac{\\left(\\partial_\\mu A_\\mu\\right)^2}{2}}\\,." }, { "math_id": 91, "text": " S= \\int \\partial^\\mu A^\\nu \\partial_\\mu A_\\nu " }, { "math_id": 92, "text": " \\left\\langle A_\\mu(k) A_\\nu(k') \\right\\rangle = \\delta\\left(k+k'\\right) \\frac{g_{\\mu\\nu}}{ k^2 }." }, { "math_id": 93, "text": "\\lambda" }, { "math_id": 94, "text": " S= \\int \\tfrac12\\left(\\partial^\\mu A^\\nu \\partial_\\mu A_\\nu - \\lambda \\left(\\partial_\\mu A^\\mu\\right)^2\\right)" }, { "math_id": 95, "text": "\\left \\langle A_\\mu(k) A_\\nu(k') \\right\\rangle =\\delta\\left(k+k'\\right)\\frac{g_{\\mu\\nu} - \\lambda\\frac{k_\\mu k_\\nu }{ k^2} }{ k^2}." }, { "math_id": 96, "text": " \\delta\\left(\\partial_\\mu A_\\mu - f\\right) e^{-\\frac{f^2}{2}} \\det M " }, { "math_id": 97, "text": " \\int f(r)\\, dx\\, dy = \\int f(r) \\int d\\theta\\, \\delta(y) \\left|\\frac{dy}{d\\theta}\\right|\\, dx\\, dy " }, { "math_id": 98, "text": " \\int f(r)\\, dx\\, dy = \\int d\\theta\\, \\int f(r) \\delta(y) \\left|\\frac{dy}{d\\theta}\\right|\\, dx\\, dy " }, { "math_id": 99, "text": " \\int f(r)\\, dx\\, dy = \\int d\\theta_0\\, \\int f(x) \\left|\\frac{dy}{d\\theta}\\right|\\, dx\\,. 
" }, { "math_id": 100, "text": " \\int f(r)\\, dx\\, dy = 2\\pi \\int f(x) x\\, dx " }, { "math_id": 101, "text": " \\int DA \\int \\delta\\big(F(A)\\big) \\det\\left(\\frac{\\partial F}{\\partial G}\\right)\\, DG e^{iS} = \\int DG \\int \\delta\\big(F(A)\\big)\\det\\left(\\frac{\\partial F}{ \\partial G}\\right) e^{iS} \\," }, { "math_id": 102, "text": " \\int \\det\\left(\\frac{\\partial F}{ \\partial G}\\right)e^{iS_{GF}}\\, DA \\," }, { "math_id": 103, "text": " \\partial_\\mu A^\\mu = f \\,," }, { "math_id": 104, "text": " \\partial_\\mu\\, D_\\mu \\alpha \\,," }, { "math_id": 105, "text": " \\det\\left(\\partial_\\mu\\, D_\\mu\\right) \\," }, { "math_id": 106, "text": " \\int e^{\\bar\\eta \\partial_\\mu\\, D^\\mu \\eta}\\, D\\bar\\eta\\, D\\eta \\," }, { "math_id": 107, "text": " S= \\int \\operatorname{Tr} \\partial_\\mu A_\\nu \\partial^\\mu A^\\nu + f^i_{jk} \\partial^\\nu A_i^\\mu A^j_\\mu A^k_\\nu + f^i_{jr} f^r_{kl} A_i A_j A^k A^l + \\operatorname{Tr} \\partial_\\mu \\bar\\eta \\partial^\\mu \\eta + \\bar\\eta A_j \\eta \\," }, { "math_id": 108, "text": " \\frac{1}{p^2+m^2} = \\int_0^\\infty e^{-\\tau\\left(p^2 + m^2\\right)}\\, d\\tau " }, { "math_id": 109, "text": " \\Delta(x) = \\int_0^\\infty d\\tau e^{-m^2\\tau} \\frac{1}{ ({4\\pi\\tau})^{d/2}}e^\\frac{-x^2}{ 4\\tau}" }, { "math_id": 110, "text": " \\Delta(x) = \\int_0^\\infty d\\tau \\int DX\\, e^{- \\int\\limits_0^{\\tau} \\left(\\frac{\\dot{x}^2}{2} + m^2\\right) d\\tau'} " }, { "math_id": 111, "text": " \\int_k \\frac{1}{k^2 + m^2} \\frac{1}{ (k+p)^2 + m^2} \\,." }, { "math_id": 112, "text": " \\int_{t,t'} e^{-t(k^2+m^2) - t'\\left((k+p)^2 +m^2\\right) }\\, dt\\, dt'\\,. " }, { "math_id": 113, "text": " \\int_{t,t'} e^{-(t+t')(k^2+m^2) - t' 2p\\cdot k -t' p^2}\\,, " }, { "math_id": 114, "text": " d(uv)= dt'\\quad du = dt+dt'\\,," }, { "math_id": 115, "text": " u\\, du \\wedge dv = dt \\wedge dt'\\," }, { "math_id": 116, "text": " \\int_{u,v} u e^{-u \\left( k^2+m^2 + v 2p\\cdot k + v p^2\\right)} = \\int \\frac{1}{\\left(k^2 + m^2 + v 2p\\cdot k - v p^2\\right)^2}\\, dv " }, { "math_id": 117, "text": " \\frac{1}{AB}= \\int_0^1 \\frac{1}{\\big( vA+ (1-v)B\\big)^2}\\, dv " }, { "math_id": 118, "text": " \\int_0^1 \\int\\frac{1}{\\left(k^2 + m^2 + 2vp \\cdot k + v p^2\\right)^2}\\, dk\\, dv = \\int_0^1 \\int \\frac{1}{\\left(k'^2 + m^2 + v(1-v)p^2\\right)^2}\\, dk'\\, dv" }, { "math_id": 119, "text": " \\int dk\\, \\frac{1}{k^2 + m^2} \\frac{1}{(k+p_1)^2 + m^2} \\cdots \\frac{1}{(k+p_n)^2 + m^2}" }, { "math_id": 120, "text": " \\frac{1}{D_0 D_1 \\cdots D_n} = \\int_0^\\infty \\cdots\\int_0^\\infty e^{-u_0 D_0 \\cdots -u_n D_n}\\, du_0 \\cdots du_n \\,." }, { "math_id": 121, "text": " du = du_0 + du_1 \\cdots + du_n \\," }, { "math_id": 122, "text": " d(uv_i) = d u_i \\,." }, { "math_id": 123, "text": " u^n\\, du \\wedge dv_1 \\wedge dv_2 \\cdots \\wedge dv_n = du_0 \\wedge du_1 \\cdots \\wedge du_n \\,." }, { "math_id": 124, "text": " \\int_0^\\infty \\int_{\\mathrm{simplex}} u^n e^{-u\\left(v_0 D_0 + v_1 D_1 + v_2 D_2 \\cdots + v_n D_n\\right)}\\, dv_1\\cdots dv_n\\, du\\,, " }, { "math_id": 125, "text": "v_i>0 \\quad \\mbox{and} \\quad \\sum_{i=1}^n v_i < 1 " }, { "math_id": 126, "text": "v_0 = 1-\\sum_{i=1}^n v_i\\,." }, { "math_id": 127, "text": " \\frac{1}{ D_0 \\cdots D_n } = n! 
\\int_{\\mathrm{simplex}} \\frac{1}{ \\left(v_0 D_0 +v_1 D_1 \\cdots + v_n D_n\\right)^{n+1}}\\, dv_1\\, dv_2 \\cdots dv_n " }, { "math_id": 128, "text": " \\left\\langle \\phi(k_1)\\phi(k_2)\\phi(k_3)\\phi(k_4)\\right\\rangle = \\frac{i}{k_1^2}\\frac{i}{k_2^2} \\frac{i}{k_3^2} \\frac{i}{k_4^2} i\\lambda \\," }, { "math_id": 129, "text": " M = i\\lambda \\," }, { "math_id": 130, "text": " \\int dk\\, |k\\rangle\\langle k|\\,. " }, { "math_id": 131, "text": " E^2 - k^2 = m^2 \\,." }, { "math_id": 132, "text": " \\int \\delta(E^2-k^2 - m^2) |E,k\\rangle\\langle E,k|\\, dE\\, dk = \\int {dk \\over 2 E} |k\\rangle\\langle k|\\,." }, { "math_id": 133, "text": "\\sqrt{E} = \\left(k^2+m^2\\right)^\\frac14\\,." } ]
https://en.wikipedia.org/wiki?curid=11617
1161784
Link (knot theory)
Collection of knots which do not intersect, but may be linked
In mathematical knot theory, a link is a collection of knots which do not intersect, but which may be linked (or knotted) together. A knot can be described as a link with one component. Links and knots are studied in a branch of mathematics called knot theory. Implicit in this definition is that there is a "trivial" reference link, usually called the unlink, but the word is also sometimes used in contexts where there is no notion of a trivial link. For example, a co-dimension 2 link in 3-dimensional space is a subspace of 3-dimensional Euclidean space (or often the 3-sphere) whose connected components are homeomorphic to circles. The simplest nontrivial example of a link with more than one component is called the Hopf link, which consists of two circles (or unknots) linked together once. The circles in the Borromean rings are collectively linked despite the fact that no two of them are directly linked. The Borromean rings thus form a Brunnian link and in fact constitute the simplest such link. Generalizations. The notion of a link can be generalized in a number of ways. General manifolds. Frequently the word link is used to describe any submanifold of the sphere formula_0 diffeomorphic to a disjoint union of a finite number of spheres, formula_1. In full generality, the word link is essentially the same as the word "knot" – the context is that one has a submanifold "M" of a manifold "N" (considered to be trivially embedded) and a non-trivial embedding of "M" in "N", non-trivial in the sense that the second embedding is not isotopic to the first. If "M" is disconnected, the embedding is called a link (or said to be linked). If "M" is connected, it is called a knot. Tangles, string links, and braids. While (1-dimensional) links are defined as embeddings of circles, it is often interesting and especially technically useful to consider embedded intervals (strands), as in braid theory. Most generally, one can consider a tangle – a tangle is an embedding formula_2 of a (smooth) compact 1-manifold with boundary formula_3 into the plane times the interval formula_4 such that the boundary formula_5 is embedded in formula_6 (formula_7). The type of a tangle is the manifold "X," together with a fixed embedding of formula_8 Concretely, a connected compact 1-manifold with boundary is an interval formula_9 or a circle formula_10 (compactness rules out the open interval formula_11 and the half-open interval formula_12, neither of which yields non-trivial embeddings since the open end means that they can be shrunk to a point), so a possibly disconnected compact 1-manifold is a collection of "n" intervals formula_9 and "m" circles formula_13 The condition that the boundary of "X" lies in formula_6 says that intervals either connect two lines or connect two points on one of the lines, but imposes no conditions on the circles. One may view tangles as having a vertical direction ("I"), lying between and possibly connecting two lines (formula_14 and formula_15), and then being able to move in a two-dimensional horizontal direction (formula_16) between these lines; one can project these to form a tangle diagram, analogous to a knot diagram. Tangles include links (if "X" consists of circles only), braids, and others besides – for example, a strand connecting the two lines together with a circle linked around it.
In this context, a braid is defined as a tangle which is always going down – whose derivative always has a non-zero component in the vertical ("I") direction. In particular, it must consist solely of intervals, and not double back on itself; however, no specification is made on where on the line the ends lie. A string link is a tangle consisting of only intervals, with the ends of each strand required to lie at (0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1), ... – i.e., connecting the integers, and ending in the same order that they began (one may use any other fixed set of points); if this has "ℓ" components, we call it an ""ℓ"-component string link". A string link need not be a braid – it may double back on itself, such as a two-component string link that features an overhand knot. A braid that is also a string link is called a pure braid, and corresponds with the usual such notion. The key technical value of tangles and string links is that they have algebraic structure. Isotopy classes of tangles form a tensor category, where for the category structure, one can compose two tangles if the bottom end of one equals the top end of the other (so the boundaries can be stitched together), by stacking them – they do not literally form a category (pointwise) because there is no identity, since even a trivial tangle takes up vertical space, but up to isotopy they do. The tensor structure is given by juxtaposition of tangles – putting one tangle to the right of the other. For a fixed "ℓ," isotopy classes of "ℓ"-component string links form a monoid (one can compose all "ℓ"-component string links, and there is an identity), but not a group, as isotopy classes of string links need not have inverses. However, "concordance" classes (and thus also "homotopy" classes) of string links do have inverses, where inverse is given by flipping the string link upside down, and thus form a group. Every link can be cut apart to form a string link, though this is not unique, and invariants of links can sometimes be understood as invariants of string links – this is the case for Milnor's invariants, for instance. Compare with closed braids.
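Link invariants such as these can sometimes be computed directly from explicit parametrizations. As an added sketch (the Gauss linking integral is standard material, though it is not discussed in the text above), here is the linking number of the Hopf link's two circles, from lk = (1/4π) ∮∮ ("r"1 − "r"2) · (d"r"1 × d"r"2)/|"r"1 − "r"2|3:

```python
import numpy as np

# Two circles forming a Hopf link: one in the xy-plane about the origin,
# one in the xz-plane about (1, 0, 0).  A Riemann sum over both curves
# approximates the Gauss linking integral, which is +-1 for this link.
n = 400
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
r1 = np.stack([np.cos(t), np.sin(t), 0 * t], axis=1)
r2 = np.stack([1 + np.cos(t), 0 * t, np.sin(t)], axis=1)
dr1 = np.stack([-np.sin(t), np.cos(t), 0 * t], axis=1)   # exact tangents
dr2 = np.stack([-np.sin(t), 0 * t, np.cos(t)], axis=1)
dt = t[1] - t[0]

lk = 0.0
for i in range(n):
    d = r1[i] - r2                      # separations to every point of r2
    cross = np.cross(dr1[i], dr2)       # dr1 x dr2
    lk += np.sum(np.einsum('ij,ij->i', d, cross)
                 / np.linalg.norm(d, axis=1) ** 3)
lk *= dt * dt / (4 * np.pi)
print(round(lk, 3))                     # +-1.0 for the Hopf link
```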
[ { "math_id": 0, "text": "S^n" }, { "math_id": 1, "text": "S^j" }, { "math_id": 2, "text": "T\\colon X \\to \\mathbf{R}^2 \\times I" }, { "math_id": 3, "text": "(X,\\partial X)" }, { "math_id": 4, "text": "I=[0,1]," }, { "math_id": 5, "text": "T(\\partial X)" }, { "math_id": 6, "text": "\\mathbf{R} \\times \\{0,1\\}" }, { "math_id": 7, "text": "\\{0,1\\} = \\partial I" }, { "math_id": 8, "text": "\\partial X." }, { "math_id": 9, "text": "I=[0,1]" }, { "math_id": 10, "text": "S^1" }, { "math_id": 11, "text": "(0,1)" }, { "math_id": 12, "text": "[0,1)," }, { "math_id": 13, "text": "S^1." }, { "math_id": 14, "text": "\\mathbf{R} \\times 0" }, { "math_id": 15, "text": "\\mathbf{R} \\times 1" }, { "math_id": 16, "text": "\\mathbf{R}^2" } ]
https://en.wikipedia.org/wiki?curid=1161784
11619556
Tachyonic antitelephone
Hypothetical device in theoretical physics A tachyonic antitelephone is a hypothetical device in theoretical physics that could be used to send signals into one's own past. Albert Einstein in 1907 presented a thought experiment of how faster-than-light signals can lead to a paradox of causality, which was described by Einstein and Arnold Sommerfeld in 1910 as a means "to telegraph into the past". The same thought experiment was described by Richard Chace Tolman in 1917; thus, it is also known as Tolman's paradox. A device capable of "telegraphing into the past" was later also called a "tachyonic antitelephone" by Gregory Benford et al. According to current understanding of physics, no such faster-than-light transfer of information is actually possible. One-way example. Tolman used the following variation of Einstein's thought experiment: Imagine a distance with endpoints formula_0 and formula_1. Let a signal be sent from A propagating with velocity formula_2 towards B. All of this is measured in an inertial frame where the endpoints are at rest. The arrival at B is given by: formula_3 Here, the event at A is the cause of the event at B. However, in the inertial frame moving with relative velocity "v", the time of arrival at B is given according to the Lorentz transformation ("c" is the speed of light): formula_4 It can easily be shown that if "a" > "c", then certain values of "v" can make "Δt' " negative. In other words, the effect arises before the cause in this frame. Einstein (and similarly Tolman) concluded that this result contains, in their view, no logical contradiction; Einstein noted, however, that it contradicts the totality of our experience so thoroughly that the impossibility of "a" > "c" seems sufficiently proven. Two-way example. A more common variation of this thought experiment is to send the signal back to the sender (a similar one was given by David Bohm). Suppose Alice (A) is on a spacecraft moving away from the Earth in the positive x-direction with a speed formula_5, and she wants to communicate with Bob (B) back home. Assume both of them have a device that is capable of transmitting and receiving faster-than-light signals at a speed of formula_2formula_6 with formula_7. Alice uses this device to send a message to Bob, who sends a reply. If the origin of the coordinates of Bob's reference frame, formula_8, coincides with his reception of Alice's message, and Bob immediately sends a message back to Alice, then in his rest frame the coordinates of the reply signal (in natural units so that "c"=1) are given by: formula_9 To find out when the reply is received by Alice, we perform a Lorentz transformation to Alice's frame formula_10 moving in the positive x-direction with velocity formula_5 with respect to the Earth. In this frame Alice is at rest at position formula_11, where formula_12 is the distance that the signal Alice sent to Earth traversed in her rest frame. The coordinates of the reply signal are given by: formula_13 formula_14 The reply is received by Alice when formula_11. This means that formula_15 and thus: formula_16 Since the message Alice sent to Bob took a time of formula_17 to reach him, the message she receives back from him will reach her at time: formula_18 later than she sent her message. However, if formula_19 then formula_20 and Alice will receive the message back from Bob before she sends her message to him in the first place.
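The arrival time formula_18 can be evaluated numerically to locate the causality-violating regime. A minimal sketch (Python, natural units with "c" = 1; the helper name and the sample values are purely illustrative):

```python
def round_trip_time(a, v, L=1.0):
    """Time (in Alice's frame) at which Bob's reply reaches Alice,
    measured from the moment she sends her message; c = 1.
    a: signal speed (a > 1, superluminal); v: relative speed (v < 1)."""
    return (1.0 / a + (1.0 - a * v) / (a - v)) * L

a = 2.4
threshold = 2 * a / (1 + a**2)      # replies arrive early when v exceeds this
print(threshold)                    # ~0.710
print(round_trip_time(a, v=0.5))    # ~+0.31: reply arrives after sending
print(round_trip_time(a, v=0.8))    # ~-0.16: reply arrives before sending
```

Numerical example with two-way communication.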
As an example, Alice and Bob are aboard spaceships moving inertially with a relative speed of 0.8"c". At some point they pass right next to each other, and Alice defines the position and time of their passing to be at position "x" = 0, time "t" = 0 in her frame, while Bob defines it to be at position "x′" = 0 and time "t′" = 0 in his frame (note that this is different from the convention used in the previous section, where the origin of the coordinates was the event of Bob receiving a tachyon signal from Alice). In Alice's frame she remains at rest at position "x" = 0, while Bob is moving in the positive "x" direction at 0.8"c"; in Bob's frame he remains at rest at position "x′" = 0, and Alice is moving in the negative "x′" direction at 0.8"c". Each one also has a tachyon transmitter aboard their ship, which sends out signals that move at 2.4"c" in the ship's own frame. When Alice's clock shows that 300 days have elapsed since she passed next to Bob ("t" = 300 days in her frame), she uses the tachyon transmitter to send a message to Bob, saying "Ugh, I just ate some bad shrimp". At "t" = 450 days in Alice's frame, she calculates that since the tachyon signal has been traveling away from her at 2.4"c" for 150 days, it should now be at position "x" = 2.4×150 = 360 light-days in her frame, and since Bob has been traveling away from her at 0.8"c" for 450 days, he should now be at position "x" = 0.8×450 = 360 light-days in her frame as well, meaning that this is the moment the signal catches up with Bob. So, in her frame Bob receives Alice's message at "x" = 360, "t" = 450. Due to the effects of time dilation, in her frame Bob is aging more slowly than she is by a factor of formula_21, in this case 0.6, so Bob's clock only shows that 0.6×450 = 270 days have elapsed when he receives the message, meaning that in his frame he receives it at "x′" = 0, "t′" = 270. When Bob receives Alice's message, he immediately uses his own tachyon transmitter to send a message back to Alice saying "Don't eat the shrimp!". 135 days later in his frame, at "t′" = 270 + 135 = 405, he calculates that since the tachyon signal has been traveling away from him at 2.4"c" in the −"x′" direction for 135 days, it should now be at position "x′" = −2.4×135 = −324 light-days in his frame, and since Alice has been traveling at 0.8"c" in the −"x′" direction for 405 days, she should now be at position "x′" = −0.8×405 = −324 light-days as well. So, in his frame Alice receives his reply at "x′" = −324, "t′" = 405. Time dilation for inertial observers is symmetrical, so in Bob's frame Alice is aging more slowly than he is, by the same factor of 0.6, so Alice's clock should only show that 0.6×405 = 243 days have elapsed when she receives his reply. This means that she receives a message from Bob saying "Don't eat the shrimp!" only 243 days after she passed Bob, while she wasn't supposed to send the message saying "Ugh, I just ate some bad shrimp" until 300 days had elapsed since she passed Bob, so Bob's reply constitutes a warning about her own future. These numbers can be double-checked using the Lorentz transformation. The Lorentz transformation says that if the coordinates of some event in Alice's frame are known to be "x", "t", the same event must have the following "x′", "t′" coordinates in Bob's frame: formula_22 where "v" is Bob's speed along the "x"-axis in Alice's frame, "c" is the speed of light (we are using units of days for time and light-days for distance, so in these units "c" = 1), and formula_23 is the Lorentz factor.
In this case "v"=0.8"c", and formula_24. In Alice's frame, the event of Alice sending the message happens at "x" = 0, "t" = 300, and the event of Bob receiving Alice's message happens at "x" = 360, "t" = 450. Using the Lorentz transformation, we find that in Bob's frame the event of Alice sending the message happens at position "x′" = (1/0.6)×(0 − 0.8×300) = −400 light-days, and time "t′" = (1/0.6)×(300 − 0.8×0) = 500 days. Likewise, in Bob's frame the event of Bob receiving Alice's message happens at position "x′" = (1/0.6)×(360 − 0.8×450) = 0 light-days, and time "t′" = (1/0.6)×(450 − 0.8×360) = 270 days, which are the same coordinates for Bob's frame that were found in the earlier paragraph. Comparing the coordinates in each frame, we see that in Alice's frame her tachyon signal moves forwards in time (she sent it at an earlier time than Bob received it), and between being sent and received we have (difference in position)/(difference in time) = 360/150 = 2.4"c". In Bob's frame, Alice's signal moves back in time (he received it at "t′" = 270, but it was sent at "t′" = 500), and it has a (difference in position)/(difference in time) of 400/230, about 1.739"c". The fact that the two frames disagree about the order of the events of the signal being sent and received is an example of the relativity of simultaneity, a feature of relativity which has no analogue in classical physics, and which is key to understanding why in relativity FTL communication must necessarily lead to causality violation. Bob is assumed to have sent his reply almost instantaneously after receiving Alice's message, so the coordinates of his sending the reply can be assumed to be the same: "x" = 360, "t" = 450 in Alice's frame, and "x′" = 0, "t′" = 270 in Bob's frame. If the event of Alice receiving Bob's reply happens at "x′" = 0, "t′" = 243 in her frame (as in the earlier paragraph), then according to the Lorentz transformation, in Bob's frame Alice receives his reply at position "x′"' = (1/0.6)×(0 − 0.8×243) = −324 light-days, and at time "t′" = (1/0.6)×(243 − 0.8×0) = 405 days. So evidently Bob's reply does move forward in time in his own frame, since the time it was sent was "t′" = 270 and the time it was received was "t′" = 405. And in his frame (difference in position)/(difference in time) for his signal is 324/135 = 2.4"c", exactly the same as the speed of Alice's original signal in her own frame. Likewise, in Alice's frame Bob's signal moves backwards in time (she received it before he sent it), and it has a (difference in position)/(difference in time) of 360/207, about 1.739"c". Thus the times of sending and receiving in each frame, as calculated using the Lorentz transformation, match up with the times given in earlier paragraphs, before we made explicit use of the Lorentz transformation. And by using the Lorentz transformation we can see that the two tachyon signals behave symmetrically in each observer's frame: the observer who sends a given signal measures it to move forward in time at 2.4"c", the observer who receives it measures it to move back in time at 1.739"c". This sort of possibility for symmetric tachyon signals is necessary if tachyons are to respect the first of the two postulates of special relativity, which says that all laws of physics must work exactly the same in all inertial frames. 
The first postulate thus implies that if it is possible to send a signal at 2.4"c" in one frame, it must be possible in any other frame as well, and likewise that if one frame can observe a signal that moves backwards in time, any other frame must be able to observe such a phenomenon as well. This is another key idea in understanding why FTL communication leads to causality violation in relativity; if tachyons were allowed to have a "preferred frame" in violation of the first postulate of relativity, it could theoretically be possible to avoid causality violations. Paradoxes. Benford et al. wrote about such paradoxes in general, offering a scenario in which two parties are able to send a message two hours into the past: The paradoxes of backward-in-time communication are well known. Suppose A and B enter into the following agreement: A will send a message at three o'clock if and only if he does "not" receive one at one o'clock. B sends a message to reach A at one o'clock immediately on receiving one from A at three o'clock. Then the exchange of messages will take place if and only if it does not take place. This is a genuine paradox, a causal contradiction. They concluded that superluminal particles such as tachyons are therefore not allowed to convey signals.
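The self-defeating agreement in the quoted scenario can be exhibited as a small consistency check: treat "A sends at three" and "A receives at one" as Boolean unknowns constrained by the two clauses, and scan every assignment for one satisfying both. A toy sketch (Python; purely illustrative):

```python
from itertools import product

# Clause 1: A sends at three o'clock iff A did NOT receive at one o'clock.
# Clause 2: B's reply reaches A at one o'clock iff A sends at three o'clock.
consistent = [
    (sends, receives)
    for sends, receives in product([False, True], repeat=2)
    if sends == (not receives) and receives == sends
]
print(consistent)   # [] -- no assignment works: the exchange happens iff it doesn't
```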
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "B" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "\\Delta t=t_{1}-t_{0}=\\frac{B-A}{a}." }, { "math_id": 4, "text": "\\begin{align}\n\\Delta t' & =t'_{1}-t'_{0}=\\frac{t_{1}-vB/c^{2}}{\\sqrt{1-v^{2}/c^{2}}}-\\frac{t_{0}-vA/c^{2}}{\\sqrt{1-v^{2}/c^{2}}}\\\\\n & =\\frac{1-av/c^{2}}{\\sqrt{1-v^{2}/c^{2}}}\\Delta t.\n\\end{align}" }, { "math_id": 5, "text": "v" }, { "math_id": 6, "text": "c" }, { "math_id": 7, "text": "a > 1" }, { "math_id": 8, "text": "S" }, { "math_id": 9, "text": "(t,x) = (t,at)" }, { "math_id": 10, "text": "S'" }, { "math_id": 11, "text": "x' = L" }, { "math_id": 12, "text": "L" }, { "math_id": 13, "text": "t' = \\gamma \\left(1 - av\\right) t" }, { "math_id": 14, "text": "x' = \\gamma \\left(a - v\\right) t" }, { "math_id": 15, "text": "t = \\tfrac{L}{\\gamma(a - v)}" }, { "math_id": 16, "text": "t' = \\frac{1 - av}{a - v}L" }, { "math_id": 17, "text": "\\tfrac{L}{a}" }, { "math_id": 18, "text": "T = \\frac{L}{a} + t' = \\left(\\frac{1}{a} + \\frac{1 - av}{a - v}\\right)L" }, { "math_id": 19, "text": "v > \\tfrac{2a}{1 + a^2}" }, { "math_id": 20, "text": "T < 0" }, { "math_id": 21, "text": "\\frac{1}{ \\gamma} = \\sqrt{1 - { (v/c)^2}}" }, { "math_id": 22, "text": "\\begin{align}\nt' &= \\gamma \\left( t - \\frac{vx}{c^2} \\right) \\\\ \nx' &= \\gamma \\left( x - v t \\right)\\\\\n\\end{align}" }, { "math_id": 23, "text": " \\gamma = \\frac{1}{ \\sqrt{1 - { (v/c)^2}}}" }, { "math_id": 24, "text": " \\gamma = \\frac{1}{0.6}" } ]
https://en.wikipedia.org/wiki?curid=11619556
1162065
Boolean data type
Data having only values "true" or "false" In computer science, the Boolean (sometimes shortened to Bool) is a data type that has one of two possible values (usually denoted "true" and "false") which is intended to represent the two truth values of logic and Boolean algebra. It is named after George Boole, who first defined an algebraic system of logic in the mid-19th century. The Boolean data type is primarily associated with conditional statements, which allow different actions by changing control flow depending on whether a programmer-specified Boolean "condition" evaluates to true or false. It is a special case of a more general "logical data type": logic does not always need to be Boolean (see probabilistic logic). Generalities. In programming languages with a built-in Boolean data type, such as Pascal and Java, the comparison operators such as codice_0 and codice_1 are usually defined to return a Boolean value. Conditional and iterative commands may be defined to test Boolean-valued expressions. Languages with no explicit Boolean data type, like C90 and Lisp, may still represent truth values by some other data type. Common Lisp uses an empty list for false, and any other value for true. The C programming language uses an integer type, where relational expressions like codice_2 and logical expressions connected by codice_3 and codice_4 are defined to have value 1 if true and 0 if false, whereas the test parts of codice_5, codice_6, codice_7, etc., treat any non-zero value as true. Indeed, a Boolean variable may be regarded (and implemented) as a numerical variable with one binary digit (bit), or as a bit string of length one, which can store only two values. Booleans in computers are most likely implemented as a full word, rather than a bit; this is usually due to the way computers transfer blocks of information. Most programming languages, even those with no explicit Boolean type, have support for Boolean algebraic operations such as conjunction (codice_8, codice_9, codice_10), disjunction (codice_11, codice_12, codice_13), equivalence (codice_14, codice_15, codice_16), exclusive or/non-equivalence (codice_17, codice_18, codice_19, codice_20, codice_21), and negation (codice_22, codice_23, codice_24, codice_21). In some languages, like Ruby, Smalltalk, and Alice, the "true" and "false" values belong to separate classes, e.g., codice_26 and codice_27, respectively, so there is no one Boolean "type". In SQL, which uses a three-valued logic for explicit comparisons because of its special treatment of Nulls, the Boolean data type (introduced in SQL:1999) is also defined to include more than two truth values, so that SQL "Booleans" can store all logical values resulting from the evaluation of predicates in SQL. A column of Boolean type can be restricted to just codice_28 and codice_29 though. Language-specific implementations. ALGOL and the built-in BOOLEAN type. One of the earliest programming languages to provide an explicit codice_30 data type is ALGOL 60 (1960) with values "true" and "false" and logical operators denoted by symbols 'formula_0' (and), 'formula_1' (or), 'formula_2' (implies), 'formula_3' (equivalence), and 'formula_4' (not). Due to input device and character set limits on many computers of the time, however, most compilers used alternative representations for many of the operators, such as codice_8 or codice_32.
This approach with codice_30 as a built-in (either primitive or otherwise predefined) data type was adopted by many later programming languages, such as Simula 67 (1967), ALGOL 68 (1970), Pascal (1970), Ada (1980), Java (1995), and C# (2000), among others. Fortran. The first version of FORTRAN (1957) and its successor FORTRAN II (1958) have no logical values or operations; even the conditional codice_34 statement takes an arithmetic expression and branches to one of three locations according to its sign; see arithmetic IF. FORTRAN IV (1962), however, follows the ALGOL 60 example by providing a Boolean data type (codice_35), truth literals (codice_36 and codice_37), logical codice_34 statement, Boolean-valued numeric comparison operators (codice_39, codice_40, etc.), and logical operators (codice_41, codice_42, codice_43, codice_44, and codice_45). In codice_46 statements, a specific format descriptor ('codice_47') is provided for the parsing or formatting of logical values. Lisp and Scheme. The language Lisp (1958) never had a built-in Boolean data type. Instead, conditional constructs like codice_48 assume that the logical value "false" is represented by the empty list codice_49, which is defined to be the same as the special atom codice_50 or codice_51; whereas any other s-expression is interpreted as "true". For convenience, most modern dialects of Lisp predefine the atom codice_52 to have value codice_52, so that codice_52 can be used as a mnemonic notation for "true". This approach ("any value can be used as a Boolean value") was retained in most Lisp dialects (Common Lisp, Scheme, Emacs Lisp), and similar models were adopted by many scripting languages, even ones having a distinct Boolean type or Boolean values; although which values are interpreted as "false" and which are "true" vary from language to language. In Scheme, for example, the "false" value is an atom distinct from the empty list, so the latter is interpreted as "true". Common Lisp, on the other hand, also provides the dedicated codice_55 type, derived as a specialization of the symbol. Pascal, Ada, and Haskell. The language Pascal (1970) popularized the concept of programmer-defined enumerated types, previously available with different nomenclature in COBOL, FACT and JOVIAL. A built-in codice_56 data type was then provided as a predefined enumerated type with values codice_29 and codice_28. By definition, all comparisons, logical operations, and conditional statements applied to and/or yielded codice_56 values. Otherwise, the codice_56 type had all the facilities which were available for enumerated types in general, such as ordering and use as indices. In contrast, converting between codice_56s and integers (or any other types) still required explicit tests or function calls, as in ALGOL 60. This approach ("Boolean is an enumerated type") was adopted by most later languages which had enumerated types, such as Modula, Ada, and Haskell. C, C++, D, Objective-C, AWK. Initial implementations of the language C (1972) provided no Boolean type, and to this day Boolean values are commonly represented by integers (codice_62s) in C programs. The comparison operators (codice_0, codice_16, etc.) are defined to return a signed integer (codice_62) result, either 0 (for false) or 1 (for true). Logical operators (codice_3, codice_4, codice_24, etc.) and condition-testing statements (codice_5, codice_6) assume that zero is false and all other values are true. 
After enumerated types (codice_71s) were added to the American National Standards Institute version of C, ANSI C (1989), many C programmers got used to defining their own Boolean types as such, for readability reasons. However, enumerated types are equivalent to integers according to the language standards; so the effective identity between Booleans and integers is still valid for C programs. Standard C (since C99) provides a Boolean type, called codice_72. By including the header codice_73, one can use the more intuitive name codice_74 and the constants codice_75 and codice_76. The language guarantees that any two true values will compare equal (which was impossible to achieve before the introduction of the type). Boolean values still behave as integers, can be stored in integer variables, and can be used anywhere integers would be valid, including in indexing, arithmetic, parsing, and formatting. This approach ("Boolean values are just integers") has been retained in all later versions of C. Note that this does not mean that any integer value can be stored in a Boolean variable. C++ has a separate Boolean data type codice_74, but with automatic conversions from scalar and pointer values that are very similar to those of C. This approach was adopted also by many later languages, especially by some scripting languages such as AWK. The D programming language has a proper boolean data type codice_74. The codice_74 type is a byte-sized type that can only hold the value true or false. The only operators that can accept operands of type bool are: &, |, ^, &=, |=, ^=, !, &&, || and ?:. A codice_74 value can be implicitly converted to any integral type, with false becoming 0 and true becoming 1. The numeric literals 0 and 1 can be implicitly converted to the bool values false and true, respectively. Casting an expression to codice_74 means testing for 0 or !=0 for arithmetic types, and null or !=null for pointers or references. Objective-C also has a separate Boolean data type codice_82, with possible values being codice_83 or codice_84, equivalents of true and false respectively. Also, in Objective-C compilers that support C99, C's codice_72 type can be used, since Objective-C is a superset of C. Java. In Java, the value of the codice_55 data type can only be either codice_75 or codice_76. Perl and Lua. Perl has no Boolean data type. Instead, any value can behave as Boolean in Boolean context (condition of codice_5 or codice_6 statement, argument of codice_3 or codice_4, etc.). The number codice_93, the strings codice_94 and codice_95, the empty list codice_49, and the special value codice_97 evaluate to false. All else evaluates to true. Lua has a Boolean data type, but non-Boolean values can also behave as Booleans. The non-value codice_50 evaluates to false, whereas every other data type value evaluates to true. This includes the empty string codice_95 and the number codice_93, which are very often considered codice_76 in other languages. PL/I. PL/I has no Boolean data type. Instead, comparison operators generate BIT(1) values; '0'B represents false and '1'B represents true. The operands of, e.g., codice_102, codice_103, codice_104, are converted to bit strings and the operations are performed on each bit. The "element-expression" of an codice_105 statement is true if any bit is 1. Rexx. Rexx has no Boolean data type. Instead, comparison operators generate 0 or 1; 0 represents false and 1 represents true. The operands of, e.g., codice_102, codice_103, codice_104, must be 0 or 1.
Tcl. Tcl has no separate Boolean type. Like in C, the integers 0 (false) and 1 (true; in fact any nonzero integer) are used. For example, a conditional test on a variable set to 1 will report V is 1 or true, since the expression evaluates to 1, while a test on a variable whose value cannot be evaluated as 0 or 1 will render an error. Python, Ruby, and JavaScript. Python, from version 2.3 forward, has a codice_74 type which is a subclass of codice_62, the standard integer type. It has two possible values: codice_26 and codice_27, which are "special versions" of 1 and 0 respectively and behave as such in arithmetic contexts. Also, a numeric value of zero (integer or fractional), the null value (codice_113), the empty string, and empty containers (lists, sets, etc.) are considered Boolean false; all other values are considered Boolean true by default. Classes can define how their instances are treated in a Boolean context through the special method codice_114 (Python 2) or codice_115 (Python 3). For containers, codice_116 (the special method for determining the length of containers) is used if the explicit Boolean conversion method is not defined. In Ruby, in contrast, only codice_50 (Ruby's null value) and a special codice_76 object are "false"; all else (including the integer 0 and empty arrays) is "true". In JavaScript, the empty string (codice_119), codice_120, codice_121, codice_122, +0, −0 and codice_76 are sometimes called "falsy" (of which the complement is "truthy") to distinguish between strictly type-checked and coerced Booleans. As opposed to Python, empty containers (Arrays, Maps, Sets) are considered truthy. Languages such as PHP also use this approach. SQL. Booleans appear in SQL when a condition is needed, such as in a WHERE clause, in the form of a predicate which is produced by using operators such as comparison operators, the IN operator, IS (NOT) NULL, etc. However, apart from TRUE and FALSE, these operators can also yield a third state, called UNKNOWN, when comparison with codice_124 is made. The SQL92 standard introduced IS (NOT) TRUE, IS (NOT) FALSE, and IS (NOT) UNKNOWN operators which evaluate a predicate, which predated the introduction of the Boolean type in SQL:1999. The SQL:1999 standard introduced a BOOLEAN data type as an optional feature (T031). When restricted by a NOT NULL constraint, a SQL BOOLEAN behaves like Booleans in other languages, which can store only TRUE and FALSE values. However, if it is nullable, which is the default like all other SQL data types, it can have the special null value also. Although the SQL standard defines three literals for the BOOLEAN type – TRUE, FALSE, and UNKNOWN – it also says that the NULL BOOLEAN and UNKNOWN "may be used interchangeably to mean exactly the same thing". This has caused some controversy because the identification subjects UNKNOWN to the equality comparison rules for NULL. More precisely, UNKNOWN = UNKNOWN is not TRUE but UNKNOWN/NULL. As of 2012 few major SQL systems implement the T031 feature. Firebird and PostgreSQL are notable exceptions, although PostgreSQL implements no UNKNOWN literal; codice_124 can be used instead. The treatment of Boolean values differs between SQL systems. For example, in Microsoft SQL Server, Boolean values are not supported at all, neither as a standalone data type nor representable as an integer.
It shows the error message "An expression of non-Boolean type specified in a context where a condition is expected" if a column is directly used in the WHERE clause, while a statement that tries to produce a Boolean result as a column value yields a syntax error. The BIT data type, which can only store integers 0 and 1 apart from NULL, is commonly used as a workaround to store Boolean values, but conversions between the integer and a Boolean expression still have to be written out explicitly. Microsoft Access, which uses the Access Database Engine (ACE/JET), also does not have a Boolean data type. Similar to MS SQL Server, it uses a BIT data type. In Access it is known as a Yes/No data type, which can have two values: Yes (True) or No (False). The BIT data type in Access can also be represented numerically: True is −1 and False is 0. This differs from MS SQL Server in two ways, even though both are Microsoft products. PostgreSQL has a distinct BOOLEAN type as in the standard, which allows predicates to be stored directly into a BOOLEAN column, and allows using a BOOLEAN column directly as a predicate in a WHERE clause. In MySQL, BOOLEAN is treated as an alias of TINYINT(1); TRUE is the same as integer 1 and FALSE is the same as integer 0. Any non-zero integer is true in conditions. Tableau. Tableau Software has a BOOLEAN data type. The literal of a Boolean value is codice_26 or codice_27. The Tableau codice_128 function converts a Boolean to a number, returning 1 for True and 0 for False. Forth. Forth has no Boolean type; it uses regular integers: the value 0 (all bits low) represents false, and −1 (all bits high) represents true. This allows the language to define only one set of logical operators, instead of one for mathematical calculations and one for conditions.
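To make the Python behaviour described in the section above concrete, a short sketch (standard Python 3, no external libraries):

```python
# bool is a subclass of int; True and False behave as 1 and 0 arithmetically.
print(issubclass(bool, int))   # True
print(True + True)             # 2
print(isinstance(True, int))   # True

# Truthiness: zero, None, the empty string and empty containers are false...
for value in (0, 0.0, None, "", [], {}, set()):
    assert not value

# ...while nonzero numbers and non-empty containers are true.
for value in (1, -1, "0", [0], {"k": 0}):
    assert value

# Classes can opt in via __bool__ (or __len__ for containers).
class Tank:
    def __init__(self, litres):
        self.litres = litres
    def __bool__(self):
        return self.litres > 0

print(bool(Tank(0)), bool(Tank(5)))   # False True
```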
[ { "math_id": 0, "text": "\\wedge" }, { "math_id": 1, "text": "\\vee" }, { "math_id": 2, "text": "\\supset" }, { "math_id": 3, "text": "\\equiv" }, { "math_id": 4, "text": "\\neg" } ]
https://en.wikipedia.org/wiki?curid=1162065
1162163
Hard spheres
Model particles in statistical mechanics Hard spheres are widely used as model particles in the statistical mechanical theory of fluids and solids. They are defined simply as impenetrable spheres that cannot overlap in space. They mimic the extremely strong ("infinitely elastic bouncing") repulsion that atoms and spherical molecules experience at very close distances. Hard-sphere systems are studied by analytical means, by molecular dynamics simulations, and by the experimental study of certain colloidal model systems. Besides being a model of theoretical significance, the hard-sphere system is used as a basis in the formulation of several modern, predictive Equations of State for real fluids through the SAFT approach, and models for transport properties in gases through Chapman-Enskog Theory. Formal definition. Hard spheres of diameter formula_0 are particles with the following pairwise interaction potential: formula_1 where formula_2 and formula_3 are the positions of the two particles. Hard-spheres gas. The first three virial coefficients for hard spheres can be determined analytically; higher-order ones can be determined numerically using Monte Carlo integration. A table of virial coefficients for up to eight dimensions can be found on the page Hard sphere: virial coefficients. The hard-sphere system exhibits a fluid-solid phase transition between the volume fractions of freezing formula_5 and melting formula_6. The pressure diverges at random close packing formula_7 for the metastable liquid branch and at close packing formula_8 for the stable solid branch. Hard-spheres liquid. The static structure factor of the hard-spheres liquid can be calculated using the Percus–Yevick approximation. The Carnahan-Starling Equation of State. A simple, yet popular equation of state describing systems of pure hard spheres was developed in 1969 by N. F. Carnahan and K. E. Starling. By expressing the compressibility of a hard-sphere system as a geometric series, the expression formula_9 is obtained, where formula_4 is the packing fraction, given by formula_10 where formula_11 is Avogadro's number, formula_12 is the molar density of the fluid, and formula_0 is the diameter of the hard spheres. From this Equation of State, one can obtain the residual Helmholtz energy, formula_13, which yields the residual chemical potential formula_14. One can also obtain the value of the radial distribution function, formula_15, evaluated at the surface of a sphere, formula_16. The latter is of significant importance to accurate descriptions of more advanced intermolecular potentials based on perturbation theory, such as SAFT, where a system of hard spheres is taken as a reference system, and the complete pair-potential is described by perturbations to the underlying hard-sphere system. Computation of the transport properties of hard-sphere gases at moderate densities using Revised Enskog Theory also relies on an accurate value for formula_17, and the Carnahan-Starling Equation of State has been used for this purpose with considerable success.
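The Carnahan-Starling expressions above translate directly into code. A minimal sketch (Python; the function names are illustrative, and the sample packing fraction is the freezing value quoted above):

```python
import math

N_A = 6.02214076e23   # Avogadro's number, 1/mol

def packing_fraction(molar_density, sigma):
    """eta from molar density n/V (mol/m^3) and diameter sigma (m):
    eta = N_A * pi * (n/V) * sigma^3 / 6."""
    return N_A * math.pi * molar_density * sigma**3 / 6.0

def z_carnahan_starling(eta):
    """Compressibility factor Z = pV/(nRT)."""
    return (1 + eta + eta**2 - eta**3) / (1 - eta)**3

def a_res(eta):
    """Residual Helmholtz energy A_res/(nRT)."""
    return (4*eta - 3*eta**2) / (1 - eta)**2

def mu_res(eta):
    """Residual chemical potential mu_res/(RT)."""
    return (8*eta - 9*eta**2 + 3*eta**3) / (1 - eta)**3

def g_contact(eta):
    """Radial distribution function at contact, g(sigma)."""
    return (1 - 0.5*eta) / (1 - eta)**3

eta = 0.494   # hard-sphere freezing volume fraction (see above)
print(z_carnahan_starling(eta))   # ~12.5: strongly non-ideal near freezing
print(g_contact(eta))             # ~5.8: pronounced correlations at contact
```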
[ { "math_id": 0, "text": "\\sigma" }, { "math_id": 1, "text": "V(\\mathbf{r}_1,\\mathbf{r}_2)=\\left\\{ \\begin{matrix}0 & \\mbox{if}\\quad |\\mathbf{r}_1-\\mathbf{r}_2| \\geq \\sigma \\\\ \\infty & \\mbox{if}\\quad|\\mathbf{r}_1-\\mathbf{r}_2| < \\sigma \\end{matrix} \\right. " }, { "math_id": 2, "text": "\\mathbf{r}_1" }, { "math_id": 3, "text": "\\mathbf{r}_2" }, { "math_id": 4, "text": "\\eta" }, { "math_id": 5, "text": "\\eta_\\mathrm{f}\\approx 0.494" }, { "math_id": 6, "text": "\\eta_\\mathrm{m}\\approx 0.545" }, { "math_id": 7, "text": "\\eta_\\mathrm{rcp}\\approx 0.644" }, { "math_id": 8, "text": "\\eta_\\mathrm{cp}=\\sqrt{2}\\pi/6 \\approx 0.74048" }, { "math_id": 9, "text": "Z = \\frac{pV}{nRT} = \\frac{1 + \\eta + \\eta^2 - \\eta^3}{(1 - \\eta)^3}" }, { "math_id": 10, "text": "\\eta = \\frac{N_A \\pi n \\sigma^3}{6V}" }, { "math_id": 11, "text": "N_A" }, { "math_id": 12, "text": "n / V" }, { "math_id": 13, "text": "\\frac{A_{res}}{nRT} = \\frac{4 \\eta - 3 \\eta^2}{( 1 - \\eta )^2} " }, { "math_id": 14, "text": "\\frac{\\mu_{res}}{RT} = \\frac{8 \\eta - 9 \\eta^2 + 3 \\eta^3}{(1 - \\eta)^3}" }, { "math_id": 15, "text": "g(r)" }, { "math_id": 16, "text": "g(\\sigma) = \\frac{1 - \\frac{1}{2} \\eta}{(1 - \\eta)^3}" }, { "math_id": 17, "text": "g(\\sigma)" } ]
https://en.wikipedia.org/wiki?curid=1162163
1162226
Potts model
Model in statistical mechanics generalizing the Ising model In statistical mechanics, the Potts model, a generalization of the Ising model, is a model of interacting spins on a crystalline lattice. By studying the Potts model, one may gain insight into the behaviour of ferromagnets and certain other phenomena of solid-state physics. The strength of the Potts model is not so much that it models these physical systems well; it is rather that the one-dimensional case is exactly solvable, and that it has a rich mathematical formulation that has been studied extensively. The model is named after Renfrey Potts, who described the model near the end of his 1951 Ph.D. thesis. The model was related to the "planar Potts" or "clock model", which was suggested to him by his advisor, Cyril Domb. The four-state Potts model is sometimes known as the Ashkin–Teller model, after Julius Ashkin and Edward Teller, who considered an equivalent model in 1943. The Potts model is related to, and generalized by, several other models, including the XY model, the Heisenberg model and the N-vector model. The infinite-range Potts model is known as the Kac model. When the spins are taken to interact in a non-Abelian manner, the model is related to the flux tube model, which is used to discuss confinement in quantum chromodynamics. Generalizations of the Potts model have also been used to model grain growth in metals, coarsening in foams, and statistical properties of proteins. A further generalization of these methods by James Glazier and Francois Graner, known as the cellular Potts model, has been used to simulate static and kinetic phenomena in foam and biological morphogenesis. Definition. Vector Potts model. The Potts model consists of "spins" that are placed on a lattice; the lattice is usually taken to be a two-dimensional rectangular Euclidean lattice, but is often generalized to other dimensions and lattice structures. Originally, Domb suggested that the spin takes one of formula_0 possible values, distributed uniformly about the circle, at angles formula_1 where formula_2 and that the interaction Hamiltonian is given by formula_3 with the sum running over the nearest neighbor pairs formula_4 over all lattice sites, and formula_5 is a coupling constant, determining the interaction strength. This model is now known as the vector Potts model or the clock model. Potts provided the location in two dimensions of the phase transition for formula_6. In the limit formula_7, this becomes the XY model. Standard Potts model. What is now known as the standard Potts model was suggested by Potts in the course of his study of the model above and is defined by a simpler Hamiltonian: formula_8 where formula_9 is the Kronecker delta, which equals one whenever formula_10 and zero otherwise. The formula_11 standard Potts model is equivalent to the Ising model and the 2-state vector Potts model, with formula_12. The formula_13 standard Potts model is equivalent to the three-state vector Potts model, with formula_14. Generalized Potts model. A generalization of the Potts model is often used in statistical inference and biophysics, particularly for modelling proteins through direct coupling analysis. This generalized Potts model consists of 'spins' that each may take on formula_0 states: formula_15 (with no particular ordering).
The Hamiltonian is, formula_16 where formula_17 is the energetic cost of spin formula_18 being in state formula_19 while spin formula_20 is in state formula_21, and formula_22 is the energetic cost of spin formula_18 being in state formula_19. Note: formula_23. This model resembles the Sherrington-Kirkpatrick model in that couplings can be heterogeneous and non-local. There is no explicit lattice structure in this model. Physical properties. Phase transitions. Despite its simplicity as a model of a physical system, the Potts model is useful as a model system for the study of phase transitions. For example, for the standard ferromagnetic Potts model in formula_24, a phase transition exists for all real values formula_25, with the critical point at formula_26. The phase transition is continuous (second order) for formula_27 and discontinuous (first order) for formula_28. For the clock model, there is evidence that the corresponding phase transitions are infinite order BKT transitions, and a continuous phase transition is observed when formula_29. Further use is found through the model's relation to percolation problems and the Tutte and chromatic polynomials found in combinatorics. For integer values of formula_30, the model displays the phenomenon of 'interfacial adsorption' with intriguing critical wetting properties when fixing opposite boundaries in two different states. Relation with the random cluster model. The Potts model has a close relation to the Fortuin-Kasteleyn random cluster model, another model in statistical mechanics. Understanding this relationship has helped develop efficient Markov chain Monte Carlo methods for numerical exploration of the model at small formula_0, and led to the rigorous proof of the critical temperature of the model. At the level of the partition function formula_31, the relation amounts to transforming the sum over spin configurations formula_32 into a sum over edge configurations formula_33, i.e. sets of nearest neighbor pairs of the same color. The transformation is done using the identity formula_34 This leads to rewriting the partition function as formula_35 where the FK clusters are the connected components of the union of closed segments formula_36. This is proportional to the partition function of the random cluster model with the open edge probability formula_37. An advantage of the random cluster formulation is that formula_0 can be an arbitrary complex number, rather than a natural number. Alternatively, instead of FK clusters, the model can be formulated in terms of spin clusters, using the identity formula_38 A spin cluster is the union of neighbouring FK clusters with the same color: two neighbouring spin clusters have different colors, while two neighbouring FK clusters are colored independently. Measure-theoretic description. The one-dimensional Potts model may be expressed in terms of a subshift of finite type, and thus gains access to all of the mathematical techniques associated with this formalism. In particular, it can be solved exactly using the techniques of transfer operators. (However, Ernst Ising used combinatorial methods to solve the Ising model, which is the "ancestor" of the Potts model, in his 1924 PhD thesis). This section develops the mathematical formalism, based on measure theory, behind this solution. While the example below is developed for the one-dimensional case, many of the arguments, and almost all of the notation, generalize easily to any number of dimensions.
Some of the formalism is also broad enough to handle related models, such as the XY model, the Heisenberg model and the N-vector model. Topology of the space of states. Let "Q" = {1, ..., "q"} be a finite set of symbols, and let formula_39 be the set of all bi-infinite strings of values from the set "Q". This set is called a full shift. For defining the Potts model, either this whole space, or a certain subset of it, a subshift of finite type, may be used. Shifts get this name because there exists a natural operator on this space, the shift operator τ : "Q"Z → "Q"Z, acting as formula_40 This set has a natural product topology; the base for this topology is the collection of cylinder sets formula_41, that is, the set of all possible strings where "k"+1 spins match up exactly to a given, specific set of values ξ0, ..., ξ"k". Explicit representations for the cylinder sets can be obtained by noting that the string of values corresponds to a "q"-adic number, however the natural topology of the q-adic numbers is finer than the above product topology. Interaction energy. The interaction between the spins is then given by a continuous function "V" : "Q"Z → R on this topology. "Any" continuous function will do; for example formula_42 will be seen to describe the interaction between nearest neighbors. Of course, different functions give different interactions; so a function of "s"0, "s"1 and "s"2 will describe a next-nearest neighbor interaction. A function "V" gives interaction energy between a set of spins; it is "not" the Hamiltonian, but is used to build it. The argument to the function "V" is an element "s" ∈ "Q"Z, that is, an infinite string of spins. In the above example, the function "V" just picked out two spins out of the infinite string: the values "s"0 and "s"1. In general, the function "V" may depend on some or all of the spins; currently, only those that depend on a finite number of spins are exactly solvable. Define the function "Hn" : "Q"Z → R as formula_43 This function can be seen to consist of two parts: the self-energy of a configuration ["s"0, "s"1, ..., "sn"] of spins, plus the interaction energy between this set and all the other spins in the lattice. The "n" → ∞ limit of this function is the Hamiltonian of the system; for finite "n", these are sometimes called the finite state Hamiltonians. Partition function and measure. The corresponding finite-state partition function is given by formula_44 with "C"0 being the cylinder sets defined above. Here, β = 1/"kT", where "k" is the Boltzmann constant, and "T" is the temperature. It is very common in mathematical treatments to set β = 1, as it is easily regained by rescaling the interaction energy. This partition function is written as a function of the interaction "V" to emphasize that it is only a function of the interaction, and not of any specific configuration of spins. The partition function, together with the Hamiltonian, are used to define a measure on the Borel σ-algebra in the following way: The measure of a cylinder set, i.e. an element of the base, is given by formula_45 One can then extend by countable additivity to the full σ-algebra. This measure is a probability measure; it gives the likelihood of a given configuration occurring in the configuration space "Q"Z. By endowing the configuration space with a probability measure built from a Hamiltonian in this way, the configuration space turns into a canonical ensemble. Most thermodynamic properties can be expressed directly in terms of the partition function.
Thus, for example, the Helmholtz free energy is given by formula_46 Another important related quantity is the topological pressure, defined as formula_47 which will show up as the logarithm of the leading eigenvalue of the transfer operator of the solution. Free field solution. The simplest model is the model where there is no interaction at all, and so "V" = "c" and "Hn" = "c" (with "c" constant and independent of any spin configuration). The partition function becomes formula_48 If all states are allowed, that is, the underlying set of states is given by a full shift, then the sum may be trivially evaluated as formula_49 If neighboring spins are only allowed in certain specific configurations, then the state space is given by a subshift of finite type. The partition function may then be written as formula_50 where card is the cardinality or count of a set, and Fix is the set of fixed points of the iterated shift function: formula_51 The "q" × "q" matrix "A" is the adjacency matrix specifying which neighboring spin values are allowed. Interacting model. The simplest case of the interacting model is the Ising model, where the spin can only take on one of two values, "sn" ∈ {−1, 1} and only nearest neighbor spins interact. The interaction potential is given by formula_52 This potential can be captured in a 2 × 2 matrix with matrix elements formula_53 with the index σ, σ′ ∈ {−1, 1}. The partition function is then given by formula_54 The general solution for an arbitrary number of spins, and an arbitrary finite-range interaction, is given by the same general form. In this case, the precise expression for the matrix "M" is a bit more complex. The goal of solving a model such as the Potts model is to give an exact closed-form expression for the partition function and an expression for the Gibbs states or equilibrium states in the limit of "n" → ∞, the thermodynamic limit. Applications. Signal and image processing. The Potts model has applications in signal reconstruction. Assume that we are given a noisy observation of a piecewise constant signal "g" in R"n". To recover "g" from the noisy observation vector "f" in R"n", one seeks a minimizer of the corresponding inverse problem, the "Lp"-Potts functional "P"γ("u"), which is defined by formula_55 The jump penalty formula_56 forces piecewise constant solutions and the data term formula_57 couples the minimizing candidate "u" to the data "f". The parameter γ > 0 controls the tradeoff between regularity and data fidelity. There are fast algorithms for the exact minimization of the "L"1 and the "L"2-Potts functionals. In image processing, the Potts functional is related to the segmentation problem. However, in two dimensions the problem is NP-hard.
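The transfer-matrix solution described above is straightforward to evaluate numerically for the one-dimensional standard Potts model with periodic boundary conditions, where the partition function is a trace of a matrix power as in formula_54. A minimal sketch (Python with numpy; assumes the inverse temperature and coupling are folded into a single constant K = β"Jp"):

```python
import numpy as np

def potts_partition_function(q, K, n):
    """Partition function of the 1D standard q-state Potts chain with
    n sites and periodic boundary conditions; K = beta * J_p.
    Z = Tr M^n, with M[s, s'] = exp(K * delta(s, s'))."""
    M = np.exp(K * np.eye(q))             # exp(K) on the diagonal, exp(0) = 1 elsewhere
    eigenvalues = np.linalg.eigvalsh(M)   # M is symmetric
    return float(np.sum(eigenvalues ** n))

# Cross-check against the closed form, obtained from the eigenvalues of M:
# Tr M^n = (e^K + q - 1)^n + (q - 1) * (e^K - 1)^n
q, K, n = 3, 0.7, 10
closed_form = (np.exp(K) + q - 1) ** n + (q - 1) * (np.exp(K) - 1) ** n
print(potts_partition_function(q, K, n), closed_form)   # the two values agree
```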
[ { "math_id": 0, "text": "q" }, { "math_id": 1, "text": "\\theta_s = \\frac{2\\pi s}{q}," }, { "math_id": 2, "text": "s = 0, 1, ..., q-1" }, { "math_id": 3, "text": "H_c = J_c\\sum_{\\langle i, j \\rangle} \\cos \\left( \\theta_{s_i} - \\theta_{s_j} \\right)" }, { "math_id": 4, "text": "\\langle i,j \\rangle" }, { "math_id": 5, "text": "J_c" }, { "math_id": 6, "text": "q = 3,4" }, { "math_id": 7, "text": "q \\to \\infty" }, { "math_id": 8, "text": "H_p = -J_p \\sum_{(i,j)}\\delta(s_i,s_j) \\," }, { "math_id": 9, "text": "\\delta(s_i, s_j)" }, { "math_id": 10, "text": "s_i = s_j" }, { "math_id": 11, "text": "q=2" }, { "math_id": 12, "text": "J_p = -2J_c" }, { "math_id": 13, "text": "q=3" }, { "math_id": 14, "text": "J_p = -\\frac{3}{2}J_c" }, { "math_id": 15, "text": "s_i \\in \\{1,\\dots,q\\}" }, { "math_id": 16, "text": "\nH = \\sum_{i < j} J_{ij}(s_i,s_j) + \\sum_i h_i(s_i),\n" }, { "math_id": 17, "text": "J_{ij}(k,k')" }, { "math_id": 18, "text": "i" }, { "math_id": 19, "text": "k" }, { "math_id": 20, "text": "j" }, { "math_id": 21, "text": "k'" }, { "math_id": 22, "text": "h_i(k)" }, { "math_id": 23, "text": "J_{ij}(k,k') = J_{ji}(k',k)" }, { "math_id": 24, "text": "2d" }, { "math_id": 25, "text": "q \\geq 1" }, { "math_id": 26, "text": "\\beta J = \\log(1 + \\sqrt{q})" }, { "math_id": 27, "text": "1 \\leq q \\leq 4" }, { "math_id": 28, "text": "q > 4" }, { "math_id": 29, "text": "q \\leq 4" }, { "math_id": 30, "text": "q \\geq 3" }, { "math_id": 31, "text": "Z_p = \\sum_{\\{s_i\\}} e^{-H_p}" }, { "math_id": 32, "text": "\\{s_i\\}" }, { "math_id": 33, "text": "\\omega=\\Big\\{(i,j)\\Big|s_i=s_j\\Big\\}" }, { "math_id": 34, "text": "\ne^{J_p\\delta(s_i,s_j)} = 1 + v \\delta(s_i,s_j) \\qquad \\text{ with } \\qquad v = e^{J_p}-1 \\ . \n" }, { "math_id": 35, "text": "\nZ_p = \\sum_\\omega v^{\\#\\text{edges}(\\omega)} q^{\\#\\text{clusters}(\\omega)}\n" }, { "math_id": 36, "text": "\\cup_{(i,j)\\in\\omega}[i,j]" }, { "math_id": 37, "text": "p=\\frac{v}{1+v}=1-e^{-J_p}" }, { "math_id": 38, "text": "\ne^{J_p\\delta(s_i,s_j)} = (1 - \\delta(s_i,s_j)) + e^{J_p} \\delta(s_i,s_j)\\ . 
\n" }, { "math_id": 39, "text": "Q^\\mathbf{Z}=\\{ s=(\\ldots,s_{-1},s_0,s_1,\\ldots) : s_k \\in Q \\; \\forall k \\in \\mathbf{Z} \\}" }, { "math_id": 40, "text": "\\tau (s)_k = s_{k+1}" }, { "math_id": 41, "text": "C_m[\\xi_0, \\ldots, \\xi_k]= \\{s \\in Q^\\mathbf{Z} : s_m = \\xi_0, \\ldots ,s_{m+k} = \\xi_k \\}" }, { "math_id": 42, "text": "V(s) = -J\\delta(s_0,s_1)" }, { "math_id": 43, "text": "H_n(s)= \\sum_{k=0}^n V(\\tau^k s)" }, { "math_id": 44, "text": "Z_n(V) = \\sum_{s_0,\\ldots,s_n \\in Q} \\exp(-\\beta H_n(C_0[s_0,s_1,\\ldots,s_n]))" }, { "math_id": 45, "text": "\\mu (C_k[s_0,s_1,\\ldots,s_n]) = \\frac{1}{Z_n(V)} \\exp(-\\beta H_n (C_k[s_0,s_1,\\ldots,s_n]))" }, { "math_id": 46, "text": "A_n(V)=-kT \\log Z_n(V)" }, { "math_id": 47, "text": "P(V) = \\lim_{n\\to\\infty} \\frac{1}{n} \\log Z_n(V)" }, { "math_id": 48, "text": "Z_n(c) = e^{-c\\beta} \\sum_{s_0,\\ldots,s_n \\in Q} 1" }, { "math_id": 49, "text": "Z_n(c) = e^{-c\\beta} q^{n+1}" }, { "math_id": 50, "text": "Z_n(c) = e^{-c\\beta} |\\mbox{Fix}\\, \\tau^n| = e^{-c\\beta} \\mbox{Tr} A^n" }, { "math_id": 51, "text": "\\mbox{Fix}\\, \\tau^n = \\{ s \\in Q^\\mathbf{Z} : \\tau^n s = s \\}" }, { "math_id": 52, "text": "V(\\sigma) = -J_p s_0 s_1\\," }, { "math_id": 53, "text": "M_{\\sigma \\sigma'} = \\exp \\left( \\beta J_p \\sigma \\sigma' \\right)" }, { "math_id": 54, "text": "Z_n(V) = \\mbox{Tr}\\, M^n" }, { "math_id": 55, "text": " P_\\gamma(u) = \\gamma \\| \\nabla u \\|_0 + \\| u-f\\|_p^p = \\gamma \\# \\{ i : u_i \\neq u_{i+1} \\} + \\sum_{i=1}^n |u_i - f_i|^p" }, { "math_id": 56, "text": "\\| \\nabla u \\|_0" }, { "math_id": 57, "text": "\\| u-f\\|_p^p" } ]
https://en.wikipedia.org/wiki?curid=1162226
11622604
Triangle Universities Nuclear Laboratory
The Triangle Universities Nuclear Laboratory, abbreviated as TUNL (pronounced as "tunnel"), is a research consortium operated by Duke University, the University of North Carolina at Chapel Hill, North Carolina State University and North Carolina Central University. The laboratory is located on the West Campus of Duke University in Durham, North Carolina. Researchers are now drawn from several other universities around the United States in addition to members from the founding universities. TUNL also participates in long-term collaborations with universities and laboratories around the world. Funding for TUNL comes primarily from the United States Department of Energy Office of Nuclear Physics. TUNL operates three laboratory facilities, all of which reside on Duke University's campus. Two of the facilities, the Tandem Accelerator Laboratory and the Laboratory for Experimental Nuclear Astrophysics, are low-energy charged-beam accelerators. The third facility is the High Intensity Gamma-Ray Source (HIGS), which produces the highest intensity polarized gamma-ray beams in the world. TUNL is also involved in off-site research projects, including the Majorana Demonstrator Experiment, an ongoing double beta decay experiment at the Sanford Underground Research Facility in Lead, South Dakota. Research at TUNL is focused on nuclear physics, including studies on fundamental symmetries, neutrinos, nuclear astrophysics, and hadron structure. TUNL also conducts applied research, investigating the applications of nuclear physics to topics such as national security, public health, and plant physiology. History. The Triangle Universities Nuclear Laboratory was established in 1965, with a $2.5 million grant from the United States Atomic Energy Commission providing the funding for a new 15 MeV tandem Van de Graaff accelerator as well as a 15 MeV cyclotron. After three years of construction and testing, the new accelerator facility became operational in December 1968. Henry Newson, a nuclear physics professor at Duke University, was responsible for the proposal, was the original proponent of combining the efforts of the three universities, and served as the first director of the new laboratory. The tandem generator and the cyclotron at TUNL were combined into what was named a Cyclo-Graaff accelerator. Ions would first be accelerated in the cyclotron; then, once the initial energy was high enough, the beam from the cyclotron would be injected into the tandem generator, where it would be further accelerated. Using the accelerators together effectively doubled the maximum energy that the lab could reach when compared to the energies of each individual accelerator. This combination, the Cyclo-Graaff, would be used by Henry Newson to study nuclear structure until his death in 1978. Facilities. Tandem Laboratory. The Tandem Laboratory houses an FN tandem Van de Graaff generator with a maximum terminal voltage of 10 megavolts. The facility can produce light ion beams made up of protons, deuterons, 3He nuclei, and 4He nuclei. The proton and neutron beams produced at the Tandem Laboratory are available either polarized or unpolarized depending on the experiment requirements. Through secondary beam collisions, the lab can also produce polarized neutron beams, allowing the lab to study neutron interactions. The Tandem Lab is primarily intended to study the strong force at low energies. Research at Tandem includes few-nucleon dynamics, 2-nucleon transfer reactions, and neutron multiplication. High Intensity Gamma-ray Source.
The High Intensity Gamma-Ray Source (HIGS) produces gamma-rays by means of Compton backscattering. This occurs when photons from a free-electron laser collide with accelerated electrons, producing a beam of high energy photons with a very precise energy and a high degree of polarization. The gamma-ray beams can be produced with energies ranging from 1 to 100 MeV with a maximum intensity of 1000 formula_0/s/eV, making HIGS the highest intensity accelerator-driven gamma-ray source in the world. Research at HIGS can be broken broadly into two groups: nuclear structure and nuclear astrophysics, with reactions such as (formula_0, formula_0'), (formula_0, n), and (formula_0, formula_1), along with low-energy QCD, with studies on Compton scattering and photo-pion production. Laboratory for Experimental Nuclear Astrophysics. The two accelerators housed at LENA combine to cover the entire range of energy values up to 1 MeV and produce beams that are both stable and intense. The lab focuses on light ion beams with high current that are optimized for applications to nuclear astrophysics. Research topics at LENA include the nuclear reactions that drive astrophysical processes such as stellar evolution, novae, and X-ray bursts. Education. Education in nuclear physics is provided at both a graduate and undergraduate level to students at the Triangle Universities Nuclear Laboratory. TUNL draws around 40 graduate students from the three founding universities. Graduates find employment in diverse settings, including faculty positions, industry positions, and positions at government research facilities and the National Laboratories. Graduates George A. Keyworth II and John H. Gibbons served as presidential science advisers to presidents Ronald Reagan and Bill Clinton respectively. One component of undergraduate education provided by TUNL is the TUNL/Duke Research Experiences for Undergraduates, a ten-week summer program funded by the National Science Foundation, with positions on TUNL's campus as well as a limited number at CERN. Undergraduates from the three founding universities as well as other associated universities conduct research with faculty members throughout the year.
[ { "math_id": 0, "text": "\\gamma" }, { "math_id": 1, "text": "\\alpha" } ]
https://en.wikipedia.org/wiki?curid=11622604
1162543
Quantum efficiency
Property of photosensitive devices The term quantum efficiency (QE) may apply to the incident-photon-to-converted-electron (IPCE) ratio of a photosensitive device, or it may refer to the TMR effect of a magnetic tunnel junction. This article deals with the term as a measurement of a device's electrical sensitivity to light. In a charge-coupled device (CCD) or other photodetector, it is the ratio between the number of charge carriers collected at either terminal and the number of photons hitting the device's photoreactive surface. As a ratio, QE is dimensionless, but it is closely related to the responsivity, which is expressed in amps per watt. Since the energy of a photon is inversely proportional to its wavelength, QE is often measured over a range of different wavelengths to characterize a device's efficiency at each photon energy level. For typical semiconductor photodetectors, QE drops to zero for photons whose energy is below the band gap. A photographic film typically has a QE of much less than 10%, while CCDs can have a QE of well over 90% at some wavelengths. QE of solar cells. A solar cell's quantum efficiency value indicates the amount of current that the cell will produce when irradiated by photons of a particular wavelength. If the cell's quantum efficiency is integrated over the whole solar electromagnetic spectrum, one can evaluate the amount of current that the cell will produce when exposed to sunlight. The ratio between this energy-production value and the highest possible energy-production value for the cell (i.e., if the QE were 100% over the whole spectrum) gives the cell's overall energy conversion efficiency value. Note that in the event of multiple exciton generation (MEG), quantum efficiencies of greater than 100% may be achieved since the incident photons have more than twice the band gap energy and can create two or more electron-hole pairs per incident photon. Types. Two types of quantum efficiency of a solar cell are often considered: the external quantum efficiency (EQE), the ratio of the number of charge carriers collected by the cell to the number of photons incident on the cell from outside, and the internal quantum efficiency (IQE), the ratio of the number of charge carriers collected to the number of photons that are actually absorbed by the cell. The IQE is always larger than the EQE in the visible spectrum. A low IQE indicates that the active layer of the solar cell is unable to make good use of the photons, most likely due to poor carrier collection efficiency. To measure the IQE, one first measures the EQE of the solar device, then measures its transmission and reflection, and combines these data to infer the IQE. formula_0 formula_1 The external quantum efficiency therefore depends on both the absorption of light and the collection of charges. Once a photon has been absorbed and has generated an electron-hole pair, these charges must be separated and collected at the junction. A "good" material avoids charge recombination. Charge recombination causes a drop in the external quantum efficiency. The ideal quantum efficiency graph has a square shape, where the QE value is fairly constant across the entire spectrum of wavelengths measured. However, the QE for most solar cells is reduced because of the effects of recombination, where charge carriers are not able to move into an external circuit. The same mechanisms that affect the collection probability also affect the QE. For example, modifying the front surface can affect carriers generated near the surface. Highly doped front surface layers can also cause 'free carrier absorption' which reduces QE in the longer wavelengths. And because high-energy (blue) light is absorbed very close to the surface, considerable recombination at the front surface will affect the "blue" portion of the QE.
Similarly, lower-energy (green) light is absorbed in the bulk of a solar cell, and a low diffusion length will affect the collection probability from the solar cell bulk, reducing the QE in the green portion of the spectrum. Generally, solar cells on the market today do not produce much electricity from ultraviolet and infrared light (<400 nm and >1100 nm wavelengths, respectively); these wavelengths of light are either filtered out or are absorbed by the cell, thus heating the cell. That heat is wasted energy, and could damage the cell. QE of image sensors. Quantum efficiency (QE) is the fraction of photon flux that contributes to the photocurrent in a photodetector or a pixel. Quantum efficiency is one of the most important parameters used to evaluate the quality of a detector and is often called the spectral response to reflect its wavelength dependence. It is defined as the number of signal electrons created per incident photon. In some cases it can exceed 100% (i.e. when more than one electron is created per incident photon). EQE mapping. Conventional measurement of the EQE will give the efficiency of the overall device. However, it is often useful to have a map of the EQE over a large area of the device. This mapping provides an efficient way to visualize the homogeneity and/or the defects in the sample. It was realized by researchers from the Institute of Research and Development on Photovoltaic Energy (IRDEP), who calculated the EQE mapping from electroluminescence measurements taken with a hyperspectral imager. Spectral responsivity. Spectral responsivity is a similar measurement, but it has different units: amperes per watt (A/W), i.e. how much current comes out of the device per unit of incident light power. Responsivity is ordinarily specified for monochromatic light (i.e. light of a single wavelength). Both the quantum efficiency and the responsivity are functions of the photons' wavelength (indicated by the subscript λ). To convert from responsivity ("Rλ", in A/W) to QEλ (on a scale 0 to 1): formula_2 where λ is the wavelength in nm, "h" is the Planck constant, "c" is the speed of light in vacuum, and "e" is the elementary charge. Note that the unit W/A (watts per ampere) is equivalent to V (volts). Determination. formula_3 where formula_4 = number of electrons produced, formula_5 = number of photons absorbed. formula_6 Assuming each photon absorbed in the depletion layer produces a viable electron-hole pair, and all other photons do not, formula_7 where "t" is the measurement time (in seconds), formula_8 = incident optical power in watts, formula_9 = optical power absorbed in the depletion layer, also in watts. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
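The responsivity-to-QE conversion, together with the relation between EQE and IQE, can be illustrated with a short Python sketch (the helper names are illustrative, not from any standard library):

```python
# Illustrative helpers for the formulas above; names are hypothetical.
H = 6.62607015e-34   # Planck constant, J*s
C = 299792458.0      # speed of light in vacuum, m/s
E = 1.602176634e-19  # elementary charge, C

def qe_from_responsivity(r_lambda, wavelength_nm):
    """QE (scale 0 to 1) from responsivity R_lambda in A/W at a wavelength in nm."""
    return r_lambda * H * C / (E * wavelength_nm * 1e-9)

def iqe_from_eqe(eqe, reflection, transmission):
    """Internal QE inferred from measured EQE, reflection, and transmission."""
    return eqe / (1.0 - reflection - transmission)

print(qe_from_responsivity(0.5, 1240.0))  # ~0.5, since hc/e is ~1240 W*nm/A
```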
[ { "math_id": 0, "text": " \\text{EQE} = \\frac{\\text{electrons/sec}}{\\text{photons/sec}}= \\frac{\\text{(current)}/\\text{(charge of one electron)}}{(\\text{total power of photons})/(\\text{energy of one photon})}" }, { "math_id": 1, "text": " \\text{IQE} = \n\\frac{\\text{electrons/sec}}{\\text{absorbed photons/sec}}= \n\\frac{\\text{EQE}}{\\text{1-Reflection-Transmission}}\n" }, { "math_id": 2, "text": "QE_\\lambda=\\frac{R_\\lambda}{\\lambda}\\times\\frac{h c}{e}\\approx\\frac{R_\\lambda}{\\lambda} {\\times} (1240\\;\\mathrm{W \\cdot {nm} / A}) " }, { "math_id": 3, "text": "QE_\\lambda=\\eta =\\frac{N_e}{N_\\nu}" }, { "math_id": 4, "text": "N_e" }, { "math_id": 5, "text": "N_\\nu" }, { "math_id": 6, "text": "\\frac{N_\\nu}t = \\Phi_o \\frac{\\lambda}{hc}" }, { "math_id": 7, "text": "\\frac{N_e}t = \\Phi_{\\xi}\\frac{\\lambda}{hc}" }, { "math_id": 8, "text": "\\Phi_o" }, { "math_id": 9, "text": "\\Phi_{\\xi}" } ]
https://en.wikipedia.org/wiki?curid=1162543
11627475
Time slicing (digital broadcasting)
Time slicing is a technique used by the DVB-H and ATSC-M/H technologies for achieving power savings on mobile terminal devices. It is based on the time-multiplexed transmission of different services. DVB-H and ATSC-M/H transmit large pieces of data in bursts, allowing the receiver to be switched off in inactive periods. The result is power savings of up to 90%, and the same inactive receiver can be used to monitor neighboring cells for seamless handovers. Detailed description. Motivation. A special problem for mobile terminals is the limited battery capacity. Being compatible with a broadband terrestrial service places a burden on the mobile terminal, because demodulating and decoding a high data-rate stream involves considerable power dissipation in the tuner and the demodulator. An investigation at the beginning of the development of DVB-H showed that the total power consumption of a DVB-T front end was more than 1 watt at the time of the examination and was expected not to decrease below 600 mW until 2006; meanwhile, a somewhat lower value seems possible, but the envisaged target of 100 mW as a maximum threshold for the entire front end incorporated in a DVB-H terminal is still unobtainable for a DVB-T receiver. A considerable drawback for battery-operated terminals is the fact that with DVB-T or ATSC, the whole data stream has to be decoded before any one of the services (TV programmes) of the multiplex can be accessed. The power saving made possible by time slicing derives from the fact that essentially only those parts of the stream which carry the data of the service currently selected have to be processed. However, the data stream needs to be reorganized in a suitable way for that purpose. In DVB-H and ATSC-M/H, service multiplexing is performed in a pure time-division multiplex. The data of one particular service are therefore not transmitted continuously but in compact periodic bursts with interruptions in between. Multiplexing of several services leads again to a continuous, uninterrupted transmitted stream of constant data rate. Burst transmission. This kind of signal can be received time-selectively: the terminal synchronizes to the bursts of the wanted service but switches to a power-save mode during the intermediate time when other services are being transmitted. The power-save time between bursts, relative to the on-time required for the reception of an individual service, is a direct measure of the power saving provided by time slicing. Bursts entering the receiver have to be buffered and read out of the buffer at the service data rate. The amount of data contained in one burst needs to be sufficient for bridging the power-save period of the front end. For tuning into a stream, a burst needs to carry a video frame that allows the decoder to display the video instantaneously; otherwise, the next burst has to be awaited. The position of the bursts is signaled in terms of the relative time difference between two consecutive bursts of the same service. This information is called "delta t". It is transmitted multiple times within a single burst so as to provide error redundancy. Practically, the duration of one burst is in the range of several hundred milliseconds, whereas the power-save time may amount to several seconds. A lead time for powering up the front end, for resynchronization, etc. has to be taken into account; this time period is assumed to be less than 250 ms according to the DVB-H technical standard. 
Depending on the ratio of on-time to power-save time, the resulting power saving may be more than 90%. As an example, the figure on the right shows a portion of a data stream containing time-sliced services. One quarter of the assumed total capacity of a DVB-T channel of 13.27 Mbit/s is assigned to DVB-H services, whereas the remaining capacity is shared between ordinary DVB-T services. This example shows that it is feasible to transmit both DVB-T and DVB-H within the same network. Calculating burst parameters. The length of a burst formula_0 can be calculated from the size of the burst formula_1 and the bitrate of the burst formula_2. An additional factor of 0.96 is introduced to compensate for the headers of the underlying MPEG transport stream, because they are created after applying time slicing. formula_3 The actual on-time of a burst, referred to as formula_4, incorporates the synchronization time stated above (250 ms). formula_5 The constant bitrate of a stream formula_6 can be calculated from the burst size and the ON and OFF durations: formula_7 Vice versa, the OFF time that is to be used can be calculated from the actual constant bitrate of the video stream. This is more intuitive, since the constant (or average) video bitrate is known before applying time slicing. formula_8 The energy saving percentage formula_9 can finally be expressed as formula_10 Benefits and disadvantages. Time slicing requires a sufficiently high number of multiplexed services and a certain minimum burst data-rate to guarantee effective power saving. Basically, the power consumption of the front end correlates with the service data-rate of the service currently selected. Time slicing offers another benefit for the terminal architecture. The rather long power-save periods may be used to search for channels in neighboring radio cells offering the selected service. This way a channel handover can be performed at the border between two cells which remains imperceptible for the user. Both the monitoring of the services in adjacent cells and the reception of the selected service data can be done with the same front end.
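The formulas above fit together as in the following Python sketch (variable names are illustrative; it uses the burst size in the cycle-time relation, consistent with the power-saving formula):

```python
def burst_parameters(b_b, r_b, r_c, t_sync=0.25):
    """Burst timing for time slicing; a sketch of the formulas above.

    b_b: burst size in bits, r_b: burst bitrate in bit/s,
    r_c: constant service bitrate in bit/s, t_sync: lead time in s."""
    t_burst = b_b / (r_b * 0.96)   # T_B, burst duration
    t_on = t_burst + t_sync        # T_ON, receiver on-time
    t_off = b_b / r_c - t_on       # T_OFF, power-save time
    saving = (1.0 - r_c * (1.0 / (r_b * 0.96) + t_sync / b_b)) * 100.0
    return t_on, t_off, saving

# Example: a 2 Mbit burst at 4 Mbit/s serving a 350 kbit/s stream.
print(burst_parameters(2e6, 4e6, 350e3))  # on ~0.77 s, off ~4.9 s, saving ~86.5%
```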
[ { "math_id": 0, "text": "T_{B}" }, { "math_id": 1, "text": "B_B" }, { "math_id": 2, "text": "R_B" }, { "math_id": 3, "text": "T_B = \\frac{B_B}{R_B \\cdot 0.96}" }, { "math_id": 4, "text": "T_{ON}" }, { "math_id": 5, "text": "T_{ON} = T_B + T_{Sync}" }, { "math_id": 6, "text": "R_C" }, { "math_id": 7, "text": "R_C = \\frac{R_B}{T_{ON}+T_{OFF}}" }, { "math_id": 8, "text": "T_{OFF} = \\frac{R_B}{R_C}-T_{ON}" }, { "math_id": 9, "text": "P" }, { "math_id": 10, "text": "P = (1 - R_C \\cdot (\\frac{1}{R_B \\cdot 0.96}+\\frac{T_{Sync}}{B_B})) \\cdot 100 \\% " } ]
https://en.wikipedia.org/wiki?curid=11627475
11628729
Berendsen thermostat
The Berendsen thermostat is an algorithm to re-scale the velocities of particles in molecular dynamics simulations to control the simulation temperature. Basic description. In this scheme, the system is weakly coupled to a heat bath at some fixed temperature. The thermostat suppresses fluctuations of the kinetic energy of the system and therefore cannot produce trajectories consistent with the canonical ensemble. The temperature of the system is corrected such that the deviation from the bath temperature decays exponentially with some time constant formula_0. formula_1 Though the thermostat does not generate a correct canonical ensemble (especially for small systems), for large systems on the order of hundreds or thousands of atoms/molecules, the approximation yields roughly correct results for most calculated properties. The scheme is widely used due to the efficiency with which it relaxes a system to some target (bath) temperature. In many instances, systems are initially equilibrated using the Berendsen scheme, while properties are calculated using the widely known Nosé–Hoover thermostat, which correctly generates trajectories consistent with a canonical ensemble. However, the Berendsen thermostat can result in the flying ice cube effect, an artifact which can be eliminated by using the more rigorous Bussi–Donadio–Parrinello thermostat; for this reason, it has been recommended that usage of the Berendsen thermostat be discontinued in almost all cases except for replication of prior studies.
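In practice, each thermostat step multiplies all particle velocities by a scaling factor derived from the instantaneous and bath temperatures. A minimal NumPy sketch (in reduced units, with illustrative names; it ignores constrained degrees of freedom):

```python
import numpy as np

def berendsen_step(velocities, masses, t_bath, tau, dt, k_b=1.0):
    """Rescale velocities so the temperature deviation decays toward t_bath.

    velocities: (N, 3) array, masses: (N,) array; reduced units (k_b = 1)."""
    n = len(masses)
    kinetic = 0.5 * np.sum(masses[:, None] * velocities**2)
    t_now = 2.0 * kinetic / (3.0 * n * k_b)  # instantaneous temperature
    # Standard Berendsen scaling factor lambda = sqrt(1 + dt/tau (T0/T - 1)).
    lam = np.sqrt(1.0 + (dt / tau) * (t_bath / t_now - 1.0))
    return lam * velocities
```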
[ { "math_id": 0, "text": "\\tau " }, { "math_id": 1, "text": "\\frac{dT}{dt}=\\frac{T_0-T}{\\tau}" } ]
https://en.wikipedia.org/wiki?curid=11628729
11630694
Almost symplectic manifold
In differential geometry, an almost symplectic structure on a differentiable manifold formula_0 is a two-form formula_1 on formula_0 that is everywhere non-singular (that is, non-degenerate). If, in addition, formula_1 is closed, then it is a symplectic form. An almost symplectic structure is equivalent to an Sp-structure, that is, a reduction of the structure group of the tangent bundle to the symplectic group; requiring formula_1 to be closed is an integrability condition. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
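As a concrete illustration, the two-form ω = (1 + u²) dx∧dy + du∧dv on R⁴ is everywhere non-singular but not closed, so it defines an almost symplectic structure that is not symplectic. A small SymPy sketch (the example form is made up for illustration) checking non-degeneracy via the coefficient matrix:

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')
# Coefficient matrix of omega = (1 + u**2) dx^dy + du^dv at a point of R^4;
# omega is non-singular iff this antisymmetric matrix is invertible.
Omega = sp.Matrix([
    [0,           1 + u**2, 0,  0],
    [-(1 + u**2), 0,        0,  0],
    [0,           0,        0,  1],
    [0,           0,       -1,  0]])
print(sp.factor(Omega.det()))  # (u**2 + 1)**2, nonzero everywhere
# d(omega) = 2u du^dx^dy != 0, so omega is not closed: almost symplectic only.
```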
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "\\omega" } ]
https://en.wikipedia.org/wiki?curid=11630694
11630973
Torsion-free abelian group
Abelian group with no non-trivial torsion elements In mathematics, specifically in abstract algebra, a torsion-free abelian group is an abelian group which has no non-trivial torsion elements; that is, a group in which the group operation is commutative and the identity element is the only element with finite order. While finitely generated abelian groups are completely classified, not much is known about infinitely generated abelian groups, even in the torsion-free countable case. Definitions. An abelian group formula_0 is said to be torsion-free if no element other than the identity formula_1 is of finite order. Explicitly, for any formula_2, the only element formula_3 for which formula_4 is formula_5. A natural example of a torsion-free group is formula_6, since the only integer that yields 0 when added to itself finitely many times is 0 itself. More generally, the free abelian group formula_7 is torsion-free for any formula_8. An important step in the proof of the classification of finitely generated abelian groups is that every such torsion-free group is isomorphic to a formula_7. A non-finitely generated countable example is given by the additive group of the polynomial ring formula_9 (the free abelian group of countable rank). More complicated examples are the additive group of the rational field formula_10, or its subgroups such as formula_11 (rational numbers whose denominator is a power of formula_12). Yet more involved examples are given by groups of higher rank. Groups of rank 1. Rank. The "rank" of an abelian group formula_13 is the dimension of the formula_10-vector space formula_14. Equivalently it is the maximal cardinality of a linearly independent (over formula_15) subset of formula_13. If formula_13 is torsion-free then it injects into formula_14. Thus, torsion-free abelian groups of rank 1 are exactly subgroups of the additive group formula_10. Classification. Torsion-free abelian groups of rank 1 have been completely classified, by Baer. One invariant associates to a group formula_13 a subset formula_16 of the prime numbers, as follows: pick any formula_17; for a prime formula_12 we say that formula_18 if and only if formula_19 for every formula_20. This does not depend on the choice of formula_21 since for another formula_22 there exists formula_23 such that formula_24. The set formula_16 is an isomorphism invariant, but by itself it is not complete: Baer's classification uses the finer invariant of the "type", namely the sequence recording, for each prime formula_12, the largest formula_20 with formula_19 (taken to be infinite when formula_19 holds for every formula_20), with two such sequences identified when they differ at only finitely many primes and only by finite amounts. The type is a complete isomorphism invariant for rank-1 torsion-free abelian groups. Classification problem in general. The hardness of a classification problem for a certain type of structures on a countable set can be quantified using model theory and descriptive set theory. In this sense it has been proved that the classification problem for countable torsion-free abelian groups is as hard as possible. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
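For the finitely generated case mentioned above, torsion-freeness can be read off the Smith normal form of a relation matrix. A small SymPy sketch (the helper name is ours; assumes a reasonably recent SymPy):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def is_torsion_free(relations):
    """The group Z^n / (row space of `relations`) is torsion-free iff
    every invariant factor in the Smith normal form is 0 or a unit."""
    d = smith_normal_form(Matrix(relations), domain=ZZ)
    return all(abs(d[i, i]) <= 1 for i in range(min(d.shape)))

print(is_torsion_free([[1, 2]]))          # True: Z^2/<(1,2)> is isomorphic to Z
print(is_torsion_free([[2, 0], [0, 1]]))  # False: a Z/2Z summand appears
```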
[ { "math_id": 0, "text": " \\langle G, + ,0\\rangle " }, { "math_id": 1, "text": " e " }, { "math_id": 2, "text": "n > 0" }, { "math_id": 3, "text": "x \\in G" }, { "math_id": 4, "text": "nx = 0" }, { "math_id": 5, "text": "x = 0" }, { "math_id": 6, "text": " \\langle \\mathbb Z,+,0\\rangle " }, { "math_id": 7, "text": "\\mathbb Z^r" }, { "math_id": 8, "text": "r \\in \\mathbb N" }, { "math_id": 9, "text": "\\mathbb Z[X]" }, { "math_id": 10, "text": "\\mathbb Q" }, { "math_id": 11, "text": "\\mathbb Z[p^{-1}]" }, { "math_id": 12, "text": "p" }, { "math_id": 13, "text": "A" }, { "math_id": 14, "text": "\\mathbb Q \\otimes_{\\mathbb Z} A" }, { "math_id": 15, "text": "\\Z" }, { "math_id": 16, "text": "\\tau(A)" }, { "math_id": 17, "text": "x \\in A \\setminus \\{0\\}" }, { "math_id": 18, "text": "p \\in \\tau(A)" }, { "math_id": 19, "text": "x \\in p^kA" }, { "math_id": 20, "text": "k \\in \\mathbb N" }, { "math_id": 21, "text": "x" }, { "math_id": 22, "text": "y \\in A\\setminus \\{0\\}" }, { "math_id": 23, "text": "n, m \\in \\mathbb Z\\setminus\\{0\\}" }, { "math_id": 24, "text": "ny = mx" } ]
https://en.wikipedia.org/wiki?curid=11630973
1163167
Powerset construction
Method for making finite automata deterministic In the theory of computation and automata theory, the powerset construction or subset construction is a standard method for converting a nondeterministic finite automaton (NFA) into a deterministic finite automaton (DFA) which recognizes the same formal language. It is important in theory because it establishes that NFAs, despite their additional flexibility, are unable to recognize any language that cannot be recognized by some DFA. It is also important in practice for converting easier-to-construct NFAs into more efficiently executable DFAs. However, if the NFA has "n" states, the resulting DFA may have up to 2"n" states, an exponentially larger number, which sometimes makes the construction impractical for large NFAs. The construction, sometimes called the Rabin–Scott powerset construction (or subset construction) to distinguish it from similar constructions for other types of automata, was first published by Michael O. Rabin and Dana Scott in 1959. Intuition. To simulate the operation of a DFA on a given input string, one needs to keep track of a single state at any time: the state that the automaton will reach after seeing a prefix of the input. In contrast, to simulate an NFA, one needs to keep track of a set of states: all of the states that the automaton could reach after seeing the same prefix of the input, according to the nondeterministic choices made by the automaton. If, after a certain prefix of the input, a set S of states can be reached, then after the next input symbol x the set of reachable states is a deterministic function of S and x. Therefore, the sets of reachable NFA states play the same role in the NFA simulation as single DFA states play in the DFA simulation, and in fact the sets of NFA states appearing in this simulation may be re-interpreted as being states of a DFA. Construction. The powerset construction applies most directly to an NFA that does not allow state transformations without consuming input symbols (aka: "ε-moves"). Such an automaton may be defined as a 5-tuple ("Q", Σ, "T", "q"0, "F"), in which Q is the set of states, Σ is the set of input symbols, T is the transition function (mapping a state and an input symbol to a set of states), "q"0 is the initial state, and F is the set of accepting states. The corresponding DFA has states corresponding to subsets of Q. The initial state of the DFA is {"q"0}, the (one-element) set of initial states. The transition function of the DFA maps a state S (representing a subset of Q) and an input symbol x to the set "T"("S","x") = ∪{"T"("q","x") | "q" ∈ "S"}, the set of all states that can be reached by an x-transition from a state in S. A state S of the DFA is an accepting state if and only if at least one member of S is an accepting state of the NFA. In the simplest version of the powerset construction, the set of all states of the DFA is the powerset of Q, the set of all possible subsets of Q. However, many states of the resulting DFA may be useless as they may be unreachable from the initial state. An alternative version of the construction creates only the states that are actually reachable. NFA with ε-moves. For an NFA with ε-moves (also called an ε-NFA), the construction must be modified to deal with these by computing the "ε-closure" of states: the set of all states reachable from some given state using only ε-moves. Van Noord recognizes three possible ways of incorporating this closure computation in the powerset construction: Multiple initial states. 
If NFAs are defined to allow for multiple initial states, the initial state of the corresponding DFA is the set of all initial states of the NFA, or (if the NFA also has ε-moves) the set of all states reachable from initial states by ε-moves. Example. The NFA below has four states; state 1 is initial, and states 3 and 4 are accepting. Its alphabet consists of the two symbols 0 and 1, and it has ε-moves. The initial state of the DFA constructed from this NFA is the set of all NFA states that are reachable from state 1 by ε-moves; that is, it is the set {1,2,3}. A transition from {1,2,3} by input symbol 0 must follow either the arrow from state 1 to state 2, or the arrow from state 3 to state 4. Additionally, neither state 2 nor state 4 has outgoing ε-moves. Therefore, T({1,2,3},0) = {2,4}, and by the same reasoning the full DFA constructed from the NFA is as shown below. As can be seen in this example, there are five states reachable from the start state of the DFA; the remaining 11 sets in the powerset of the set of NFA states are not reachable. Complexity. Because the DFA states consist of sets of NFA states, an n-state NFA may be converted to a DFA with at most 2"n" states. For every n, there exist n-state NFAs such that every subset of states is reachable from the initial subset, so that the converted DFA has exactly 2"n" states, giving Θ(2"n") worst-case time complexity. A simple example requiring nearly this many states is the language of strings over the alphabet {0,1} in which there are at least n characters, the nth from last of which is 1. It can be represented by an ("n" + 1)-state NFA, but it requires 2"n" DFA states, one for each n-character suffix of the input; cf. picture for "n"=4. Applications. Brzozowski's algorithm for DFA minimization uses the powerset construction, twice. It converts the input DFA into an NFA for the reverse language, by reversing all its arrows and exchanging the roles of initial and accepting states, converts the NFA back into a DFA using the powerset construction, and then repeats the process. Its worst-case complexity is exponential, unlike some other known DFA minimization algorithms, but in many examples it performs more quickly than its worst-case complexity would suggest. Safra's construction, which converts a non-deterministic Büchi automaton with n states into a deterministic Muller automaton or into a deterministic Rabin automaton with 2"O"("n" log "n") states, uses the powerset construction as part of its machinery. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
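The reachable-subsets variant of the construction is short to implement. The following is a Python sketch for an NFA without ε-moves (encoding the transition function as a dict from (state, symbol) pairs to sets of states is one convenient choice, not the only one):

```python
from itertools import chain

def nfa_to_dfa(alphabet, delta, start, accepting):
    """Subset construction, building only subsets reachable from {start}.

    delta: dict mapping (state, symbol) to a set of successor states."""
    dfa_start = frozenset({start})
    dfa_delta, dfa_accepting = {}, set()
    worklist, dfa_states = [dfa_start], {dfa_start}
    while worklist:
        s = worklist.pop()
        if s & accepting:                      # accept if any member accepts
            dfa_accepting.add(s)
        for a in alphabet:
            t = frozenset(chain.from_iterable(delta.get((q, a), ()) for q in s))
            dfa_delta[(s, a)] = t
            if t not in dfa_states:
                dfa_states.add(t)
                worklist.append(t)
    return dfa_states, dfa_delta, dfa_start, dfa_accepting

# 2-state NFA for strings over {0,1} whose last symbol is 1.
states, d, s0, acc = nfa_to_dfa(
    {'0', '1'}, {('p', '0'): {'p'}, ('p', '1'): {'p', 'q'}}, 'p', {'q'})
print(len(states))  # 2 reachable DFA states out of 2**2 = 4 subsets
```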
[ { "math_id": 0, "text": "\\{q' ~|~ q \\to^{*}_\\varepsilon q' \\}" }, { "math_id": 1, "text": "\\{q' ~|~ \\exists q \\in Q', q \\to^{*}_\\varepsilon q' \\}" } ]
https://en.wikipedia.org/wiki?curid=1163167
11634
Field extension
Construction of a larger algebraic field by "adding elements" to a smaller field In mathematics, particularly in algebra, a field extension (denoted formula_0) is a pair of fields formula_1, such that the operations of "K" are those of "L" restricted to "K". In this case, "L" is an extension field of "K" and "K" is a subfield of "L". For example, under the usual notions of addition and multiplication, the complex numbers are an extension field of the real numbers; the real numbers are a subfield of the complex numbers. Field extensions are fundamental in algebraic number theory, and in the study of polynomial roots through Galois theory, and are widely used in algebraic geometry. Subfield. A subfield formula_2 of a field formula_3 is a subset formula_4 that is a field with respect to the field operations inherited from formula_3. Equivalently, a subfield is a subset that contains formula_5, and is closed under the operations of addition, subtraction, multiplication, and taking the inverse of a nonzero element of formula_2. As 1 – 1 = 0, the latter definition implies formula_2 and formula_3 have the same zero element. For example, the field of rational numbers is a subfield of the real numbers, which is itself a subfield of the complex numbers. More generally, the field of rational numbers is (or is isomorphic to) a subfield of any field of characteristic formula_6. The characteristic of a subfield is the same as the characteristic of the larger field. Extension field. If "K" is a subfield of "L", then "L" is an extension field or simply extension of "K", and this pair of fields is a field extension. Such a field extension is denoted formula_0 (read as ""L" over "K""). If "L" is an extension of "F", which is in turn an extension of "K", then "F" is said to be an intermediate field (or intermediate extension or subextension) of formula_0. Given a field extension formula_0, the larger field "L" is a "K"-vector space. The dimension of this vector space is called the degree of the extension and is denoted by formula_7. The degree of an extension is 1 if and only if the two fields are equal. In this case, the extension is a trivial extension. Extensions of degree 2 and 3 are called quadratic extensions and cubic extensions, respectively. A finite extension is an extension that has a finite degree. Given two extensions formula_0 and formula_8, the extension formula_9 is finite if and only if both formula_0 and formula_8 are finite. In this case, one has formula_10 Given a field extension formula_0 and a subset "S" of "L", there is a smallest subfield of "L" that contains "K" and "S". It is the intersection of all subfields of "L" that contain "K" and "S", and is denoted by "K"("S") (read as ""K" adjoin "S""). One says that "K"("S") is the field "generated" by "S" over "K", and that "S" is a generating set of "K"("S") over "K". When formula_11 is finite, one writes formula_12 instead of formula_13 and one says that "K"("S") is finitely generated over "K". If "S" consists of a single element "s", the extension "K"("s") / "K" is called a simple extension and "s" is called a primitive element of the extension. An extension field of the form "K"("S") is often said to result from the "adjunction" of "S" to "K". 
In characteristic 0, every finite extension is a simple extension. This is the primitive element theorem, which does not hold in general for fields of non-zero characteristic. If a simple extension "K"("s") / "K" is not finite, the field "K"("s") is isomorphic to the field of rational fractions in "s" over "K". Caveats. The notation "L" / "K" is purely formal and does not imply the formation of a quotient ring or quotient group or any other kind of division. Instead the slash expresses the word "over". In some literature the notation "L":"K" is used. It is often desirable to talk about field extensions in situations where the small field is not actually contained in the larger one, but is naturally embedded. For this purpose, one abstractly defines a field extension as an injective ring homomorphism between two fields. "Every" non-zero ring homomorphism between fields is injective because fields do not possess nontrivial proper ideals, so field extensions are precisely the morphisms in the category of fields. Henceforth, we will suppress the injective homomorphism and assume that we are dealing with actual subfields. Examples. The field of complex numbers formula_14 is an extension field of the field of real numbers formula_15, and formula_15 in turn is an extension field of the field of rational numbers formula_16. Clearly then, formula_17 is also a field extension. We have formula_18 because formula_19 is a basis, so the extension formula_20 is finite. This is a simple extension because formula_21 formula_22 (the cardinality of the continuum), so this extension is infinite. The field formula_23 is an extension field of formula_24 also clearly a simple extension. The degree is 2 because formula_25 can serve as a basis. The field formula_26 is an extension field of both formula_27 and formula_24 of degree 2 and 4 respectively. It is also a simple extension, as one can show that formula_28 Finite extensions of formula_16 are also called algebraic number fields and are important in number theory. Another extension field of the rationals, which is also important in number theory, although not a finite extension, is the field of p-adic numbers formula_29 for a prime number "p". It is common to construct an extension field of a given field "K" as a quotient ring of the polynomial ring "K"["X"] in order to "create" a root for a given polynomial "f"("X"). Suppose for instance that "K" does not contain any element "x" with "x"2 = −1. Then the polynomial formula_30 is irreducible in "K"["X"], consequently the ideal generated by this polynomial is maximal, and formula_31 is an extension field of "K" which "does" contain an element whose square is −1 (namely the residue class of "X"). By iterating the above construction, one can construct a splitting field of any polynomial from "K"["X"]. This is an extension field "L" of "K" in which the given polynomial splits into a product of linear factors. If "p" is any prime number and "n" is a positive integer, there is a unique (up to isomorphism) finite field formula_32 with "p""n" elements; this is an extension field of the prime field formula_33 with "p" elements. Given a field "K", we can consider the field "K"("X") of all rational functions in the variable "X" with coefficients in "K"; the elements of "K"("X") are fractions of two polynomials over "K", and indeed "K"("X") is the field of fractions of the polynomial ring "K"["X"]. This field of rational functions is an extension field of "K". This extension is infinite. 
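Degree computations like those in these examples can be checked with a computer algebra system. For instance, a short SymPy sketch confirming that √2 + √3 has a degree-4 minimal polynomial, so that it generates a degree-4 simple extension of the rationals as claimed above:

```python
from sympy import sqrt, minimal_polynomial, degree, Symbol

x = Symbol('x')
p = minimal_polynomial(sqrt(2) + sqrt(3), x)
print(p)             # x**4 - 10*x**2 + 1
print(degree(p, x))  # 4, the degree of Q(sqrt(2), sqrt(3)) over Q
```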
Given a Riemann surface "M", the set of all meromorphic functions defined on "M" is a field, denoted by formula_34 It is a transcendental extension field of formula_14 if we identify every complex number with the corresponding constant function defined on "M". More generally, given an algebraic variety "V" over some field "K", the function field "K"("V"), consisting of the rational functions defined on "V", is an extension field of "K". Algebraic extension. An element "x" of a field extension formula_0 is algebraic over "K" if it is a root of a nonzero polynomial with coefficients in "K". For example, formula_35 is algebraic over the rational numbers, because it is a root of formula_36 If an element "x" of "L" is algebraic over "K", the monic polynomial of lowest degree that has "x" as a root is called the minimal polynomial of "x". This minimal polynomial is irreducible over "K". An element "s" of "L" is algebraic over "K" if and only if the simple extension "K"("s") /"K" is a finite extension. In this case the degree of the extension equals the degree of the minimal polynomial, and a basis of the "K"-vector space "K"("s") consists of formula_37 where "d" is the degree of the minimal polynomial. The set of the elements of "L" that are algebraic over "K" form a subextension, which is called the algebraic closure of "K" in "L". This results from the preceding characterization: if "s" and "t" are algebraic, the extensions "K"("s") /"K" and "K"("s")("t") /"K"("s") are finite. Thus "K"("s", "t") /"K" is also finite, as well as the sub extensions "K"("s" ± "t") /"K", "K"("st") /"K" and "K"(1/"s") /"K" (if "s" ≠ 0). It follows that "s" ± "t", "st" and 1/"s" are all algebraic. An "algebraic extension" formula_0 is an extension such that every element of "L" is algebraic over "K". Equivalently, an algebraic extension is an extension that is generated by algebraic elements. For example, formula_38 is an algebraic extension of formula_16, because formula_35 and formula_39 are algebraic over formula_40 A simple extension is algebraic if and only if it is finite. This implies that an extension is algebraic if and only if it is the union of its finite subextensions, and that every finite extension is algebraic. Every field "K" has an algebraic closure, which is up to an isomorphism the largest extension field of "K" which is algebraic over "K", and also the smallest extension field such that every polynomial with coefficients in "K" has a root in it. For example, formula_14 is an algebraic closure of formula_15, but not an algebraic closure of formula_16, as it is not algebraic over formula_16 (for example π is not algebraic over formula_16). Transcendental extension. Given a field extension formula_0, a subset "S" of "L" is called algebraically independent over "K" if no non-trivial polynomial relation with coefficients in "K" exists among the elements of "S". The largest cardinality of an algebraically independent set is called the transcendence degree of "L"/"K". It is always possible to find a set "S", algebraically independent over "K", such that "L"/"K"("S") is algebraic. Such a set "S" is called a transcendence basis of "L"/"K". All transcendence bases have the same cardinality, equal to the transcendence degree of the extension. An extension formula_0 is said to be &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;purely transcendental if and only if there exists a transcendence basis "S" of formula_0 such that "L" = "K"("S"). 
Such an extension has the property that all elements of "L" except those of "K" are transcendental over "K"; however, there are extensions with this property which are not purely transcendental—a class of such extensions takes the form "L"/"K" where both "L" and "K" are algebraically closed. If "L"/"K" is purely transcendental and "S" is a transcendence basis of the extension, it doesn't necessarily follow that "L" = "K"("S"). Conversely, even when one knows a transcendence basis, it may be difficult to decide whether the extension is purely transcendental, and if it is so, it may be difficult to find a transcendence basis "S" such that "L" = "K"("S"). For example, consider the extension formula_41 where formula_42 is transcendental over formula_24 and formula_43 is a root of the equation formula_44 Such an extension can be defined as formula_45 in which formula_42 and formula_43 are the equivalence classes of formula_46 and formula_47 Obviously, the singleton set formula_48 is transcendental over formula_16 and the extension formula_49 is algebraic; hence formula_48 is a transcendence basis that does not generate the whole extension. Similarly, formula_50 is a transcendence basis that does not generate the whole extension. However, the extension is purely transcendental since, if one sets formula_51 one has formula_52 and formula_53 and thus formula_54 generates the whole extension. Purely transcendental extensions of an algebraically closed field occur as function fields of rational varieties. The problem of finding a rational parametrization of a rational variety is equivalent with the problem of finding a transcendence basis that generates the whole extension. Normal, separable and Galois extensions. An algebraic extension formula_0 is called normal if every irreducible polynomial in "K"["X"] that has a root in "L" completely factors into linear factors over "L". Every algebraic extension "F"/"K" admits a normal closure "L", which is an extension field of "F" such that formula_0 is normal and which is minimal with this property. An algebraic extension formula_0 is called separable if the minimal polynomial of every element of "L" over "K" is separable, i.e., has no repeated roots in an algebraic closure over "K". A Galois extension is a field extension that is both normal and separable. A consequence of the primitive element theorem is that every finite separable extension has a primitive element (i.e. is simple). Given any field extension formula_0, we can consider its automorphism group formula_55, consisting of all field automorphisms "α": "L" → "L" with "α"("x") = "x" for all "x" in "K". When the extension is Galois, this automorphism group is called the Galois group of the extension. Extensions whose Galois group is abelian are called abelian extensions. For a given field extension formula_0, one is often interested in the intermediate fields "F" (subfields of "L" that contain "K"). The significance of Galois extensions and Galois groups is that they allow a complete description of the intermediate fields: there is a bijection between the intermediate fields and the subgroups of the Galois group, described by the fundamental theorem of Galois theory. Generalizations. Field extensions can be generalized to ring extensions, which consist of a ring and one of its subrings. 
A closer non-commutative analog is given by central simple algebras (CSAs): ring extensions over a field which are simple algebras (no non-trivial 2-sided ideals, just as for a field) and where the center of the ring is exactly the field. For example, the only non-trivial finite field extension of the real numbers is the complex numbers, while the quaternions are a central simple algebra over the reals, and all CSAs over the reals are Brauer equivalent to the reals or the quaternions. CSAs can be further generalized to Azumaya algebras, where the base field is replaced by a commutative local ring. Extension of scalars. Given a field extension, one can "extend scalars" on associated algebraic objects. For example, given a real vector space, one can produce a complex vector space via complexification. In addition to vector spaces, one can perform extension of scalars for associative algebras defined over the field, such as polynomials or group algebras and the associated group representations. Extension of scalars of polynomials is often used implicitly, by just considering the coefficients as being elements of a larger field, but may also be considered more formally. Extension of scalars has numerous applications, as discussed in extension of scalars: applications.
[ { "math_id": 0, "text": "L/K" }, { "math_id": 1, "text": "K \\subseteq L" }, { "math_id": 2, "text": "K" }, { "math_id": 3, "text": "L" }, { "math_id": 4, "text": "K\\subseteq L" }, { "math_id": 5, "text": "1" }, { "math_id": 6, "text": "0" }, { "math_id": 7, "text": "[L:K]" }, { "math_id": 8, "text": "M/L" }, { "math_id": 9, "text": "M/K" }, { "math_id": 10, "text": "[M : K]=[M : L]\\cdot[L : K]." }, { "math_id": 11, "text": "S=\\{x_1, \\ldots, x_n\\}" }, { "math_id": 12, "text": "K(x_1, \\ldots, x_n)" }, { "math_id": 13, "text": "K(\\{x_1, \\ldots, x_n\\})," }, { "math_id": 14, "text": "\\Complex" }, { "math_id": 15, "text": "\\R" }, { "math_id": 16, "text": "\\Q" }, { "math_id": 17, "text": "\\Complex/\\Q" }, { "math_id": 18, "text": "[\\Complex:\\R] =2" }, { "math_id": 19, "text": "\\{1, i\\}" }, { "math_id": 20, "text": "\\Complex/\\R" }, { "math_id": 21, "text": "\\Complex = \\R(i)." }, { "math_id": 22, "text": "[\\R:\\Q] =\\mathfrak c" }, { "math_id": 23, "text": "\\Q(\\sqrt{2}) = \\left \\{ a + b\\sqrt{2} \\mid a,b \\in \\Q \\right \\}," }, { "math_id": 24, "text": "\\Q," }, { "math_id": 25, "text": "\\left\\{1, \\sqrt{2}\\right\\}" }, { "math_id": 26, "text": "\\begin{align}\n\\Q\\left(\\sqrt{2}, \\sqrt{3}\\right) &= \\Q \\left(\\sqrt{2}\\right) \\left(\\sqrt{3}\\right) \\\\\n&= \\left\\{ a+b\\sqrt{3} \\mid a,b \\in \\Q\\left(\\sqrt{2}\\right) \\right\\} \\\\\n&= \\left\\{ a + b \\sqrt{2} + c\\sqrt{3} + d\\sqrt{6} \\mid a,b,c, d \\in \\Q \\right\\},\n\\end{align}" }, { "math_id": 27, "text": "\\Q(\\sqrt{2})" }, { "math_id": 28, "text": "\\begin{align}\n\\Q(\\sqrt{2}, \\sqrt{3}) &= \\Q (\\sqrt{2} + \\sqrt{3}) \\\\\n&= \\left \\{ a + b (\\sqrt{2} + \\sqrt{3}) + c (\\sqrt{2} + \\sqrt{3})^2 + d(\\sqrt{2} + \\sqrt{3})^3 \\mid a,b,c, d \\in \\Q\\right\\}.\n\\end{align}" }, { "math_id": 29, "text": "\\Q_p" }, { "math_id": 30, "text": "X^2+1" }, { "math_id": 31, "text": "L = K[X]/(X^2+1)" }, { "math_id": 32, "text": "GF(p^n) = \\mathbb{F}_{p^n}" }, { "math_id": 33, "text": "\\operatorname{GF}(p) = \\mathbb{F}_p = \\Z/p\\Z" }, { "math_id": 34, "text": "\\Complex(M)." }, { "math_id": 35, "text": "\\sqrt 2" }, { "math_id": 36, "text": "x^2-2." }, { "math_id": 37, "text": "1, s, s^2, \\ldots, s^{d-1}," }, { "math_id": 38, "text": "\\Q(\\sqrt 2, \\sqrt 3)" }, { "math_id": 39, "text": "\\sqrt 3" }, { "math_id": 40, "text": "\\Q." }, { "math_id": 41, "text": "\\Q(x, y)/\\Q," }, { "math_id": 42, "text": "x" }, { "math_id": 43, "text": "y" }, { "math_id": 44, "text": "y^2-x^3=0." }, { "math_id": 45, "text": "\\Q(X)[Y]/\\langle Y^2-X^3\\rangle," }, { "math_id": 46, "text": "X" }, { "math_id": 47, "text": "Y." }, { "math_id": 48, "text": "\\{x\\}" }, { "math_id": 49, "text": "\\Q(x, y)/\\Q(x)" }, { "math_id": 50, "text": "\\{y\\}" }, { "math_id": 51, "text": "t=y/x," }, { "math_id": 52, "text": "x=t^2" }, { "math_id": 53, "text": "y=t^3," }, { "math_id": 54, "text": "t" }, { "math_id": 55, "text": "\\text{Aut}(L/K)" } ]
https://en.wikipedia.org/wiki?curid=11634
11634012
Locality-sensitive hashing
Algorithmic technique using hashing In computer science, locality-sensitive hashing (LSH) is a fuzzy hashing technique that hashes similar input items into the same "buckets" with high probability. (The number of buckets is much smaller than the universe of possible input items.) Since similar items end up in the same buckets, this technique can be used for data clustering and nearest neighbor search. It differs from conventional hashing techniques in that hash collisions are maximized, not minimized. Alternatively, the technique can be seen as a way to reduce the dimensionality of high-dimensional data; high-dimensional input items can be reduced to low-dimensional versions while preserving relative distances between items. Hashing-based approximate nearest-neighbor search algorithms generally use one of two main categories of hashing methods: either data-independent methods, such as locality-sensitive hashing (LSH); or data-dependent methods, such as locality-preserving hashing (LPH). Locality-preserving hashing was initially devised as a way to facilitate data pipelining in implementations of massively parallel algorithms that use randomized routing and universal hashing to reduce memory contention and network congestion. Definitions. A finite family formula_0 of functions formula_1 is defined to be an "LSH family" for a metric space formula_2, a threshold formula_3, an approximation factor formula_4, and probabilities formula_5 if it satisfies the following condition. For any two points formula_6 and a hash function formula_7 chosen uniformly at random from formula_8: if formula_9, then formula_10 (i.e., "a" and "b" collide) with probability at least formula_11; if formula_12, then formula_10 with probability at most formula_13. Such a family formula_8 is called formula_14-sensitive. LSH with respect to a similarity measure. Alternatively it is possible to define an LSH family on a universe of items U endowed with a similarity function formula_15. In this setting, an LSH scheme is a family of hash functions H coupled with a probability distribution D over H such that a function formula_16 chosen according to D satisfies formula_17 for each formula_18. Amplification. Given a formula_19-sensitive family formula_8, we can construct new families formula_20 by either the AND-construction or OR-construction of formula_8. To create an AND-construction, we define a new family formula_20 of hash functions g, where each function g is constructed from k random functions formula_21 from formula_8. We then say that for a hash function formula_22, formula_23 if and only if all formula_24 for formula_25. Since the members of formula_8 are independently chosen for any formula_22, formula_20 is a formula_26-sensitive family. To create an OR-construction, we define a new family formula_20 of hash functions g, where each function g is constructed from k random functions formula_21 from formula_8. We then say that for a hash function formula_22, formula_23 if and only if formula_24 for one or more values of i. Since the members of formula_8 are independently chosen for any formula_22, formula_20 is a formula_27-sensitive family. Applications. LSH has been applied to several problem domains, including: Methods. Bit sampling for Hamming distance. One of the easiest ways to construct an LSH family is by bit sampling. This approach works for the Hamming distance over d-dimensional vectors formula_28. Here, the family formula_8 of hash functions is simply the family of all the projections of points on one of the formula_29 coordinates, i.e., formula_30, where formula_31 is the formula_32th coordinate of formula_33. A random function formula_7 from formula_34 simply selects a random bit from the input point. This family has the following parameters: formula_35, formula_36. 
That is, any two vectors formula_37 with Hamming distance at most formula_38 collide under a random formula_7 with probability at least formula_39. Any formula_37 with Hamming distance at least formula_40 collide with probability at most formula_41. Min-wise independent permutations. Suppose U is composed of subsets of some ground set of enumerable items S and the similarity function of interest is the Jaccard index J. If π is a permutation on the indices of S, for formula_42 let formula_43. Each possible choice of π defines a single hash function h mapping input sets to elements of S. Define the function family H to be the set of all such functions and let D be the uniform distribution. Given two sets formula_44 the event that formula_45 corresponds exactly to the event that the minimizer of π over formula_46 lies inside formula_47. As h was chosen uniformly at random, formula_48 and formula_49 define an LSH scheme for the Jaccard index. Because the symmetric group on n elements has size n!, choosing a truly random permutation from the full symmetric group is infeasible for even moderately sized n. Because of this fact, there has been significant work on finding a family of permutations that is "min-wise independent" — a permutation family for which each element of the domain has equal probability of being the minimum under a randomly chosen π. It has been established that a min-wise independent family of permutations is at least of size formula_50, and that this bound is tight. Because min-wise independent families are too big for practical applications, two variant notions of min-wise independence are introduced: restricted min-wise independent permutation families, and approximate min-wise independent families. Restricted min-wise independence is the min-wise independence property restricted to certain sets of cardinality at most k. Approximate min-wise independence differs from the property by at most a fixed ε. Open source methods. Nilsimsa Hash. Nilsimsa is a locality-sensitive hashing algorithm used in anti-spam efforts. The goal of Nilsimsa is to generate a hash digest of an email message such that the digests of two similar messages are similar to each other. The paper suggests that the Nilsimsa satisfies three requirements: Testing performed in the paper on a range of file types identified the Nilsimsa hash as having a significantly higher false positive rate when compared to other similarity digest schemes such as TLSH, Ssdeep and Sdhash. TLSH. TLSH is a locality-sensitive hashing algorithm designed for a range of security and digital forensic applications. The goal of TLSH is to generate hash digests for messages such that low distances between digests indicate that their corresponding messages are likely to be similar. An implementation of TLSH is available as open-source software. Random projection. The random projection method of LSH due to Moses Charikar, called SimHash (also sometimes called arccos), uses an approximation of the cosine distance between vectors. The technique was used to approximate the NP-complete max-cut problem. The basic idea of this technique is to choose a random hyperplane (defined by a normal unit vector r) at the outset and use the hyperplane to hash input vectors. Given an input vector v and a hyperplane defined by r, we let formula_53. That is, formula_54 depending on which side of the hyperplane v lies. This way, each possible choice of a random hyperplane r can be interpreted as a hash function formula_55. 
For two vectors u,v with angle formula_56 between them, it can be shown that formula_57 Since the ratio between formula_51 and formula_52 is at least 0.87856 when formula_58, the probability of two vectors being on the same side of the random hyperplane is approximately proportional to the cosine distance between them. Stable distributions. The hash function formula_59 maps a d-dimensional vector formula_60 onto the set of integers. Each hash function in the family is indexed by a choice of random formula_61 and formula_62 where formula_61 is a d-dimensional vector with entries chosen independently from a stable distribution and formula_62 is a real number chosen uniformly from the range [0,r]. For a fixed formula_63 the hash function formula_64 is given by formula_65. Other construction methods for hash functions have been proposed to better fit the data. In particular k-means hash functions are better in practice than projection-based hash functions, but without any theoretical guarantee. Semantic hashing. Semantic hashing is a technique that attempts to map input items to addresses such that closer inputs have higher semantic similarity. The hashcodes are found via training of an artificial neural network or graphical model. Algorithm for nearest neighbor search. One of the main applications of LSH is to provide a method for efficient approximate nearest neighbor search algorithms. Consider an LSH family formula_8. The algorithm has two main parameters: the width parameter k and the number of hash tables L. In the first step, we define a new family formula_20 of hash functions g, where each function g is obtained by concatenating k functions formula_21 from formula_8, i.e., formula_66. In other words, a random hash function g is obtained by concatenating k randomly chosen hash functions from formula_8. The algorithm then constructs L hash tables, each corresponding to a different randomly chosen hash function g. In the preprocessing step we hash all n d-dimensional points from the data set S into each of the L hash tables. Given that the resulting hash tables have only n non-zero entries, one can reduce the amount of memory used per hash table to formula_67 using standard hash functions. Given a query point q, the algorithm iterates over the L hash functions g. For each g considered, it retrieves the data points that are hashed into the same bucket as q. The process is stopped as soon as a point within distance cR from q is found. Given the parameters k and L, the algorithm has the following performance guarantees: preprocessing time formula_68, where t is the time to evaluate a function formula_69 on an input point; space formula_70, plus the space for storing the data points; query time formula_71; and the algorithm succeeds in finding a point within distance cR from q (if a point within distance R exists) with probability at least formula_72. For a fixed approximation ratio formula_73 and probabilities formula_39 and formula_41, one can set formula_74 and formula_75, where formula_76. Then one obtains the following performance guarantees: preprocessing time formula_77; space formula_78, plus the space for storing the data points; and query time formula_79. Improvements. When t is large, it is possible to reduce the hashing time from formula_80. This was shown in subsequent work, which gave a query time of formula_81 and a space usage of formula_82. It is also sometimes the case that the factor formula_83 can be very large. This happens for example with Jaccard similarity data, where even the most similar neighbor often has a quite low Jaccard similarity with the query. In later work it was shown how to reduce the query time to formula_84 (not including hashing costs) and similarly the space usage. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
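As an illustration of the random-projection (SimHash) method together with the AND-construction described under Amplification, here is a small NumPy sketch (function names are ours, not from any LSH library):

```python
import numpy as np

def simhash_family(d, k, seed=0):
    """AND-construction of k random-hyperplane (SimHash) bit hashes:
    each of the k rows of r defines one hyperplane, i.e. one sign bit."""
    r = np.random.default_rng(seed).standard_normal((k, d))
    return lambda v: tuple((r @ v >= 0).astype(int))

g = simhash_family(d=64, k=16)
u = np.random.default_rng(1).standard_normal(64)
print(g(u) == g(2.0 * u))  # True: scaling never flips a sign bit
print(g(u) == g(-u))       # False: the antipode flips every bit
```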
[ { "math_id": 0, "text": " \\mathcal F" }, { "math_id": 1, "text": "h\\colon M \\to S" }, { "math_id": 2, "text": "\\mathcal M =(M, d)" }, { "math_id": 3, "text": "r>0" }, { "math_id": 4, "text": "c>1" }, { "math_id": 5, "text": "p_1 > p_2" }, { "math_id": 6, "text": "a, b \\in M" }, { "math_id": 7, "text": "h" }, { "math_id": 8, "text": "\\mathcal F" }, { "math_id": 9, "text": "d(a,b) \\le r" }, { "math_id": 10, "text": "h(a)=h(b)" }, { "math_id": 11, "text": "p_1" }, { "math_id": 12, "text": "d(a,b) \\ge cr" }, { "math_id": 13, "text": "p_2" }, { "math_id": 14, "text": "(r,cr,p_1,p_2)" }, { "math_id": 15, "text": "\\phi\\colon U \\times U \\to [0,1]" }, { "math_id": 16, "text": "h \\in H" }, { "math_id": 17, "text": "Pr [h(a) = h(b)] = \\phi(a,b)" }, { "math_id": 18, "text": "a,b \\in U" }, { "math_id": 19, "text": "(d_1, d_2, p_1, p_2)" }, { "math_id": 20, "text": "\\mathcal G" }, { "math_id": 21, "text": "h_1, \\ldots, h_k" }, { "math_id": 22, "text": "g \\in \\mathcal G" }, { "math_id": 23, "text": "g(x) = g(y)" }, { "math_id": 24, "text": "h_i(x) = h_i(y)" }, { "math_id": 25, "text": "i = 1, 2, \\ldots, k" }, { "math_id": 26, "text": "(d_1, d_2, p_{1}^k, p_{2}^k)" }, { "math_id": 27, "text": "(d_1, d_2, 1- (1 - p_1)^k, 1 - (1 - p_2)^k)" }, { "math_id": 28, "text": "\\{0,1\\}^d" }, { "math_id": 29, "text": "d" }, { "math_id": 30, "text": "{\\mathcal F}=\\{h\\colon \\{0,1\\}^d\\to \\{0,1\\}\\mid h(x)=x_i \\text{ for some } i\\in \\{1, \\ldots, d\\}\\}" }, { "math_id": 31, "text": "x_i" }, { "math_id": 32, "text": "i" }, { "math_id": 33, "text": "x" }, { "math_id": 34, "text": "{\\mathcal F}" }, { "math_id": 35, "text": "P_1=1-R/d" }, { "math_id": 36, "text": "P_2=1-cR/d" }, { "math_id": 37, "text": "x,y" }, { "math_id": 38, "text": "R" }, { "math_id": 39, "text": "P_1" }, { "math_id": 40, "text": "cR" }, { "math_id": 41, "text": "P_2" }, { "math_id": 42, "text": "A \\subseteq S" }, { "math_id": 43, "text": "h(A) = \\min_{a \\in A} \\{ \\pi(a) \\}" }, { "math_id": 44, "text": "A,B \\subseteq S" }, { "math_id": 45, "text": "h(A) = h(B)" }, { "math_id": 46, "text": "A \\cup B" }, { "math_id": 47, "text": "A \\cap B" }, { "math_id": 48, "text": "Pr[h(A) = h(B)] = J(A,B)\\," }, { "math_id": 49, "text": "(H,D)\\," }, { "math_id": 50, "text": "\\operatorname{lcm}\\{\\,1, 2, \\ldots, n\\,\\} \\ge e^{n-o(n)}" }, { "math_id": 51, "text": "\\frac{\\theta(u,v)}{\\pi}" }, { "math_id": 52, "text": "1-\\cos(\\theta(u,v))" }, { "math_id": 53, "text": "h(v) = \\sgn(v \\cdot r)" }, { "math_id": 54, "text": "h(v) = \\pm 1" }, { "math_id": 55, "text": "h(v)" }, { "math_id": 56, "text": "\\theta(u,v)" }, { "math_id": 57, "text": "Pr[h(u) = h(v)] = 1 - \\frac{\\theta(u,v)}{\\pi}." 
}, { "math_id": 58, "text": "\\theta(u, v) \\in [0, \\pi]" }, { "math_id": 59, "text": "h_{\\mathbf{a},b} (\\boldsymbol{\\upsilon}) : \n\\mathcal{R}^d\n\\to \\mathcal{N} " }, { "math_id": 60, "text": "\\boldsymbol{\\upsilon}" }, { "math_id": 61, "text": "\\mathbf{a}" }, { "math_id": 62, "text": "b" }, { "math_id": 63, "text": "\\mathbf{a},b" }, { "math_id": 64, "text": "h_{\\mathbf{a},b}" }, { "math_id": 65, "text": "h_{\\mathbf{a},b} (\\boldsymbol{\\upsilon}) = \\left \\lfloor\n\\frac{\\mathbf{a}\\cdot \\boldsymbol{\\upsilon}+b}{r} \\right \\rfloor " }, { "math_id": 66, "text": "g(p) = [h_1(p), \\ldots, h_k(p)]" }, { "math_id": 67, "text": "O(n)" }, { "math_id": 68, "text": "O(nLkt)" }, { "math_id": 69, "text": "h \\in \\mathcal F" }, { "math_id": 70, "text": "O(nL)" }, { "math_id": 71, "text": "O(L(kt+dnP_2^k))" }, { "math_id": 72, "text": "1 - ( 1 - P_1^k ) ^ L" }, { "math_id": 73, "text": "c=1+\\epsilon" }, { "math_id": 74, "text": "k = \\left\\lceil\\tfrac{\\log n}{\\log 1/P_2}\\right\\rceil" }, { "math_id": 75, "text": "L = \\lceil P_1^{-k}\\rceil = O(n^{\\rho}P_1^{-1})" }, { "math_id": 76, "text": "\\rho={\\tfrac{\\log P_1}{\\log P_2}}" }, { "math_id": 77, "text": "O(n^{1+\\rho}P_1^{-1}kt)" }, { "math_id": 78, "text": "O(n^{1+\\rho}P_1^{-1})" }, { "math_id": 79, "text": "O(n^{\\rho}P_1^{-1}(kt+d))" }, { "math_id": 80, "text": "O(n^{\\rho})" }, { "math_id": 81, "text": "O(t\\log^2(1/P_2)/P_1 + n^{\\rho}(d + 1/P_1))" }, { "math_id": 82, "text": "O(n^{1+\\rho}/P_1 + \\log^2(1/P_2)/P_1)" }, { "math_id": 83, "text": "1/P_1" }, { "math_id": 84, "text": "O(n^\\rho/P_1^{1-\\rho})" } ]
https://en.wikipedia.org/wiki?curid=11634012
11637604
Tobler hyperelliptical projection
Pseudocylindrical equal-area map projection The Tobler hyperelliptical projection is a family of equal-area pseudocylindrical projections that may be used for world maps. Waldo R. Tobler introduced the construction in 1973 as the "hyperelliptical" projection, now usually known as the Tobler hyperelliptical projection. Overview. As with any pseudocylindrical projection, in the projection’s normal aspect, the parallels of latitude are parallel, straight lines. Their spacing is calculated to provide the equal-area property. The projection blends the cylindrical equal-area projection, which has straight, vertical meridians, with meridians that follow a particular kind of curve known as a "superellipse" or "Lamé curve", or sometimes as a "hyperellipse". A hyperellipse is described by formula_0, where formula_1 and formula_2 are free parameters. Tobler's hyperelliptical projection is given as: formula_3 where formula_4 is the longitude, formula_5 is the latitude, and formula_6 is the relative weight given to the cylindrical equal-area projection. For a purely cylindrical equal-area projection, formula_7; for a projection with pure hyperellipses for meridians, formula_8; and for weighted combinations, formula_9. When formula_8 and formula_10, the projection degenerates to the Collignon projection; when formula_8, formula_11, and formula_12, the projection becomes the Mollweide projection. Tobler favored the parameterization shown with the top illustration; that is, formula_8, formula_13, and formula_14. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
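A direct numerical implementation is possible by solving the second (implicit) equation for y by root-finding; the sketch below rearranges it as α·y + ((1 − α)/γ)·∫₀^y (γ^k − z^k)^(1/k) dz = sin φ and uses SciPy (the function name is ours):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def tobler_hyperelliptical(lam, phi, alpha=0.0, k=2.5, gamma=1.183136):
    """Map longitude lam and latitude phi (radians) to (x, y)."""
    f = lambda z: (gamma**k - z**k) ** (1.0 / k)
    def area(y):  # alpha*y + (1-alpha)/gamma * integral_0^y f; odd in y
        return np.sign(y) * (alpha * abs(y)
                             + (1 - alpha) / gamma * quad(f, 0.0, abs(y))[0])
    # area runs from -1 at y = -gamma to +1 at y = +gamma, bracketing sin(phi).
    y = brentq(lambda yy: area(yy) - np.sin(phi), -gamma, gamma)
    x = lam * (alpha + (1 - alpha) * (gamma**k - abs(y)**k) ** (1.0 / k) / gamma)
    return x, y

print(tobler_hyperelliptical(np.pi, np.pi / 4))  # a point at 45 deg N, 180 deg E
```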
[ { "math_id": 0, "text": "x^k + y^k = \\gamma^k" }, { "math_id": 1, "text": "\\gamma" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "\\begin{align}\n&x = \\lambda [\\alpha + (1 - \\alpha) \\frac{(\\gamma^k - y^k)^{1/k}}{\\gamma}] \\\\\n\\alpha &y = \\sin \\varphi + \\frac{\\alpha - 1}{\\gamma} \\int_0^y (\\gamma^k - z^k)^{1/k} dz\n\\end{align}" }, { "math_id": 4, "text": "\\lambda" }, { "math_id": 5, "text": "\\varphi" }, { "math_id": 6, "text": "\\alpha" }, { "math_id": 7, "text": "\\alpha = 1" }, { "math_id": 8, "text": "\\alpha = 0" }, { "math_id": 9, "text": "0 < \\alpha < 1" }, { "math_id": 10, "text": "k = 1" }, { "math_id": 11, "text": "k = 2" }, { "math_id": 12, "text": "\\gamma = 4 / \\pi" }, { "math_id": 13, "text": "k = 2.5" }, { "math_id": 14, "text": "\\gamma \\approx 1.183136" } ]
https://en.wikipedia.org/wiki?curid=11637604
11641180
Logic alphabet
The logic alphabet, also called the X-stem Logic Alphabet (XLA), constitutes an iconic set of symbols that systematically represents the sixteen possible binary truth functions of logic. The logic alphabet was developed by Shea Zellweger. The major emphasis of his iconic "logic alphabet" is to provide a more cognitively ergonomic notation for logic. Zellweger's visually iconic system more readily reveals, to the novice and expert alike, the underlying symmetry relationships and geometric properties of the sixteen binary connectives within Boolean algebra. Truth functions. Truth functions are functions from sequences of truth values to truth values. A unary truth function, for example, takes a single truth value and maps it to another truth value. Similarly, a binary truth function maps ordered pairs of truth values to truth values, while a ternary truth function maps ordered triples of truth values to truth values, and so on. In the unary case, there are two possible inputs, viz. T and F, and thus four possible unary truth functions: one mapping T to T and F to F, one mapping T to F and F to F, one mapping T to T and F to T, and finally one mapping T to F and F to T, this last one corresponding to the familiar operation of logical negation. In the form of a table, the four unary truth functions may be represented as follows. In the binary case, there are four possible inputs, viz. (T, T), (T, F), (F, T), and (F, F), thus yielding sixteen possible binary truth functions – in general, there are formula_0 "n"-ary truth functions for each natural number "n". The sixteen possible binary truth functions are listed in the table below. Content. Zellweger's logic alphabet offers a visually systematic way of representing each of the sixteen binary truth functions. The idea behind the logic alphabet is to first represent the sixteen binary truth functions in the form of a square matrix rather than the more familiar tabular format seen in the table above, and then to assign a letter shape to each of these matrices. Letter shapes are derived from the distribution of Ts in the matrix. When drawing a logic symbol, one passes through each square with assigned F values while stopping in a square with assigned T values. In the extreme examples, the symbol for tautology is an X (stops in all four squares), while the symbol for contradiction is an O (passing through all squares without stopping). The square matrix corresponding to each binary truth function, as well as its corresponding letter shape, are displayed in the table below. Significance. The interest of the logic alphabet lies in its aesthetic, symmetric, and geometric qualities. These qualities combine to allow an individual to more easily, rapidly and visually manipulate the relationships between entire truth tables. A logic operation performed on a two-dimensional logic alphabet connective, with its geometric qualities, produces a symmetry transformation. When a symmetry transformation occurs, each input symbol, without any further thought, immediately changes into the correct output symbol. For example, by reflecting the symbol for NAND (viz. 'h') across the vertical axis we produce the symbol for ←, whereas by reflecting it across the horizontal axis we produce the symbol for →, and by reflecting it across both the horizontal and vertical axes we produce the symbol for ∨. Similar symmetry transformations can be obtained by operating upon the other symbols.
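The symmetry operations just described can be checked mechanically: negating one argument of a connective reflects its square truth table across one axis. The following Julia sketch illustrates this for NAND (the row/column orientation chosen here is an illustrative assumption, not necessarily Zellweger's layout of the letter shapes):

# Connectives as 2×2 Bool tables; rows indexed by the first argument (T, F),
# columns by the second (T, F). This layout is an assumption for illustration.
table(f) = [f(p, q) for p in (true, false), q in (true, false)]

nand(p, q)  = !(p && q)
impl(p, q)  = !p || q        # p → q
rimpl(p, q) = p || !q        # p ← q
or_(p, q)   = p || q         # p ∨ q

neg1(f) = (p, q) -> f(!p, q)   # negate the first argument: reverses the rows
neg2(f) = (p, q) -> f(p, !q)   # negate the second argument: reverses the columns

@assert table(neg1(nand)) == reverse(table(nand); dims=1)
@assert table(neg1(nand)) == table(rimpl)       # one reflection of NAND gives ←
@assert table(neg2(nand)) == table(impl)        # the other reflection gives →
@assert table(neg1(neg2(nand))) == table(or_)   # both reflections give ∨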
In effect, the X-stem Logic Alphabet is derived from three disciplines that have been stacked and combined: (1) mathematics, (2) logic, and (3) semiotics. This happens because, in keeping with the mathelogical semiotics, the connectives have been custom designed in the form of geometric letter shapes that serve as iconic replicas of their corresponding square-framed truth tables. Logic cannot do it alone. Logic is sandwiched between mathematics and semiotics. Indeed, Zellweger has constructed intriguing structures involving the symbols of the logic alphabet on the basis of these symmetries. The considerable aesthetic appeal of the logic alphabet has led to exhibitions of Zellweger's work at the Museum of Jurassic Technology in Los Angeles, among other places. The value of the logic alphabet lies in its use as a visually simpler pedagogical tool than the traditional system for logic notation. The logic alphabet eases the introduction to the fundamentals of logic, especially for children, at much earlier stages of cognitive development. Because the logic notation system, in current use today, is so deeply embedded in our computer culture, the logic alphabet's adoption and value by the field of logic itself, at this juncture, is questionable. Additionally, systems of natural deduction, for example, generally require introduction and elimination rules for each connective, meaning that the use of all sixteen binary connectives would result in a highly complex proof system. Various subsets of the sixteen binary connectives (e.g., {∨,&,→,~}, {∨,~}, {&, ~}, {→,~}) are themselves functionally complete in that they suffice to define the remaining connectives. In fact, both NAND and NOR are sole sufficient operators, meaning that the remaining connectives can all be defined solely in terms of either of them. Nonetheless, the logic alphabet’s two-dimensional geometric letter shapes along with its group symmetry properties can help ease the learning curve for children and adult students alike, as they become familiar with the interrelations and operations on all 16 binary connectives. Giving children and students this advantage is a decided gain.
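The claim that NAND is a sole sufficient operator is easy to verify exhaustively; the short Julia check below derives NOT, AND, OR, and the conditional from NAND alone using the standard definitions (function names are illustrative):

nand(p, q) = !(p && q)
not_(p)    = nand(p, p)
and_(p, q) = not_(nand(p, q))
or_(p, q)  = nand(not_(p), not_(q))
imp_(p, q) = nand(p, not_(q))    # p → q

for p in (true, false), q in (true, false)
    @assert not_(p)    == !p
    @assert and_(p, q) == (p && q)
    @assert or_(p, q)  == (p || q)
    @assert imp_(p, q) == (!p || q)
end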
[ { "math_id": 0, "text": "2^{2^n}" } ]
https://en.wikipedia.org/wiki?curid=11641180
1164126
Knot group
Fundamental group of a knot complement In mathematics, a knot is an embedding of a circle into 3-dimensional Euclidean space. The knot group of a knot "K" is defined as the fundamental group of the knot complement of "K" in R3, formula_0 Other conventions consider knots to be embedded in the 3-sphere, in which case the knot group is the fundamental group of its complement in formula_1. Properties. Two equivalent knots have isomorphic knot groups, so the knot group is a knot invariant and can be used to distinguish between certain pairs of inequivalent knots. This is because an equivalence between two knots is a self-homeomorphism of formula_2 that is isotopic to the identity and sends the first knot onto the second. Such a homeomorphism restricts onto a homeomorphism of the complements of the knots, and this restricted homeomorphism induces an isomorphism of fundamental groups. However, it is possible for two inequivalent knots to have isomorphic knot groups (see below for an example). The abelianization of a knot group is always isomorphic to the infinite cyclic group Z; this follows because the abelianization agrees with the first homology group, which can be easily computed. The knot group (or fundamental group of an oriented link in general) can be computed in the Wirtinger presentation by a relatively simple algorithm. Examples. The unknot has knot group isomorphic to Z. The knot group of the trefoil knot is formula_3 or, equivalently, formula_4 The knot group of a ("p","q")-torus knot is formula_5 The knot group of the figure-eight knot is formula_6 The granny knot and the square knot are not equivalent, yet their knot groups are isomorphic, so isomorphic knot groups do not imply equivalent knots.
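As a quick consistency check on the two trefoil presentations, the standard substitution "x" = "aba", "y" = "ab" turns the braid relation "aba" = "bab" into the torus-knot relation (a sketch of one direction of the correspondence):

\[
  x = aba, \quad y = ab
  \quad\Longrightarrow\quad
  x^2 = (aba)(aba) = (aba)(bab) = (ab)(ab)(ab) = y^3 .
\]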
[ { "math_id": 0, "text": "\\pi_1(\\mathbb{R}^3 \\setminus K)." }, { "math_id": 1, "text": " S^3" }, { "math_id": 2, "text": "\\mathbb{R}^3" }, { "math_id": 3, "text": "\\langle x,y \\mid x^2 = y^3 \\rangle" }, { "math_id": 4, "text": "\\langle a, b \\mid aba = bab \\rangle." }, { "math_id": 5, "text": "\\langle x,y \\mid x^p = y^q \\rangle." }, { "math_id": 6, "text": "\\langle x,y \\mid yxy^{-1}xy=xyx^{-1}yx\\rangle" } ]
https://en.wikipedia.org/wiki?curid=1164126
11647118
511 (number)
Natural number 511 is the natural number following 510 and preceding 512. It is a Mersenne number, being one less than a power of 2: formula_0. As a result, 511 is a palindromic number and a repdigit in base 2 (111111111₂). It is also palindromic and a repdigit in base 8 (777₈). It is a generalized heptagonal number (sequence in the OEIS), since formula_1 when formula_2. It is a Harshad number in bases 3, 5, 7, 10, 13 and 15. Special use in computers. The octal representation of 511 (777₈) is commonly used in Unix tools to specify a record separator that makes a program "slurp" its input as a whole, rather than line-by-line (i.e. separated at newline characters); Perl's -0777 command-line switch, for example, works this way.
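These representations are easy to confirm; the following Julia checks use only standard Base functions (the Harshad check shown is for base 10):

# Quick arithmetic checks of the properties stated above.
@assert 511 == 2^9 - 1                       # Mersenne number
@assert string(511; base=2) == "111111111"   # repdigit and palindrome in base 2
@assert string(511; base=8) == "777"         # repdigit and palindrome in base 8
@assert 511 == (5*(-14)^2 - 3*(-14)) ÷ 2     # generalized heptagonal with n = -14
@assert 511 % sum(digits(511)) == 0          # Harshad in base 10: 5+1+1 = 7 divides 511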
[ { "math_id": 0, "text": "511=2^9-1" }, { "math_id": 1, "text": "511=\\frac{1}{2}(5n^2-3n)" }, { "math_id": 2, "text": "n=-14" } ]
https://en.wikipedia.org/wiki?curid=11647118
1164724
Injector
Type of pump using high pressure fluid to entrain a lower pressure fluid An injector is a system of ducting and nozzles used to direct the flow of a high-pressure fluid in such a way that a lower pressure fluid is entrained in the jet and carried through a duct to a region of higher pressure. It is a fluid-dynamic pump with no moving parts except a valve to control inlet flow. Depending on the application, an injector can also take the form of an "eductor-jet pump", a "water eductor" or an "aspirator". An "ejector" operates on similar principles to create a vacuum feed connection for braking systems etc. The motive fluid may be a liquid, steam or any other gas. The entrained suction fluid may be a gas, a liquid, a slurry, or a dust-laden gas stream. Steam injector. The steam injector is a common device used for delivering water to steam boilers, especially in steam locomotives. It is a typical application of the injector principle used to deliver cold water to a boiler against its own pressure, using its own live or exhaust steam, replacing any mechanical pump. When first developed, its operation was intriguing because it seemed paradoxical, almost like perpetual motion, but it was later explained using thermodynamics. Other types of injector may use other pressurised motive fluids such as air. History. Giffard. The injector was invented by Henri Giffard in the early 1850s and patented in France in 1858, for use on steam locomotives. It was patented in the United Kingdom by Sharp, Stewart and Company of Glasgow. After some initial scepticism resulting from the unfamiliar and superficially paradoxical mode of operation, the injector became widely adopted for steam locomotives as an alternative to mechanical pumps. Kneass. Strickland Landis Kneass was a civil engineer, experimenter, and author, with many accomplishments involving railroading. Kneass began publishing a mathematical model of the physics of the injector, which he had verified by experimenting with steam. A steam injector has three primary sections: Nozzle. Figure 15 shows four sketches Kneass drew of steam passing through a nozzle. In general, compressible flow through a diverging duct increases in velocity as the gas expands. The two sketches at the bottom of figure 15 are both diverging, but the bottom one is slightly curved, and produced the highest velocity flow parallel to the axis. The area of a duct is proportional to the square of the diameter, and the curvature allows the steam to expand more linearly as it passes through the duct. An ideal gas cools during adiabatic expansion (without adding heat), releasing less energy than the same gas would during isothermal expansion (constant temperature). Expansion of steam follows an intermediate thermodynamic process called the Rankine cycle. Steam does more work than an ideal gas, because steam remains hot during expansion. The extra heat comes from enthalpy of vaporization, as some of the steam condenses back into droplets of water intermixed with steam. Combining tube. At the end of the nozzle, the steam has very high velocity, but at less than atmospheric pressure, drawing in cold water which becomes entrained in the stream, where the steam condenses into droplets of water in a converging duct. Delivery tube. The delivery tube is a diverging duct where the force of deceleration increases pressure, allowing the stream of water to enter the boiler. Operation. The injector consists of a body filled with a secondary fluid, into which a motive fluid is injected.
The motive fluid induces the secondary fluid to move. Injectors exist in many variations, and can have several stages, each repeating the same basic operating principle, to increase their overall effect. An injector uses the Venturi effect of a converging-diverging nozzle on a steam jet to convert the pressure energy of the steam to velocity energy, reducing its pressure to below that of the atmosphere, which enables it to entrain a fluid (e.g., water). After passing through the convergent "combining cone", the mixed fluid is fully condensed, releasing the latent heat of evaporation of the steam which imparts extra velocity to the water. The condensate mixture then enters a divergent "delivery cone" which slows the jet, converting kinetic energy back into static pressure energy above the pressure of the boiler, enabling its feed through a non-return valve. Most of the heat energy in the condensed steam is returned to the boiler, increasing the thermal efficiency of the process. Injectors are therefore typically over 98% energy-efficient overall; they are also simple compared to the many moving parts in a feed pump. Key design parameters. Fluid feed rate and operating pressure range are the key parameters of an injector, and vacuum pressure and evacuation rate are the key parameters for an ejector. Compression ratio and the entrainment ratio may also be defined: The compression ratio of the injector, formula_0, is defined as the ratio of the injector's outlet pressure formula_1 to the inlet pressure of the suction fluid formula_2. The entrainment ratio of the injector, formula_3, is defined as the amount formula_4 (in kg/h) of suction fluid that can be entrained and compressed by a given amount formula_5 (in kg/h) of motive fluid. Lifting properties. Other key properties of an injector include the fluid inlet pressure requirements, i.e. whether it is lifting or non-lifting. In a non-lifting injector, positive inlet fluid pressure is needed, e.g. the cold water input is fed by gravity. The steam-cone minimal orifice diameter is kept larger than the combining cone minimal diameter. The non-lifting Nathan 4000 injector used on the Southern Pacific 4294 could push 12,000 US gallons (45,000 L) per hour at 250 psi (17 bar). The lifting injector can operate with negative inlet fluid pressure, i.e. fluid lying below the level of the injector. It differs from the non-lifting type mainly in the relative dimensions of the nozzles. Overflow. An overflow is required for excess steam or water to discharge, especially during starting. If the injector cannot initially overcome boiler pressure, the overflow allows the injector to continue to draw water and steam. Check valve. There is at least one check valve (called a "clack valve" in locomotives because of the distinctive noise it makes) between the exit of the injector and the boiler to prevent back flow, and usually a valve to prevent air being sucked in at the overflow. Exhaust steam injector. Efficiency was further improved by the development of a multi-stage injector which is powered not by live steam from the boiler but by exhaust steam from the cylinders, thereby making use of the residual energy in the exhaust steam which would otherwise go to waste. However, an exhaust injector also cannot work when the locomotive is stationary; later exhaust injectors could use a supply of live steam if no exhaust steam was available. Problems.
Injectors can be troublesome under certain running conditions, such as when vibration causes the combined steam and water jet to "knock off". Originally the injector had to be restarted by careful manipulation of the steam and water controls, and the distraction caused by a malfunctioning injector was largely responsible for the 1913 Ais Gill rail accident. Later injectors were designed to automatically restart on sensing the collapse in vacuum from the steam jet, for example with a spring-loaded delivery cone. Another common problem occurs when the incoming water is too warm and is less effective at condensing the steam in the combining cone. That can also occur if the metal body of the injector is too hot, e.g. from prolonged use. The internal parts of an injector are subject to erosive wear, particularly damage at the throat of the delivery cone which may be due to cavitation. Vacuum ejectors. An additional use for the injector technology is in vacuum ejectors in continuous train braking systems, which were made compulsory in the UK by the Regulation of Railways Act 1889. A vacuum ejector uses steam pressure to draw air out of the vacuum pipe and reservoirs of continuous train brake. Steam locomotives, with a ready source of steam, found ejector technology ideal with its rugged simplicity and lack of moving parts. A steam locomotive usually has two ejectors: a large ejector for releasing the brakes when stationary and a small ejector for maintaining the vacuum against leaks. The exhaust from the ejectors is invariably directed to the smokebox, by which means it assists the blower in draughting the fire. The small ejector is sometimes replaced by a reciprocating pump driven from the crosshead because this is more economical of steam and is only required to operate when the train is moving. Vacuum brakes have been superseded by air brakes in modern trains, which allow the use of smaller brake cylinders and/or higher braking force due to the greater difference from atmospheric pressure. Earlier application of the principle. An empirical application of the principle was in widespread use on steam locomotives before its formal development as the injector, in the form of the arrangement of the blastpipe and chimney in the locomotive smokebox. The sketch on the right shows a cross section through a smokebox, rotated 90 degrees; it can be seen that the same components are present, albeit differently named, as in the generic diagram of an injector at the top of the article. Exhaust steam from the cylinders is directed through a nozzle on the end of the blastpipe, to reduce pressure inside the smokebox by entraining the flue gases from the boiler which are then ejected via the chimney. The effect is to increase the draught on the fire to a degree proportional to the rate of steam consumption, so that as more steam is used, more heat is generated from the fire and steam production is also increased. The effect was first noted by Richard Trevithick and subsequently developed empirically by the early locomotive engineers; Stephenson's Rocket made use of it, and this constitutes much of the reason for its notably improved performance in comparison with contemporary machines. Modern uses. The use of injectors (or ejectors) in various industrial applications has become quite common due to their relative simplicity and adaptability. For example: Well pumps. Jet pumps are commonly used to extract water from water wells. The main pump, often a centrifugal pump, is powered and installed at ground level. 
Its discharge is split, with the greater part of the flow leaving the system, while a portion of the flow is returned to the jet pump installed below ground in the well. This recirculated part of the pumped fluid is used to power the jet. At the jet pump, the high-energy, low-mass returned flow drives more fluid from the well, becoming a low-energy, high-mass flow which is then piped to the inlet of the main pump. Shallow well pumps are those in which the jet assembly is attached directly to the main pump and are limited to a depth of approximately 5–8 m to prevent cavitation. Deep well pumps are those in which the jet is located at the bottom of the well. The maximum depth for deep well pumps is determined by the inside diameter of and the velocity through the jet. The major advantage of jet pumps for deep well installations is the ability to situate all mechanical parts (e.g., electric/petrol motor, rotating impellers) at the ground surface for easy maintenance. The advent of the electrical submersible pump has partly replaced the need for jet type well pumps, except for driven point wells or surface water intakes. Multi-stage steam vacuum ejectors. In practice, for suction pressure below 100 mbar absolute, more than one ejector is used, usually with condensers between the ejector stages. Condensing of motive steam greatly improves ejector set efficiency; both barometric and shell-and-tube surface condensers are used. In operation a two-stage system consists of a primary high-vacuum (HV) ejector and a secondary low-vacuum (LV) ejector. Initially the LV ejector is operated to pull vacuum down from the starting pressure to an intermediate pressure. Once this pressure is reached, the HV ejector is then operated in conjunction with the LV ejector to finally pull vacuum to the required pressure. In operation a three-stage system consists of a primary booster, a secondary high-vacuum (HV) ejector, and a tertiary low-vacuum (LV) ejector. As per the two-stage system, initially the LV ejector is operated to pull vacuum down from the starting pressure to an intermediate pressure. Once this pressure is reached, the HV ejector is then operated in conjunction with the LV ejector to pull vacuum to the lower intermediate pressure. Finally the booster is operated (in conjunction with the HV and LV ejectors) to pull vacuum to the required pressure. Construction materials. Injectors or ejectors are made of carbon steel, stainless steel, brass, titanium, PTFE, carbon, and other materials.
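As a toy numerical illustration of the compression ratio and entrainment ratio defined under "Key design parameters" above (all figures here are hypothetical placeholders, not data from any particular injector), in Julia:

compression_ratio(P2, P1) = P2 / P1      # outlet pressure over suction inlet pressure
entrainment_ratio(Ws, Wm) = Ws / Wm      # suction mass flow per unit motive mass flow

P1, P2 = 1.0, 12.0                       # bar (hypothetical)
Ws, Wm = 5000.0, 900.0                   # kg/h (hypothetical)
compression_ratio(P2, P1)                # 12.0
entrainment_ratio(Ws, Wm)                # ≈ 5.56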
[ { "math_id": 0, "text": "P_2/P_1" }, { "math_id": 1, "text": "P_2" }, { "math_id": 2, "text": "P_1" }, { "math_id": 3, "text": "W_s/W_m" }, { "math_id": 4, "text": "W_s" }, { "math_id": 5, "text": "W_m" } ]
https://en.wikipedia.org/wiki?curid=1164724
1164753
Gauss–Newton algorithm
Mathematical algorithm The Gauss–Newton algorithm is used to solve non-linear least squares problems, which is equivalent to minimizing a sum of squared function values. It is an extension of Newton's method for finding a minimum of a non-linear function. Since a sum of squares must be nonnegative, the algorithm can be viewed as using Newton's method to iteratively approximate zeroes of the components of the sum, and thus minimizing the sum. In this sense, the algorithm is also an effective method for solving overdetermined systems of equations. It has the advantage that second derivatives, which can be challenging to compute, are not required. Non-linear least squares problems arise, for instance, in non-linear regression, where parameters in a model are sought such that the model is in good agreement with available observations. The method is named after the mathematicians Carl Friedrich Gauss and Isaac Newton, and first appeared in Gauss's 1809 work "Theoria motus corporum coelestium in sectionibus conicis solem ambientum". Description. Given formula_1 functions formula_2 (often called residuals) of formula_3 variables formula_4 with formula_5 the Gauss–Newton algorithm iteratively finds the value of formula_0 that minimizes the sum of squares formula_6 Starting with an initial guess formula_7 for the minimum, the method proceeds by the iterations formula_8 where, if r and β are column vectors, the entries of the Jacobian matrix are formula_9 and the symbol formula_10 denotes the matrix transpose. At each iteration, the update formula_11 can be found by rearranging the previous equation in the following two steps: formula_12 formula_13 With substitutions formula_14, formula_15, and formula_16, this turns into the conventional matrix equation of form formula_17, which can then be solved by a variety of methods (see Notes). If "m" = "n", the iteration simplifies to formula_18 which is a direct generalization of Newton's method in one dimension. In data fitting, where the goal is to find the parameters formula_19 such that a given model function formula_20 best fits some data points formula_21, the functions formula_22 are the residuals: formula_23 Then, the Gauss–Newton method can be expressed in terms of the Jacobian formula_24 of the function formula_25 as formula_26 Note that formula_27 is the left pseudoinverse of formula_28. Notes. The assumption "m" ≥ "n" in the algorithm statement is necessary, as otherwise the matrix formula_29 is not invertible and the normal equations cannot be solved (at least uniquely). The Gauss–Newton algorithm can be derived by linearly approximating the vector of functions "r""i". Using Taylor's theorem, we can write at every iteration: formula_30 with formula_31. Finding formula_32 that minimizes the sum of squares of the right-hand side, i.e., formula_33 is a linear least-squares problem, which can be solved explicitly, yielding the normal equations in the algorithm. The normal equations are "n" simultaneous linear equations in the unknown increments formula_32. They may be solved in one step, using Cholesky decomposition, or, better, the QR factorization of formula_34. For large systems, an iterative method, such as the conjugate gradient method, may be more efficient. If there is a linear dependence between columns of Jr, the iterations will fail, as formula_29 becomes singular. When formula_35 is complex formula_36 the conjugate form should be used: formula_37. Example. 
In this example, the Gauss–Newton algorithm will be used to fit a model to some data by minimizing the sum of squares of errors between the data and the model's predictions. In a biology experiment studying the relation between substrate concentration ["S"] and reaction rate in an enzyme-mediated reaction, the data in the following table were obtained. It is desired to find a curve (model function) of the form formula_40 that best fits the data in the least-squares sense, with the parameters formula_41 and formula_42 to be determined. Denote by formula_43 and formula_44 the values of ["S"] and rate respectively, with formula_45. Let formula_46 and formula_47. We will find formula_48 and formula_49 such that the sum of squares of the residuals formula_50 is minimized. The Jacobian formula_51 of the vector of residuals formula_52 with respect to the unknowns formula_53 is a formula_54 matrix with the formula_55-th row having the entries formula_56 Starting with the initial estimates of formula_57 and formula_58, after five iterations of the Gauss–Newton algorithm, the optimal values formula_38 and formula_39 are obtained. The sum of squares of residuals decreased from the initial value of 1.445 to 0.00784 after the fifth iteration. The plot in the figure on the right shows the curve determined by the model for the optimal parameters with the observed data. Convergence properties. The Gauss–Newton iteration is guaranteed to converge toward a local minimum point formula_59 under four conditions: the functions formula_60 are twice continuously differentiable in an open convex set formula_61, the Jacobian formula_62 is of full column rank, the initial iterate formula_63 is near formula_59, and the local minimum value formula_64 is small. The convergence is quadratic if formula_65. It can be shown that the increment Δ is a descent direction for "S", and, if the algorithm converges, then the limit is a stationary point of "S". For a large minimum value formula_64, however, convergence is not guaranteed, not even local convergence as in Newton's method, or convergence under the usual Wolfe conditions. The rate of convergence of the Gauss–Newton algorithm can approach quadratic. The algorithm may converge slowly or not at all if the initial guess is far from the minimum or the matrix formula_66 is ill-conditioned. For example, consider the problem with formula_67 equations and formula_68 variable, given by formula_69 The optimum is at formula_70. (Actually the optimum is at formula_71 for formula_72, because formula_73, but formula_74.) If formula_75, then the problem is in fact linear and the method finds the optimum in one iteration. If |λ| < 1, then the method converges linearly and the error decreases asymptotically with a factor |λ| at every iteration. However, if |λ| > 1, then the method does not even converge locally. Solving overdetermined systems of equations. The Gauss–Newton iteration formula_76 is an effective method for solving overdetermined systems of equations in the form of formula_77 with formula_78 and formula_79 where formula_80 is the Moore–Penrose inverse (also known as pseudoinverse) of the Jacobian matrix formula_81 of formula_82. It can be considered an extension of Newton's method and enjoys the same local quadratic convergence toward isolated regular solutions. 
If the solution doesn't exist but the initial iterate formula_83 is near a point formula_84 at which the sum of squares formula_85 reaches a small local minimum, the Gauss–Newton iteration linearly converges to formula_86. The point formula_86 is often called a least squares solution of the overdetermined system. Derivation from Newton's method. In what follows, the Gauss–Newton algorithm will be derived from Newton's method for function optimization via an approximation. As a consequence, the rate of convergence of the Gauss–Newton algorithm can be quadratic under certain regularity conditions. In general (under weaker conditions), the convergence rate is linear. The recurrence relation for Newton's method for minimizing a function "S" of parameters formula_87 is formula_88 where g denotes the gradient vector of "S", and H denotes the Hessian matrix of "S". Since formula_89, the gradient is given by formula_90 Elements of the Hessian are calculated by differentiating the gradient elements, formula_91, with respect to formula_92: formula_93 The Gauss–Newton method is obtained by ignoring the second-order derivative terms (the second term in this expression). That is, the Hessian is approximated by formula_94 where formula_95 are entries of the Jacobian Jr. Note that when the exact Hessian is evaluated near an exact fit we have near-zero formula_52, so the second term becomes near-zero as well, which justifies the approximation. The gradient and the approximate Hessian can be written in matrix notation as formula_96 These expressions are substituted into the recurrence relation above to obtain the operational equations formula_97 Convergence of the Gauss–Newton method is not guaranteed in all instances. The approximation formula_98 that needs to hold to be able to ignore the second-order derivative terms may be valid in two cases, for which convergence is to be expected: first, when the function values formula_52 are small in magnitude, at least around the minimum; second, when the functions are only "mildly" nonlinear, so that formula_99 is relatively small in magnitude. Improved versions. With the Gauss–Newton method the sum of squares of the residuals "S" may not decrease at every iteration. However, since Δ is a descent direction, unless formula_100 is a stationary point, it holds that formula_101 for all sufficiently small formula_102. Thus, if divergence occurs, one solution is to employ a fraction formula_103 of the increment vector Δ in the updating formula: formula_104 In other words, the increment vector is too long, but it still points "downhill", so going just a part of the way will decrease the objective function "S". An optimal value for formula_103 can be found by using a line search algorithm, that is, the magnitude of formula_103 is determined by finding the value that minimizes "S", usually using a direct search method in the interval formula_105 or a backtracking line search such as Armijo line search. Typically, formula_103 should be chosen such that it satisfies the Wolfe conditions or the Goldstein conditions. In cases where the direction of the shift vector is such that the optimal fraction α is close to zero, an alternative method for handling divergence is the use of the Levenberg–Marquardt algorithm, a trust region method. The normal equations are modified in such a way that the increment vector is rotated towards the direction of steepest descent, formula_106 where D is a positive diagonal matrix. Note that when D is the identity matrix I and formula_107, then formula_108, therefore the direction of Δ approaches the direction of the negative gradient formula_109. 
The so-called Marquardt parameter formula_110 may also be optimized by a line search, but this is inefficient, as the shift vector must be recalculated every time formula_110 is changed. A more efficient strategy is this: When divergence occurs, increase the Marquardt parameter until there is a decrease in "S". Then retain the value from one iteration to the next, but decrease it if possible until a cut-off value is reached, when the Marquardt parameter can be set to zero; the minimization of "S" then becomes a standard Gauss–Newton minimization. Large-scale optimization. For large-scale optimization, the Gauss–Newton method is of special interest because it is often (though certainly not always) true that the matrix formula_111 is more sparse than the approximate Hessian formula_112. In such cases, the step calculation itself will typically need to be done with an approximate iterative method appropriate for large and sparse problems, such as the conjugate gradient method. In order to make this kind of approach work, one needs at least an efficient method for computing the product formula_113 for some vector p. With sparse matrix storage, it is in general practical to store the rows of formula_111 in a compressed form (e.g., without zero entries), making a direct computation of the above product tricky due to the transposition. However, if one defines c"i" as row "i" of the matrix formula_111, the following simple relation holds: formula_114 so that every row contributes additively and independently to the product. In addition to respecting a practical sparse storage structure, this expression is well suited for parallel computations. Note that every row c"i" is the gradient of the corresponding residual "r""i"; with this in mind, the formula above emphasizes the fact that residuals contribute to the problem independently of each other. Related algorithms. In a quasi-Newton method, such as that due to Davidon, Fletcher and Powell or Broyden–Fletcher–Goldfarb–Shanno (BFGS method), an estimate of the full Hessian formula_115 is built up numerically using first derivatives formula_116 only, so that after "n" refinement cycles the method closely approximates to Newton's method in performance. Note that quasi-Newton methods can minimize general real-valued functions, whereas Gauss–Newton, Levenberg–Marquardt, etc. apply only to nonlinear least-squares problems. Another method for solving minimization problems using only first derivatives is gradient descent. However, this method does not take into account the second derivatives even approximately. Consequently, it is highly inefficient for many functions, especially if the parameters have strong interactions. Example implementations. Julia. The following implementation in Julia provides one method which uses a provided Jacobian and another computing with automatic differentiation.

"""
    gaussnewton(r, J, β₀, maxiter, tol)

Perform Gauss–Newton optimization to minimize the residual function `r` with
Jacobian `J` starting from `β₀`. The algorithm terminates when the norm of
the step is less than `tol` or after `maxiter` iterations.
"""
function gaussnewton(r, J, β₀, maxiter, tol)
    β = copy(β₀)
    for _ in 1:maxiter
        Jβ = J(β)
        # Solve the normal equations JᵀJ Δ = -Jᵀr for the step Δ
        Δ = -(Jβ' * Jβ) \ (Jβ' * r(β))
        β += Δ
        if sqrt(sum(abs2, Δ)) < tol
            break
        end
    end
    return β
end

import AbstractDifferentiation as AD, Zygote
backend = AD.ZygoteBackend() # other backends are available

"""
    gaussnewton(r, β₀, maxiter, tol)

Perform Gauss–Newton optimization to minimize the residual function `r`
starting from `β₀`.
The relevant Jacobian is calculated using automatic
differentiation. The algorithm terminates when the norm of the step is less
than `tol` or after `maxiter` iterations.
"""
function gaussnewton(r, β₀, maxiter, tol)
    β = copy(β₀)
    for _ in 1:maxiter
        rβ, Jβ = AD.value_and_jacobian(backend, r, β)
        # value_and_jacobian returns the Jacobian wrapped in a tuple
        Δ = -(Jβ[1]' * Jβ[1]) \ (Jβ[1]' * rβ)
        β += Δ
        if sqrt(sum(abs2, Δ)) < tol
            break
        end
    end
    return β
end
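A hypothetical usage of the automatic-differentiation method above, fitting the enzyme-kinetics model from the earlier example (the data vectors here are illustrative placeholders, not the measurements from the original table):

# Hypothetical data for the model rate = β₁[S]/(β₂ + [S]).
S    = [0.04, 0.19, 0.43, 0.63, 1.25, 2.50, 3.74]
rate = [0.05, 0.13, 0.09, 0.21, 0.27, 0.27, 0.33]

r(β) = rate .- β[1] .* S ./ (β[2] .+ S)    # residual vector
β̂ = gaussnewton(r, [0.9, 0.2], 100, 1e-10)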
[ { "math_id": 0, "text": "\\beta" }, { "math_id": 1, "text": "m" }, { "math_id": 2, "text": "\\textbf{r} = (r_1, \\ldots, r_m)" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "\\boldsymbol{\\beta} = (\\beta_1, \\ldots \\beta_n)," }, { "math_id": 5, "text": "m \\geq n," }, { "math_id": 6, "text": " S(\\boldsymbol \\beta) = \\sum_{i=1}^m r_i(\\boldsymbol \\beta)^{2}." }, { "math_id": 7, "text": "\\boldsymbol \\beta^{(0)}" }, { "math_id": 8, "text": " \\boldsymbol \\beta^{(s+1)} = \\boldsymbol \\beta^{(s)} - \\left(\\mathbf{J_r}^\\operatorname{T} \\mathbf{J_r} \\right)^{-1} \\mathbf{J_r}^\\operatorname{T} \\mathbf{r}\\left(\\boldsymbol \\beta^{(s)}\\right), " }, { "math_id": 9, "text": " \\left(\\mathbf{J_r}\\right)_{ij} = \\frac{\\partial r_i \\left(\\boldsymbol \\beta^{(s)}\\right)}{\\partial \\beta_j}," }, { "math_id": 10, "text": "^\\operatorname{T}" }, { "math_id": 11, "text": "\\Delta = \\boldsymbol \\beta^{(s+1)} - \\boldsymbol \\beta^{(s)}" }, { "math_id": 12, "text": "\\Delta = -\\left(\\mathbf{J_r}^\\operatorname{T} \\mathbf{J_r} \\right)^{-1} \\mathbf{J_r}^\\operatorname{T} \\mathbf{r}\\left(\\boldsymbol \\beta^{(s)}\\right)" }, { "math_id": 13, "text": "\\mathbf{J_r}^\\operatorname{T} \\mathbf{J_r} \\Delta = -\\mathbf{J_r}^\\operatorname{T} \\mathbf{r}\\left(\\boldsymbol \\beta^{(s)}\\right) " }, { "math_id": 14, "text": "A = \\mathbf{J_r}^\\operatorname{T} \\mathbf{J_r} " }, { "math_id": 15, "text": "\\mathbf{b} = -\\mathbf{J_r}^\\operatorname{T} \\mathbf{r}\\left(\\boldsymbol \\beta^{(s)}\\right) " }, { "math_id": 16, "text": "\\mathbf {x} = \\Delta " }, { "math_id": 17, "text": "A\\mathbf {x} = \\mathbf {b} " }, { "math_id": 18, "text": " \\boldsymbol \\beta^{(s+1)} = \\boldsymbol \\beta^{(s)} - \\left(\\mathbf{J_r}\\right)^{-1} \\mathbf{r}\\left(\\boldsymbol \\beta^{(s)}\\right)," }, { "math_id": 19, "text": "\\boldsymbol{\\beta}" }, { "math_id": 20, "text": " \\mathbf{f}(\\mathbf{x}, \\boldsymbol{\\beta}) " }, { "math_id": 21, "text": " (x_i, y_i) " }, { "math_id": 22, "text": " r_i " }, { "math_id": 23, "text": "r_i(\\boldsymbol \\beta) = y_i - f\\left(x_i, \\boldsymbol \\beta\\right)." }, { "math_id": 24, "text": " \\mathbf{J_f} = -\\mathbf{J_r} " }, { "math_id": 25, "text": " \\mathbf{f} " }, { "math_id": 26, "text": " \\boldsymbol \\beta^{(s+1)} = \\boldsymbol \\beta^{(s)} + \\left(\\mathbf{J_f}^\\operatorname{T} \\mathbf{J_f} \\right)^{-1} \\mathbf{J_f}^\\operatorname{T} \\mathbf{r}\\left(\\boldsymbol \\beta^{(s)}\\right). 
" }, { "math_id": 27, "text": "\\left(\\mathbf{J_f}^\\operatorname{T} \\mathbf{J_f}\\right)^{-1} \\mathbf{J_f}^\\operatorname{T}" }, { "math_id": 28, "text": "\\mathbf{J_f}" }, { "math_id": 29, "text": " \\mathbf{J_r}^T\\mathbf{J_r} " }, { "math_id": 30, "text": "\\mathbf{r}(\\boldsymbol \\beta) \\approx \\mathbf{r}\\left(\\boldsymbol \\beta^{(s)}\\right) + \\mathbf{J_r}\\left(\\boldsymbol \\beta^{(s)}\\right)\\Delta" }, { "math_id": 31, "text": "\\Delta = \\boldsymbol \\beta - \\boldsymbol \\beta^{(s)}" }, { "math_id": 32, "text": " \\Delta " }, { "math_id": 33, "text": "\\min \\left\\|\\mathbf{r}\\left(\\boldsymbol \\beta^{(s)}\\right) + \\mathbf{J_r}\\left(\\boldsymbol \\beta^{(s)}\\right)\\Delta\\right\\|_2^2," }, { "math_id": 34, "text": " \\mathbf{J_r} " }, { "math_id": 35, "text": "\\mathbf{r}" }, { "math_id": 36, "text": "\\mathbf{r}:\\Complex^n \\to \\Complex" }, { "math_id": 37, "text": "\\left(\\overline \\mathbf{J_r}^\\operatorname{T} \\mathbf{J_r}\\right)^{-1}\\overline \\mathbf{J_r}^\\operatorname{T}" }, { "math_id": 38, "text": "\\hat\\beta_1 = 0.362" }, { "math_id": 39, "text": "\\hat\\beta_2 = 0.556" }, { "math_id": 40, "text": "\\text{rate} = \\frac{V_\\text{max} \\cdot [S]}{K_M + [S]}" }, { "math_id": 41, "text": "V_\\text{max}" }, { "math_id": 42, "text": "K_M" }, { "math_id": 43, "text": "x_i" }, { "math_id": 44, "text": "y_i" }, { "math_id": 45, "text": "i = 1, \\dots, 7" }, { "math_id": 46, "text": "\\beta_1 = V_\\text{max}" }, { "math_id": 47, "text": "\\beta_2 = K_M" }, { "math_id": 48, "text": "\\beta_1" }, { "math_id": 49, "text": "\\beta_2" }, { "math_id": 50, "text": "r_i = y_i - \\frac{\\beta_1 x_i}{\\beta_2 + x_i}, \\quad (i = 1, \\dots, 7)" }, { "math_id": 51, "text": "\\mathbf{J_r}" }, { "math_id": 52, "text": "r_i" }, { "math_id": 53, "text": "\\beta_j" }, { "math_id": 54, "text": "7 \\times 2" }, { "math_id": 55, "text": "i" }, { "math_id": 56, "text": "\\frac{\\partial r_i}{\\partial \\beta_1} = -\\frac{x_i}{\\beta_2 + x_i}; \n\\quad\n\\frac{\\partial r_i}{\\partial \\beta_2} = \\frac{\\beta_1 \\cdot x_i}{\\left(\\beta_2 + x_i\\right)^2}." 
}, { "math_id": 57, "text": "\\beta_1 = 0.9" }, { "math_id": 58, "text": "\\beta_2 = 0.2" }, { "math_id": 59, "text": "\\hat{\\beta}" }, { "math_id": 60, "text": "r_1,\\ldots,r_m" }, { "math_id": 61, "text": "D\\ni\\hat{\\beta}" }, { "math_id": 62, "text": "\\mathbf{J}_\\mathbf{r}(\\hat{\\beta})" }, { "math_id": 63, "text": "\\beta^{(0)}" }, { "math_id": 64, "text": "|S(\\hat{\\beta})|" }, { "math_id": 65, "text": "|S(\\hat{\\beta})|=0" }, { "math_id": 66, "text": "\\mathbf{J_r^\\operatorname{T} J_r}" }, { "math_id": 67, "text": "m = 2" }, { "math_id": 68, "text": "n = 1" }, { "math_id": 69, "text": "\\begin{align}\n r_1(\\beta) &= \\beta + 1, \\\\\n r_2(\\beta) &= \\lambda \\beta^2 + \\beta - 1.\n\\end{align} " }, { "math_id": 70, "text": "\\beta = 0" }, { "math_id": 71, "text": "\\beta = -1" }, { "math_id": 72, "text": "\\lambda = 2" }, { "math_id": 73, "text": "S(0) = 1^2 + (-1)^2 = 2" }, { "math_id": 74, "text": "S(-1) = 0" }, { "math_id": 75, "text": "\\lambda = 0" }, { "math_id": 76, "text": "\\mathbf{x}^{(k+1)} = \\mathbf{x}^{(k)} - J(\\mathbf{x}^{(k)})^\\dagger\\mathbf{f}(\\mathbf{x}^{(k)}) \\,,\\quad k=0,1,\\ldots" }, { "math_id": 77, "text": "\\mathbf{f}(\\mathbf{x})=\\mathbf{0}" }, { "math_id": 78, "text": "\\mathbf{f}(\\mathbf{x}) = \\begin{bmatrix} f_1(x_1,\\ldots,x_n) \\\\ \\vdots \\\\ f_m(x_1,\\ldots,x_n) \\end{bmatrix}" }, { "math_id": 79, "text": "m>n" }, { "math_id": 80, "text": "J(\\mathbf{x})^\\dagger" }, { "math_id": 81, "text": "J(\\mathbf{x})" }, { "math_id": 82, "text": "\\mathbf{f}(\\mathbf{x})" }, { "math_id": 83, "text": "\\mathbf{x}^{(0)}" }, { "math_id": 84, "text": "\\hat{\\mathbf{x}} = (\\hat{x}_1,\\ldots,\\hat{x}_n)" }, { "math_id": 85, "text": "\\sum_{i=1}^m |f_i(x_1,\\ldots,x_n)|^2 \\equiv \\|\\mathbf{f}(\\mathbf{x})\\|_2^2" }, { "math_id": 86, "text": "\\hat{\\mathbf{x}}" }, { "math_id": 87, "text": "\\boldsymbol\\beta" }, { "math_id": 88, "text": " \\boldsymbol\\beta^{(s+1)} = \\boldsymbol\\beta^{(s)} - \\mathbf H^{-1} \\mathbf g," }, { "math_id": 89, "text": "S = \\sum_{i=1}^m r_i^2" }, { "math_id": 90, "text": "g_j = 2 \\sum_{i=1}^m r_i \\frac{\\partial r_i}{\\partial \\beta_j}." }, { "math_id": 91, "text": "g_j" }, { "math_id": 92, "text": "\\beta_k" }, { "math_id": 93, "text": "H_{jk} = 2 \\sum_{i=1}^m \\left(\\frac{\\partial r_i}{\\partial \\beta_j} \\frac{\\partial r_i}{\\partial \\beta_k} + r_i \\frac{\\partial^2 r_i}{\\partial \\beta_j \\partial \\beta_k}\\right)." }, { "math_id": 94, "text": "H_{jk} \\approx 2 \\sum_{i=1}^m J_{ij} J_{ik}," }, { "math_id": 95, "text": "J_{ij} = {\\partial r_i}/{\\partial \\beta_j}" }, { "math_id": 96, "text": "\\mathbf{g} = 2 {\\mathbf{J}_\\mathbf{r}}^\\operatorname{T} \\mathbf{r}, \\quad \\mathbf{H} \\approx 2 {\\mathbf{J}_\\mathbf{r}}^\\operatorname{T} \\mathbf{J_r}." }, { "math_id": 97, "text": " \\boldsymbol{\\beta}^{(s+1)} = \\boldsymbol\\beta^{(s)} + \\Delta; \\quad \\Delta = -\\left(\\mathbf{J_r}^\\operatorname{T} \\mathbf{J_r}\\right)^{-1} \\mathbf{J_r}^\\operatorname{T} \\mathbf{r}." 
}, { "math_id": 98, "text": "\\left|r_i \\frac{\\partial^2 r_i}{\\partial \\beta_j \\partial \\beta_k}\\right| \\ll \\left|\\frac{\\partial r_i}{\\partial \\beta_j} \\frac{\\partial r_i}{\\partial \\beta_k}\\right|" }, { "math_id": 99, "text": "\\frac{\\partial^2 r_i}{\\partial \\beta_j \\partial \\beta_k}" }, { "math_id": 100, "text": "S\\left(\\boldsymbol \\beta^s\\right)" }, { "math_id": 101, "text": "S\\left(\\boldsymbol \\beta^s + \\alpha\\Delta\\right) < S\\left(\\boldsymbol \\beta^s\\right)" }, { "math_id": 102, "text": "\\alpha>0" }, { "math_id": 103, "text": "\\alpha" }, { "math_id": 104, "text": " \\boldsymbol \\beta^{s+1} = \\boldsymbol \\beta^s + \\alpha \\Delta." }, { "math_id": 105, "text": "0 < \\alpha < 1" }, { "math_id": 106, "text": "\\left(\\mathbf{J^\\operatorname{T} J + \\lambda D}\\right) \\Delta = -\\mathbf{J}^\\operatorname{T} \\mathbf{r}," }, { "math_id": 107, "text": "\\lambda \\to +\\infty" }, { "math_id": 108, "text": "\\lambda \\Delta = \\lambda \\left(\\mathbf{J^\\operatorname{T} J} + \\lambda \\mathbf{I}\\right)^{-1} \\left(-\\mathbf{J}^\\operatorname{T} \\mathbf{r}\\right) = \\left(\\mathbf{I} - \\mathbf{J^\\operatorname{T} J} / \\lambda + \\cdots \\right) \\left(-\\mathbf{J}^\\operatorname{T} \\mathbf{r}\\right) \\to -\\mathbf{J}^\\operatorname{T} \\mathbf{r}" }, { "math_id": 109, "text": "-\\mathbf{J}^\\operatorname{T} \\mathbf{r}" }, { "math_id": 110, "text": "\\lambda" }, { "math_id": 111, "text": "\\mathbf{J}_\\mathbf{r}" }, { "math_id": 112, "text": "\\mathbf{J}_\\mathbf{r}^\\operatorname{T} \\mathbf{J_r}" }, { "math_id": 113, "text": "{\\mathbf{J}_\\mathbf{r}}^\\operatorname{T} \\mathbf{J_r} \\mathbf{p}" }, { "math_id": 114, "text": "{\\mathbf{J}_\\mathbf{r}}^\\operatorname{T}\\mathbf{J_r} \\mathbf{p} = \\sum_i \\mathbf c_i \\left(\\mathbf c_i \\cdot \\mathbf{p}\\right)," }, { "math_id": 115, "text": "\\frac{\\partial^2 S}{\\partial \\beta_j \\partial\\beta_k}" }, { "math_id": 116, "text": "\\frac{\\partial r_i}{\\partial\\beta_j}" } ]
https://en.wikipedia.org/wiki?curid=1164753
11647860
Spacetime diagram
Graph of space and time in special relativity A spacetime diagram is a graphical illustration of locations in space at various times, especially in the special theory of relativity. Spacetime diagrams can show the geometry underlying phenomena like time dilation and length contraction without mathematical equations. The history of an object's location through time traces out a line or curve on a spacetime diagram, referred to as the object's world line. Each point in a spacetime diagram represents a unique position in space and time and is referred to as an event. The most well-known class of spacetime diagrams are known as Minkowski diagrams, developed by Hermann Minkowski in 1908. Minkowski diagrams are two-dimensional graphs that depict events as happening in a universe consisting of one space dimension and one time dimension. Unlike a regular distance-time graph, the distance is displayed on the horizontal axis and time on the vertical axis. Additionally, the time and space units of measurement are chosen in such a way that an object moving at the speed of light is depicted as following a 45° angle to the diagram's axes. Introduction to kinetic diagrams. Position versus time graphs. In the study of 1-dimensional kinematics, position vs. time graphs (called x-t graphs for short) provide a useful means to describe motion. Kinematic features besides the object's position are visible by the slope and shape of the lines. In Fig 1-1, the plotted object moves away from the origin at a positive constant velocity (1.66 m/s) for 6 seconds, halts for 5 seconds, then returns to the origin over a period of 7 seconds at a non-constant speed (but negative velocity). At its most basic level, a spacetime diagram is merely a time vs position graph, with the directions of the axes in a usual p-t graph exchanged; that is, the vertical axis refers to temporal and the horizontal axis to spatial coordinate values. Especially when used in special relativity (SR), the temporal axes of a spacetime diagram are often scaled with the speed of light c, and thus are often labeled by ct. This changes the dimension of the addressed physical quantity from <"Time"> to <"Length">, in accordance with the dimension associated with the spatial axis, which is frequently labeled x. Standard configuration of reference frames. To ease insight into how spacetime coordinates, measured by observers in different reference frames, compare with each other, it is useful to standardize and simplify the setup. Two Galilean reference frames (i.e., conventional 3-space frames), S and S′ (pronounced "S prime"), each with observers O and O′ at rest in their respective frames, but measuring the other as moving with speeds ±"v", are said to be in "standard configuration" when the "x"-, "y"-, and "z"-axes of frame S are oriented parallel to the respective primed axes of frame S′, frame S′ moves in the "x"-direction of frame S with constant velocity "v" as measured in S, and the origins of S and S′ coincide at time "t" = 0 in frame S and "t"′ = 0 in frame S′. This spatial setting is displayed in Fig 1-2, in which the temporal coordinates are separately annotated as quantities "t" and "t"′. In a further step of simplification it is often sufficient to consider just the direction of the observed motion and ignore the other two spatial components, allowing "x" and "ct" to be plotted in 2-dimensional spacetime diagrams, as introduced above. Non-relativistic "spacetime diagrams". The black axes labelled "x" and "ct" on Fig 1-3 are the coordinate system of an observer, referred to as "at rest", and who is positioned at "x" = 0. This observer's world line is identical with the "ct" time axis. Each parallel line to this axis would correspond also to an object at rest but at another position. 
The blue line describes an object moving with constant speed "v" to the right, such as a moving observer. This blue line labelled "ct"′ may be interpreted as the time axis for the second observer. Together with the "x" axis, which is identical for both observers, it represents their coordinate system. Since the reference frames are in standard configuration, both observers agree on the location of the origin of their coordinate systems. The axes for the moving observer are not perpendicular to each other and the scale on their time axis is stretched. To determine the coordinates of a certain event, two lines, each parallel to one of the two axes, must be constructed passing through the event, and their intersections with the axes read off. Determining position and time of the event A as an example in the diagram leads to the same time for both observers, as expected. Only for the position different values result, because the moving observer has approached the position of the event A since "t" = 0. Generally stated, all events on a line parallel to the "x" axis happen simultaneously for both observers. There is only one universal time "t" = "t"′, modelling the existence of one common position axis. On the other hand, due to two different time axes the observers usually measure different coordinates for the same event. This graphical translation from "x" and "t" to "x"′ and "t"′ and vice versa is described mathematically by the so-called Galilean transformation. Minkowski diagrams. Overview. The term Minkowski diagram refers to a specific form of spacetime diagram frequently used in special relativity. A Minkowski diagram is a two-dimensional graphical depiction of a portion of Minkowski space, usually where space has been curtailed to a single dimension. The units of measurement in these diagrams are taken such that the light cone at an event consists of the lines of slope plus or minus one through that event. The horizontal lines correspond to the usual notion of "simultaneous events" for a stationary observer at the origin. A particular Minkowski diagram illustrates the result of a Lorentz transformation. The Lorentz transformation relates two inertial frames of reference, where an observer stationary at the event (0, 0) makes a change of velocity along the "x"-axis. As shown in Fig 2-1, the new time axis of the observer forms an angle "α" with the previous time axis, with "α" < π/4. In the new frame of reference the simultaneous events lie parallel to a line inclined by "α" to the previous lines of simultaneity. This is the new "x"-axis. Both the original set of axes and the primed set of axes have the property that they are orthogonal with respect to the Minkowski inner product or "relativistic dot product". Whatever the magnitude of α, the line "ct" = "x" forms the universal bisector, as shown in Fig 2-2. One frequently encounters Minkowski diagrams where the time units of measurement are scaled by a factor of "c" such that one unit of "x" equals one unit of "t". Such a diagram may have units of, for example, light-seconds and seconds, or light-years and years. With that, light paths are represented by lines parallel to the bisector between the axes. Mathematical details. The angle "α" between the "x" and "x"′ axes will be identical with that between the time axes "ct" and "ct"′. This follows from the second postulate of special relativity, which says that the speed of light is the same for all observers, regardless of their relative motion (see below). 
The angle "α" is given by formula_0 The corresponding boost from "x" and "t" to "x"′ and "t"′ and vice versa is described mathematically by the Lorentz transformation, which can be written formula_1 where formula_2 is the Lorentz factor. By applying the Lorentz transformation, the spacetime axes obtained for a boosted frame will always correspond to conjugate diameters of a pair of hyperbolas. As illustrated in Fig 2-3, the boosted and unboosted spacetime axes will in general have unequal unit lengths. If "U" is the unit length on the axes of "ct" and "x" respectively, the unit length on the axes of "ct"′ and "x"′ is: formula_3 The "ct"-axis represents the worldline of a clock resting in "S", with "U" representing the duration between two events happening on this worldline, also called the proper time between these events. Length "U" upon the "x"-axis represents the rest length or proper length of a rod resting in "S". The same interpretation can also be applied to distance "U"′ upon the "ct"′- and "x"′-axes for clocks and rods resting in "S"′. History. Albert Einstein announced his theory of special relativity in 1905, with Hermann Minkowski providing his graphical representation in 1908. In Minkowski's 1908 paper there were three diagrams, first to illustrate the Lorentz transformation, then the partition of the plane by the light-cone, and finally illustration of worldlines. The first diagram used a branch of the unit hyperbola formula_4 to show the locus of a unit of proper time depending on velocity, thus illustrating time dilation. The second diagram showed the conjugate hyperbola to calibrate space, where a similar stretching leaves the impression of FitzGerald contraction. In 1914 Ludwik Silberstein included a diagram of "Minkowski's representation of the Lorentz transformation". This diagram included the unit hyperbola, its conjugate, and a pair of conjugate diameters. Since the 1960s a version of this more complete configuration has been referred to as The Minkowski Diagram, and used as a standard illustration of the transformation geometry of special relativity. E. T. Whittaker has pointed out that the principle of relativity is tantamount to the arbitrariness of what hyperbola radius is selected for time in the Minkowski diagram. In 1912 Gilbert N. Lewis and Edwin B. Wilson applied the methods of synthetic geometry to develop the properties of the non-Euclidean plane that has Minkowski diagrams. When Taylor and Wheeler composed "Spacetime Physics" (1966), they did "not" use the term "Minkowski diagram" for their spacetime geometry. Instead they included an acknowledgement of Minkowski's contribution to philosophy by the totality of his innovation of 1908. Loedel diagrams. While a frame at rest in a Minkowski diagram has orthogonal spacetime axes, a frame moving relative to the rest frame in a Minkowski diagram has spacetime axes which form an acute angle. This asymmetry of Minkowski diagrams can be misleading, since special relativity postulates that any two inertial reference frames must be physically equivalent. The Loedel diagram is an alternative spacetime diagram that makes the symmetry of inertial references frames much more manifest. Formulation via median frame. Several authors showed that there is a frame of reference between the resting and moving ones where their symmetry would be apparent ("median frame"). In this frame, the two other frames are moving in opposite directions with equal speed. 
History. Albert Einstein announced his theory of special relativity in 1905, with Hermann Minkowski providing his graphical representation in 1908. In Minkowski's 1908 paper there were three diagrams, first to illustrate the Lorentz transformation, then the partition of the plane by the light-cone, and finally an illustration of worldlines. The first diagram used a branch of the unit hyperbola formula_4 to show the locus of a unit of proper time depending on velocity, thus illustrating time dilation. The second diagram showed the conjugate hyperbola to calibrate space, where a similar stretching leaves the impression of FitzGerald contraction. In 1914 Ludwik Silberstein included a diagram of "Minkowski's representation of the Lorentz transformation". This diagram included the unit hyperbola, its conjugate, and a pair of conjugate diameters. Since the 1960s a version of this more complete configuration has been referred to as The Minkowski Diagram, and used as a standard illustration of the transformation geometry of special relativity. E. T. Whittaker has pointed out that the principle of relativity is tantamount to the arbitrariness of what hyperbola radius is selected for time in the Minkowski diagram. In 1912 Gilbert N. Lewis and Edwin B. Wilson applied the methods of synthetic geometry to develop the properties of the non-Euclidean plane that has Minkowski diagrams. When Taylor and Wheeler composed "Spacetime Physics" (1966), they did "not" use the term "Minkowski diagram" for their spacetime geometry. Instead they included an acknowledgement of Minkowski's contribution to philosophy by the totality of his innovation of 1908.
Loedel diagrams. While a frame at rest in a Minkowski diagram has orthogonal spacetime axes, a frame moving relative to the rest frame in a Minkowski diagram has spacetime axes which form an acute angle. This asymmetry of Minkowski diagrams can be misleading, since special relativity postulates that any two inertial reference frames must be physically equivalent. The Loedel diagram is an alternative spacetime diagram that makes the symmetry of inertial reference frames much more manifest.
Formulation via median frame. Several authors showed that there is a frame of reference between the resting and moving ones where their symmetry would be apparent ("median frame"). In this frame, the two other frames are moving in opposite directions with equal speed. Using such coordinates makes the units of length and time the same for both axes. If "β" and formula_2 are given between formula_5 and formula_6, then these expressions are connected with the values in their median frame "S"0 as follows: formula_7 For instance, if "β" = 0.5 between formula_5 and formula_6, then by (2) they are moving in their median frame "S"0 with approximately ±0.268"c" each in opposite directions. On the other hand, if "β"0 = 0.5 in "S"0, then by (1) the relative velocity between formula_5 and formula_6 in their own rest frames is 0.8"c". The construction of the axes of formula_5 and formula_6 is done in accordance with the ordinary method using tan "α" = "β"0 with respect to the orthogonal axes of the median frame (Fig. 3-1). However, it turns out that when drawing such a symmetric diagram, it is possible to derive the diagram's relations even without mentioning the median frame and "β"0 at all. Instead, the relative velocity "β" between formula_5 and formula_6 can directly be used in the following construction, providing the same result: if "φ" is the angle between the axes of "ct"′ and "ct" (or between "x" and "x"′), and "θ" the angle between the axes of "x"′ and "ct"′, then: formula_8 Two methods of construction are obvious from Fig. 3-2: the "x"-axis is drawn perpendicular to the "ct"′-axis and the "x"′- and "ct"-axes are added at angle "φ"; alternatively, the "x"′-axis is drawn at angle "θ" with respect to the "ct"′-axis, the "x"-axis is added perpendicular to the "ct"′-axis, and the "ct"-axis perpendicular to the "x"′-axis. In a Minkowski diagram, lengths on the page cannot be directly compared to each other, due to the warping factor between the axes' unit lengths. In particular, if formula_9 and formula_10 are the unit lengths of the rest frame axes and moving frame axes, respectively, in a Minkowski diagram, then the two unit lengths are warped relative to each other via the formula: formula_11 By contrast, in a symmetric Loedel diagram, both the formula_5 and formula_6 frame axes are warped by the same factor relative to the median frame and hence have identical unit lengths. This implies that, for a Loedel spacetime diagram, we can directly compare spacetime lengths between different frames as they appear on the page; no unit-length scaling/conversion between frames is necessary, due to the symmetric nature of the Loedel diagram.
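A quick numerical check of relations (1) and (2), and of the construction angles, reproduces the two worked examples above (an illustrative sketch, not from the article):

```python
import math

def beta_from_beta0(beta0):
    """Relation (1): relative speed between S and S' from the median-frame speed."""
    return 2 * beta0 / (1 + beta0**2)

def beta0_from_beta(beta):
    """Relation (2): median-frame speed from the relative speed between S and S'."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) / (beta * gamma)

print(round(beta0_from_beta(0.5), 3))   # 0.268 -> +-0.268c in S0
print(beta_from_beta0(0.5))             # 0.8   -> relative speed 0.8c

# Consistency of the construction angles: sin(phi) = cos(theta) = beta.
beta = 0.5
phi = math.asin(beta)
gamma = 1.0 / math.sqrt(1.0 - beta**2)
assert abs(math.cos(phi) - 1.0 / gamma) < 1e-12
assert abs(math.tan(phi) - beta * gamma) < 1e-12
```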
Relativistic phenomena in diagrams. Time dilation. Relativistic time dilation refers to the fact that a clock (indicating its proper time in its rest frame) that moves relative to an observer is observed to run slower. The situation is depicted in the symmetric Loedel diagrams of Fig 4-1. Note that we can compare spacetime lengths on the page directly with each other, due to the symmetric nature of the Loedel diagram. In Fig 4-2, the observer whose reference frame is given by the black axes is assumed to move from the origin O towards A. The moving clock has the reference frame given by the blue axes and moves from O to B. For the black observer, all events happening simultaneously with the event at A are located on a straight line parallel to its space axis. This line passes through A and B, so A and B are simultaneous from the reference frame of the observer with black axes. However, the clock that is moving relative to the black observer marks off time along the blue time axis. This is represented by the distance from O to B. Therefore, the observer at A with the black axes notices their clock as reading the distance from O to A, while they observe the clock moving relative to them to read the distance from O to B. Because the distance from O to B is smaller than the distance from O to A, they conclude that the time passed on the clock moving relative to them is smaller than that passed on their own clock. A second observer, having moved together with the clock from O to B, will argue that the black-axis clock has only reached C and therefore runs slower. The reason for these apparently paradoxical statements is the different determination of the events happening synchronously at different locations. Due to the principle of relativity, the question of who is right has no answer and does not make sense.
Length contraction. Relativistic length contraction refers to the fact that a ruler (indicating its proper length in its rest frame) that moves relative to an observer is observed to contract/shorten. The situation is depicted in symmetric Loedel diagrams in Fig 4-3. Note that we can compare spacetime lengths on the page directly with each other, due to the symmetric nature of the Loedel diagram. In Fig 4-4, the observer is assumed again to move along the "ct"-axis. The world lines of the endpoints of an object moving relative to him are assumed to move along the "ct"′-axis and the parallel line passing through A and B. For this observer the endpoints of the object at "t" = 0 are O and A. For a second observer moving together with the object, so that for him the object is at rest, it has the proper length OB at "t"′ = 0. Due to OA < OB, the object is contracted for the first observer. The second observer will argue that the first observer has evaluated the endpoints of the object at O and A respectively, and therefore at different times, leading to a wrong result due to his motion in the meantime. If the second observer investigates the length of another object with endpoints moving along the "ct"-axis and a parallel line passing through C and D, he concludes in the same way that this object is contracted from OD to OC. Each observer estimates objects moving with the other observer to be contracted. This apparently paradoxical situation is again a consequence of the relativity of simultaneity, as demonstrated by the analysis via Minkowski diagram. For all these considerations it was assumed that both observers take into account the speed of light and their distance to all events they see, in order to determine the actual times at which these events happen from their point of view.
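Both effects can be quantified directly from the Lorentz factor. The sketch below is an illustration with an arbitrary example speed:

```python
import math

def gamma(beta):
    return 1.0 / math.sqrt(1.0 - beta**2)

beta = 0.8                 # example: clock and rod moving at 0.8c
g = gamma(beta)            # 5/3 for beta = 0.8

proper_time = 1.0          # one tick of the moving clock, in its own frame
observed_time = g * proper_time        # dilated: ~1.667
proper_length = 1.0        # rest length of the moving rod
observed_length = proper_length / g    # contracted: 0.6

print(f"moving clock tick observed as {observed_time:.3f}")
print(f"moving rod length observed as {observed_length:.3f}")
# Each observer obtains the same factors for the *other* frame,
# reflecting the symmetry argued from the Loedel diagram.
```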
Constancy of the speed of light. Another postulate of special relativity is the constancy of the speed of light. It says that any observer in an inertial reference frame, measuring the vacuum speed of light relative to themself, obtains the same value regardless of their own motion and that of the light source. This statement may seem paradoxical, but it follows immediately from the differential equation describing light propagation, and the Minkowski diagram agrees. It also explains the result of the Michelson–Morley experiment, which was considered a mystery before the theory of relativity was discovered, when photons were thought to be waves through an undetectable medium. For world lines of photons passing the origin in different directions, "x" = "ct" and "x" = −"ct" hold. That means any position on such a world line corresponds with steps on the "x"- and "ct"-axes of equal absolute value. From the rule for reading off coordinates in a coordinate system with tilted axes, it follows that the two world lines are the angle bisectors of the "x"- and "ct"-axes. As shown in Fig 4-5, the Minkowski diagram illustrates them as being angle bisectors of the "x"′- and "ct"′-axes as well. That means both observers measure the same speed "c" for both photons. Further coordinate systems corresponding to observers with arbitrary velocities can be added to this Minkowski diagram. For all these systems both photon world lines represent the angle bisectors of the axes. The more the relative speed approaches the speed of light, the more the axes approach the corresponding angle bisector. The formula_12 axis is always flatter, and the time axis steeper, than the photon world lines. The scales on both axes are always identical, but usually different from those of the other coordinate systems.
Speed of light and causality. Straight lines passing the origin which are steeper than both photon world lines correspond with objects moving more slowly than the speed of light. If this applies to an object, then it applies from the viewpoint of all observers, because the world lines of these photons are the angle bisectors for any inertial reference frame. Therefore, any point above the origin and between the world lines of both photons can be reached with a speed smaller than that of light and can have a cause-and-effect relationship with the origin. This area is the absolute future, because any event there happens later compared to the event represented by the origin, regardless of the observer, which is obvious graphically from the Minkowski diagram in Fig 4-6. Following the same argument, the range below the origin and between the photon world lines is the absolute past relative to the origin. Any event there belongs definitely to the past and can be the cause of an effect at the origin. The relationship between any such pair of events is called "timelike", because they have a time distance greater than zero for all observers. A straight line connecting these two events is always the time axis of a possible observer for whom they happen at the same place. Two events which can be connected just with the speed of light are called "lightlike". In principle a further dimension of space can be added to the Minkowski diagram, leading to a three-dimensional representation. In this case the ranges of future and past become cones with apexes touching each other at the origin. They are called light cones.
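These causal categories can also be decided algebraically from the invariant interval s² = ("c"Δ"t")² − (Δ"x")², the quantity the diagram encodes; its sign is the same for all observers, which is the algebraic counterpart of the photon world lines bisecting the axes of every inertial frame. A small sketch (illustrative, in units with c = 1):

```python
def classify_interval(dt, dx, c=1.0):
    """Classify the separation of two events via s^2 = (c*dt)^2 - dx^2."""
    s2 = (c * dt)**2 - dx**2
    if s2 > 0:
        return "timelike"     # inside the light cone: causal relation possible
    if s2 < 0:
        return "spacelike"    # outside the light cone: no causal relation
    return "lightlike"        # on the light cone

print(classify_interval(2.0, 1.0))   # timelike
print(classify_interval(1.0, 2.0))   # spacelike
print(classify_interval(1.0, 1.0))   # lightlike
```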
The speed of light as a limit. Following the same argument, all straight lines passing through the origin which are more nearly horizontal than the photon world lines would correspond to objects or signals moving faster than light, regardless of the speed of the observer. Therefore, no event outside the light cones can be reached from the origin, even by a light-signal, nor by any object or signal moving with less than the speed of light. Such pairs of events are called "spacelike" because they have a finite spatial distance different from zero for all observers. On the other hand, a straight line connecting such events is always the space coordinate axis of a possible observer for whom they happen at the same time. By a slight variation of the velocity of this coordinate system in both directions, it is always possible to find two inertial reference frames whose observers estimate the chronological order of these events to be different. Given an object moving faster than light, say from O to A in Fig 4-7, then for any observer watching the object moving from O to A, another observer can be found (moving at less than the speed of light with respect to the first) for whom the object moves from A to O. The question of which observer is right has no unique answer, and therefore makes no physical sense. Any such moving object or signal would violate the principle of causality. Also, any general technical means of sending signals faster than light would permit information to be sent into the originator's own past. In the diagram, an observer at O in the "x"-"ct" system sends a message moving faster than light to A. At A, it is received by another observer, moving so as to be in the "x"′-"ct"′ system, who sends it back, again faster than light, arriving at B. But B is in the past relative to O. The absurdity of this process becomes obvious when both observers subsequently confirm that they received no message at all, but all messages were directed towards the other observer, as can be seen graphically in the Minkowski diagram. Furthermore, if it were possible to accelerate an observer to the speed of light, their space and time axes would coincide with their angle bisector. The coordinate system would collapse, in concordance with the fact that, due to time dilation, time would effectively stop passing for them. These considerations show that the speed of light as a limit is a consequence of the properties of spacetime, and not of the properties of objects such as technologically imperfect space ships. The prohibition of faster-than-light motion, therefore, has nothing in particular to do with electromagnetic waves or light, but comes as a consequence of the structure of spacetime.
Accelerating observers. It is often, incorrectly, asserted that special relativity cannot handle accelerating particles or accelerating reference frames. In reality, accelerating particles present no difficulty at all in special relativity. On the other hand, accelerating "frames" do require some special treatment. However, as long as one is dealing with flat, Minkowskian spacetime, special relativity can handle the situation. It is only in the presence of gravitation that general relativity is required. An accelerating particle's 4-vector acceleration is the derivative with respect to proper time of its 4-velocity. This is not a difficult situation to handle. Accelerating frames require that one understand the concept of a "momentarily comoving reference frame" (MCRF), which is to say, a frame traveling at the same instantaneous velocity as the particle at any given instant. Consider the animation in Fig 5-1. The curved line represents the world line of a particle that undergoes continuous acceleration, including complete changes of direction in the positive and negative x-directions. The red axes are the axes of the MCRF for each point along the particle's trajectory. The coordinates of events in the unprimed (stationary) frame can be related to their coordinates in any momentarily co-moving primed frame using the Lorentz transformations.
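As a concrete illustration of handling acceleration within special relativity, the sketch below numerically accumulates proper time along a world line, dτ = √(1 − v²) dt in units with c = 1, and reads off the MCRF velocity at each step. The oscillating velocity profile is invented for demonstration:

```python
import math

# Illustrative world line: velocity oscillates, loosely like the Fig 5-1
# animation. Units with c = 1; v(t) is an assumed profile, always below c.
def v(t):
    return 0.9 * math.sin(t)

dt = 1e-4
t, tau = 0.0, 0.0
while t < 2 * math.pi:
    beta = v(t)                            # MCRF velocity at this instant
    tau += math.sqrt(1.0 - beta**2) * dt   # d(tau) = dt / gamma
    t += dt

print(f"coordinate time elapsed: {2 * math.pi:.4f}")
print(f"proper time elapsed:     {tau:.4f}")   # less: the moving clock lags
```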
Fig 5-2 illustrates the changing views of spacetime along the world line of a rapidly accelerating particle. The formula_13 axis (not drawn) is vertical, while the formula_14 axis (not drawn) is horizontal. The dashed line is the spacetime trajectory ("world line") of the particle. The balls are placed at regular intervals of proper time along the world line. The solid diagonal lines are the light cones for the observer's current event, and they intersect at that event. The small dots are other arbitrary events in the spacetime. The slope of the world line (deviation from being vertical) is the velocity of the particle on that section of the world line. Bends in the world line represent particle acceleration. As the particle accelerates, its view of spacetime changes. These changes in view are governed by the Lorentz transformations. Also note that if one imagines each event to be the flashing of a light, then the events within the past light cone of the observer are the events visible to the observer, and that the slope of the world line gives the velocity relative to the observer.
Case of non-inertial reference frames. The photon world lines are determined using the metric with formula_15. The light cones are deformed according to the position. In an inertial reference frame a free particle has a straight world line. In a non-inertial reference frame the world line of a free particle is curved. Consider the example of the fall of an object dropped without initial velocity from a rocket. The rocket has a uniformly accelerated motion with respect to an inertial reference frame. As can be seen from Fig 6-2 of a Minkowski diagram in a non-inertial reference frame, the object, once dropped, gains speed, reaches a maximum, and then sees its speed decrease and asymptotically cancel on the horizon, where its proper time freezes at formula_16. The velocity is measured by an observer at rest in the accelerated rocket.
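The qualitative behaviour of the dropped object can be reproduced numerically. The sketch below rests on stated assumptions rather than on the article itself: units with c = 1, Rindler coordinates t = ξ sinh η, x = ξ cosh η for the uniformly accelerated rocket's frame, and an arbitrary release position ξ0 = 1. In the inertial frame the released object simply stays at x = ξ0; converting that world line to the rocket frame gives the coordinate velocity dξ/dη = −tξ/ξ0:

```python
import math

xi0 = 1.0                     # release position in the rocket frame (arbitrary)
print(" eta     xi    dxi/deta")
for k in range(1, 10):
    t = 0.1 * k * xi0         # inertial-frame time; horizon reached as t -> xi0
    xi = math.sqrt(xi0**2 - t**2)     # object's Rindler position
    eta = math.atanh(t / xi0)         # Rindler time at that event
    v = -t * xi / xi0                 # coordinate velocity in the rocket frame
    print(f"{eta:6.3f}  {xi:5.3f}  {v:7.3f}")
# |dxi/deta| grows, peaks at t = xi0/sqrt(2), then falls back toward zero as
# the object approaches the horizon xi = 0, matching the description above.
```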
[ { "math_id": 0, "text": "\\tan\\alpha = \\frac{v}{c} = \\beta." }, { "math_id": 1, "text": "\\begin{align}\n ct' &= \\gamma (ct - \\beta x),\\\\\n x' &= \\gamma (x - \\beta ct) \\\\\n\\end{align}" }, { "math_id": 2, "text": "\\gamma = \\left(1 - \\beta^2\\right)^{-\\frac{1}{2}}" }, { "math_id": 3, "text": "U' = U\\sqrt\\frac{1 + \\beta^2}{1 - \\beta^2}\\,." }, { "math_id": 4, "text": " t^2 - x^2 = 1 " }, { "math_id": 5, "text": "S" }, { "math_id": 6, "text": "S^\\prime" }, { "math_id": 7, "text": "\\begin{align}\n &(1) & \\beta &= \\frac{2\\beta_0}{1 + {\\beta_0}^2},\\\\[3pt]\n &(2) & \\beta_{0} &= \\frac{\\gamma - 1}{\\beta\\gamma}.\n\\end{align}" }, { "math_id": 8, "text": "\\begin{align}\n \\sin\\varphi = \\cos\\theta &= \\beta,\\\\\n \\cos\\varphi = \\sin\\theta &= \\frac{1}{\\gamma},\\\\\n \\tan\\varphi = \\cot\\theta &= \\beta\\gamma.\n\\end{align}" }, { "math_id": 9, "text": "U" }, { "math_id": 10, "text": "U^\\prime" }, { "math_id": 11, "text": "U^\\prime = U\\sqrt\\frac{1 + \\beta^2}{1 - \\beta^2}" }, { "math_id": 12, "text": "x" }, { "math_id": 13, "text": "ct'" }, { "math_id": 14, "text": "x'" }, { "math_id": 15, "text": "d \\tau = 0" }, { "math_id": 16, "text": "t_\\text{H}" } ]
https://en.wikipedia.org/wiki?curid=11647860
11649911
Prevalent and shy sets
In mathematics, the notions of prevalence and shyness are notions of "almost everywhere" and "measure zero" that are well-suited to the study of infinite-dimensional spaces and make use of the translation-invariant Lebesgue measure on finite-dimensional real spaces. The term "shy" was suggested by the American mathematician John Milnor. Definitions. Prevalence and shyness. Let formula_0 be a real topological vector space and let formula_1 be a Borel-measurable subset of formula_2 formula_1 is said to be prevalent if there exists a finite-dimensional subspace formula_3 of formula_4 called the probe set, such that for all formula_5 we have formula_6 for formula_7-almost all formula_8 where formula_7 denotes the formula_9-dimensional Lebesgue measure on formula_10 Put another way, for every formula_11 Lebesgue-almost every point of the hyperplane formula_12 lies in formula_13 A non-Borel subset of formula_0 is said to be prevalent if it contains a prevalent Borel subset. A Borel subset of formula_0 is said to be shy if its complement is prevalent; a non-Borel subset of formula_0 is said to be shy if it is contained within a shy Borel subset. An alternative, and slightly more general, definition is to define a set formula_1 to be shy if there exists a transverse measure for formula_1 (other than the trivial measure). Local prevalence and shyness. A subset formula_1 of formula_0 is said to be locally shy if every point formula_5 has a neighbourhood formula_14 whose intersection with formula_1 is a shy set. formula_1 is said to be locally prevalent if its complement is locally shy. Theorems involving prevalence and shyness. In the following, "almost every" is taken to mean that the stated property holds of a prevalent subset of the space in question.
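Although prevalence is designed for infinite-dimensional spaces, the probe-set condition can be illustrated in a finite-dimensional toy case. The sketch below is purely illustrative: it takes V = R², S = {(x, y) : x + y ≠ 0}, and the probe P = span{(1, 1)}; for any v, the translate v + t·(1, 1) misses S only at the single value t = −(v₁ + v₂)/2, so Lebesgue-almost every probe point lands in S:

```python
import random

# S = {(x, y) : x + y != 0} is prevalent in R^2 with probe P = span{(1, 1)}.
def in_S(x, y):
    return x + y != 0.0

for _ in range(5):
    v1, v2 = random.uniform(-5, 5), random.uniform(-5, 5)
    # Sample the probe parameter t on a grid and count misses:
    misses = sum(1 for k in range(-1000, 1000)
                 if not in_S(v1 + 0.01 * k, v2 + 0.01 * k))
    print(misses)   # 0 or 1 sampled points ever fall outside S
```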
[ { "math_id": 0, "text": "V" }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "V." }, { "math_id": 3, "text": "P" }, { "math_id": 4, "text": "V," }, { "math_id": 5, "text": "v \\in V" }, { "math_id": 6, "text": "v + p \\in S" }, { "math_id": 7, "text": "\\lambda_P" }, { "math_id": 8, "text": "p \\in P," }, { "math_id": 9, "text": "\\dim (P)" }, { "math_id": 10, "text": "P." }, { "math_id": 11, "text": "v \\in V," }, { "math_id": 12, "text": "v + P" }, { "math_id": 13, "text": "S." }, { "math_id": 14, "text": "N_v" }, { "math_id": 15, "text": "n" }, { "math_id": 16, "text": "\\R^n" }, { "math_id": 17, "text": "[0, 1]" }, { "math_id": 18, "text": "\\R" }, { "math_id": 19, "text": "C([0, 1]; \\R)" }, { "math_id": 20, "text": "f" }, { "math_id": 21, "text": "L^p" }, { "math_id": 22, "text": "L^1([0, 1]; \\R)" }, { "math_id": 23, "text": "\\int_0^1 f(x) \\, \\mathrm{d} x \\neq 0." }, { "math_id": 24, "text": "k" }, { "math_id": 25, "text": "C^k([0, 1]; \\R)." }, { "math_id": 26, "text": "1 < p \\leq +\\infty," }, { "math_id": 27, "text": "a = \\left(a_n\\right)_{n \\in \\N} \\in \\ell^p" }, { "math_id": 28, "text": "\\sum_{n \\in \\N} a_n" }, { "math_id": 29, "text": "M" }, { "math_id": 30, "text": "C^1" }, { "math_id": 31, "text": "d" }, { "math_id": 32, "text": "\\R^n." }, { "math_id": 33, "text": "1 \\leq k \\leq +\\infty," }, { "math_id": 34, "text": "C^k" }, { "math_id": 35, "text": "f : \\R^n \\to \\R^{2d+1}" }, { "math_id": 36, "text": "M." }, { "math_id": 37, "text": "A" }, { "math_id": 38, "text": "d," }, { "math_id": 39, "text": "m \\geq ," }, { "math_id": 40, "text": "f : \\R^n \\to \\R^m," }, { "math_id": 41, "text": "f(A)" }, { "math_id": 42, "text": "d." }, { "math_id": 43, "text": "f : \\R^n \\to \\R^n" }, { "math_id": 44, "text": "p" }, { "math_id": 45, "text": "p." } ]
https://en.wikipedia.org/wiki?curid=11649911
1165029
Collimator
Device which narrows or straightens a beam. A collimator is a device which narrows a beam of particles or waves. To narrow can mean either to cause the directions of motion to become more aligned in a specific direction (i.e., make collimated light or parallel rays), or to cause the spatial cross section of the beam to become smaller (beam limiting device). History. The English physicist Henry Kater was the inventor of the floating collimator, which rendered a great service to practical astronomy. He reported on his invention in January 1825. In his report, Kater mentioned previous work in this area by Carl Friedrich Gauss and Friedrich Bessel. Optical collimators. In optics, a collimator may consist of a curved mirror or lens with some type of light source and/or an image at its focus. This can be used to replicate a target focused at infinity with little or no parallax. In lighting, collimators are typically designed using the principles of nonimaging optics. Optical collimators can be used to calibrate other optical devices, to check if all elements are aligned on the optical axis, to set elements at proper focus, or to align two or more devices such as binoculars or gun barrels and gunsights. A surveying camera may be collimated by setting its fiducial markers so that they define the principal point, as in photogrammetry. Optical collimators are also used as gun sights in the collimator sight, which is a simple optical collimator with a crosshair or some other reticle at its focus. The viewer only sees an image of the reticle. They have to use it either with both eyes open and one eye looking into the collimator sight, with one eye open and moving the head to alternately see the sight and the target, or with one eye to partially see the sight and target at the same time. Adding a beam splitter allows the viewer to see the reticle and the field of view, making a reflector sight. Collimators may be used with laser diodes and CO2 cutting lasers. Proper collimation of a laser source with long enough coherence length can be verified with a shearing interferometer. X-ray, gamma ray, and neutron collimators. In X-ray optics, gamma ray optics, and neutron optics, a collimator is a device that filters a stream of rays so that only those traveling parallel to a specified direction are allowed through. Collimators are used for X-ray, gamma-ray, and neutron imaging because it is difficult to focus these types of radiation into an image using lenses, as is routine with electromagnetic radiation at optical or near-optical wavelengths. Collimators are also used in radiation detectors in nuclear power stations to make them directionally sensitive. Applications. The figure to the right illustrates how a Söller collimator is used in neutron and X-ray machines. The upper panel shows a situation where a collimator is not used, while the lower panel introduces a collimator. In both panels the source of radiation is to the right, and the image is recorded on the gray plate at the left of the panels. Without a collimator, rays from all directions will be recorded; for example, a ray that has passed through the top of the specimen (to the right of the diagram) but happens to be travelling in a downwards direction may be recorded at the bottom of the plate. The resultant image will be so blurred and indistinct as to be useless. In the lower panel of the figure, a collimator has been added (blue bars).
This may be a sheet of lead or other material opaque to the incoming radiation, with many tiny holes bored through it; or, in the case of neutrons, it can be a sandwich arrangement (which can be up to several feet long; see ENGIN-X) with many layers alternating between neutron-absorbing material (e.g., gadolinium) and neutron-transmitting material. The transmitting material can be something simple, such as air; alternatively, if mechanical strength is needed, a material such as aluminium may be used. If this forms part of a rotating assembly, the sandwich may be curved. This allows energy selection in addition to collimation; the curvature of the collimator and its rotation will present a straight path only to one energy of neutrons. Only rays that are travelling nearly parallel to the holes will pass through them; any others will be absorbed by hitting the plate surface or the side of a hole. This ensures that rays are recorded in their proper place on the plate, producing a clear image. For industrial radiography using gamma radiation sources such as iridium-192 or cobalt-60, a collimator (beam limiting device) allows the radiographer to control the exposure of radiation, exposing a film to create a radiograph and inspect materials for defects. A collimator in this instance is most commonly made of tungsten, and is rated according to how many half-value layers it contains, i.e., how many times it reduces undesirable radiation by half. For instance, the thinnest walls on the sides of a 4 HVL tungsten collimator will reduce the intensity of radiation passing through them by 88.5%. The shape of these collimators allows emitted radiation to travel freely toward the specimen and the x-ray film, while blocking most of the radiation that is emitted in undesirable directions such as toward workers. Limitations. Although collimators improve resolution, they also reduce intensity by blocking incoming radiation, which is undesirable for remote sensing instruments that require high sensitivity. For this reason, the gamma ray spectrometer on the Mars Odyssey is a non-collimated instrument. Most lead collimators let less than 1% of incident photons through. Attempts have been made to replace collimators with electronic analysis. In radiation therapy. Collimators (beam limiting devices) are used in linear accelerators for radiotherapy treatments. They help to shape the beam of radiation emerging from the machine and can limit the maximum field size of a beam. The treatment head of a linear accelerator consists of both a primary and a secondary collimator. The primary collimator is positioned after the electron beam has reached a vertical orientation. When using photons, it is placed after the beam has passed through the X-ray target. The secondary collimator is positioned after either a flattening filter (for photon therapy) or a scattering foil (for electron therapy). The secondary collimator consists of two jaws which can be moved to either enlarge or minimize the size of the treatment field. New systems involving multileaf collimators (MLCs) are used to further shape a beam to localise treatment fields in radiotherapy. MLCs consist of approximately 50–120 leaves of heavy metal collimator plates which slide into place to form the desired field shape.
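The half-value-layer rating used for industrial radiography above compounds multiplicatively: an "n"-HVL shield transmits (1/2)^"n" of the incident intensity. A quick illustrative sketch:

```python
def transmitted_fraction(n_hvl):
    """Fraction of radiation passing through n half-value layers."""
    return 0.5 ** n_hvl

for n in (1, 2, 3, 4):
    print(f"{n} HVL blocks {1 - transmitted_fraction(n):.1%}")
# 1 HVL blocks 50.0%, 2 HVL 75.0%, 3 HVL 87.5%, 4 HVL 93.8%
```

On this scale, the 88.5% attenuation quoted for the thinnest side walls corresponds to roughly 3.1 effective half-value layers, consistent with those walls being thinner than the collimator's nominal rating.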
Computing the spatial resolution. To find the spatial resolution of a parallel hole collimator with a hole length formula_0, a hole diameter formula_1, and a distance to the imaged object formula_2, the following formula can be used: formula_3 where the effective length is defined as formula_4 with formula_5 the linear attenuation coefficient of the material from which the collimator is made.
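The resolution formula is easy to evaluate. In the sketch below all dimensions are invented for illustration (they loosely resemble a gamma-camera collimator but are not taken from the article):

```python
def collimator_resolution(D, l, s, mu):
    """Spatial resolution R = D + D*s / l_eff, with l_eff = l - 2/mu.
    D: hole diameter, l: hole length, s: distance to the object (all in mm),
    mu: linear attenuation coefficient of the septal material in 1/mm.
    All numeric values below are assumed, not from the article."""
    l_eff = l - 2.0 / mu
    return D + D * s / l_eff

# Example: 1.5 mm holes, 25 mm long, lead-like septa (mu ~ 2.4 /mm assumed
# for ~140 keV photons), object 100 mm away.
print(f"R = {collimator_resolution(D=1.5, l=25.0, s=100.0, mu=2.4):.2f} mm")
# ~7.7 mm: resolution degrades linearly with distance from the collimator.
```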
[ { "math_id": 0, "text": "l" }, { "math_id": 1, "text": "D" }, { "math_id": 2, "text": "s" }, { "math_id": 3, "text": "R_\\text{collimator} = D + \\frac{Ds}{l_\\text{effective}}" }, { "math_id": 4, "text": "l_\\text{effective} = l - \\frac{2}{\\mu}" }, { "math_id": 5, "text": "\\mu" } ]
https://en.wikipedia.org/wiki?curid=1165029
1165182
Torus knot
Knot which lies on the surface of a torus in 3-dimensional space. In knot theory, a torus knot is a special kind of knot that lies on the surface of an unknotted torus in R3. Similarly, a torus link is a link which lies on the surface of a torus in the same way. Each torus knot is specified by a pair of coprime integers "p" and "q". A torus link arises if "p" and "q" are not coprime (in which case the number of components is gcd("p", "q")). A torus knot is trivial (equivalent to the unknot) if and only if either "p" or "q" is equal to 1 or −1. The simplest nontrivial example is the (2,3)-torus knot, also known as the trefoil knot.
Geometrical representation. A torus knot can be rendered geometrically in multiple ways which are topologically equivalent (see Properties below) but geometrically distinct. The convention used in this article and its figures is the following. The ("p","q")-torus knot winds "q" times around a circle in the interior of the torus, and "p" times around its axis of rotational symmetry. If "p" and "q" are not relatively prime, then we have a torus link with more than one component. The direction in which the strands of the knot wrap around the torus is also subject to differing conventions. The most common is to have the strands form a right-handed screw for "pq" > 0. The ("p","q")-torus knot can be given by the parametrization formula_0 where formula_1 and formula_2. This lies on the surface of the torus given by formula_3 (in cylindrical coordinates). Other parameterizations are also possible, because knots are defined up to continuous deformation. The illustrations for the (2,3)- and (3,8)-torus knots can be obtained by taking formula_4, and in the case of the (2,3)-torus knot by furthermore subtracting respectively formula_5 and formula_6 from the above parameterizations of "x" and "y". The latter generalizes smoothly to any coprime "p","q" satisfying formula_7.
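The parametrization of formula_0 is straightforward to evaluate. The following sketch samples points of the ("p","q")-torus knot and checks that they lie on the torus of formula_3; it is an illustration (plotting is left out):

```python
import math

def torus_knot_point(p, q, phi):
    """Point on the (p, q)-torus knot for parameter phi in (0, 2*pi)."""
    r = math.cos(q * phi) + 2.0
    x = r * math.cos(p * phi)
    y = r * math.sin(p * phi)
    z = -math.sin(q * phi)
    return x, y, z

# Sample the trefoil, the (2,3)-torus knot:
points = [torus_knot_point(2, 3, 2 * math.pi * k / 1000) for k in range(1, 1000)]

# Every sampled point lies on the torus (r - 2)^2 + z^2 = 1:
for x, y, z in points:
    r = math.hypot(x, y)
    assert abs((r - 2.0)**2 + z**2 - 1.0) < 1e-9
```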
The stretch factor of the ("p","q") torus knot, as a curve in Euclidean space, is Ω(min("p","q")), so torus knots have unbounded stretch factors. Undergraduate researcher John Pardon won the 2012 Morgan Prize for his research proving this result, which solved a problem originally posed by Mikhail Gromov. Connection to complex hypersurfaces. The ("p","q")−torus knots arise when considering the link of an isolated complex hypersurface singularity. One intersects the complex hypersurface with a hypersphere, centred at the isolated singular point, and with sufficiently small radius so that it does not enclose, nor encounter, any other singular points. The intersection gives a submanifold of the hypersphere. Let "p" and "q" be coprime integers, greater than or equal to two. Consider the holomorphic function formula_15 given by formula_16 Let formula_17 be the set of formula_18 such that formula_19 Given a real number formula_20 we define the real three-sphere formula_21 as given by formula_22 The function formula_23 has an isolated critical point at formula_24 since formula_25 if and only if formula_26 Thus, we consider the structure of formula_27 close to formula_28 In order to do this, we consider the intersection formula_29 This intersection is the so-called link of the singularity formula_30 The link of formula_31, where "p" and "q" are coprime, and both greater than or equal to two, is exactly the ("p","q")−torus knot. List. The figure on the right is torus link (72,4) . "g"-torus knot. A g-torus knot is a closed curve drawn on a g-torus. More technically, it is the homeomorphic image of a circle in S³ which can be realized as a subset of a genus "g" handlebody in S³ (whose complement is also a genus "g" handlebody). If a link is a subset of a genus two handlebody, it is a double torus link. For genus two, the simplest example of a double torus knot that is not a torus knot is the figure-eight knot. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\begin{align}\n x &= r\\cos(p\\phi) \\\\\n y &= r\\sin(p\\phi) \\\\\n z &= -\\sin(q\\phi)\n\\end{align}" }, { "math_id": 1, "text": "r = \\cos(q\\phi)+2" }, { "math_id": 2, "text": "0<\\phi<2\\pi" }, { "math_id": 3, "text": "(r-2)^2 + z^2 = 1" }, { "math_id": 4, "text": "r = \\cos(q\\phi)+4" }, { "math_id": 5, "text": "3\\cos((p-q)\\phi)" }, { "math_id": 6, "text": "3\\sin((p-q)\\phi)" }, { "math_id": 7, "text": "p<q<2p" }, { "math_id": 8, "text": "(\\sigma_1\\sigma_2\\cdots\\sigma_{p-1})^q." }, { "math_id": 9, "text": "g = \\frac{1}{2}(p-1)(q-1)." }, { "math_id": 10, "text": "t^k\\frac{(t^{pq}-1)(t-1)}{(t^p-1)(t^q-1)}," }, { "math_id": 11, "text": "k=-\\frac{(p-1)(q-1)}{2}." }, { "math_id": 12, "text": "t^{(p-1)(q-1)/2}\\frac{1-t^{p+1}-t^{q+1}+t^{p+q}}{1-t^2}." }, { "math_id": 13, "text": "\\langle x,y \\mid x^p = y^q\\rangle." }, { "math_id": 14, "text": "x^p = y^q" }, { "math_id": 15, "text": " f: \\Complex^2 \\to \\Complex" }, { "math_id": 16, "text": "f(w,z) := w^p + z^q." }, { "math_id": 17, "text": "V_f \\subset \\Complex^2" }, { "math_id": 18, "text": "(w,z) \\in \\Complex^2" }, { "math_id": 19, "text": "f(w,z) = 0." }, { "math_id": 20, "text": "0 < \\varepsilon \\ll 1, " }, { "math_id": 21, "text": "\\mathbb{S}^3_{\\varepsilon} \\subset \\R^4 \\hookrightarrow \\Complex^2" }, { "math_id": 22, "text": "|w|^2 + |z|^2 = \\varepsilon^2." }, { "math_id": 23, "text": "f" }, { "math_id": 24, "text": "(0,0) \\in \\Complex^2" }, { "math_id": 25, "text": "\\partial f/\\partial w = \\partial f/ \\partial z = 0" }, { "math_id": 26, "text": "w = z = 0." }, { "math_id": 27, "text": "V_f" }, { "math_id": 28, "text": "(0,0) \\in \\Complex^2." }, { "math_id": 29, "text": "V_f \\cap \\mathbb{S}^3_{\\varepsilon} \\subset \\mathbb{S}^3_{\\varepsilon}." }, { "math_id": 30, "text": "f(w,z) = w^p + z^q." }, { "math_id": 31, "text": "f(w,z) = w^p + z^q" } ]
https://en.wikipedia.org/wiki?curid=1165182
1165244
Chronon
Hypothetical quantum of time. A chronon is a proposed quantum of time, that is, a discrete and indivisible "unit" of time as part of a hypothesis that proposes that time is not continuous. In simple language, a chronon is the smallest, discrete, non-decomposable unit of time in a temporal data model. In a one-dimensional model, a chronon is a "time interval" or "period", while in an "n"-dimensional model it is a non-decomposable region in "n"-dimensional time. Important special types of chronons include valid-time, transaction-time, and bitemporal chronons. It is not easy to see how it could possibly be recast so as to postulate only a discrete spacetime (or even a merely dense one). For a set of instants to be dense, every instant not in the set must have a sequence of instants in the set that converge (get arbitrarily close) to it. For it to be a continuum, however, something more is required: that every set of instants earlier (later) than any given one should have a tight upper (lower) bound that is also an instant (see least upper bound property). It is continuity that enables modern mathematics to surmount the paradox of extension framed by the pre-Socratic eleatic Zeno, a paradox comprising the question of how a finite interval can be made up of dimensionless points or instants. Early work. While time is a continuous quantity in both standard quantum mechanics and general relativity, many physicists have suggested that a discrete model of time might work, especially when considering the combination of quantum mechanics with general relativity to produce a theory of quantum gravity. The term was introduced in this sense by Robert Lévi in 1927. A quantum theory in which time is a quantum variable with a discrete spectrum, and which is nevertheless consistent with special relativity, was proposed by Chen Ning Yang in 1947. Henry Margenau in 1950 suggested that the chronon might be the time for light to travel the classical radius of an electron. Work by Caldirola. A prominent model was introduced by Piero Caldirola in 1980. In Caldirola's model, one chronon corresponds to about 6.27×10⁻²⁴ seconds for an electron. This is much longer than the Planck time, which is only about 5.39×10⁻⁴⁴ seconds. The Planck time may be postulated as a lower bound on the length of time that could exist between two connected events, but it is not a quantization of time itself, since there is no requirement that the time between two events be separated by a discrete number of Planck times. For example, ordered pairs of events (A, B) and (B, C) could each be separated by slightly more than 1 Planck time: this would produce a measurement limit of 1 Planck time between A and B or B and C, but a limit of 3 Planck times between A and C. The chronon is a quantization of the evolution in a system along its world line. Consequently, the value of the chronon, like other quantized observables in quantum mechanics, is a function of the system under consideration, particularly its boundary conditions. The value for the chronon, "θ"0, is calculated as formula_0 From this formula, it can be seen that the nature of the moving particle being considered must be specified, since the value of the chronon depends on the particle's charge and mass. Caldirola claims that the chronon has important implications for quantum mechanics, in particular that it allows for a clear answer to the question of whether a free-falling charged particle does or does not emit radiation.
This model supposedly avoids the difficulties met by Abraham–Lorentz's and Dirac's approaches to the problem and provides a natural explication of quantum decoherence.
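The numerical value quoted above can be checked directly from formula_0, using rounded CODATA constants (a sketch, not part of the article):

```python
import math

# SI constants (rounded CODATA values):
e    = 1.602176634e-19     # elementary charge, C
m0   = 9.1093837015e-31    # electron rest mass, kg
c    = 2.99792458e8        # speed of light, m/s
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

theta0 = e**2 / (6 * math.pi * eps0 * m0 * c**3)
print(f"theta0 = {theta0:.3e} s")   # ~6.27e-24 s, matching the quoted value
```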
[ { "math_id": 0, "text": " \\theta_0 = \\frac{1}{6\\pi\\epsilon_0} \\frac{e^2}{m_0c^3}." } ]
https://en.wikipedia.org/wiki?curid=1165244
11652693
Graphene nanoribbon
Carbon allotrope. Graphene nanoribbons (GNRs, also called nano-graphene ribbons or nano-graphite ribbons) are strips of graphene with width less than 100 nm. Graphene ribbons were introduced as a theoretical model by Mitsutaka Fujita and coauthors to examine the edge and nanoscale size effect in graphene. Production. Nanotomy. Large quantities of width-controlled GNRs can be produced via graphite nanotomy, where applying a sharp diamond knife on graphite produces graphite nanoblocks, which can then be exfoliated to produce GNRs, as shown by Vikas Berry. GNRs can also be produced by "unzipping" or axially cutting nanotubes. In one such method, multi-walled carbon nanotubes were unzipped in solution by the action of potassium permanganate and sulfuric acid. In another method, GNRs were produced by plasma etching of nanotubes partly embedded in a polymer film. More recently, graphene nanoribbons were grown onto silicon carbide (SiC) substrates using ion implantation followed by vacuum or laser annealing. The latter technique allows any pattern to be written on SiC substrates with 5 nm precision. Epitaxy. GNRs were grown on the edges of three-dimensional structures etched into silicon carbide wafers. When the wafers are heated to approximately 1,000 °C, silicon is preferentially driven off along the edges, forming nanoribbons whose structure is determined by the pattern of the three-dimensional surface. The ribbons had perfectly smooth edges, annealed by the fabrication process. Electron mobility measurements surpassing one million correspond to a sheet resistance of one ohm per square — two orders of magnitude lower than in two-dimensional graphene. Chemical vapor deposition. Nanoribbons narrower than 10 nm grown on a germanium wafer act like semiconductors, exhibiting a band gap. Inside a reaction chamber, using chemical vapor deposition, methane is used to deposit hydrocarbons on the wafer surface, where they react with each other to produce long, smooth-edged ribbons. The ribbons were used to create prototype transistors. At a very slow growth rate, the graphene crystals naturally grow into long nanoribbons on a specific germanium crystal facet. By controlling the growth rate and growth time, the researchers achieved control over the nanoribbon width. Recently, researchers from SIMIT (Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences) reported a strategy to grow graphene nanoribbons with controlled widths and smooth edges directly onto dielectric hexagonal boron nitride (h-BN) substrates. The team used nickel nanoparticles to etch monolayer-deep, nanometre-wide trenches into h-BN, and subsequently filled them with graphene using chemical vapour deposition. Modifying the etching parameters allows the width of the trench to be tuned to less than 10 nm, and the resulting sub-10-nm ribbons display bandgaps of almost 0.5 eV. Integrating these nanoribbons into field effect transistor devices reveals on–off ratios of greater than 10⁴ at room temperature, as well as high carrier mobilities of ~750 cm² V⁻¹ s⁻¹. Multistep nanoribbon synthesis. A bottom-up approach was investigated. In 2017 dry contact transfer was used to press a fiberglass applicator coated with a powder of atomically precise graphene nanoribbons on a hydrogen-passivated Si(100) surface under vacuum. 80 of 115 GNRs visibly obscured the substrate lattice with an average apparent height of 0.30 nm. The GNRs do not align to the Si lattice, indicating a weak coupling.
The average bandgap over 21 GNRs was 2.85 eV with a standard deviation of 0.13 eV. The method unintentionally overlapped some nanoribbons, allowing the study of multilayer GNRs. Such overlaps could be formed deliberately by manipulation with a scanning tunneling microscope. Hydrogen depassivation left no band-gap. Covalent bonds between the Si surface and the GNR lead to metallic behavior. The Si surface atoms move outward, and the GNR changes from flat to distorted, with some C atoms moving in toward the Si surface. Electronic structure. The electronic states of GNRs largely depend on the edge structures (armchair or zigzag). In zigzag edges each successive edge segment is at the opposite angle to the previous. In armchair edges, each pair of segments is a 120/−120 degree rotation of the prior pair. Zigzag edges provide the edge-localized state with non-bonding molecular orbitals near the Fermi energy. They are expected to have large changes in optical and electronic properties from quantization. Calculations based on tight binding theory predict that zigzag GNRs are always metallic while armchairs can be either metallic or semiconducting, depending on their width. However, density functional theory (DFT) calculations show that armchair nanoribbons are semiconducting, with an energy gap scaling with the inverse of the GNR width. Experiments verified that energy gaps increase with decreasing GNR width. Graphene nanoribbons with controlled edge orientation have been fabricated by scanning tunneling microscope (STM) lithography. Energy gaps up to 0.5 eV in a 2.5 nm wide armchair ribbon were reported. Zigzag nanoribbons are also semiconducting and present spin-polarized edges. Their gap opens thanks to an unusual antiferromagnetic coupling between the magnetic moments at opposite edge carbon atoms. This gap size is inversely proportional to the ribbon width and its behavior can be traced back to the spatial distribution properties of edge-state wave functions, and the mostly local character of the exchange interaction that originates the spin polarization. Therefore, the quantum confinement, inter-edge superexchange, and intra-edge direct exchange interactions in zigzag GNRs are important for their magnetism and band gap. The edge magnetic moment and band gap of zigzag GNRs are inversely proportional to the electron/hole concentration, and they can be controlled by alkaline adatoms. Their 2D structure, high electrical and thermal conductivity, and low noise also make GNRs a possible alternative to copper for integrated circuit interconnects. Research is exploring the creation of quantum dots by changing the width of GNRs at select points along the ribbon, creating quantum confinement. Heterojunctions inside single graphene nanoribbons have been realized, including structures that have been shown to function as tunnel barriers. Graphene nanoribbons possess semiconductive properties and may be a technological alternative to silicon semiconductors, capable of sustaining microprocessor clock speeds in the vicinity of 1 THz. Field-effect transistors less than 10 nm wide have been created with GNRs ("GNRFETs") with an Ion/Ioff ratio >10⁶ at room temperature.
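The inverse-width gap scaling can be illustrated numerically. The sketch below assumes the simple model E_g = A/w and fixes A from the data point quoted above (0.5 eV at 2.5 nm); both the model form and the fitted constant are illustrative assumptions, not values from the source:

```python
# Assumed model: E_g = A / w for armchair ribbons (illustration only).
A = 0.5 * 2.5    # eV*nm, fixed from the reported 0.5 eV gap at 2.5 nm width

for w in (1.0, 2.5, 5.0, 10.0):          # ribbon widths in nm
    print(f"w = {w:4.1f} nm -> E_g ~ {A / w:.2f} eV")
# Wider ribbons approach the gapless limit of two-dimensional graphene.
```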
Mechanical properties. While it is difficult to prepare graphene nanoribbons with precise geometry to conduct a real tensile test, due to the limited resolution at the nanometer scale, the mechanical properties of the two most common graphene nanoribbons (zigzag and armchair) were investigated by computational modeling using density functional theory, molecular dynamics, and the finite element method. Since the two-dimensional graphene sheet with strong bonding is known to be one of the stiffest materials, the Young's modulus of graphene nanoribbons also has a value of over 1 TPa. The Young's modulus, shear modulus, and Poisson's ratio of graphene nanoribbons differ with varying sizes (with different length and width) and shapes. These mechanical properties are anisotropic and are usually discussed in two in-plane directions, parallel and perpendicular to the one-dimensional periodic direction. Mechanical properties here differ slightly from those of two-dimensional graphene sheets because of the distinct geometry, bond length, and bond strength, particularly at the edge of graphene nanoribbons. It is possible to tune these nanomechanical properties with further chemical doping to change the bonding environment at the edge of graphene nanoribbons. While increasing the width of graphene nanoribbons, the mechanical properties converge to the values measured on graphene sheets. One analysis predicted the high Young's modulus for armchair graphene nanoribbons to be around 1.24 TPa by the molecular dynamics method. The same analysis also showed nonlinear elastic behavior, with higher-order terms in the stress-strain curve. In the higher strain region, even higher-order terms (>3) are needed to fully describe the nonlinear behavior. Other scientists also reported the nonlinear elasticity by the finite element method, and found that the Young's modulus, tensile strength, and ductility of armchair graphene nanoribbons are all greater than those of zigzag graphene nanoribbons. Another report predicted linear elasticity for strains between −0.02 and 0.02 on zigzag graphene nanoribbons by a density functional theory model. Within the linear region, the electronic properties would be relatively stable under the slightly changing geometry. The energy gaps increase from −0.02 eV to 0.02 eV for strains between −0.02 and 0.02, which provides feasibility for future engineering applications. The tensile strength of armchair graphene nanoribbons is 175 GPa with a great ductility of 30.26% fracture strain, which shows greater mechanical properties compared to the values of 130 GPa and 25% experimentally measured on monolayer graphene. As expected, graphene nanoribbons with smaller width would break down sooner, since the ratio of the weaker edge bonds increases. When the tensile strain on graphene nanoribbons reaches its maximum, C-C bonds start to break and then form much bigger rings, making the material weaker until fracture.
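The reported figures themselves show how strong the nonlinearity is: a purely linear extrapolation of the 1.24 TPa modulus to the 30.26% fracture strain would predict far more than the reported 175 GPa strength. A quick check, using only the numbers quoted above:

```python
E = 1.24e12             # Young's modulus of armchair GNRs, Pa (reported)
eps_fracture = 0.3026   # reported fracture strain
sigma_reported = 175e9  # reported tensile strength, Pa

sigma_linear = E * eps_fracture        # what pure Hooke's law would give
print(f"linear prediction: {sigma_linear / 1e9:.0f} GPa")   # ~375 GPa
print(f"reported strength: {sigma_reported / 1e9:.0f} GPa") # 175 GPa
# The large shortfall reflects the higher-order (softening) terms in the
# stress-strain curve noted above.
```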
Optical properties. The earliest numerical results on the optical properties of graphene nanoribbons were obtained by Lin and Shyu in 2000. The different selection rules for optical transitions in graphene nanoribbons with armchair and zigzag edges were reported. These results were supplemented by a comparative study of zigzag nanoribbons with single-wall armchair carbon nanotubes by Hsu and Reichl in 2007. It was demonstrated that selection rules in zigzag ribbons are different from those in carbon nanotubes, and that the eigenstates in zigzag ribbons can be classified as either symmetric or antisymmetric. Also, it was predicted that edge states should play an important role in the optical absorption of zigzag nanoribbons. Optical transitions between the edge and bulk states should enrich the low-energy region (formula_0 eV) of the absorption spectrum by strong absorption peaks. An analytical derivation of the numerically obtained selection rules was presented in 2011. The selection rule for incident light polarized longitudinally to the zigzag ribbon axis is that formula_1 is odd, where formula_2 and formula_3 number the energy bands, while for perpendicular polarization formula_4 is even. Intraband (intersubband) transitions between the conduction (valence) sub-bands are also allowed if formula_4 is even. For graphene nanoribbons with armchair edges the selection rule is formula_5. As in the tubes, intersubband transitions are forbidden for armchair graphene nanoribbons. Despite the different selection rules in single-wall armchair carbon nanotubes and zigzag graphene nanoribbons, a hidden correlation of the absorption peaks is predicted. The correlation of the absorption peaks in tubes and ribbons should take place when the number of atoms in the tube unit cell formula_6 is related to the number of atoms in the zigzag ribbon unit cell formula_7 as follows: formula_8, which is the so-called matching condition for the periodic and hard-wall boundary conditions. These results, obtained within the nearest-neighbor approximation of the tight-binding model, have been corroborated with first-principles density functional theory calculations taking into account exchange and correlation effects. First-principles calculations with quasiparticle corrections and many-body effects have explored the electronic and optical properties of graphene-based materials. Within the GW approximation, the properties of graphene-based materials have been accurately investigated, including graphene nanoribbons, edge- and surface-functionalized armchair graphene nanoribbons, and scaling properties in armchair graphene nanoribbons.
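The parity selection rules and the matching condition of formula_8 are simple enough to encode directly. A sketch (illustrative; the band indices are abstract labels here):

```python
def zigzag_transition_allowed(j1, j2, polarization):
    """Selection rules for zigzag GNRs: longitudinal light requires an odd
    band-index difference; perpendicular light requires an even one."""
    delta = abs(j2 - j1)
    if polarization == "longitudinal":
        return delta % 2 == 1
    if polarization == "perpendicular":
        return delta % 2 == 0
    raise ValueError(polarization)

def matching_condition(n_tube_atoms, n_ribbon_atoms):
    """Peak correlation between armchair tubes and zigzag ribbons:
    N_t = 2 * N_r + 4."""
    return n_tube_atoms == 2 * n_ribbon_atoms + 4

print(zigzag_transition_allowed(1, 2, "longitudinal"))   # True (odd)
print(zigzag_transition_allowed(1, 3, "longitudinal"))   # False (even)
print(matching_condition(24, 10))                        # True: 24 = 2*10 + 4
```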
Analyses. Graphene nanoribbons can be analyzed by scanning tunneling microscopy, Raman spectroscopy, infrared spectroscopy, and X-ray photoelectron spectroscopy. For example, in calculated IR spectra, the out-of-plane bending vibration of one C-H on one benzene ring (called SOLO, which is similar to a zigzag edge) has been reported to appear at 899 cm⁻¹ for zigzag GNRs, whereas that of two C-H on one benzene ring (called DUO, which is similar to an armchair edge) has been reported to appear at 814 cm⁻¹ for armchair GNRs. However, analyses of graphene nanoribbons on substrates are difficult using infrared spectroscopy, even with a reflection absorption spectrometry method. Thus, a large quantity of graphene nanoribbons is necessary for infrared spectroscopy analyses.
Reactivity. Zigzag edges are known to be more reactive than armchair edges, as observed in the dehydrogenation reactivities between the compound with zigzag edges (tetracene) and the compound with armchair edges (chrysene). Also, zigzag edges tend to be more oxidized than armchair edges without gasification. Zigzag edges of longer length can be more reactive, as can be seen from the dependence of the reactivity of acenes on their length.
Applications. Polymeric nanocomposites. Graphene nanoribbons and their oxidized counterparts, called graphene oxide nanoribbons, have been investigated as nano-fillers to improve the mechanical properties of polymeric nanocomposites. Increases in the mechanical properties of epoxy composites on loading of graphene nanoribbons were observed. An increase in the mechanical properties of biodegradable polymeric nanocomposites of poly(propylene fumarate) at low weight percentages was achieved by loading of oxidized graphene nanoribbons, fabricated for bone tissue engineering applications.
Contrast agent for bioimaging. Hybrid imaging modalities, such as photoacoustic (PA) tomography (PAT) and thermoacoustic (TA) tomography (TAT), have been developed for bioimaging applications. PAT/TAT combines the advantages of pure ultrasound and pure optical imaging/radio frequency (RF), providing good spatial resolution, great penetration depth, and high soft-tissue contrast. GNRs synthesized by unzipping single- and multi-walled carbon nanotubes have been reported as contrast agents for photoacoustic and thermoacoustic imaging and tomography.
Catalysis. In catalysis, GNRs offer several advantageous features that make them attractive as catalysts or catalyst supports. Firstly, their high surface-to-volume ratio provides abundant active sites for catalytic reactions. This enhanced surface area enables efficient interaction with reactant molecules, leading to improved catalytic performance. Secondly, the edge structure of GNRs plays a crucial role in catalysis. The zigzag and armchair edges of GNRs possess distinctive electronic properties, making them suitable for specific reactions. For instance, the presence of unsaturated carbon atoms at the edges can serve as active sites for adsorption and reaction of various molecules. Moreover, GNRs can be functionalized or doped with heteroatoms to tailor their catalytic properties further. Functionalization with specific groups or doping with elements like silicon, nitrogen, boron, or transition metals can introduce additional active sites or modify the electronic structure, allowing for selective catalytic transformations.
[ { "math_id": 0, "text": "<3" }, { "math_id": 1, "text": " \\Delta J = J_2 - J_1" }, { "math_id": 2, "text": " J_{1}" }, { "math_id": 3, "text": " J_{2}" }, { "math_id": 4, "text": "\\Delta J = J_2 - J_1" }, { "math_id": 5, "text": "\\Delta J = J_2 - J_1 = 0" }, { "math_id": 6, "text": "N_t" }, { "math_id": 7, "text": "N_r" }, { "math_id": 8, "text": "N_t = 2 N_r + 4" } ]
https://en.wikipedia.org/wiki?curid=11652693
1165549
N-vector model
In statistical mechanics, the "n"-vector model or O("n") model is a simple system of interacting spins on a crystalline lattice. It was developed by H. Eugene Stanley as a generalization of the Ising model, XY model and Heisenberg model. In the "n"-vector model, "n"-component unit-length classical spins formula_0 are placed on the vertices of a "d"-dimensional lattice. The Hamiltonian of the "n"-vector model is given by: formula_1 where the sum runs over all pairs of neighboring spins formula_2 and formula_3 denotes the standard Euclidean inner product. Special cases of the "n"-vector model are: formula_4: The self-avoiding walk formula_5: The Ising model formula_6: The XY model formula_7: The Heisenberg model formula_8: Toy model for the Higgs sector of the Standard Model The general mathematical formalism used to describe and solve the "n"-vector model and certain generalizations are developed in the article on the Potts model. Reformulation as a loop model. In a small coupling expansion, the weight of a configuration may be rewritten as formula_9 Integrating over the vector formula_0 gives rise to expressions such as formula_10 which is interpreted as a sum over the 3 possible ways of connecting the vertices formula_11 pairwise using 2 lines going through vertex formula_12. Integrating over all vectors, the corresponding lines combine into closed loops, and the partition function becomes a sum over loop configurations: formula_13 where formula_14 is the set of loop configurations, with formula_15 the number of loops in the configuration formula_16, and formula_17 the total number of lattice edges. In two dimensions, it is common to assume that loops do not cross: either by choosing the lattice to be trivalent, or by considering the model in a dilute phase where crossings are irrelevant, or by forbidding crossings by hand. The resulting model of non-intersecting loops can then be studied using powerful algebraic methods, and its spectrum is exactly known. Moreover, the model is closely related to the random cluster model, which can also be formulated in terms of non-crossing loops. Much less is known in models where loops are allowed to cross, and in higher than two dimensions. Continuum limit. The continuum limit can be understood to be the sigma model. This can be easily obtained by writing the Hamiltonian in terms of the product formula_18 where formula_19 is the "bulk magnetization" term. Dropping this term as an overall constant factor added to the energy, the limit is obtained by defining the Newton finite difference as formula_20 on neighboring lattice locations formula_21 Then formula_22 in the limit formula_23, where formula_24 is the gradient in the formula_25 direction. Thus, in the limit, formula_26 which can be recognized as the kinetic energy of the field formula_27 in the sigma model. One still has two possibilities for the spin formula_27: it is either taken from a discrete set of spins (the Potts model) or it is taken as a point on the sphere formula_28; that is, formula_27 is a continuously-valued vector of unit length. In the later case, this is referred to as the formula_29 non-linear sigma model, as the rotation group formula_29 is group of isometries of formula_28, and obviously, formula_28 isn't "flat", "i.e." isn't a linear field. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbf{s}_i" }, { "math_id": 1, "text": "H = K{\\sum}_{\\langle i,j \\rangle}\\mathbf{s}_i \\cdot \\mathbf{s}_j" }, { "math_id": 2, "text": "\\langle i, j \\rangle" }, { "math_id": 3, "text": "\\cdot" }, { "math_id": 4, "text": "n=0" }, { "math_id": 5, "text": "n=1" }, { "math_id": 6, "text": "n=2" }, { "math_id": 7, "text": "n=3" }, { "math_id": 8, "text": "n=4" }, { "math_id": 9, "text": "\ne^H \\underset{K\\to 0}{\\sim} \\prod_{\\langle i,j \\rangle}\\left(1+K\\mathbf{s}_i \\cdot \\mathbf{s}_j \\right)\n" }, { "math_id": 10, "text": "\n\\int d\\mathbf{s}_i\\ \\prod_{j=1}^4\\left(\\mathbf{s}_i \\cdot \\mathbf{s}_j\\right) \n= \\left(\\mathbf{s}_1\\cdot \\mathbf{s}_2\\right)\\left(\\mathbf{s}_3\\cdot \\mathbf{s}_4\\right)\n+ \\left(\\mathbf{s}_1\\cdot \\mathbf{s}_4\\right)\\left(\\mathbf{s}_2\\cdot \\mathbf{s}_3\\right)\n+ \\left(\\mathbf{s}_1\\cdot \\mathbf{s}_3\\right)\\left(\\mathbf{s}_2\\cdot \\mathbf{s}_4\\right)\n" }, { "math_id": 11, "text": "1,2,3,4" }, { "math_id": 12, "text": "i" }, { "math_id": 13, "text": "\nZ = \\sum_{L\\in\\mathcal{L}} K^{E(L)}n^{|L|}\n" }, { "math_id": 14, "text": "\\mathcal{L}" }, { "math_id": 15, "text": "|L|" }, { "math_id": 16, "text": "L" }, { "math_id": 17, "text": "E(L)" }, { "math_id": 18, "text": "-\\tfrac{1}{2}(\\mathbf{s}_i - \\mathbf{s}_j) \\cdot (\\mathbf{s}_i - \\mathbf{s}_j) = \\mathbf{s}_i \\cdot \\mathbf{s}_j - 1" }, { "math_id": 19, "text": "\\mathbf{s}_i \\cdot \\mathbf{s}_i=1" }, { "math_id": 20, "text": "\\delta_h[\\mathbf{s}](i,j)=\\frac{\\mathbf{s}_i - \\mathbf{s}_j}{h}" }, { "math_id": 21, "text": "i,j." }, { "math_id": 22, "text": "\\delta_h[\\mathbf{s}]\\to\\nabla_\\mu\\mathbf{s}" }, { "math_id": 23, "text": "h\\to 0" }, { "math_id": 24, "text": "\\nabla_\\mu" }, { "math_id": 25, "text": "(i,j)\\to\\mu" }, { "math_id": 26, "text": "-\\mathbf{s}_i\\cdot \\mathbf{s}_j\\to \\tfrac{1}{2}\\nabla_\\mu\\mathbf{s} \\cdot \\nabla_\\mu\\mathbf{s}" }, { "math_id": 27, "text": "\\mathbf{s}" }, { "math_id": 28, "text": "S^{n-1}" }, { "math_id": 29, "text": "O(n)" } ]
https://en.wikipedia.org/wiki?curid=1165549
11655832
Transverse measure
In mathematics, a measure on a real vector space is said to be transverse to a given set if it assigns measure zero to every translate of that set, while assigning finite and positive (i.e. non-zero) measure to some compact set. Definition. Let "V" be a real vector space together with a metric space structure with respect to which it is complete. A Borel measure "μ" is said to be transverse to a Borel-measurable subset "S" of "V" if there exists a compact subset "K" of "V" with 0 < "μ"("K") < +∞, and if "μ"("v" + "S") = 0 for every "v" in "V", where formula_0 is the translate of "S" by "v". The first requirement ensures that, for example, the trivial measure is not considered to be a transverse measure. Example. As an example, take "V" to be the Euclidean plane R2 with its usual Euclidean norm/metric structure. Define a measure "μ" on R2 by setting "μ"("E") to be the one-dimensional Lebesgue measure of the intersection of "E" with the first coordinate axis: formula_1 An example of a compact set "K" with positive and finite "μ"-measure is "K" = "B"1(0), the closed unit ball about the origin, which has "μ"("K") = 2. Now take the set "S" to be the second coordinate axis. Any translate ("v"1, "v"2) + "S" of "S" will meet the first coordinate axis in precisely one point, ("v"1, 0). Since a single point has Lebesgue measure zero, "μ"(("v"1, "v"2) + "S") = 0, and so "μ" is transverse to "S". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
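A small numerical illustration of this example (an assumption-laden sketch: sets are represented by indicator functions, and the one-dimensional Lebesgue measure along the first coordinate axis is approximated by a Riemann sum on a finite grid):

```python
import numpy as np

def mu(indicator, x_max=10.0, num=200_001):
    """Approximate mu(E) = lambda^1({x : (x, 0) in E}) by a Riemann sum."""
    xs = np.linspace(-x_max, x_max, num)
    dx = xs[1] - xs[0]
    return np.sum(indicator(xs, np.zeros_like(xs))) * dx

closed_unit_ball = lambda x, y: x**2 + y**2 <= 1.0  # the compact set K
translated_S = lambda x, y: np.isclose(x, 0.3)      # (0.3, v2) + S, S the y-axis

print(mu(closed_unit_ball))  # ~2: positive and finite
print(mu(translated_S))      # ~0: a single point has Lebesgue measure zero
```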
[ { "math_id": 0, "text": "v + S = \\{ v + s \\in V | s \\in S \\}" }, { "math_id": 1, "text": "\\mu (E)= \\lambda^{1} \\big( \\{ x \\in \\mathbf{R} | (x, 0) \\in E \\subseteq \\mathbf{R}^{2} \\} \\big)." } ]
https://en.wikipedia.org/wiki?curid=11655832
1165668
Ewald's sphere
Energy conservation during diffraction by atoms The Ewald sphere is a geometric construction used in electron, neutron, and x-ray diffraction which shows the relationship between the wavevector of the incident and diffracted beams, the diffraction angle for a given reflection, and the reciprocal lattice of the crystal. It was conceived by Paul Peter Ewald, a German physicist and crystallographer. Ewald himself spoke of the sphere of reflection. It is often simplified to the two-dimensional "Ewald's circle" model or may be referred to as the Ewald sphere. Ewald construction. A crystal can be described as a lattice of atoms, which in turn leads to the reciprocal lattice. With electrons, neutrons or x-rays there is diffraction by the atoms, and if there is an incident plane wave formula_0 with a wavevector formula_1, there will be outgoing wavevectors formula_2 and formula_3 after the wave has been diffracted by the atoms, as shown in the diagram. The energy of the waves (electron, neutron or x-ray) depends upon the magnitude of the wavevector, so if there is no change in energy (elastic scattering) the outgoing wavevectors have the same magnitude as the incident one, that is, they must all lie on the Ewald sphere. In the Figure the red dot is the origin for the wavevectors, the black spots are reciprocal lattice points (vectors) and shown in blue are three wavevectors. For the wavevector formula_2 the corresponding reciprocal lattice point formula_4 lies on the Ewald sphere, which is the condition for Bragg diffraction. For formula_3 the corresponding reciprocal lattice point formula_5 is off the Ewald sphere, so formula_6 where formula_7 is called the excitation error. The amplitude, and hence the intensity, of diffraction into the wavevector formula_3 depends upon the Fourier transform of the shape of the sample, the excitation error formula_7, the structure factor for the relevant reciprocal lattice vector, and also whether the scattering is weak or strong. For neutrons and x-rays the scattering is generally weak so there is mainly Bragg diffraction, but it is much stronger for electron diffraction. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
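As a rough numerical sketch of the construction (illustrative values; the excitation error is reported here as a scalar distance from the sphere rather than the vector formula_7 of the text):

```python
# Distance of k0 + g from the Ewald sphere of radius |k0|.
import numpy as np

def excitation_error(k0, g):
    """Scalar excitation error |k0 + g| - |k0|; zero on the Ewald sphere."""
    k0, g = np.asarray(k0, float), np.asarray(g, float)
    return np.linalg.norm(k0 + g) - np.linalg.norm(k0)

wavelength = 0.0251                        # ~200 keV electrons, in angstroms
k0 = np.array([0.0, 0.0, 1 / wavelength])
g = np.array([0.5, 0.0, 0.0])              # illustrative reciprocal-lattice vector, 1/angstrom
print(excitation_error(k0, g))             # ~0.003 1/angstrom: very close to the sphere
```

Because fast electrons give a very large sphere radius, many reciprocal lattice points can lie close to the sphere at once.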
[ { "math_id": 0, "text": "\\exp(2 \\pi i \\mathbf{k_0}\\cdot \\mathbf{r})" }, { "math_id": 1, "text": "\\mathbf{k_0}" }, { "math_id": 2, "text": "\\mathbf{k_1}" }, { "math_id": 3, "text": "\\mathbf{k_2}" }, { "math_id": 4, "text": "\\mathbf{g_1}" }, { "math_id": 5, "text": "\\mathbf{g_2}" }, { "math_id": 6, "text": "\\mathbf{k_2} = \\mathbf{k_0} + \\mathbf{g_2} + \\mathbf{s}" }, { "math_id": 7, "text": "\\mathbf{s}" } ]
https://en.wikipedia.org/wiki?curid=1165668
11659
Fourier analysis
Branch of mathematics In mathematics, Fourier analysis is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions. Fourier analysis grew from the study of Fourier series, and is named after Joseph Fourier, who showed that representing a function as a sum of trigonometric functions greatly simplifies the study of heat transfer. The subject of Fourier analysis encompasses a vast spectrum of mathematics. In the sciences and engineering, the process of decomposing a function into oscillatory components is often called Fourier analysis, while the operation of rebuilding the function from these pieces is known as Fourier synthesis. For example, determining what component frequencies are present in a musical note would involve computing the Fourier transform of a sampled musical note. One could then re-synthesize the same sound by including the frequency components as revealed in the Fourier analysis. In mathematics, the term "Fourier analysis" often refers to the study of both operations. The decomposition process itself is called a Fourier transformation. Its output, the Fourier transform, is often given a more specific name, which depends on the domain and other properties of the function being transformed. Moreover, the original concept of Fourier analysis has been extended over time to apply to more and more abstract and general situations, and the general field is often known as harmonic analysis. Each transform used for analysis (see list of Fourier-related transforms) has a corresponding inverse transform that can be used for synthesis. To use Fourier analysis, data must be equally spaced. Different approaches have been developed for analyzing unequally spaced data, notably the least-squares spectral analysis (LSSA) methods that use a least squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most used spectral method in science, generally boosts long-periodic noise in long gapped records; LSSA mitigates such problems. Applications. Fourier analysis has many scientific applications – in physics, partial differential equations, number theory, combinatorics, signal processing, digital image processing, probability theory, statistics, forensics, option pricing, cryptography, numerical analysis, acoustics, oceanography, sonar, optics, diffraction, geometry, protein structure analysis, and other areas. This wide applicability stems from many useful properties of the transforms, such as their linearity, invertibility, and the convolution theorem. In forensics, laboratory infrared spectrophotometers use Fourier transform analysis for measuring the wavelengths of light at which a material will absorb in the infrared spectrum. The FT method is used to decode the measured signals and record the wavelength data. By using a computer, these Fourier calculations are rapidly carried out, so that in a matter of seconds, a computer-operated FT-IR instrument can produce an infrared absorption pattern comparable to that of a prism instrument. Fourier transformation is also useful as a compact representation of a signal. For example, JPEG compression uses a variant of the Fourier transformation (discrete cosine transform) of small square pieces of a digital image. The Fourier components of each square are rounded to lower arithmetic precision, and weak components are eliminated entirely, so that the remaining components can be stored very compactly. 
In image reconstruction, each image square is reassembled from the preserved approximate Fourier-transformed components, which are then inverse-transformed to produce an approximation of the original image. In signal processing, the Fourier transform often takes a time series or a function of continuous time, and maps it into a frequency spectrum. That is, it takes a function from the time domain into the frequency domain; it is a decomposition of a function into sinusoids of different frequencies; in the case of a Fourier series or discrete Fourier transform, the sinusoids are harmonics of the fundamental frequency of the function being analyzed. When a function formula_0 is a function of time and represents a physical signal, the transform has a standard interpretation as the frequency spectrum of the signal. The magnitude of the resulting complex-valued function formula_1 at frequency formula_2 represents the amplitude of a frequency component whose initial phase is given by the angle of formula_1 (polar coordinates). Fourier transforms are not limited to functions of time, and temporal frequencies. They can equally be applied to analyze "spatial" frequencies, and indeed for nearly any function domain. This justifies their use in such diverse branches as image processing, heat conduction, and automatic control. When processing signals, such as audio, radio waves, light waves, seismic waves, and even images, Fourier analysis can isolate narrowband components of a compound waveform, concentrating them for easier detection or removal. A large family of signal processing techniques consists of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation. Variants of Fourier analysis. (Continuous) Fourier transform. Most often, the unqualified term Fourier transform refers to the transform of functions of a continuous real argument, and it produces a continuous function of frequency, known as a "frequency distribution". One function is transformed into another, and the operation is reversible. When the domain of the input (initial) function is time (formula_5), and the domain of the output (final) function is ordinary frequency, the transform of function formula_0 at frequency formula_2 is given by the complex number: formula_6 Evaluating this quantity for all values of formula_2 produces the "frequency-domain" function. Then formula_0 can be represented as a recombination of complex exponentials of all possible frequencies: formula_7 which is the inverse transform formula. The complex number, formula_8 conveys both amplitude and phase of frequency formula_9 See Fourier transform for much more information. Fourier series. The Fourier transform of a periodic function, formula_10 with period formula_11 becomes a Dirac comb function, modulated by a sequence of complex coefficients: formula_12 (where formula_13 is the integral over any interval of length formula_4). 
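As a quick numerical sketch of this coefficient formula (the test function and grid are arbitrary choices; its nonzero coefficients are known in closed form):

```python
# For s(t) = cos(2*pi*t/P), the Fourier coefficients are S[1] = S[-1] = 1/2.
import numpy as np

P = 2.0
t = np.linspace(0.0, P, 10_000, endpoint=False)  # one period
dt = P / len(t)
s = np.cos(2 * np.pi * t / P)

for k in (-2, -1, 0, 1, 2):
    S_k = np.sum(s * np.exp(-2j * np.pi * k * t / P)) * dt / P
    print(k, np.round(S_k, 6))   # 0.5 at k = +/-1, ~0 elsewhere
```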
The inverse transform, known as Fourier series, is a representation of formula_14 in terms of a summation of a potentially infinite number of harmonically related sinusoids or complex exponential functions, each with an amplitude and phase specified by one of the coefficients: formula_15 Any formula_14 can be expressed as a periodic summation of another function, formula_0: formula_16 and the coefficients are proportional to samples of formula_1 at discrete intervals of formula_17: formula_18 Note that any formula_0 whose transform has the same discrete sample values can be used in the periodic summation. A sufficient condition for recovering formula_0 (and therefore formula_1) from just these samples (i.e. from the Fourier series) is that the non-zero portion of formula_0 be confined to a known interval of duration formula_11 which is the frequency domain dual of the Nyquist–Shannon sampling theorem. See Fourier series for more information, including the historical development. Discrete-time Fourier transform (DTFT). The DTFT is the mathematical dual of the time-domain Fourier series. Thus, a convergent periodic summation in the frequency domain can be represented by a Fourier series, whose coefficients are samples of a related continuous time function: formula_19 which is known as the DTFT. Thus the DTFT of the formula_20 sequence is also the Fourier transform of the modulated Dirac comb function. The Fourier series coefficients (and inverse transform) are defined by: formula_21 Parameter formula_3 corresponds to the sampling interval, and this Fourier series can now be recognized as a form of the Poisson summation formula. Thus we have the important result that when a discrete data sequence, formula_22 is proportional to samples of an underlying continuous function, formula_23 one can observe a periodic summation of the continuous Fourier transform, formula_24 Note that any formula_0 with the same discrete sample values produces the same DTFT. But under certain idealized conditions one can theoretically recover formula_1 and formula_0 exactly. A sufficient condition for perfect recovery is that the non-zero portion of formula_1 be confined to a known frequency interval of width formula_25 When that interval is formula_26 the applicable reconstruction formula is the Whittaker–Shannon interpolation formula. This is a cornerstone in the foundation of digital signal processing. Another reason to be interested in formula_27 is that it often provides insight into the amount of aliasing caused by the sampling process. Applications of the DTFT are not limited to sampled functions. See Discrete-time Fourier transform for more information on this and other topics. Discrete Fourier transform (DFT). Similar to a Fourier series, the DTFT of a periodic sequence, formula_28 with period formula_29, becomes a Dirac comb function, modulated by a sequence of complex coefficients: formula_30 (where formula_31 is the sum over any sequence of length formula_32) The formula_33 sequence is what is customarily known as the DFT of one cycle of formula_34 It is also formula_29-periodic, so it is never necessary to compute more than formula_29 coefficients. 
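The analysis formula just quoted can be checked directly against a library FFT, which uses the same sign and normalization conventions (the short test sequence is an arbitrary choice):

```python
import numpy as np

N = 8
n = np.arange(N)
s = np.cos(2 * np.pi * 3 * n / N)   # one cycle of a test sequence

# S[k] = sum_n s[n] exp(-i 2 pi k n / N), computed term by term
S_direct = np.array([np.sum(s * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])
print(np.allclose(S_direct, np.fft.fft(s)))   # True
```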
The inverse transform, also known as a discrete Fourier series, is given by: formula_35 where formula_36 is the sum over any sequence of length formula_32 When formula_37 is expressed as a periodic summation of another function: formula_38 and formula_39 the coefficients are samples of formula_27 at discrete intervals of formula_40: formula_41 Conversely, when one wants to compute an arbitrary number formula_42 of discrete samples of one cycle of a continuous DTFT, formula_43 it can be done by computing the relatively simple DFT of formula_28 as defined above. In most cases, formula_29 is chosen equal to the length of the non-zero portion of formula_44 Increasing formula_45 known as "zero-padding" or "interpolation", results in more closely spaced samples of one cycle of formula_46 Decreasing formula_45 causes overlap (adding) in the time-domain (analogous to aliasing), which corresponds to decimation in the frequency domain. In most cases of practical interest, the formula_20 sequence represents a longer sequence that was truncated by the application of a finite-length window function or FIR filter array. The DFT can be computed using a fast Fourier transform (FFT) algorithm, which makes it a practical and important transformation on computers. See Discrete Fourier transform for much more information. Summary. For periodic functions, both the Fourier transform and the DTFT comprise only a discrete set of frequency components (Fourier series), and the transforms diverge at those frequencies. One common practice (not discussed above) is to handle that divergence via Dirac delta and Dirac comb functions. But the same spectral information can be discerned from just one cycle of the periodic function, since all the other cycles are identical. Similarly, finite-duration functions can be represented as a Fourier series, with no actual loss of information except that the periodicity of the inverse transform is a mere artifact. It is common in practice for the duration of "s"(•) to be limited to the period, P or N.  But these formulas do not require that condition. Symmetry properties. When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. There is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform: formula_47 From this, various relationships are apparent, for example: The transform of a real-valued function formula_48 is the conjugate-symmetric function formula_49 Conversely, a conjugate-symmetric transform implies a real-valued time domain. The transform of an imaginary-valued function formula_50 is the conjugate-antisymmetric function formula_51 and the converse is true. The transform of a conjugate-symmetric function formula_52 is the real-valued function formula_53 and the converse is true. The transform of a conjugate-antisymmetric function formula_54 is the imaginary-valued function formula_55 and the converse is true. History. An early form of harmonic series dates back to ancient Babylonian mathematics, where they were used to compute ephemerides (tables of astronomical positions). The Classical Greek concepts of deferent and epicycle in the Ptolemaic system of astronomy were related to Fourier series. In modern times, variants of the discrete Fourier transform were used by Alexis Clairaut in 1754 to compute an orbit, which has been described as the first formula for the DFT, and in 1759 by Joseph Louis Lagrange, in computing the coefficients of a trigonometric series for a vibrating string. Technically, Clairaut's work was a cosine-only series (a form of discrete cosine transform), while Lagrange's work was a sine-only series (a form of discrete sine transform); a true cosine+sine DFT was used by Gauss in 1805 for trigonometric interpolation of asteroid orbits. Euler and Lagrange both discretized the vibrating string problem, using what would today be called samples. 
An early modern development toward Fourier analysis was the 1770 paper "Réflexions sur la résolution algébrique des équations" by Lagrange, which in the method of Lagrange resolvents used a complex Fourier decomposition to study the solution of a cubic: Lagrange transformed the roots formula_56 formula_57 formula_58 into the resolvents: formula_59 where ζ is a cubic root of unity, which is the DFT of order 3. A number of authors, notably Jean le Rond d'Alembert and Carl Friedrich Gauss, used trigonometric series to study the heat equation, but the breakthrough development was the 1807 paper "Mémoire sur la propagation de la chaleur dans les corps solides" by Joseph Fourier, whose crucial insight was to model "all" functions by trigonometric series, introducing the Fourier series. Historians are divided as to how much to credit Lagrange and others for the development of Fourier theory: Daniel Bernoulli and Leonhard Euler had introduced trigonometric representations of functions, and Lagrange had given the Fourier series solution to the wave equation, so Fourier's contribution was mainly the bold claim that an arbitrary function could be represented by a Fourier series. The subsequent development of the field is known as harmonic analysis, and is also an early instance of representation theory. The first fast Fourier transform (FFT) algorithm for the DFT was discovered around 1805 by Carl Friedrich Gauss when interpolating measurements of the orbit of the asteroids Juno and Pallas, although that particular FFT algorithm is more often attributed to its modern rediscoverers Cooley and Tukey. Time–frequency transforms. In signal processing terms, a function (of time) is a representation of a signal with perfect "time resolution", but no frequency information, while the Fourier transform has perfect "frequency resolution", but no time information. As alternatives to the Fourier transform, in time–frequency analysis, one uses time–frequency transforms to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform, the Gabor transform or fractional Fourier transform (FRFT), or can use different functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform. Fourier transforms on arbitrary locally compact abelian topological groups. The Fourier variants can also be generalized to Fourier transforms on arbitrary locally compact Abelian topological groups, which are studied in harmonic analysis; there, the Fourier transform takes functions on a group to functions on the dual group. This treatment also allows a general formulation of the convolution theorem, which relates Fourier transforms and convolutions. See also Pontryagin duality for the generalized underpinnings of the Fourier transform. More specifically, Fourier analysis can be done on cosets, even discrete cosets. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "s(t)" }, { "math_id": 1, "text": "S(f)" }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "T" }, { "math_id": 4, "text": "P" }, { "math_id": 5, "text": "t" }, { "math_id": 6, "text": "S(f) = \\int_{-\\infty}^{\\infty} s(t) \\cdot e^{- i2\\pi f t} \\, dt." }, { "math_id": 7, "text": "s(t) = \\int_{-\\infty}^{\\infty} S(f) \\cdot e^{i2\\pi f t} \\, df," }, { "math_id": 8, "text": "S(f)," }, { "math_id": 9, "text": "f." }, { "math_id": 10, "text": "s_{_P}(t)," }, { "math_id": 11, "text": "P," }, { "math_id": 12, "text": "S[k] = \\frac{1}{P}\\int_{P} s_{_P}(t)\\cdot e^{-i2\\pi \\frac{k}{P} t}\\, dt, \\quad k\\in\\Z," }, { "math_id": 13, "text": "\\int_{P}" }, { "math_id": 14, "text": "s_{_P}(t)" }, { "math_id": 15, "text": "s_{_P}(t)\\ \\ =\\ \\ \\mathcal{F}^{-1}\\left\\{\\sum_{k=-\\infty}^{+\\infty} S[k]\\, \\delta \\left(f-\\frac{k}{P}\\right)\\right\\}\\ \\ =\\ \\ \\sum_{k=-\\infty}^\\infty S[k]\\cdot e^{i2\\pi \\frac{k}{P} t}." }, { "math_id": 16, "text": "s_{_P}(t) \\,\\triangleq\\, \\sum_{m=-\\infty}^\\infty s(t-mP)," }, { "math_id": 17, "text": "\\frac{1}{P}" }, { "math_id": 18, "text": "S[k] =\\frac{1}{P}\\cdot S\\left(\\frac{k}{P}\\right)." }, { "math_id": 19, "text": "S_\\tfrac{1}{T}(f)\\ \\triangleq\\ \\underbrace{\\sum_{k=-\\infty}^{\\infty} S\\left(f - \\frac{k}{T}\\right) \\equiv \\overbrace{\\sum_{n=-\\infty}^{\\infty} s[n] \\cdot e^{-i2\\pi f n T}}^{\\text{Fourier series (DTFT)}}}_{\\text{Poisson summation formula}} = \\mathcal{F} \\left \\{ \\sum_{n=-\\infty}^{\\infty} s[n]\\ \\delta(t-nT)\\right \\},\\," }, { "math_id": 20, "text": "s[n]" }, { "math_id": 21, "text": "s[n]\\ \\triangleq\\ T \\int_\\frac{1}{T} S_\\tfrac{1}{T}(f)\\cdot e^{i2\\pi f nT} \\,df = T \\underbrace{\\int_{-\\infty}^{\\infty} S(f)\\cdot e^{i2\\pi f nT} \\,df}_{\\triangleq\\, s(nT)}." }, { "math_id": 22, "text": "s[n]," }, { "math_id": 23, "text": "s(t)," }, { "math_id": 24, "text": "S(f)." }, { "math_id": 25, "text": "\\tfrac{1}{T}." }, { "math_id": 26, "text": "\\left[-\\tfrac{1}{2T}, \\tfrac{1}{2T}\\right]," }, { "math_id": 27, "text": "S_\\tfrac{1}{T}(f)" }, { "math_id": 28, "text": "s_{_N}[n]," }, { "math_id": 29, "text": "N" }, { "math_id": 30, "text": "S[k] = \\sum_n s_{_N}[n]\\cdot e^{-i2\\pi \\frac{k}{N} n}, \\quad k\\in\\Z," }, { "math_id": 31, "text": "\\sum_{n}" }, { "math_id": 32, "text": "N." }, { "math_id": 33, "text": "S[k]" }, { "math_id": 34, "text": "s_{_N}." }, { "math_id": 35, "text": "s_{_N}[n] = \\frac{1}{N} \\sum_{k} S[k]\\cdot e^{i2\\pi \\frac{n}{N}k}," }, { "math_id": 36, "text": "\\sum_{k}" }, { "math_id": 37, "text": "s_{_N}[n]" }, { "math_id": 38, "text": "s_{_N}[n]\\, \\triangleq\\, \\sum_{m=-\\infty}^{\\infty} s[n-mN]," }, { "math_id": 39, "text": "s[n]\\, \\triangleq\\, T\\cdot s(nT)," }, { "math_id": 40, "text": "\\tfrac{1}{P} = \\tfrac{1}{NT}" }, { "math_id": 41, "text": "S[k] = S_\\tfrac{1}{T}\\left(\\frac{k}{P}\\right)." }, { "math_id": 42, "text": "(N)" }, { "math_id": 43, "text": "S_\\tfrac{1}{T}(f)," }, { "math_id": 44, "text": "s[n]." }, { "math_id": 45, "text": "N," }, { "math_id": 46, "text": "S_\\tfrac{1}{T}(f)." 
}, { "math_id": 47, "text": "\n\\begin{array}{rccccccccc}\n\\text{Time domain} & s & = & s_{_{\\text{RE}}} & + & s_{_{\\text{RO}}} & + & i s_{_{\\text{IE}}} & + & \\underbrace{i\\ s_{_{\\text{IO}}}} \\\\\n&\\Bigg\\Updownarrow\\mathcal{F} & &\\Bigg\\Updownarrow\\mathcal{F} & &\\ \\ \\Bigg\\Updownarrow\\mathcal{F} & &\\ \\ \\Bigg\\Updownarrow\\mathcal{F} & &\\ \\ \\Bigg\\Updownarrow\\mathcal{F}\\\\\n\\text{Frequency domain} & S & = & S_\\text{RE} & + & \\overbrace{\\,i\\ S_\\text{IO}\\,} & + & i S_\\text{IE} & + & S_\\text{RO}\n\\end{array}\n" }, { "math_id": 48, "text": "(s_{_{RE}}+s_{_{RO}})" }, { "math_id": 49, "text": "S_{RE}+i\\ S_{IO}." }, { "math_id": 50, "text": "(i\\ s_{_{IE}}+i\\ s_{_{IO}})" }, { "math_id": 51, "text": "S_{RO}+i\\ S_{IE}," }, { "math_id": 52, "text": "(s_{_{RE}}+i\\ s_{_{IO}})" }, { "math_id": 53, "text": "S_{RE}+S_{RO}," }, { "math_id": 54, "text": "(s_{_{RO}}+i\\ s_{_{IE}})" }, { "math_id": 55, "text": "i\\ S_{IE}+i\\ S_{IO}," }, { "math_id": 56, "text": "x_1," }, { "math_id": 57, "text": "x_2," }, { "math_id": 58, "text": "x_3" }, { "math_id": 59, "text": "\\begin{align}\nr_1 &= x_1 + x_2 + x_3\\\\\nr_2 &= x_1 + \\zeta x_2 + \\zeta^2 x_3\\\\\nr_3 &= x_1 + \\zeta^2 x_2 + \\zeta x_3\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=11659
1166059
Boltzmann machine
Type of stochastic recurrent neural network A Boltzmann machine (also called Sherrington–Kirkpatrick model with external field or stochastic Ising model), named after Ludwig Boltzmann, is a stochastic spin-glass model with an external field, i.e., a Sherrington–Kirkpatrick model that is a stochastic Ising model. It is a statistical physics technique applied in the context of cognitive science. It is also classified as a Markov random field. Boltzmann machines are theoretically intriguing because of the locality and Hebbian nature of their training algorithm (being trained by Hebb's rule), and because of their parallelism and the resemblance of their dynamics to simple physical processes. Boltzmann machines with unconstrained connectivity have not been proven useful for practical problems in machine learning or inference, but if the connectivity is properly constrained, the learning can be made efficient enough to be useful for practical problems. They are named after the Boltzmann distribution in statistical mechanics, which is used in their sampling function. They were heavily popularized and promoted by Geoffrey Hinton, Terry Sejnowski and Yann LeCun in cognitive sciences communities, particularly in machine learning, as part of "energy-based models" (EBM), because Hamiltonians of spin glasses as energy are used as a starting point to define the learning task. Structure. A Boltzmann machine, like a Sherrington–Kirkpatrick model, is a network of units with a total "energy" (Hamiltonian) defined for the overall network. Its units produce binary results. Boltzmann machine weights are stochastic. The global energy formula_1 in a Boltzmann machine is identical in form to that of Hopfield networks and Ising models: formula_2 where formula_0 is the connection strength between unit formula_3 and unit formula_4; formula_5 is the state, formula_6, of unit formula_4; and formula_7 is the bias of unit formula_4 in the global energy function (formula_8 is the activation threshold for the unit). Often the weights formula_0 are represented as a symmetric matrix formula_9 with zeros along the diagonal. Unit state probability. The difference in the global energy that results from a single unit formula_4 equaling 0 (off) versus 1 (on), written formula_10, assuming a symmetric matrix of weights, is given by: formula_11 This can be expressed as the difference of energies of two states: formula_12 Substituting the energy of each state with its relative probability according to the Boltzmann factor (the property of a Boltzmann distribution that the energy of a state is proportional to the negative log probability of that state) gives: formula_13 where formula_14 is the Boltzmann constant and is absorbed into the artificial notion of temperature formula_15. We then rearrange terms and consider that the probabilities of the unit being on and off must sum to one: formula_16 formula_17 formula_18 formula_19 formula_20 formula_21 Solving for formula_22, the probability that the formula_4-th unit is on, gives: formula_23 where the scalar formula_15 is referred to as the temperature of the system. This relation is the source of the logistic function found in probability expressions in variants of the Boltzmann machine. Equilibrium state. The network runs by repeatedly choosing a unit and resetting its state. After running for long enough at a certain temperature, the probability of a global state of the network depends only upon that global state's energy, according to a Boltzmann distribution, and not on the initial state from which the process was started. This means that log-probabilities of global states become linear in their energies. 
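A minimal sketch of this single-unit update rule (not the original authors' code; the weights, biases, and run length are arbitrary illustrative choices):

```python
# Gibbs sampling with p(s_i = 1) = 1 / (1 + exp(-dE_i / T)).
import numpy as np

rng = np.random.default_rng(0)

def gibbs_step(s, W, theta, T=1.0):
    """Resample one randomly chosen unit of the machine in place."""
    i = rng.integers(len(s))
    dE = W[i] @ s + theta[i]   # energy gap; W symmetric with zero diagonal
    s[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-dE / T)) else 0.0
    return s

n = 5
W = rng.normal(size=(n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
theta = rng.normal(size=n)
s = rng.integers(0, 2, size=n).astype(float)
for _ in range(1000):          # relax toward thermal equilibrium
    s = gibbs_step(s, W, theta)
print(s)
```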
This linear relationship between log-probability and energy holds when the machine is "at thermal equilibrium", meaning that the probability distribution of global states has converged. The network is run beginning from a high temperature, and its temperature is gradually decreased until it reaches thermal equilibrium at a lower temperature. It may then converge to a distribution where the energy level fluctuates around the global minimum. This process is called simulated annealing. To train the network so that it tends to converge to global states according to an external distribution over these states, the weights must be set so that the global states with the highest probabilities receive the lowest energies. This is done by training. Training. The units in the Boltzmann machine are divided into 'visible' units, V, and 'hidden' units, H. The visible units are those that receive information from the 'environment', i.e. the training set is a set of binary vectors over the set V. The distribution over the training set is denoted formula_24. The distribution over global states converges as the Boltzmann machine reaches thermal equilibrium. We denote this distribution, after we marginalize it over the hidden units, as formula_25. Our goal is to approximate the "real" distribution formula_24 using the formula_25 produced by the machine. The similarity of the two distributions is measured by the Kullback–Leibler divergence, formula_26: formula_27 where the sum is over all the possible states of formula_28. formula_26 is a function of the weights, since they determine the energy of a state, and the energy determines formula_29, as promised by the Boltzmann distribution. A gradient descent algorithm over formula_26 changes a given weight, formula_0, by subtracting the partial derivative of formula_26 with respect to the weight. Boltzmann machine training involves two alternating phases. One is the "positive" phase where the visible units' states are clamped to a particular binary state vector sampled from the training set (according to formula_30). The other is the "negative" phase where the network is allowed to run freely, i.e. only the input nodes have their state determined by external data, but the output nodes are allowed to float. The gradient with respect to a given weight, formula_0, is given by the equation: formula_31 where formula_32 is the probability that units formula_4 and formula_3 are both on when the machine is at equilibrium in the positive phase, formula_33 is the corresponding probability at equilibrium in the negative phase, and formula_34 denotes the learning rate. This result follows from the fact that at thermal equilibrium the probability formula_35 of any global state formula_36 when the network is free-running is given by the Boltzmann distribution. This learning rule is biologically plausible because the only information needed to change the weights is provided by "local" information. That is, the connection (synapse, biologically) does not need information about anything other than the two neurons it connects. This is more biologically realistic than the information needed by a connection in many other neural network training algorithms, such as backpropagation. The training of a Boltzmann machine does not use the EM algorithm, which is heavily used in machine learning. By minimizing the KL-divergence, it is equivalent to maximizing the log-likelihood of the data. Therefore, the training procedure performs gradient ascent on the log-likelihood of the observed data. This is in contrast to the EM algorithm, where the posterior distribution of the hidden nodes must be calculated before the maximization of the expected value of the complete data likelihood during the M-step. Training the biases is similar, but uses only single node activity: formula_37 
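A hedged sketch of the resulting two-phase weight update (the helper name and the toy co-activation statistics are illustrative, not from the literature):

```python
# Step each w_ij in proportion to p+_ij - p-_ij, per the gradient above.
import numpy as np

def weight_update(p_plus, p_minus, lr=0.1):
    """One gradient step from clamped (+) and free-running (-) statistics."""
    return lr * (p_plus - p_minus)

# toy equilibrium co-activation probabilities for a 3-unit machine
p_plus = np.array([[0.0, 0.9, 0.1], [0.9, 0.0, 0.2], [0.1, 0.2, 0.0]])
p_minus = np.array([[0.0, 0.5, 0.3], [0.5, 0.0, 0.3], [0.3, 0.3, 0.0]])

W = np.zeros((3, 3))
W += weight_update(p_plus, p_minus)   # strengthens (0,1); weakens (0,2) and (1,2)
print(W)
```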
Problems. Theoretically the Boltzmann machine is a rather general computational medium. For instance, if trained on photographs, the machine would theoretically model the distribution of photographs, and could use that model to, for example, complete a partial photograph. Unfortunately, Boltzmann machines experience a serious practical problem, namely that the machine seems to stop learning correctly when it is scaled up to anything larger than a trivial size. This is due to several important effects. Types. Restricted Boltzmann machine. Although learning is impractical in general Boltzmann machines, it can be made quite efficient in a restricted Boltzmann machine (RBM), which allows no intralayer connections: there are no visible-to-visible or hidden-to-hidden connections. After training one RBM, the activities of its hidden units can be treated as data for training a higher-level RBM. This method of stacking RBMs makes it possible to train many layers of hidden units efficiently and is one of the most common deep learning strategies. As each new layer is added the generative model improves. An extension to the restricted Boltzmann machine allows using real-valued data rather than binary data. One example of a practical RBM application is in speech recognition. Deep Boltzmann machine. A deep Boltzmann machine (DBM) is a type of binary pairwise Markov random field (undirected probabilistic graphical model) with multiple layers of hidden random variables. It is a network of symmetrically coupled stochastic binary units. It comprises a set of visible units formula_38 and layers of hidden units formula_39. No connection links units of the same layer (like RBM). For the DBM, the probability assigned to vector ν is formula_40 where formula_41 are the set of hidden units, and formula_42 are the model parameters, representing visible-hidden and hidden-hidden interactions. In a deep belief network (DBN) only the top two layers form a restricted Boltzmann machine (which is an undirected graphical model), while lower layers form a directed generative model. In a DBM all layers are symmetric and undirected. Like DBNs, DBMs can learn complex and abstract internal representations of the input in tasks such as object or speech recognition, using limited, labeled data to fine-tune the representations built using a large set of unlabeled sensory input data. However, unlike DBNs and deep convolutional neural networks, they pursue the inference and training procedure in both directions, bottom-up and top-down, which allows the DBM to better unveil the representations of the input structures. However, the slow speed of DBMs limits their performance and functionality. Because exact maximum likelihood learning is intractable for DBMs, only approximate maximum likelihood learning is possible. Another option is to use mean-field inference to estimate data-dependent expectations and approximate the expected sufficient statistics by using Markov chain Monte Carlo (MCMC). This approximate inference, which must be done for each test input, is about 25 to 50 times slower than a single bottom-up pass in DBMs. This makes joint optimization impractical for large data sets, and restricts the use of DBMs for tasks such as feature representation. Spike-and-slab RBMs. 
The need for deep learning with real-valued inputs, as in Gaussian RBMs, led to the spike-and-slab RBM ("ss"RBM), which models continuous-valued inputs with binary latent variables. Similar to basic RBMs and their variants, a spike-and-slab RBM is a bipartite graph, while like GRBMs, the visible units (input) are real-valued. The difference is in the hidden layer, where each hidden unit has a binary spike variable and a real-valued slab variable. A spike is a discrete probability mass at zero, while a slab is a density over a continuous domain; their mixture forms a prior. An extension of ssRBM called μ-ssRBM provides extra modeling capacity using additional terms in the energy function. One of these terms enables the model to form a conditional distribution of the spike variables by marginalizing out the slab variables given an observation. In mathematics. In a more general mathematical setting, the Boltzmann distribution is also known as the Gibbs measure. In statistics and machine learning it is called a log-linear model. In deep learning the Boltzmann distribution is used in the sampling distribution of stochastic neural networks such as the Boltzmann machine. History. The Boltzmann machine is based on a spin-glass model of Sherrington–Kirkpatrick's stochastic Ising model. The original contribution in applying such energy-based models in cognitive science appeared in papers by Hinton and Sejnowski. The seminal publication by John Hopfield connected physics and statistical mechanics, mentioning spin glasses. The idea of applying the Ising model with annealed Gibbs sampling is present in Douglas Hofstadter's Copycat project. Similar ideas (with a change of sign in the energy function) are found in Paul Smolensky's "Harmony Theory". The explicit analogy drawn with statistical mechanics in the Boltzmann Machine formulation led to the use of terminology borrowed from physics (e.g., "energy" rather than "harmony"), which became standard in the field. The widespread adoption of this terminology may have been encouraged by the fact that its use led to the adoption of a variety of concepts and methods from statistical mechanics. The various proposals to use simulated annealing for inference were apparently independent. Ising models came to be considered a special case of Markov random fields, which find widespread application in linguistics, robotics, computer vision and artificial intelligence. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "w_{ij}" }, { "math_id": 1, "text": "E" }, { "math_id": 2, "text": "E = -\\left(\\sum_{i<j} w_{ij} \\, s_i \\, s_j + \\sum_i \\theta_i \\, s_i \\right)" }, { "math_id": 3, "text": "j" }, { "math_id": 4, "text": "i" }, { "math_id": 5, "text": "s_i" }, { "math_id": 6, "text": "s_i \\in \\{0,1\\}" }, { "math_id": 7, "text": "\\theta_i" }, { "math_id": 8, "text": "-\\theta_i" }, { "math_id": 9, "text": "W=[w_{ij}]" }, { "math_id": 10, "text": "\\Delta E_i" }, { "math_id": 11, "text": "\\Delta E_i = \\sum_{j>i} w_{ij} \\, s_j + \\sum_{j<i} w_{ji} \\, s_j + \\theta_i" }, { "math_id": 12, "text": "\\Delta E_i = E_\\text{i=off} - E_\\text{i=on}" }, { "math_id": 13, "text": "\\Delta E_i = -k_B\\,T\\ln(p_\\text{i=off}) - (-k_B\\,T\\ln(p_\\text{i=on}))" }, { "math_id": 14, "text": "k_B" }, { "math_id": 15, "text": "T" }, { "math_id": 16, "text": "\\frac{\\Delta E_i}{T} = \\ln(p_\\text{i=on}) - \\ln(p_\\text{i=off})" }, { "math_id": 17, "text": "\\frac{\\Delta E_i}{T} = \\ln(p_\\text{i=on}) - \\ln(1 - p_\\text{i=on})" }, { "math_id": 18, "text": "\\frac{\\Delta E_i}{T} = \\ln\\left(\\frac{p_\\text{i=on}}{1 - p_\\text{i=on}}\\right)" }, { "math_id": 19, "text": "-\\frac{\\Delta E_i}{T} = \\ln\\left(\\frac{1 - p_\\text{i=on}}{p_\\text{i=on}}\\right)" }, { "math_id": 20, "text": "-\\frac{\\Delta E_i}{T} = \\ln\\left(\\frac{1}{p_\\text{i=on}} - 1\\right)" }, { "math_id": 21, "text": "\\exp\\left(-\\frac{\\Delta E_i}{T}\\right) = \\frac{1}{p_\\text{i=on}} - 1" }, { "math_id": 22, "text": "p_\\text{i=on}" }, { "math_id": 23, "text": "p_\\text{i=on} = \\frac{1}{1+\\exp(-\\frac{\\Delta E_i}{T})}" }, { "math_id": 24, "text": "P^{+}(V)" }, { "math_id": 25, "text": "P^{-}(V)" }, { "math_id": 26, "text": "G" }, { "math_id": 27, "text": "G = \\sum_{v}{P^{+}(v)\\ln\\left({\\frac{P^{+}(v)}{P^{-}(v)}}\\right)}" }, { "math_id": 28, "text": "V" }, { "math_id": 29, "text": "P^{-}(v)" }, { "math_id": 30, "text": "P^{+}" }, { "math_id": 31, "text": "\\frac{\\partial{G}}{\\partial{w_{ij}}} = -\\frac{1}{R}[p_{ij}^{+}-p_{ij}^{-}]" }, { "math_id": 32, "text": "p_{ij}^{+}" }, { "math_id": 33, "text": "p_{ij}^{-}" }, { "math_id": 34, "text": "R" }, { "math_id": 35, "text": "P^{-}(s)" }, { "math_id": 36, "text": "s" }, { "math_id": 37, "text": "\\frac{\\partial{G}}{\\partial{\\theta_{i}}} = -\\frac{1}{R}[p_{i}^{+}-p_{i}^{-}]" }, { "math_id": 38, "text": "\\boldsymbol{\\nu} \\in \\{0,1\\}^D" }, { "math_id": 39, "text": "\\boldsymbol{h}^{(1)} \\in \\{0,1\\}^{F_1}, \\boldsymbol{h}^{(2)} \\in \\{0,1\\}^{F_2}, \\ldots, \\boldsymbol{h}^{(L)} \\in \\{0,1\\}^{F_L}" }, { "math_id": 40, "text": "p(\\boldsymbol{\\nu}) = \\frac{1}{Z}\\sum_h e^{\\sum_{ij}W_{ij}^{(1)}\\nu_i h_j^{(1)} + \\sum_{jl}W_{jl}^{(2)}h_j^{(1)}h_l^{(2)}+\\sum_{lm}W_{lm}^{(3)}h_l^{(2)}h_m^{(3)}}," }, { "math_id": 41, "text": "\\boldsymbol{h} = \\{\\boldsymbol{h}^{(1)}, \\boldsymbol{h}^{(2)}, \\boldsymbol{h}^{(3)} \\}" }, { "math_id": 42, "text": "\\theta = \\{\\boldsymbol{W}^{(1)}, \\boldsymbol{W}^{(2)}, \\boldsymbol{W}^{(3)} \\} " }, { "math_id": 43, "text": "G' = \\sum_{v}{P^{-}(v)\\ln\\left({\\frac{P^{-}(v)}{P^{+}(v)}}\\right)}" } ]
https://en.wikipedia.org/wiki?curid=1166059
1166245
Principle of distributivity
The principle of distributivity states that the algebraic distributive law is valid, i.e. that both logical conjunction and logical disjunction distribute over each other, so that for any propositions "A", "B" and "C" the equivalences formula_0 and formula_1 hold. The principle of distributivity is valid in classical logic, but it is not generally valid in quantum logic. The article "Is Logic Empirical?" discusses the case that quantum logic is the correct, empirical logic, on the grounds that the principle of distributivity is inconsistent with a reasonable interpretation of quantum phenomena. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
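In classical two-valued logic both equivalences can be verified exhaustively; a short sketch:

```python
# Check both distributive laws over all 8 classical truth assignments.
from itertools import product

for A, B, C in product([False, True], repeat=3):
    assert (A and (B or C)) == ((A and B) or (A and C))
    assert (A or (B and C)) == ((A or B) and (A or C))
print("both distributive equivalences hold classically")
```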
[ { "math_id": 0, "text": "A \\land (B \\lor C) \\iff (A \\land B) \\lor (A \\land C)" }, { "math_id": 1, "text": "A \\lor (B \\land C) \\iff (A \\lor B) \\land (A \\lor C)" } ]
https://en.wikipedia.org/wiki?curid=1166245
11663522
Π pad
An attenuator whose circuit components are in the shape of the Greek letter pi The Π pad (pi pad) is a specific type of attenuator circuit in electronics whereby the topology of the circuit is formed in the shape of the Greek capital letter pi (Π). Attenuators are used in electronics to reduce the level of a signal. They are also referred to as pads due to their effect of padding down a signal, by analogy with acoustics. Attenuators have a flat frequency response, attenuating all frequencies equally in the band in which they are intended to operate. The attenuator has the opposite task of an amplifier. The topology of an attenuator circuit will usually follow one of the simple filter sections. However, there is no need for more complex circuitry, as there is with filters, due to the simplicity of the frequency response required. Circuits are required to be balanced or unbalanced depending on the geometry of the transmission lines with which they are to be used. For radio frequency applications, the format is often unbalanced, such as coaxial. For audio and telecommunications, balanced circuits are usually required, such as with the twisted pair format. The Π pad is intrinsically an unbalanced circuit. However, it can be converted to a balanced circuit by placing half the series resistance in the return path. Such a circuit is called a box section because the circuit is formed in the shape of a box. Terminology. An attenuator is a form of a two-port network with a generator connected to one port and a load connected to the other. In all of the circuits given below it is assumed that the generator and load impedances are purely resistive (though not necessarily equal) and that the attenuator circuit is required to perfectly match to these. The symbols used for these impedances are: formula_0, the impedance of the generator, and formula_1, the impedance of the load. Popular values of impedance are 600Ω in telecommunications and audio, 75Ω for video and dipole antennae, and 50Ω for RF. The voltage transfer function, "A", is, formula_2 The inverse of this is the loss, "L", of the attenuator, formula_3 The value of attenuation is normally marked on the attenuator as its loss, "L"dB, in decibels (dB). The relationship with "L" is: formula_4 Popular attenuator values are 3dB, 6dB, 10dB, 20dB, and 40dB. However, it is often more convenient to express the loss in nepers, formula_5 where formula_6 is the attenuation in nepers (one neper is approximately 8.7 dB). Impedance and loss. The values of resistance of the attenuator's elements can be calculated using image parameter theory. The starting point here is the image impedances of the L section in figure 2. The image admittance of the input is, formula_7 and the image impedance of the output is, formula_8 The loss of the L section when terminated in its image impedances is, formula_9 where the image parameter transmission function, "γ"L, is given by, formula_10 The loss of this L section in the reverse direction is given by, formula_11 For an attenuator, "Z" and "Y" are simple resistors and "γ" becomes the image parameter attenuation (that is, the attenuation when terminated with the image impedances) in nepers. A Π pad can be viewed as being two L sections back-to-back as shown in figure 3. Most commonly, the generator and load impedances are equal so that "Z"1 = "Z"2 = Z0 and a symmetrical Π pad is used. 
In this case, the impedance matching terms inside the square roots all cancel and, formula_12 Substituting "Z" and "Y" for the corresponding resistors, formula_13 formula_14 These equations can easily be extended to non-symmetrical cases. Resistor values. The equations above find the impedance and loss for an attenuator with given resistor values. The usual requirement in a design is the other way around – the resistor values for a given impedance and loss are needed. These can be found by transposing and substituting the last two equations above. If formula_15, then formula_16 and formula_17 where formula_2 is as defined above. O pad. The unbalanced pi pad can be converted to a balanced O pad by putting one half of the series resistance, "R"z, in each side of a balanced line. The simple four element O pad attenuates the differential mode signal but does little to attenuate any common mode signal. To ensure attenuation of the common mode signal also, a split O pad can be created by splitting and grounding the shunt resistors "R"x and "R"y. Conversion of two-port to pi pad. If a passive two-port can be expressed with admittance parameters, then that two-port is equivalent to a pi pad. In general, the admittance parameters are frequency dependent and not necessarily resistive. In that case the elements of the pi pad would not be simple components. However, in the case where the two-port is purely resistive or substantially resistive over the frequency range of interest, then the two-port can be replaced with a pi pad made of resistors. Conversion of tee pad to pi pad. Pi pads and tee pads are easily converted back and forth. If one of the pads is composed of only resistors then the other is also composed entirely of resistors.
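As a worked example of the symmetric-pad design equations above (a sketch; the function name is illustrative):

```python
def pi_pad(z0, loss_db):
    """Shunt and series resistances of a symmetric pi attenuator."""
    a = 10 ** (-loss_db / 20)        # voltage transfer function A
    r1 = z0 * (1 + a) / (1 - a)      # each shunt arm
    r2 = z0 * (1 - a * a) / (2 * a)  # series arm
    return r1, r2

print(pi_pad(50, 10))   # ~ (96.2, 71.2) ohms for a 10 dB, 50 ohm pad
```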
[ { "math_id": 0, "text": "Z_1 \\,\\!" }, { "math_id": 1, "text": "Z_2 \\,\\!" }, { "math_id": 2, "text": "A = \\frac{V_\\mathrm {out}}{V_\\mathrm {in}}" }, { "math_id": 3, "text": "L = \\frac{V_\\mathrm {in}}{V_\\mathrm {out}}" }, { "math_id": 4, "text": "L_\\mathrm{dB} = 20 \\log L \\,\\!" }, { "math_id": 5, "text": " L = e^\\gamma \\," }, { "math_id": 6, "text": "\\gamma \\," }, { "math_id": 7, "text": "Y_\\mathrm {i \\Pi} = \\sqrt {Y^2 + \\frac{Y}{Z}}" }, { "math_id": 8, "text": "Z_\\mathrm {i T} = \\sqrt {Z^2 + \\frac{Z}{Y}}" }, { "math_id": 9, "text": "L_\\mathrm {L1} = \\sqrt{Z_\\mathrm {i \\Pi} Y_\\mathrm {i T}} \\ e^{\\gamma_\\mathrm L}" }, { "math_id": 10, "text": "\\gamma_\\mathrm L=\\sinh^{-1}{\\sqrt{ZY}}" }, { "math_id": 11, "text": "L_\\mathrm {L2}=\\sqrt{Z_\\mathrm {i T} Y_\\mathrm {i \\Pi}} \\ e^{\\gamma_\\mathrm L}" }, { "math_id": 12, "text": "L_\\mathrm \\Pi = L_\\mathrm {L1} L_\\mathrm {L2} = e^{2 \\gamma_\\mathrm L} = e^{\\gamma_\\mathrm \\Pi} \\," }, { "math_id": 13, "text": "\\gamma_\\mathrm \\Pi = 2 \\gamma_\\mathrm L = 2 \\sinh^{-1}{\\sqrt{\\frac{R_2}{2R_1}}} \\," }, { "math_id": 14, "text": "\\frac {1}{Z_0} = \\sqrt {\\frac {1}{{R_1}^2} + \\frac {2}{R_1 R_2}}" }, { "math_id": 15, "text": " Z_0 = Z_1 = Z_2 \\, " }, { "math_id": 16, "text": " R_1 = Z_0 \\coth \\left ( \\frac {\\gamma_ \\mathrm \\Pi}{2} \\right ) = Z_0 \\frac {1 + A} {1 - A} " }, { "math_id": 17, "text": " R_2 = \\frac {2R_1}{\\left ( \\frac {R_1}{Z_0} \\right ) ^2 -1} = Z_0 \\frac{1 - A^2}{2 A} " } ]
https://en.wikipedia.org/wiki?curid=11663522
11664784
Fizeau experiment
Experiment measuring the speed of light in moving water The Fizeau experiment was carried out by Hippolyte Fizeau in 1851 to measure the relative speeds of light in moving water. Fizeau used a special interferometer arrangement to measure the effect of movement of a medium upon the speed of light. According to the theories prevailing at the time, light traveling through a moving medium would be dragged along by the medium, so that the measured speed of the light would be a simple sum of its speed "through" the medium plus the speed "of" the medium. Fizeau indeed detected a dragging effect, but the magnitude of the effect that he observed was far lower than expected. When he repeated the experiment with air in place of water he observed no effect. His results seemingly supported the partial aether-drag hypothesis of Fresnel, a situation that was disconcerting to most physicists. Over half a century passed before a satisfactory explanation of Fizeau's unexpected measurement was developed with the advent of Albert Einstein's theory of special relativity. Einstein later pointed out the importance of the experiment for special relativity, in which it corresponds to the relativistic velocity-addition formula when restricted to small velocities. Although it is referred to as "the" Fizeau experiment, Fizeau was an active experimenter who carried out a wide variety of different experiments involving measuring the speed of light in various situations. Experimental setup. A highly simplified representation of Fizeau's 1851 experiment is presented in Fig. 2. Incoming light is split into two beams by a beam splitter (BS) and passed through two columns of water flowing in opposite directions. The two beams are then recombined to form an interference pattern that can be interpreted by an observer. The simplified arrangement illustrated in Fig. 2 would have required the use of monochromatic light, which would have enabled only dim fringes. Because of white light's short coherence length, use of white light would have required matching up the optical paths to an impractical degree of precision, and the apparatus would have been extremely sensitive to vibration, motion shifts, and temperature effects. On the other hand, Fizeau's actual apparatus, illustrated in Fig. 3 and Fig. 4, was set up as a common-path interferometer. This guaranteed that the opposite beams would pass through equivalent paths, so that fringes readily formed even when using the sun as a light source. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The double transit of the light was for the purpose of augmenting the distance traversed in the medium in motion, and further to compensate entirely any accidental difference of temperature or pressure between the two tubes, from which might result a displacement of the fringes, which would be mingled with the displacement which the motion alone would have produced; and thus have rendered the observation of it uncertain. A light ray emanating from the source "S′" is reflected by a beam splitter "G" and is collimated into a parallel beam by lens "L". After passing the slits "O"1 and "O"2, two rays of light travel through the tubes "A"1 and "A"2, through which water is streaming back and forth as shown by the arrows. 
The rays reflect off a mirror "m" at the focus of lens "L′", so that one ray always propagates in the same direction as the water stream, and the other ray opposite to the direction of the water stream. After passing back and forth through the tubes, both rays unite at "S", where they produce interference fringes that can be visualized through the illustrated eyepiece. The interference pattern can be analyzed to determine the speed of light traveling along each leg of the tube. Fresnel drag coefficient. Assume that water flows in the pipes with speed "v". According to the non-relativistic theory of the luminiferous aether, the speed of light should be increased or decreased when "dragged" along by the water through the aether frame, dependent upon the direction. According to Stokes' complete aether drag hypothesis, the overall speed of a beam of light should be a simple additive sum of its speed "through" the water plus the speed "of" the water. That is, if "n" is the index of refraction of water, so that "c/n" is the speed of light in stationary water, then the predicted speed of light "w" in one arm would be formula_0 and the predicted speed in the other arm would be formula_1 Hence light traveling against the flow of water should be slower than light traveling with the flow of water. The interference pattern between the two beams when the light is recombined at the observer depends upon the transit times over the two paths, and can be used to calculate the speed of light as a function of the speed of the water. Fizeau found that formula_2 In other words, light appeared to be dragged by the water, but the magnitude of the dragging was much lower than expected. The Fizeau experiment forced physicists to accept the empirical validity of an older theory of Augustin-Jean Fresnel (1818) that had been invoked to explain an 1810 experiment by Arago, namely, that a medium moving through the stationary aether drags light propagating through it with only a fraction of the medium's speed, with a dragging coefficient "f" given by formula_3 In 1895, Hendrik Lorentz predicted the existence of an extra term due to dispersion: formula_4 Since the medium is flowing towards or away from the observer, the light traveling through the medium is Doppler-shifted, and the refractive index used in the formula has to be that appropriate to the Doppler-shifted wavelength. Zeeman verified the existence of Lorentz' dispersion term in 1915. It turned out later that Fresnel's dragging coefficient is indeed in accordance with the relativistic velocity addition formula, see the section Derivation in special relativity. Repetitions. Albert A. Michelson and Edward W. Morley (1886) repeated Fizeau's experiment with improved accuracy, addressing several concerns with Fizeau's original experiment: (1) Deformation of the optical components in Fizeau's apparatus could cause artifactual fringe displacement; (2) observations were rushed, since the pressurized flow of water lasted only a short time; (3) the laminar flow profile of water flowing through Fizeau's small diameter tubes meant that only their central portions were available, resulting in faint fringes; (4) there were uncertainties in Fizeau's determination of flow rate across the diameter of the tubes. Michelson redesigned Fizeau's apparatus with larger diameter tubes and a large reservoir providing three minutes of steady water flow. 
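A quick numerical sketch of the drag formulas discussed above, comparing Fresnel's partial drag with exact relativistic velocity addition (the flow speed is exaggerated for numerical clarity):

```python
# Fresnel: w = c/n + v(1 - 1/n^2); relativity: w = (c/n + v)/(1 + v/(n c)).
c = 299_792_458.0   # speed of light in vacuum, m/s
n = 1.333           # refractive index of water
v = 1000.0          # water speed, m/s (far above Fizeau's actual few m/s)

fresnel = c / n + v * (1 - 1 / n**2)
relativistic = (c / n + v) / (1 + v / (n * c))

print(fresnel - c / n)          # drag term: ~437 m/s, not the full 1000 m/s
print(relativistic - fresnel)   # ~ -2.5e-3 m/s, i.e. of order v^2/c
```

The two predictions agree to first order in "v"/"c", which is the sense in which special relativity accounts for Fizeau's result.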
His common-path interferometer design provided automatic compensation of path length, so that white light fringes were visible as soon as the optical elements were aligned. Topologically, the light path was that of a Sagnac interferometer with an even number of reflections in each light path. This offered extremely stable fringes that were, to first order, completely insensitive to any movement of its optical components. The stability was such that it was possible for him to insert a glass plate at "h" or even to hold a lighted match in the light path without displacing the center of the fringe system. Using this apparatus, Michelson and Morley were able to completely confirm Fizeau's results not just in water, but also in air. Other experiments were conducted by Pieter Zeeman in 1914–1915. Using a scaled-up version of Michelson's apparatus connected directly to Amsterdam's main water conduit, Zeeman was able to perform extended measurements using monochromatic light ranging from violet (4358 Å) through red (6870 Å) to confirm Lorentz's modified coefficient. In 1910, Franz Harress used a "rotating" device and overall confirmed Fresnel's dragging coefficient. However, he additionally found a "systematic bias" in the data, which later turned out to be the Sagnac effect. Since then, many experiments have been conducted measuring such dragging coefficients in a diversity of materials of differing refractive index, often in combination with the Sagnac effect, for instance in experiments using ring lasers together with rotating disks, or in neutron interferometric experiments. A transverse dragging effect has also been observed, i.e. when the medium is moving at right angles to the direction of the incident light. Hoek experiment. An indirect confirmation of Fresnel's dragging coefficient was provided by Martin Hoek (1868). His apparatus was similar to Fizeau's, though in his version only one arm contained an area filled with resting water, while the other arm was in the air. As seen by an observer resting in the aether, Earth and hence the water is in motion. Hoek therefore calculated the travel times of two light rays traveling in opposite directions around the circuit (neglecting the transverse direction). The travel times are not the same, which should be indicated by an interference shift. However, if Fresnel's dragging coefficient is applied to the water in the aether frame, the travel time difference (to first order in "v/c") vanishes. Using different setups Hoek actually obtained a null result, confirming Fresnel's dragging coefficient. (For a similar experiment refuting the possibility of "shielding" the aether wind, see Hammar experiment). In the particular version of the experiment shown here, Hoek used a prism "P" to disperse light from a slit into a spectrum which passed through a collimator "C" before entering the apparatus. With the apparatus oriented parallel to the hypothetical aether wind, Hoek expected the light in one circuit to be retarded 7/600 mm with respect to the other. Where this retardation represented an integral number of wavelengths, he expected to see constructive interference; where this retardation represented a half-integral number of wavelengths, he expected to see destructive interference. In the absence of dragging, his expectation was for the observed spectrum to be continuous with the apparatus oriented transversely to the aether wind, and to be banded with the apparatus oriented parallel to the aether wind.
His actual experimental results were completely negative. Controversy. Although Fresnel's hypothesis was empirically successful in explaining Fizeau's results, many experts in the field, including Fizeau himself (1851), Éleuthère Mascart (1872), Ketteler (1873), Veltmann (1873), and Lorentz (1886) found Fresnel's mechanical reasoning for partial aether-dragging unpalatable for various reasons. For example, Veltmann (1870) explains that Fresnel's hypothesis was proposed as a "so-called compensation" of aberration which will "exactly cancel out" the deflection of the Arago experiment. He then goes on to demonstrate a method for using Stokes' fully dragged aether in lieu of Fresnel's hypothesis, which would still be "necessary at the end of the development." At the end he returns to the principle of Fresnel, emphasizing that it is a mathematical relationship that represents a "common principle" to a "class of explanations" of starlight aberration, by clarifying: The speed with which the movement of light takes part in the movement of the medium depends on the speed of propagation and must therefore be different for each color. (translation by Google) "Die Geschwindigkeit, mit welcher die Lichtbewegung an der Bewegung des Mediums theilnimmt, hängt von der Fortpflanzungsgeschwindigkeit ab und müsste deshalb für jede Farbe eine andere sein." This line can be more directly translated as "the speed with which the movement of light to the movement of the [material] medium depends [, also depends] on the propagation speed [in the medium] and therefore [there] is needed a different one for each color." This confirms Fresnel's mathematical principle (but not his explanation) that the rate at which a medium affects the speed of light depends upon the index of refraction, which was already established to be a measure of alterations to light's speed dependent on frequency. However, the historian Stachel in 2005 gives a different interpretation that assumes the "one for each color" to mean ether instead of differing "rates" or "speeds." Mascart (1872) demonstrated that polarized light traveling through a birefringent medium is insensitive to the motion of the earth. After establishing that Fresnel's theory represents an exact compensatory mechanism that cancels aberration effects, he discusses various other exact compensatory mechanisms in mechanical wave systems, including the insensitivity of co-moving experiments to the Doppler effect. He concludes "[Fresnel's] formula is not applicable to birefringent media." He finalized this report on his experiments in birefringent media with his finding that the experiment in anisotropic media produced a resulting quantity which was "four times lower than that which we would obtain by applying to the propagation of circularly polarized waves the formula demonstrated by Fresnel for the case of isotropic bodies." Fizeau himself shows he was aware of the mechanical feasibility of Fresnel's hypothesis earlier in his report, but his surprise at seeing his expectation of Stokes' complete drag defied is intimated at the conclusion of the report: Lastly, if only one part of the æther is carried along, the velocity of light would be increased, but only by a fraction of the velocity of the body, and not, as in the first hypothesis, by the whole velocity.
This consequence is not so obvious as the former, but Fresnel has shown that it may be supported by mechanical arguments of great probability.[...] The success of the experiment seems to me to render the adoption of Fresnel's hypothesis necessary, or at least the law which he found for the expression of the alteration of the velocity of light by the effect of motion of a body; for although that law being found true may be a very strong proof in favour of the hypothesis of which it is only a consequence, perhaps the conception of Fresnel may appear so extraordinary, and in some respects so difficult, to admit, that other proofs and a profound examination on the part of geometricians will still be necessary before adopting it as an expression of the real facts of the case. Despite the dissatisfaction of most physicists with Fresnel's partial aether-dragging hypothesis, repetitions and improvements to Fizeau's experiment (see sections above) by others confirmed his results to high accuracy. In addition to Mascart's experiments, which demonstrated an insensitivity to the Earth's motion, and the complaints about the partial aether-dragging hypothesis, another major problem arose with the Michelson–Morley experiment (1887). Mascart's claims that optical experiments of refraction and reflection would be insensitive to the earth's motion were borne out by this later experiment. In Fresnel's theory, the aether is almost stationary and the Earth is moving through it, so the experiment should have given a partially reduced, but net positive, result. Only a complete aether drag by the medium of the air would result in a null. However, the result of this experiment was reported as null. Thus from the viewpoint of the aether models at that time, the experimental situation was contradictory: On one hand, the aberration of light, the Fizeau experiment and its repetition by Michelson and Morley in 1886 appeared to support only a small degree of aether-dragging. On the other hand, the Michelson–Morley experiment of 1887 appeared to prove that the aether is at rest with respect to Earth, apparently supporting the idea of complete aether-dragging (see aether drag hypothesis). So the success of Fresnel's hypothesis in explaining Fizeau's results helped lead to a theoretical crisis, which was only resolved by the introduction of relativistic theory. Is it fantastic to imagine that someone might have been led to develop some or all of these kinematical responses to the challenge presented by the situation in the optics of moving bodies around 1880, given that an optical principle of relative motion had been formulated by Mascart? Perhaps no more fantastic than what actually happened: Einstein's development around 1905 of a kinematical response to the challenge presented by the situation in the electrodynamics of moving bodies, given that an electrodynamic principle of relative motion had already been formulated by Poincaré. Lorentz's interpretation. In 1892, Hendrik Lorentz proposed a modification of Fresnel's model, in which the aether is completely stationary. He succeeded in deriving Fresnel's dragging coefficient as the result of an interaction between the moving water and an undragged aether. He also discovered that the transition from one reference frame to another could be simplified by using an auxiliary time variable which he called "local time": formula_5 In 1895, Lorentz more generally explained Fresnel's coefficient based on the concept of local time.
However, Lorentz's theory had the same fundamental problem as Fresnel's: a stationary aether contradicted the Michelson–Morley experiment. So in 1892 Lorentz proposed that moving bodies contract in the direction of motion (FitzGerald–Lorentz contraction hypothesis, since George FitzGerald had already arrived at this conclusion in 1889). The equations that he used to describe these effects were further developed by him until 1904. These are now called the Lorentz transformations in his honor, and are identical in form to the equations that Einstein was later to derive from first principles. Unlike Einstein's equations, however, Lorentz's transformations were strictly "ad hoc", their only justification being that they seemed to work. Derivation in special relativity. Einstein showed how Lorentz's equations could be derived as the logical outcome of a set of two simple starting postulates. In addition, Einstein recognized that the stationary aether concept has no place in special relativity, and that the Lorentz transformation concerns the nature of space and time. Together with the moving magnet and conductor problem, the negative aether drift experiments, and the aberration of light, the Fizeau experiment was one of the key experimental results that shaped Einstein's thinking about relativity. Robert S. Shankland reported some conversations with Einstein, in which Einstein emphasized the importance of the Fizeau experiment: He continued to say the experimental results which had influenced him most were the observations of stellar aberration and Fizeau's measurements on the speed of light in moving water. "They were enough," he said. Max von Laue (1907) demonstrated that the Fresnel drag coefficient can be easily explained as a natural consequence of the relativistic formula for addition of velocities, namely: The speed of light in immobile water is "c/n". From the velocity composition law it follows that the speed of light observed in the laboratory, where water is flowing with speed "v" (in the same direction as light) is formula_6 Thus the difference in speed is (assuming "v" is small compared to "c", dropping higher order terms) formula_7 formula_8 This is accurate when "v"/"c" ≪ 1, and agrees with the formula based upon Fizeau's measurements, which satisfied the condition "v"/"c" ≪ 1. Fizeau's experiment is hence supporting evidence for the collinear case of Einstein's velocity addition formula.
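The low-velocity agreement can be checked numerically. Below is a minimal sketch (not part of the original article; the refractive index and flow speed are assumed, illustrative values):

```python
# Compare Einstein's velocity composition with Fizeau's drag expression
# for light in flowing water (n and v are assumed, illustrative values).
c = 299_792_458.0   # speed of light in vacuum, m/s
n = 1.33            # assumed refractive index of water
v = 7.0             # assumed water flow speed, m/s

u = c / n                                    # speed of light in still water
relativistic = (u + v) / (1 + u * v / c**2)  # exact composition law
fresnel = u + v * (1 - 1 / n**2)             # Fresnel/Fizeau drag prediction
stokes = u + v                               # complete-drag (simple sum)

print(relativistic - u)  # ~3.04 m/s of drag
print(fresnel - u)       # ~3.04 m/s, matching to first order in v/c
print(stokes - u)        # 7.0 m/s, far more drag than was observed
```

The first two results agree to first order in v/c; the residual difference is of order v²/c (about 1e-7 m/s here), far below anything Fizeau-era interferometry could resolve.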
[ { "math_id": 0, "text": "w_+=\\frac{c}{n}+v \\ , " }, { "math_id": 1, "text": "w_-=\\frac{c}{n} - v \\ . " }, { "math_id": 2, "text": "w_+=\\frac{c}{n}+ v\\left(1-\\frac{1}{n^2}\\right) \\ . " }, { "math_id": 3, "text": "f = 1-\\frac{1}{n^2} \\ . " }, { "math_id": 4, "text": " w_+ = \\frac {c}{n} + v \\left(1 - \\frac{1}{n^2} - \\frac{\\lambda}{n} \\! \\cdot \\! \\frac{ \\mathrm{d} n }{ \\mathrm{d} \\lambda } \\right) \\ . " }, { "math_id": 5, "text": "t^{\\prime}=t-\\frac{vx}{c^{2}} \\ . " }, { "math_id": 6, "text": "V_\\mathrm{lab}=\\frac{\\frac{c}{n}+v}{1+\\frac{\\frac{c}{n}v}{c^2}}=\\frac{\\frac{c}{n}+v}{1+\\frac{v}{cn}} \\ ." }, { "math_id": 7, "text": "V_\\mathrm{lab}-\\frac{c}{n} = \\frac{\\frac{c}{n}+v}{1+\\frac{v}{cn}}-\\frac{c}{n}=\\frac{\\frac{c}{n}+v-\\frac{c}{n}(1+\\frac{v}{cn})}{1+\\frac{v}{cn}} " }, { "math_id": 8, "text": " = \\frac{v\\left(1-\\frac{1}{n^2}\\right)}{1+\\frac{v}{cn}}\\approx v\\left(1-\\frac{1}{n^2}\\right) \\ ." } ]
https://en.wikipedia.org/wiki?curid=11664784
11665456
Slip (vehicle dynamics)
In (automotive) vehicle dynamics, slip is the relative motion between a tire and the road surface it is moving on. This slip can be generated either by the tire's rotational speed being greater or less than the free-rolling speed (usually described as "percent" slip), or by the tire's plane of rotation being at an angle to its direction of motion (referred to as slip angle). In rail vehicle dynamics, this overall slip of the wheel relative to the rail is called "creepage". It is distinguished from the local sliding velocity of surface particles of wheel and rail, which is called "micro-slip". Longitudinal slip. The longitudinal slip is generally given as a percentage of the difference between the surface speed of the wheel and the speed of the axle relative to the road surface, as: formula_0 where formula_1 is the longitudinal component of the rotational speed of the wheel, formula_2 is wheel radius at the point of contact and formula_3 is vehicle speed in the plane of the tire. A positive slip indicates that the wheels are spinning; negative slip indicates that they are skidding. Locked brakes, formula_4, mean that formula_5: the tire is sliding without rotating. Rotation with no forward velocity, formula_6 and formula_7, means that formula_8. Lateral slip. The lateral slip of a tire is the angle between the direction it is moving and the direction it is pointing. This can occur, for instance, in cornering, and is enabled by deformation in the tire carcass and tread. Despite the name, no actual sliding is necessary for small slip angles. Sliding may occur, starting at the rear of the contact patch, as slip angle increases. The slip angle can be defined as: formula_9
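As a minimal computational sketch of the two definitions above (an addition; the function and variable names are assumptions for illustration):

```python
import math

def longitudinal_slip(omega: float, r_e: float, v_x: float) -> float:
    """Fractional slip: positive when the wheel spins, negative when it skids."""
    return -(v_x - r_e * omega) / v_x

def slip_angle(v_x: float, v_y: float) -> float:
    """Slip angle in radians, arctan(v_y / |v_x|)."""
    return math.atan2(v_y, abs(v_x))

# A braking wheel whose surface speed (0.32 m * 25 rad/s = 8 m/s)
# lags the vehicle speed (10 m/s):
print(longitudinal_slip(omega=25.0, r_e=0.32, v_x=10.0))  # -0.2, i.e. -20% slip
# A small lateral velocity while cornering:
print(math.degrees(slip_angle(v_x=10.0, v_y=0.5)))        # about 2.9 degrees
```

Note that the locked-brake case (omega = 0) returns -1, i.e. -100% slip, while the rotating-with-no-forward-motion case (v_x = 0) would divide by zero, matching the infinite slip noted above.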
[ { "math_id": 0, "text": " \\text{slip}=-\\frac{v_x - r_e\\Omega }{v_x}" }, { "math_id": 1, "text": "\\Omega" }, { "math_id": 2, "text": "r" }, { "math_id": 3, "text": "v_x" }, { "math_id": 4, "text": "r_e \\Omega = 0" }, { "math_id": 5, "text": "\\text{slip} = -1 = -100\\%" }, { "math_id": 6, "text": "r_e \\Omega\\ne 0" }, { "math_id": 7, "text": "v = 0" }, { "math_id": 8, "text": "\\text{slip} = \\infty" }, { "math_id": 9, "text": "\\alpha = \\arctan\\left(\\frac{v_y}{|v_x|}\\right)" } ]
https://en.wikipedia.org/wiki?curid=11665456
1166579
Exponential formula
In combinatorial mathematics, the exponential formula (called the polymer expansion in physics) states that the exponential generating function for structures on finite sets is the exponential of the exponential generating function for connected structures. The exponential formula is a power series version of a special case of Faà di Bruno's formula. Algebraic statement. Here is a purely algebraic statement, as a first introduction to the combinatorial use of the formula. For any formal power series of the form formula_0 we have formula_1 where formula_2 and the index formula_3 runs through all partitions formula_4 of the set formula_5. (When formula_6 the product is empty and by definition equals formula_7.) Formula in other expressions. One can write the formula in the following form: formula_8 and thus formula_9 where formula_10 is the formula_11th complete Bell polynomial. Alternatively, the exponential formula can also be written using the cycle index of the symmetric group, as follows: formula_12 where formula_13 stands for the cycle index polynomial for the symmetric group formula_14, defined as: formula_15 and formula_16 denotes the number of cycles of formula_17 of size formula_18. This is a consequence of the general relation between formula_13 and Bell polynomials: formula_19 Combinatorial interpretation. In combinatorial applications, the numbers formula_20 count the number of some sort of "connected" structure on an formula_11-point set, and the numbers formula_21 count the number of (possibly disconnected) structures. The numbers formula_22 count the number of isomorphism classes of structures on formula_11 points, with each structure being weighted by the reciprocal of its automorphism group, and the numbers formula_23 count isomorphism classes of connected structures in the same way.
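A small computational sketch (an addition, not part of the article): the coefficients b_n can be generated from the a_n with the recurrence b_{n+1} = Σ_{k=0}^{n} C(n,k) a_{k+1} b_{n-k}, which follows by differentiating exp f(x). Taking every a_n = 1 (exactly one "connected" structure on each finite set) must then produce the Bell numbers, which count set partitions:

```python
from math import comb

def exponential_transform(a, nmax):
    """Given a[1..nmax] (a[0] unused), return b[0..nmax] such that
    exp(sum a_n x^n / n!) = sum b_n x^n / n! as formal power series."""
    b = [1] + [0] * nmax
    for n in range(nmax):
        b[n + 1] = sum(comb(n, k) * a[k + 1] * b[n - k] for k in range(n + 1))
    return b

# With a_n = 1 for all n, b_n should be the Bell numbers 1, 1, 2, 5, 15, 52, ...
print(exponential_transform([0] + [1] * 6, 6))  # [1, 1, 2, 5, 15, 52, 203]
```

The same routine recovers, for example, the count of all labeled graphs from the count of connected labeled graphs.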
[ { "math_id": 0, "text": "f(x)=a_1 x+{a_2 \\over 2}x^2+{a_3 \\over 6}x^3+\\cdots+{a_n \\over n!}x^n+\\cdots\\," }, { "math_id": 1, "text": "\\exp f(x)=e^{f(x)}=\\sum_{n=0}^\\infty {b_n \\over n!}x^n,\\," }, { "math_id": 2, "text": "b_n = \\sum_{\\pi=\\left\\{\\,S_1,\\,\\dots,\\,S_k\\,\\right\\}} a_{\\left|S_1\\right|}\\cdots a_{\\left|S_k\\right|}," }, { "math_id": 3, "text": "\\pi" }, { "math_id": 4, "text": "\\{ S_1,\\ldots,S_k \\}" }, { "math_id": 5, "text": "\\{ 1,\\ldots, n \\}" }, { "math_id": 6, "text": "k = 0," }, { "math_id": 7, "text": "1" }, { "math_id": 8, "text": "b_n = B_n(a_1,a_2,\\dots,a_n)," }, { "math_id": 9, "text": "\\exp\\left(\\sum_{n=1}^\\infty {a_n \\over n!} x^n \\right) = \\sum_{n=0}^\\infty {B_n(a_1,\\dots,a_n) \\over n!} x^n," }, { "math_id": 10, "text": "B_n(a_1,\\ldots,a_n)" }, { "math_id": 11, "text": "n" }, { "math_id": 12, "text": "\\exp\\left(\\sum_{n=1}^\\infty a_n {x^n \\over n} \\right) = \\sum_{n=0}^\\infty Z_n(a_1,\\dots,a_n) x^n," }, { "math_id": 13, "text": "Z_n" }, { "math_id": 14, "text": "S_n" }, { "math_id": 15, "text": "Z_n (x_1,\\cdots ,x_n) = \\frac 1{n!} \\sum_{\\sigma\\in S_n} x_1^{\\sigma_1}\\cdots x_n^{\\sigma_n}" }, { "math_id": 16, "text": "\\sigma_j" }, { "math_id": 17, "text": "\\sigma" }, { "math_id": 18, "text": "j\\in \\{ 1, \\cdots, n \\}" }, { "math_id": 19, "text": "Z_n(x_1,\\dots,x_n) = {1 \\over n!} B_n(0!\\,x_1, 1!\\,x_2, \\dots, (n-1)!\\,x_n)." }, { "math_id": 20, "text": "a_n" }, { "math_id": 21, "text": "b_n" }, { "math_id": 22, "text": "b_n/n!" }, { "math_id": 23, "text": "a_n/n!" }, { "math_id": 24, "text": "b_3 = B_3(a_1,a_2,a_3) = a_3 + 3a_2 a_1 + a_1^3," }, { "math_id": 25, "text": "\\{1,2,3\\}" }, { "math_id": 26, "text": "3" }, { "math_id": 27, "text": "2" }, { "math_id": 28, "text": "Z_3 (a_1,a_2,a_3) = {1 \\over 6}(2 a_3 + 3 a_1 a_2 + a_1^3) = {1 \\over 6} B_3 (a_1, a_2, 2 a_3) " }, { "math_id": 29, "text": "S_3" }, { "math_id": 30, "text": "S_3 = \\{ (1)(2)(3), (1)(23), (2)(13), (3)(12), (123), (132) \\}" }, { "math_id": 31, "text": "b_n = 2^{n(n-1)/2}" }, { "math_id": 32, "text": "Z" }, { "math_id": 33, "text": "\\ln(Z)" } ]
https://en.wikipedia.org/wiki?curid=1166579
1166647
Magnetic reconnection
Process in plasma physics Magnetic reconnection is a physical process occurring in electrically conducting plasmas, in which the magnetic topology is rearranged and magnetic energy is converted to kinetic energy, thermal energy, and particle acceleration. Magnetic reconnection involves plasma flows at a substantial fraction of the Alfvén wave speed, which is the fundamental speed for mechanical information flow in a magnetized plasma. The concept of magnetic reconnection was developed in parallel by researchers working in solar physics and in the interaction between the solar wind and magnetized planets. This reflects the bidirectional nature of reconnection, which can either disconnect formerly connected magnetic fields or connect formerly disconnected magnetic fields, depending on the circumstances. Ron Giovanelli is credited with the first publication invoking magnetic energy release as a potential mechanism for particle acceleration in solar flares. Giovanelli proposed in 1946 that solar flares stem from the energy obtained by charged particles influenced by induced electric fields within close proximity of sunspots. In the years 1947–1948, he published more papers further developing the reconnection model of solar flares. In these works, he proposed that the mechanism occurs at points of neutrality (weak or null magnetic field) within structured magnetic fields. James Dungey is credited with first use of the term "magnetic reconnection" in his 1950 PhD thesis, to explain the coupling of mass, energy and momentum from the solar wind into Earth's magnetosphere. The concept was published for the first time in a seminal paper in 1961. Dungey coined the term "reconnection" because he envisaged field lines and plasma moving together in an inflow toward a magnetic neutral point (2D) or line (3D), breaking apart and then rejoining again but with different magnetic field lines and plasma, in an outflow away from the magnetic neutral point or line. In the meantime, the first theoretical framework of magnetic reconnection was established by Peter Sweet and Eugene Parker at a conference in 1956. Sweet pointed out that by pushing two plasmas with oppositely directed magnetic fields together, resistive diffusion is able to occur on a length scale much shorter than a typical equilibrium length scale. Parker was in attendance at this conference and developed scaling relations for this model during his return travel. Fundamental principles. Magnetic reconnection is a breakdown of "ideal-magnetohydrodynamics" and so of "Alfvén's theorem" (also called the "frozen-in flux theorem") which applies to large-scale regions of a highly-conducting magnetoplasma, for which the Magnetic Reynolds Number is very large: this makes the convective term in the induction equation dominate in such regions. The frozen-in flux theorem states that in such regions the field moves with the plasma velocity (the mean of the ion and electron velocities, weighted by their mass). The reconnection breakdown of this theorem occurs in regions of large magnetic shear (by Ampère's law these are current sheets) which are regions of small width where the Magnetic Reynolds Number can become small enough to make the diffusion term in the induction equation dominate, meaning that the field diffuses through the plasma from regions of high field to regions of low field.
In reconnection, the inflow and outflow regions both obey Alfvén's theorem and the diffusion region is a very small region at the centre of the current sheet where field lines diffuse together, merge and reconfigure such that they are transferred from the topology of the inflow regions (i.e., along the current sheet) to that of the outflow regions (i.e., threading the current sheet). The rate of this magnetic flux transfer is the electric field associated with both the inflow and the outflow and is called the "reconnection rate". The equivalence of magnetic shear and current can be seen from one of Maxwell's equations formula_0 In a plasma (ionized gas), for all but exceptionally high frequency phenomena, the second term on the right-hand side of this equation, the displacement current, is negligible compared to the effect of the free current formula_1 and this equation reduces to Ampère's law for free charges. The displacement current is neglected in both the Parker-Sweet and Petschek theoretical treatments of reconnection, discussed below, and in the derivation of ideal MHD and Alfvén's theorem which is applied in those theories everywhere outside the small diffusion region. The resistivity of the current layer allows magnetic flux from either side to diffuse through the current layer, cancelling out flux from the other side of the boundary. However, the small spatial scale of the current sheet makes the Magnetic Reynolds Number small and so this alone can make the diffusion term dominate in the induction equation without the resistivity being enhanced. When the diffusing field lines from the two sides of the boundary touch, they form the separatrices and so have both the topology of the inflow region (i.e. along the current sheet) and the outflow region (i.e., threading the current sheet). In magnetic reconnection the field lines evolve from the inflow topology through the separatrix topology to the outflow topology. When this happens, the plasma is pulled out by the magnetic tension force acting on the reconfigured field lines and ejecting them along the current sheet. The resulting drop in pressure pulls more plasma and magnetic flux into the central region, yielding a self-sustaining process. The importance of Dungey's concept of a localized breakdown of ideal-MHD is that the outflow along the current sheet prevents the build-up in plasma pressure that would otherwise choke off the inflow. In Parker-Sweet reconnection the outflow is only along a thin layer at the centre of the current sheet and this limits the reconnection rate that can be achieved to low values. On the other hand, in Petschek reconnection the outflow region is much broader, being between shock fronts (now thought to be Alfvén waves) that stand in the inflow: this allows much faster escape of the plasma frozen-in on reconnected field lines and the reconnection rate can be much higher. Dungey coined the term "reconnection" because he initially envisaged field lines of the inflow topology breaking and then joining together again in the outflow topology. However, this means that magnetic monopoles would exist, albeit for a very limited period, which would violate Maxwell's equation that the divergence of the field is zero. However, by considering the evolution through the separatrix topology, the need to invoke magnetic monopoles is avoided. Global numerical MHD models of the magnetosphere, which use the equations of ideal MHD, still simulate magnetic reconnection even though it is a breakdown of ideal MHD.
The reason is close to Dungey's original thoughts: at each time step of the numerical model the equations of ideal MHD are solved at each grid point of the simulation to evaluate the new field and plasma conditions. The magnetic field lines then have to be re-traced. The tracing algorithm makes errors at thin current sheets and joins field lines up so that they thread the current sheet where previously they were aligned with it. This is often called "numerical resistivity" and the simulations have predictive value because the error propagates according to a diffusion equation. A current problem in plasma physics is that observed reconnection happens much faster than predicted by MHD in high Lundquist number plasmas (i.e. fast magnetic reconnection). Solar flares, for example, proceed 13–14 orders of magnitude faster than a naive calculation would suggest, and several orders of magnitude faster than current theoretical models that include turbulence and kinetic effects. One possible mechanism to explain the discrepancy is that the electromagnetic turbulence in the boundary layer is sufficiently strong to scatter electrons, raising the plasma's local resistivity. This would allow the magnetic flux to diffuse faster. Properties. Physical interpretation. The qualitative description of the reconnection process is that magnetic field lines from different magnetic domains (defined by the field line connectivity) are spliced to one another, changing their patterns of connectivity with respect to the sources. It is a violation of an approximate conservation law in plasma physics, called Alfvén's theorem (also called the "frozen-in flux theorem"), and can concentrate mechanical or magnetic energy in both space and time. Solar flares, the largest explosions in the Solar System, may involve the reconnection of large systems of magnetic flux on the Sun, releasing, in minutes, energy that has been stored in the magnetic field over a period of hours to days. Magnetic reconnection in Earth's magnetosphere is one of the mechanisms responsible for the aurora, and it is important to the science of controlled nuclear fusion because it is one mechanism preventing magnetic confinement of the fusion fuel. In an electrically conductive plasma, magnetic field lines are grouped into 'domains': bundles of field lines that connect from a particular place to another particular place, and that are topologically distinct from other field lines nearby. This topology is approximately preserved even when the magnetic field itself is strongly distorted by the presence of variable currents or motion of magnetic sources, because effects that might otherwise change the magnetic topology instead induce eddy currents in the plasma; the eddy currents have the effect of canceling out the topological change. Types of reconnection. In two dimensions, the most common type of magnetic reconnection is separator reconnection, in which four separate magnetic domains exchange magnetic field lines. Domains in a magnetic plasma are separated by "separatrix surfaces": curved surfaces in space that divide different bundles of flux. Field lines on one side of the separatrix all terminate at a particular magnetic pole, while field lines on the other side all terminate at a different pole of similar sign.
Since each field line generally begins at a north magnetic pole and ends at a south magnetic pole, the most general way of dividing simple flux systems involves four domains separated by two separatrices: one separatrix surface divides the flux into two bundles, each of which shares a south pole, and the other separatrix surface divides the flux into two bundles, each of which shares a north pole. The intersection of the separatrices forms a "separator", a single line that is at the boundary of the four separate domains. In separator reconnection, field lines enter the separator from two of the domains, and are spliced one to the other, exiting the separator in the other two domains (see the first figure). In three dimensions, the geometry of the field lines becomes more complicated than the two-dimensional case and it is possible for reconnection to occur in regions where a separator does not exist, but with the field lines connected by steep gradients. These regions are known as quasi-separatrix layers (QSLs), and have been observed in theoretical configurations and solar flares. Theoretical descriptions. Slow reconnection: Sweet–Parker model. As recounted in the history above, the first theoretical framework of magnetic reconnection was established by Peter Sweet and Eugene Parker at a conference in 1956. The Sweet–Parker model describes time-independent magnetic reconnection in the resistive MHD framework when the reconnecting magnetic fields are antiparallel (oppositely directed) and effects related to viscosity and compressibility are unimportant. The initial velocity is simply an formula_2 velocity, so formula_3 where formula_4 is the out-of-plane electric field, formula_5 is the characteristic inflow velocity, and formula_6 is the characteristic upstream magnetic field strength. By neglecting displacement current, the low-frequency Ampère's law, formula_7, gives the relation formula_8 where formula_9 is the current sheet half-thickness. This relation uses the fact that the magnetic field reverses over a distance of formula_10. By matching the ideal electric field outside of the layer with the resistive electric field formula_11 inside the layer (using Ohm's law), we find that formula_12 where formula_13 is the magnetic diffusivity. When the inflow density is comparable to the outflow density, conservation of mass yields the relationship formula_14 where formula_15 is the half-length of the current sheet and formula_16 is the outflow velocity. The left and right hand sides of the above relation represent the mass flux into the layer and out of the layer, respectively. Equating the upstream magnetic pressure with the downstream dynamic pressure gives formula_17 where formula_18 is the mass density of the plasma. Solving for the outflow velocity then gives formula_19 where formula_20 is the Alfvén velocity.
With the above relations, the dimensionless reconnection rate formula_21 can then be written in two forms, the first in terms of formula_22 using the result earlier derived from Ohm's law, the second in terms of formula_23 from the conservation of mass, as formula_24 Since the dimensionless Lundquist number formula_25 is given by formula_26 the two different expressions of formula_21 are multiplied by each other and then square-rooted, giving a simple relation between the reconnection rate formula_21 and the Lundquist number formula_25 formula_27 Sweet–Parker reconnection allows for reconnection rates much faster than global diffusion, but is not able to explain the fast reconnection rates observed in solar flares, the Earth's magnetosphere, and laboratory plasmas. Additionally, Sweet–Parker reconnection neglects three-dimensional effects, collisionless physics, time-dependent effects, viscosity, compressibility, and downstream pressure. Numerical simulations of two-dimensional magnetic reconnection typically show agreement with this model. Results from the Magnetic Reconnection Experiment (MRX) of collisional reconnection show agreement with a generalized Sweet–Parker model which incorporates compressibility, downstream pressure and anomalous resistivity. Fast reconnection: Petschek model. The fundamental reason that Petschek reconnection is faster than Parker-Sweet is that it broadens the outflow region and thereby removes some of the limitation caused by the build up in plasma pressure. The inflow velocity, and thus the reconnection rate, can only be very small if the outflow region is narrow. In 1964, Harry Petschek proposed a mechanism where the inflow and outflow regions are separated by stationary slow mode shocks that stand in the inflows. The aspect ratio of the diffusion region is then of order unity and the maximum reconnection rate becomes formula_28 This expression allows for fast reconnection and is almost independent of the Lundquist number. Theory and numerical simulations show that most of the actions of the shocks that were proposed by Petschek can be carried out by Alfvén waves and in particular rotational discontinuities (RDs). In cases of asymmetric plasma densities on the two sides of the current sheet (as at Earth's dayside magnetopause) the Alfvén wave that propagates into the inflow on the higher-density side (in the case of the magnetopause, the denser magnetosheath) has a lower propagation speed, and so the field rotation becomes increasingly concentrated at that RD as the field line propagates away from the reconnection site: hence the magnetopause current sheet becomes increasingly concentrated in the outer, slower, RD. Simulations of resistive MHD reconnection with uniform resistivity showed the development of elongated current sheets in agreement with the Sweet–Parker model rather than the Petschek model. When a localized anomalously large resistivity is used, however, Petschek reconnection can be realized in resistive MHD simulations. Because the use of an anomalous resistivity is only appropriate when the particle mean free path is large compared to the reconnection layer, it is likely that other collisionless effects become important before Petschek reconnection can be realized.
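A quick numerical comparison of the two scaling laws above (a sketch added for illustration; the Lundquist number used is an assumed order-of-magnitude value):

```python
import math

def sweet_parker_rate(S: float) -> float:
    """Dimensionless Sweet-Parker rate, R ~ S**(-1/2)."""
    return S ** -0.5

def petschek_rate(S: float) -> float:
    """Petschek's maximum rate, R ~ pi / (8 ln S)."""
    return math.pi / (8.0 * math.log(S))

S = 1e12   # assumed, roughly solar-corona-like Lundquist number
print(f"Sweet-Parker: {sweet_parker_rate(S):.1e}")  # 1.0e-06
print(f"Petschek:     {petschek_rate(S):.1e}")      # 1.4e-02
```

The four orders of magnitude separating the two rates at this Lundquist number illustrate why Petschek-type and collisionless mechanisms are invoked to explain fast events such as flares.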
Anomalous resistivity and Bohm diffusion. In the Sweet–Parker model, the common assumption is that the magnetic diffusivity is constant. This can be estimated using the equation of motion for an electron with mass formula_29 and electric charge formula_30: formula_31 where formula_32 is the collision frequency. Since in the steady state formula_33, the above equation along with the definition of electric current, formula_34, where formula_35 is the electron number density, yields formula_36 Nevertheless, if the drift velocity of electrons exceeds the thermal velocity of the plasma, a steady state cannot be achieved and the magnetic diffusivity should be much larger than the value given above. This is called anomalous resistivity, formula_37, which can enhance the reconnection rate in the Sweet–Parker model by a factor of formula_38. Another proposed mechanism is known as Bohm diffusion across the magnetic field. This replaces the Ohmic resistivity with formula_39; however, its effect, similar to the anomalous resistivity, is still too small compared with the observations. Stochastic reconnection. In stochastic reconnection, the magnetic field has a small-scale random component arising because of turbulence. For the turbulent flow in the reconnection region, a model for magnetohydrodynamic turbulence should be used, such as the model developed by Goldreich and Sridhar in 1995. This stochastic model is independent of small-scale physics such as resistive effects and depends only on turbulent effects. Roughly speaking, in the stochastic model, turbulence brings initially distant magnetic field lines to small separations where they can reconnect locally (Sweet–Parker type reconnection) and separate again due to turbulent super-linear diffusion (Richardson diffusion). For a current sheet of length formula_40, the upper limit for the reconnection velocity is given by formula_41 where formula_42. Here formula_43 and formula_44 are the turbulence injection length scale and velocity respectively, and formula_45 is the Alfvén velocity. This model has been successfully tested by numerical simulations. Non-MHD process: Collisionless reconnection. On length scales shorter than the ion inertial length formula_46 (where formula_47 is the ion plasma frequency), ions decouple from electrons and the magnetic field becomes frozen into the electron fluid rather than the bulk plasma. On these scales, the Hall effect becomes important. Two-fluid simulations show the formation of an X-point geometry rather than the double Y-point geometry characteristic of resistive reconnection. The electrons are then accelerated to very high speeds by whistler waves. Because the ions can move through a wider "bottleneck" near the current layer and because the electrons are moving much faster in Hall MHD than in standard MHD, reconnection may proceed more quickly. Two-fluid/collisionless reconnection is particularly important in the Earth's magnetosphere. Observations. Solar atmosphere. Magnetic reconnection occurs during solar flares, coronal mass ejections, and many other events in the solar atmosphere. The observational evidence for solar flares includes observations of inflows/outflows, downflowing loops, and changes in the magnetic topology. In the past, observations of the solar atmosphere were done using remote imaging; consequently, the magnetic fields were inferred or extrapolated rather than observed directly. However, the first direct observations of solar magnetic reconnection were gathered in 2012 (and released in 2013) by the High Resolution Coronal Imager. Earth's magnetosphere.
Magnetic reconnection events that occur in the Earth's magnetosphere (in the dayside magnetopause and in the magnetotail) were for many years inferred because they uniquely explained many aspects of the large-scale behaviour of the magnetosphere and its dependence on the orientation of the near-Earth interplanetary magnetic field. Subsequently, spacecraft such as Cluster II and the Magnetospheric Multiscale Mission have made observations of sufficient resolution and in multiple locations to observe the process directly and in situ. Cluster II is a four-spacecraft mission, with the four spacecraft arranged in a tetrahedron to separate the spatial and temporal changes as the suite flies through space. It has observed numerous reconnection events in which the Earth's magnetic field reconnects with that of the Sun (i.e. the interplanetary magnetic field). These include 'reverse reconnection' that causes sunward convection in the Earth's ionosphere near the polar cusps; 'dayside reconnection', which allows the transmission of particles and energy into the Earth's vicinity; and 'tail reconnection', which causes auroral substorms by injecting particles deep into the magnetosphere and releasing the energy stored in the Earth's magnetotail. The Magnetospheric Multiscale Mission, launched on 13 March 2015, improved the spatial and temporal resolution of the Cluster II results by having a tighter constellation of spacecraft. This led to a better understanding of the behavior of the electrical currents in the electron diffusion region. On 26 February 2008, THEMIS probes were able to determine the triggering event for the onset of magnetospheric substorms. Two of the five probes, positioned approximately one third the distance to the Moon, measured events suggesting a magnetic reconnection event 96 seconds prior to auroral intensification. Dr. Vassilis Angelopoulos of the University of California, Los Angeles, who is the principal investigator for the THEMIS mission, claimed, "Our data show clearly and for the first time that magnetic reconnection is the trigger." Laboratory plasma experiments. Magnetic reconnection has also been observed in numerous laboratory experiments. For example, studies on the Large Plasma Device (LAPD) at UCLA have observed and mapped quasi-separatrix layers near the magnetic reconnection region of a two-flux-rope system, while experiments on the Magnetic Reconnection Experiment (MRX) at the Princeton Plasma Physics Laboratory (PPPL) have confirmed many aspects of magnetic reconnection, including the Sweet–Parker model in regimes where the model is applicable. Analysis of the physics of helicity injection, used to create the initial plasma current in the NSTX spherical tokamak, led Dr. Fatima Ebrahimi to propose a plasma thruster that uses fast magnetic reconnection to accelerate plasma to produce thrust for space propulsion. Sawtooth oscillations are periodic mixing events occurring in the tokamak plasma core. The Kadomtsev model describes sawtooth oscillations as a consequence of magnetic reconnection due to displacement of the central region with safety factor formula_48 caused by the internal kink mode.
[ { "math_id": 0, "text": "\\nabla \\times \\mathbf{B} = \\mu \\mathbf{J} + \\mu \\epsilon \\frac{\\partial \\mathbf{E}}{\\partial t}." }, { "math_id": 1, "text": "\\mathbf{J}" }, { "math_id": 2, "text": "E\\times B" }, { "math_id": 3, "text": "E_y = v_\\text{in} B_\\text{in}" }, { "math_id": 4, "text": "E_y" }, { "math_id": 5, "text": "v_\\text{in}" }, { "math_id": 6, "text": "B_\\text{in}" }, { "math_id": 7, "text": "\\mathbf{J} = \\frac{1}{\\mu_0}\\nabla\\times\\mathbf{B}" }, { "math_id": 8, "text": "J_y \\sim \\frac{B_\\text{in}}{\\mu_0\\delta}," }, { "math_id": 9, "text": "\\delta" }, { "math_id": 10, "text": "\\sim2\\delta" }, { "math_id": 11, "text": "\\mathbf{E} = \\frac{1}{\\sigma}\\mathbf{J}" }, { "math_id": 12, "text": "v_\\text{in} = \\frac{E_y}{B_\\text{in}} \\sim \\frac{1}{\\mu_0\\sigma\\delta} = \\frac{\\eta}{\\delta}," }, { "math_id": 13, "text": "\\eta" }, { "math_id": 14, "text": "v_\\text{in}L \\sim v_\\text{out}\\delta, " }, { "math_id": 15, "text": "L" }, { "math_id": 16, "text": "v_\\text{out}" }, { "math_id": 17, "text": "\\frac{B_\\text{in}^2}{2\\mu_0} \\sim \\frac{\\rho v_\\text{out}^2}{2}" }, { "math_id": 18, "text": "\\rho" }, { "math_id": 19, "text": "v_\\text{out} \\sim \\frac{B_\\text{in}}{\\sqrt{\\mu_0\\rho}} \\equiv v_A" }, { "math_id": 20, "text": "v_A" }, { "math_id": 21, "text": "R" }, { "math_id": 22, "text": "(\\eta, \\delta, v_A)" }, { "math_id": 23, "text": "(\\delta, L)" }, { "math_id": 24, "text": "R = \\frac{v_\\text{in}}{v_\\text{out}} \\sim \\frac{\\eta}{v_A\\delta} \\sim \\frac{\\delta}{L}." }, { "math_id": 25, "text": "S" }, { "math_id": 26, "text": "S \\equiv \\frac{Lv_A}{\\eta}," }, { "math_id": 27, "text": "R ~ \\sim \\sqrt{\\frac{\\eta}{v_A L}} = \\frac{1}{S^\\frac{1}{2}}." }, { "math_id": 28, "text": "\\frac{v_\\text{in}}{v_A} \\approx \\frac{\\pi}{8 \\ln S}." }, { "math_id": 29, "text": "m" }, { "math_id": 30, "text": "e" }, { "math_id": 31, "text": "{d{\\mathbf{v}} \\over dt} = {e \\over m}\\mathbf{E} - \\nu\\mathbf{v}," }, { "math_id": 32, "text": "\\nu" }, { "math_id": 33, "text": "d{\\mathbf{v}}/dt = 0" }, { "math_id": 34, "text": "{\\mathbf{J}} = en{\\mathbf{v}}" }, { "math_id": 35, "text": "n " }, { "math_id": 36, "text": "\\eta = \\nu{c^2 \\over \\omega_{pi}^2}." }, { "math_id": 37, "text": "\\eta_\\text{anom}" }, { "math_id": 38, "text": "\\eta_\\text{anom}/\\eta" }, { "math_id": 39, "text": "v_A^2 (mc/eB)" }, { "math_id": 40, "text": "L " }, { "math_id": 41, "text": "v = v_\\text{turb} \\; \\operatorname{min}\\left[\\left( {L \\over l} \\right)^\\frac{1}{2}, \\left( {l \\over L} \\right)^\\frac{1}{2} \\right]," }, { "math_id": 42, "text": "v_\\text{turb} = v_l^2/v_A" }, { "math_id": 43, "text": "l" }, { "math_id": 44, "text": "v_l" }, { "math_id": 45, "text": "v_A " }, { "math_id": 46, "text": "c / \\omega_{pi}" }, { "math_id": 47, "text": "\\omega_{pi} \\equiv \\sqrt{\\frac{n_i Z^2 e^2}{\\epsilon_0 m_i}}" }, { "math_id": 48, "text": "q < 1" } ]
https://en.wikipedia.org/wiki?curid=1166647
11670238
744 (number)
Natural number 744 (seven hundred [and] forty-four) is the natural number following 743 and preceding 745. In mathematics. 744 is a semiperfect number. It is also an abundant number. The j-invariant, an important function in the study of modular forms and Monstrous moonshine, can be written as a Fourier series in which the constant term is 744: formula_0 where formula_1. One consequence of this is that 744 appears in expressions for Ramanujan's constant and other almost integers.
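A short numerical illustration of the almost-integer connection (an addition, not part of the article; it uses the third-party mpmath library for high-precision arithmetic):

```python
from mpmath import mp, exp, pi, sqrt, mpf

mp.dps = 40  # work with 40 significant digits
ramanujan_constant = exp(pi * sqrt(163))
print(ramanujan_constant)
# 262537412640768743.99999999999925007...
print(mpf(640320**3 + 744) - ramanujan_constant)
# ~7.5e-13
```

Here exp(pi*sqrt(163)) misses the integer 640320^3 + 744 by less than 10^-12, which is the sense in which the 744 from the j-invariant's constant term enters Ramanujan's constant.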
[ { "math_id": 0, "text": "j(\\tau) = q^{-1} + 744 + 196\\,884 q + 21\\,493\\,760 q^2 + 864\\,299\\,970 q^3 + \\cdots," }, { "math_id": 1, "text": "q = e^{2\\pi i\\tau}" } ]
https://en.wikipedia.org/wiki?curid=11670238
11671
Fick's laws of diffusion
Mathematical descriptions of molecular diffusion Fick's laws of diffusion describe diffusion and were first posited by Adolf Fick in 1855 on the basis of largely experimental results. They can be used to solve for the diffusion coefficient, D. Fick's first law can be used to derive his second law, which in turn is identical to the diffusion equation. "Fick's first law": Movement of particles from high to low concentration (diffusive flux) is directly proportional to the particle's concentration gradient. "Fick's second law": Prediction of change in concentration gradient with time due to diffusion. A diffusion process that obeys Fick's laws is called normal or Fickian diffusion; otherwise, it is called anomalous diffusion or non-Fickian diffusion. History. In 1855, physiologist Adolf Fick first reported his now well-known laws governing the transport of mass through diffusive means. Fick's work was inspired by the earlier experiments of Thomas Graham, which fell short of proposing the fundamental laws for which Fick would become famous. Fick's law is analogous to the relationships discovered at the same epoch by other eminent scientists: Darcy's law (hydraulic flow), Ohm's law (charge transport), and Fourier's law (heat transport). Fick's experiments (modeled on Graham's) dealt with measuring the concentrations and fluxes of salt, diffusing between two reservoirs through tubes of water. It is notable that Fick's work primarily concerned diffusion in fluids, because at the time, diffusion in solids was not considered generally possible. Today, Fick's laws form the core of our understanding of diffusion in solids, liquids, and gases (in the absence of bulk fluid motion in the latter two cases). When a diffusion process does "not" follow Fick's laws (which happens in cases of diffusion through porous media and diffusion of swelling penetrants, among others), it is referred to as "non-Fickian". Fick's first law. Fick's first law relates the diffusive flux to the gradient of the concentration. It postulates that the flux goes from regions of high concentration to regions of low concentration, with a magnitude that is proportional to the concentration gradient (spatial derivative), or in simplistic terms the concept that a solute will move from a region of high concentration to a region of low concentration across a concentration gradient. In one (spatial) dimension, the law can be written in various forms, where the most common form is on a molar basis: formula_0 where J is the diffusion flux, φ is the concentration, x is position, and D is the diffusion coefficient; D is proportional to the squared velocity of the diffusing particles, which depends on the temperature, viscosity of the fluid and the size of the particles according to the Stokes–Einstein relation. In dilute aqueous solutions the diffusion coefficients of most ions are similar and have values that at room temperature are in the range of (0.6–2)×10−9 m2/s. For biological molecules the diffusion coefficients normally range from 10−10 to 10−11 m2/s. In two or more dimensions we must use ∇, the del or gradient operator, which generalises the first derivative, obtaining formula_2 where J denotes the diffusion flux vector. The driving force for the one-dimensional diffusion is the quantity −∂φ/∂x, which for ideal mixtures is the concentration gradient. Variations of the first law. Another form for the first law is to write it with the primary variable as mass fraction (yi, given for example in kg/kg), in which case the equation changes to: formula_3 where formula_4 is the density of the mixture. Note that the density formula_4 is outside the gradient operator.
This is because: formula_5 where ρsi is the partial density of the ith species. Beyond this, in chemical systems other than ideal solutions or mixtures, the driving force for diffusion of each species is the gradient of chemical potential of this species. Then Fick's first law (one-dimensional case) can be written formula_6 where μi is the chemical potential of the ith species, R is the universal gas constant and T is the absolute temperature. The driving force of Fick's law can be expressed as a fugacity difference: formula_7 The fugacity formula_8 has Pa units; it is a partial pressure of component "i" in a vapor formula_9 or liquid formula_10 phase. At vapor–liquid equilibrium the evaporation flux is zero because formula_11. Derivation of Fick's first law for gases. Four versions of Fick's law for binary gas mixtures are given below. These assume: thermal diffusion is negligible; the body force per unit mass is the same on both species; and either pressure is constant or both species have the same molar mass. Under these conditions, a detailed kinetic-theory calculation shows how the diffusion equation from the kinetic theory of gases reduces to this version of Fick's law: formula_12 where Vi is the diffusion velocity of species i. In terms of species flux this is formula_13 If, additionally, formula_14, this reduces to the most common form of Fick's law, formula_15 If (instead of or in addition to formula_14) both species have the same molar mass, Fick's law becomes formula_16 where formula_17 is the mole fraction of species i. Fick's second law. Fick's second law predicts how diffusion causes the concentration to change with respect to time. It is a partial differential equation which in one dimension reads: formula_18 where φ is the concentration, t is time, D is the diffusion coefficient, and x is position. In two or more dimensions we must use the Laplacian Δ = ∇2, which generalises the second derivative, obtaining the equation formula_21 Fick's second law has the same mathematical form as the heat equation, and its fundamental solution is the same as the heat kernel, except for switching the thermal conductivity formula_22 with the diffusion coefficient formula_23: formula_24 Derivation of Fick's second law. Fick's second law can be derived from Fick's first law and mass conservation in the absence of any chemical reactions: formula_25 Assuming the diffusion coefficient D to be a constant, one can exchange the orders of the differentiation and multiply by the constant: formula_26 and thus obtain the form of Fick's equation as stated above. For the case of diffusion in two or more dimensions Fick's second law becomes formula_27 which is analogous to the heat equation. If the diffusion coefficient is not a constant, but depends upon the coordinate or concentration, Fick's second law yields formula_28 An important example is the case where φ is at a steady state, i.e. the concentration does not change with time, so that the left part of the above equation is identically zero. In one dimension with constant D, the solution for the concentration will be a linear change of concentration along x. In two or more dimensions we obtain formula_29 which is Laplace's equation, the solutions to which are referred to by mathematicians as harmonic functions. Example solutions and generalization. Fick's second law is a special case of the convection–diffusion equation in which there is no advective flux and no net volumetric source. It can be derived from the continuity equation: formula_30 where j is the total flux and R is a net volumetric source for φ.
The only source of flux in this situation is assumed to be "diffusive flux": formula_31 Plugging the definition of diffusive flux into the continuity equation and assuming there is no source ("R" = 0), we arrive at Fick's second law: formula_32 If flux were the result of both diffusive flux and advective flux, the convection–diffusion equation is the result. Example solution 1: constant concentration source and diffusion length. A simple case of diffusion with time t in one dimension (taken as the x-axis) from a boundary located at position "x" = 0, where the concentration is maintained at a value "n"0, is formula_33 where erfc is the complementary error function. This is the case when corrosive gases diffuse through the oxidative layer towards the metal surface (if we assume that the concentration of gases in the environment is constant and the diffusion space – that is, the corrosion product layer – is "semi-infinite", starting at 0 at the surface and spreading infinitely deep in the material). If, in its turn, the diffusion space is "infinite" (lasting both through the layer with "n"("x", 0) = 0, "x" > 0 and that with "n"("x", 0) = "n"0, "x" ≤ 0), then the solution is amended only with the coefficient 1/2 in front of "n"0 (as the diffusion now occurs in both directions). This case is valid when some solution with concentration "n"0 is put in contact with a layer of pure solvent. (Bokstein, 2005) The length 2√("Dt") is called the "diffusion length" and provides a measure of how far the concentration has propagated in the x-direction by diffusion in time t (Bird, 1976). As a quick approximation of the error function, the first two terms of the Taylor series can be used: formula_34 If D is time-dependent, the diffusion length becomes formula_35 This idea is useful for estimating a diffusion length over a heating and cooling cycle, where D varies with temperature. (A short numerical check of this constant-source solution is sketched at the end of this section.) Example solution 2: Brownian particle and mean squared displacement. Another simple case of diffusion is the Brownian motion of one particle. The particle's mean squared displacement (MSD) from its original position is: formula_36 where formula_37 is the dimension of the particle's Brownian motion. For example, the diffusion of a molecule across a cell membrane 8 nm thick is 1-D diffusion because of the spherical symmetry; however, the diffusion of a molecule from the membrane to the center of a eukaryotic cell is a 3-D diffusion. For a cylindrical cactus, the diffusion from photosynthetic cells on its surface to its center (the axis of its cylindrical symmetry) is a 2-D diffusion. The square root of MSD, formula_38, is often used as a characterization of how far the particle has moved after time formula_39 has elapsed. The MSD is symmetrically distributed over the 1D, 2D, and 3D space. Thus, the probability distribution of the magnitude of MSD in 1D is Gaussian, and in 3D it is a Maxwell–Boltzmann distribution. Generalizations. The Chapman–Enskog formulae for diffusion in gases include exactly the same terms. These physical models of diffusion are different from the test models ∂"t""φi" = Σ"j" "Dij" Δ"φj" which are valid for very small deviations from the uniform equilibrium. Earlier, such terms were introduced in the Maxwell–Stefan diffusion equation. For anisotropic multicomponent diffusion coefficients one needs a rank-four tensor, for example "D""ij","αβ", where "i", "j" refer to the components and "α", "β" = 1, 2, 3 correspond to the space coordinates.
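As promised above, here is a short numerical check of Example solution 1 (a sketch in assumed dimensionless units): an explicit finite-difference integration of Fick's second law converges to the erfc profile for a constant-concentration boundary at x = 0.

```python
import numpy as np
from math import erfc, sqrt

# Integrate d(phi)/dt = D * d2(phi)/dx2 with the boundary held at n0,
# then compare with n(x, t) = n0 * erfc(x / (2*sqrt(D*t))).
D, n0 = 1.0, 1.0
x = np.linspace(0.0, 10.0, 401)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / D          # explicit scheme stable for D*dt/dx**2 <= 1/2
n = np.zeros_like(x)
n[0] = n0                     # constant-concentration source at x = 0

t = 0.0
while t < 1.0:
    n[1:-1] += D * dt / dx**2 * (n[2:] - 2.0 * n[1:-1] + n[:-2])
    t += dt

analytic = np.array([n0 * erfc(xi / (2.0 * sqrt(D * t))) for xi in x])
print(np.abs(n - analytic).max())  # small; shrinks as the grid is refined
```

At t = 1 the diffusion length 2√(Dt) equals 2, so the chosen domain of length 10 is effectively semi-infinite and the far boundary does not disturb the comparison.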
Applications. Equations based on Fick's law have been commonly used to model transport processes in foods, neurons, biopolymers, pharmaceuticals, porous soils, population dynamics, nuclear materials, plasma physics, and semiconductor doping processes. The theory of voltammetric methods is based on solutions of Fick's equation. On the other hand, in some cases a "Fickian" description (the diffusion-theory approximation of the transport equation) is inadequate. For example, in polymer science and food science a more general approach is required to describe the transport of components in materials undergoing a glass transition. One more general framework is the Maxwell–Stefan diffusion equations of multi-component mass transfer, from which Fick's law can be obtained as a limiting case when the mixture is extremely dilute and every chemical species interacts only with the bulk mixture and not with other species. To account for the presence of multiple species in a non-dilute mixture, several variations of the Maxwell–Stefan equations are used. See also non-diagonal coupled transport processes (Onsager relationship). Fick's flow in liquids. When two miscible liquids are brought into contact and diffusion takes place, the macroscopic (or average) concentration evolves following Fick's law. On a mesoscopic scale, that is, between the macroscopic scale described by Fick's law and the molecular scale, where molecular random walks take place, fluctuations cannot be neglected. Such situations can be successfully modeled with Landau–Lifshitz fluctuating hydrodynamics. In this theoretical framework, diffusion is due to fluctuations whose dimensions range from the molecular scale to the macroscopic scale. In particular, fluctuating hydrodynamic equations include a Fick's flow term, with a given diffusion coefficient, along with hydrodynamics equations and stochastic terms describing fluctuations. When calculating the fluctuations with a perturbative approach, the zero-order approximation is Fick's law. The first order gives the fluctuations, and it comes out that fluctuations contribute to diffusion. This is somewhat of a tautology, since the phenomena described by a lower-order approximation are the result of a higher approximation: this problem is solved only by renormalizing the fluctuating hydrodynamics equations. Sorption rate and collision frequency of diluted solute. Adsorption, absorption, and collision of molecules, particles, and surfaces are important problems in many fields. These fundamental processes regulate chemical, biological, and environmental reactions. Their rate can be calculated using the diffusion constant and Fick's laws of diffusion, especially when these interactions happen in diluted solutions. Typically, the diffusion constant of molecules and particles defined by Fick's equation can be calculated using the Stokes–Einstein equation. In the ultrashort time limit, on the order of the diffusion time "a"2/"D", where "a" is the particle radius, the diffusion is described by the Langevin equation. At longer times, the Langevin equation merges into the Stokes–Einstein equation. The latter is appropriate for the condition of the diluted solution, where long-range diffusion is considered.
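Since the passage above points to the Stokes–Einstein equation as the usual route to the diffusion constant, here is a minimal sketch of that calculation, assuming a spherical particle; the default temperature, viscosity, and radius are illustrative values, not from the article.

```python
from math import pi

def stokes_einstein_D(T=298.15, eta=8.9e-4, radius=1.0e-9):
    """Diffusion constant D = kB*T / (6*pi*eta*r) for a sphere of radius r
    in a fluid of viscosity eta (defaults: water at room temperature, 1 nm sphere)."""
    kB = 1.380649e-23            # Boltzmann constant, J/K
    return kB * T / (6 * pi * eta * radius)

print(stokes_einstein_D())       # about 2.4e-10 m^2/s
```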
According to the fluctuation–dissipation theorem based on the Langevin equation in the long-time limit, and when the particle is significantly denser than the surrounding fluid, the time-dependent diffusion constant is formula_46 where (all in SI units) "μ" is the particle mobility, "k"B is the Boltzmann constant, "T" is the absolute temperature, "m" is the particle mass, and "t" is time. For a single molecule, such as an organic molecule or a biomolecule (e.g. a protein) in water, the exponential term is negligible due to the small product of "mμ" in the ultrafast picosecond region, and is thus irrelevant to the relatively slower adsorption of the diluted solute. The adsorption or absorption rate of a dilute solute to a surface or interface in a (gas or liquid) solution can be calculated using Fick's laws of diffusion. The accumulated number of molecules adsorbed on the surface is expressed by the Langmuir–Schaefer equation, obtained by integrating the diffusion flux equation over time, as shown in the simulated molecular diffusion in the first section of this page: formula_47 where formula_49 is the accumulated number of adsorbed molecules, "A" is the surface area, formula_48 is the bulk concentration, "D" is the diffusion constant, and "t" is the elapsed time. The equation is named after American chemists Irving Langmuir and Vincent Schaefer. Briefly, the concentration gradient profile near a newly created (from formula_50) absorptive surface (placed at formula_51) in a once uniform bulk solution is solved in the above sections from Fick's equation: formula_52 where formula_53 are the distance from the surface and the elapsed time, respectively. The concentration gradient at the subsurface at formula_54 simplifies to the pre-exponential factor of the distribution: formula_55 And the rate of diffusion (flux) across area formula_56 of the plane is formula_57 Integrating over time, formula_58 The Langmuir–Schaefer equation can be extended to the Ward–Tordai equation to account for the "back-diffusion" of rejected molecules from the surface: formula_59 where formula_48 is the bulk concentration, formula_60 is the sub-surface concentration (which is a function of time depending on the reaction model of the adsorption), and formula_61 is a dummy variable. Monte Carlo simulations show that these two equations work to predict the adsorption rate of systems that form predictable concentration gradients near the surface, but have trouble with systems lacking, or with unpredictable, concentration gradients, such as typical biosensing systems or when flow and convection are significant. A brief history of diffusive adsorption is shown in the figure at right. A noticeable challenge in understanding diffusive adsorption at the single-molecule level is the fractal nature of diffusion. Most computer simulations pick a time step for diffusion which ignores the fact that there are self-similar finer diffusion events (fractal) within each step. Simulating the fractal diffusion shows that a factor-of-two correction should be introduced for the result of a fixed-time-step adsorption simulation, bringing it into consistency with the above two equations. A more problematic aspect of the above equations is that they predict the lower limit of adsorption under ideal situations, while it is very difficult to predict the actual adsorption rates. The equations are derived at the long-time-limit condition, when a stable concentration gradient has been formed near the surface.
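The Langmuir–Schaefer integral above is straightforward to evaluate; the sketch below does so with illustrative numbers (the patch area, bulk concentration, and diffusion constant are assumptions, not values from the article).

```python
from math import pi, sqrt

def langmuir_schaefer(A, c_b, D, t):
    """Accumulated adsorbed molecules, Gamma = 2*A*c_b*sqrt(D*t/pi) (formula_47)."""
    return 2 * A * c_b * sqrt(D * t / pi)

A = 1e-12                          # m^2, a 1 um x 1 um sensor patch
c_b = 1e-9 * 6.022e23 * 1e3        # molecules/m^3 for a 1 nM solution
D = 1e-10                          # m^2/s, roughly a small protein
print(langmuir_schaefer(A, c_b, D, 60.0))   # ~53 molecules after one minute
```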
Real adsorption, however, often occurs much faster than this infinite-time limit: the concentration gradient, i.e. the decay of concentration at the sub-surface, is only partially formed before the surface has been saturated, or flow is present to maintain a certain gradient. The measured adsorption rate is therefore almost always faster than the equations predict for adsorption with a low or zero energy barrier (unless a significant adsorption energy barrier slows down the absorption); for example, it is thousands to millions of times faster in the self-assembly of monolayers at the water–air or water–substrate interfaces. As such, it is necessary to calculate the evolution of the concentration gradient near the surface and find a proper time to stop the imagined infinite evolution for practical applications. While it is hard to predict when to stop, it is reasonably easy to calculate the shortest time that matters: the critical time at which the first nearest neighbor from the substrate surface feels the building-up of the concentration gradient. This yields the upper limit of the adsorption rate under an ideal situation, when no factors other than diffusion affect the absorber dynamics: formula_62 where formula_63 is the adsorption rate, formula_64 is the surface area, formula_65 is the bulk concentration, and formula_66 is the diffusion constant. This equation can be used to predict the initial adsorption rate of any system; it can be used to predict the steady-state adsorption rate of a typical biosensing system, where the binding site is just a very small fraction of the substrate surface and a near-surface concentration gradient is never formed; and it can be used to predict the adsorption rate of molecules on the surface when there is a significant flow that keeps the concentration gradient very shallow in the sub-surface. This critical time is significantly different from the first-passenger arrival time or the mean-free-path time. Using the average first-passenger time together with Fick's law of diffusion to estimate the average binding rate will significantly over-estimate the concentration gradient, because the first passenger usually comes from many layers of neighbors away from the target, so its arrival time is significantly longer than the nearest-neighbor diffusion time. Using the mean-free-path time plus the Langmuir equation will cause an artificial concentration gradient between the initial location of the first passenger and the target surface, because the other neighbor layers have not yet changed, thus significantly underestimating the actual binding time; that is, the actual first-passenger arrival time itself, the inverse of the above rate, is difficult to calculate. If the system can be simplified to 1D diffusion, then the average first-passenger time can be calculated using the same nearest-neighbor critical diffusion time, taking the first-neighbor distance formula_68 to be the MSD: formula_67 Within this critical time it is unlikely that the first passenger has arrived and adsorbed, but it sets the speed at which the layers of neighbors arrive. At this speed, with a concentration gradient that stops around the first neighbor layer, the gradient does not extend virtually into the longer time at which the actual first passenger arrives.
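A minimal sketch of the upper-limit rate formula_62 above, reusing the same illustrative numbers as the Langmuir–Schaefer sketch; all values are assumptions, not from the article.

```python
from math import pi

def upper_limit_rate(A, c_b, D):
    """Ideal-case upper limit of the diffusive adsorption rate,
    <r> = (4/pi) * A * c_b**(4/3) * D (formula_62)."""
    return 4 / pi * A * c_b ** (4 / 3) * D

A, c_b, D = 1e-12, 6.022e17, 1e-10    # m^2, molecules/m^3 (1 nM), m^2/s
print(upper_limit_rate(A, c_b, D))    # molecules/s, an upper bound
```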
From this critical-time argument, the average first-passenger rate (unit: molecules/s) for this 3D diffusion problem simplified to 1D is formula_69 When the area of interest is the size of a molecule (specifically, a "long cylindrical molecule" such as DNA), the adsorption rate equation represents the collision frequency of two molecules in a diluted solution, with one molecule presenting a specific side and the other having no steric dependence, i.e., a molecule (in random orientation) hits one side of the other. The diffusion constant needs to be updated to the relative diffusion constant between the two diffusing molecules. This estimation is especially useful in studying the interaction between a small molecule and a larger molecule such as a protein. The effective diffusion constant is dominated by the smaller molecule, whose diffusion constant can be used instead. The above hitting-rate equation is also useful for predicting the kinetics of molecular self-assembly on a surface. Molecules are randomly oriented in the bulk solution. Assuming that 1/6 of the molecules have the right orientation toward the surface binding sites, i.e., pointing in half of the z-direction among the x, y, z dimensions, the concentration of interest is just 1/6 of the bulk concentration. Putting this value into the equation, one should be able to calculate the theoretical adsorption kinetic curve using the Langmuir adsorption model. In a more rigid picture, 1/6 can be replaced by the steric factor of the binding geometry. The bimolecular collision frequency related to many reactions, including protein coagulation/aggregation, was initially described by the Smoluchowski coagulation equation, proposed by Marian Smoluchowski in a seminal 1916 publication and derived from Brownian motion and Fick's laws of diffusion. Under an idealized reaction condition for A + B → product in a diluted solution, Smoluchowski suggested that the molecular flux at the infinite time limit can be calculated from Fick's laws of diffusion, yielding a fixed/stable concentration gradient from the target molecule; e.g., B is the target molecule held relatively fixed, and A is the moving molecule that creates a concentration gradient near the target molecule B due to the coagulation reaction between A and B. Smoluchowski calculated the collision frequency between A and B in the solution, with unit #/s/m3: formula_72 where formula_73 is the radius of the collision sphere, formula_74 is the relative diffusion constant between A and B, and formula_75 and formula_76 are the number concentrations of A and B, respectively. The reaction order of this bimolecular reaction is 2, which is the analogue of the result from collision theory obtained by replacing the moving speed of the molecule with the diffusive flux. In collision theory, the traveling time between A and B is proportional to the distance, which is a similar relationship for the diffusion case if the flux is fixed. However, under practical conditions, the concentration gradient near the target molecule evolves over time, with the molecular flux evolving as well, and on average the flux is much bigger than the infinite-time-limit flux Smoluchowski proposed. Before the first-passenger arrival time, Fick's equation predicts a concentration gradient over time which in reality has not yet built up. Thus, this Smoluchowski frequency represents the lower limit of the real collision frequency. In 2022, Chen calculated the upper limit of the collision frequency between A and B in a solution by assuming that the bulk concentration of the moving molecule is fixed beyond the first nearest neighbor of the target molecule; thus the concentration-gradient evolution stops at the first-nearest-neighbor layer, giving a stop time for calculating the actual flux.
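As a quick sketch of the Smoluchowski frequency formula_72 above (the radii, diffusivities, and concentrations are illustrative assumptions, not from the article):

```python
from math import pi

def smoluchowski_rate(R, D_r, c_a, c_b):
    """Smoluchowski collision frequency Z_AB = 4*pi*R*D_r*C_A*C_B, in #/s/m^3.
    R is the collision radius (sum of the radii); D_r the relative diffusivity."""
    return 4 * pi * R * D_r * c_a * c_b

c = 1e-6 * 6.022e23 * 1e3                     # molecules/m^3 for a 1 uM solution
print(smoluchowski_rate(2e-9, 2e-10, c, c))   # collisions per second per m^3
```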
Chen named this the critical time and derived the diffusive collision frequency, in unit #/s/m3: formula_77 where formula_78 is the cross-sectional area of the collision sphere. This equation assumes that the upper limit of the diffusive collision frequency between A and B is reached when the first neighbor layer starts to feel the evolution of the concentration gradient; its reaction order is 7/3 instead of 2. Both the Smoluchowski equation and the JChen equation satisfy dimensional checks with SI units, but the former depends on the radius and the latter on the area of the collision sphere. From dimensional analysis, there could also be an equation dependent on the volume of the collision sphere, but eventually all equations should converge to the same numerical collision rate, which can be measured experimentally. The actual reaction order for a bimolecular unit reaction could be between 2 and 7/3, which makes sense because the diffusive collision time depends on the square of the distance between the two molecules. Biological perspective. The first law gives rise to the following formula: formula_79 in which "P" is the permeability, an experimentally determined membrane "conductance" for a given gas at a given temperature, and "c"2 − "c"1 is the difference in concentration of the gas across the membrane in the direction of flow. Fick's first law is also important in radiation transfer equations. However, in this context, it becomes inaccurate when the diffusion constant is low and the radiation becomes limited by the speed of light rather than by the resistance of the material the radiation is flowing through. In this situation, one can use a flux limiter. The exchange rate of a gas across a fluid membrane can be determined by using this law together with Graham's law. Under the condition of a diluted solution, when diffusion takes control, the membrane permeability mentioned in the above section can be theoretically calculated for the solute using the equation mentioned in the last section (use it with particular care, because the equation is derived for dense solutes, while biological molecules are not denser than water; moreover, the equation assumes that an ideal concentration gradient forms near the membrane and evolves over time): formula_80 where formula_81 is the area fraction of pores on the membrane and formula_82 is the transmembrane efficiency of the solute. The flux decays with the square root of time because a concentration gradient builds up near the membrane over time under ideal conditions. When there is flow and convection, the flux can differ significantly from what the equation predicts and show an effective time "t" with a fixed value, which makes the flux stable instead of decaying over time. A critical time has been estimated under idealized flow conditions when no gradient is formed. This strategy is adopted in biology, for example in blood circulation. Semiconductor fabrication applications. "Semiconductor" is a collective term for a series of devices. It mainly includes three categories: two-terminal devices, three-terminal devices, and four-terminal devices. A combination of semiconductor devices is called an integrated circuit. The relationship between Fick's law and semiconductors is that the principle of semiconductor fabrication is transferring chemicals or dopants from one layer to another. Fick's law can be used to control and predict this diffusion, by quantifying mathematically how the concentration of the dopants or chemicals moves per meter and per second. Therefore, different types and levels of semiconductors can be fabricated. Integrated circuit fabrication technologies and model processes like CVD, thermal oxidation, wet oxidation, doping, etc. use diffusion equations obtained from Fick's law. CVD method of fabricating semiconductors. The wafer is a kind of semiconductor whose silicon substrate is coated with a layer of CVD-created polymer chains and films.
This film contains n-type and p-type dopants and is responsible for dopant conduction. The principle of CVD relies on the gas phase and gas–solid chemical reactions to create thin films. The viscous flow regime of CVD is driven by a pressure gradient. CVD also includes a diffusion component distinct from the surface diffusion of adatoms. In CVD, reactants and products must also diffuse through a boundary layer of stagnant gas that exists next to the substrate. The steps required for CVD film growth are: gas-phase diffusion of reactants through the boundary layer, adsorption and surface diffusion of adatoms, reactions on the substrate, and gas-phase diffusion of products away through the boundary layer. The velocity profile for the gas flow is: formula_83 where formula_84 is the boundary-layer thickness, formula_85 is the Reynolds number, and formula_86 is the gas viscosity ("v" being the flow velocity, "ρ" the gas density, and "L" the characteristic length). Integrating "x" from 0 to "L" gives the average boundary-layer thickness: formula_87 To keep the reaction balanced, reactants must diffuse through the stagnant boundary layer to reach the substrate, so a thin boundary layer is desirable. According to the equations, increasing "v"o would result in more wasted reactants, and the reactants will not reach the substrate uniformly if the flow becomes turbulent. Another option is to switch to a new carrier gas with lower viscosity or density. Fick's first law describes diffusion through the boundary layer. The diffusivity in a gas is determined as a function of pressure ("P") and temperature ("T"): formula_88 where formula_89 and formula_90 are a reference pressure and temperature, and formula_91 is the diffusivity measured at that reference state. The equation tells us that increasing the temperature or decreasing the pressure increases the diffusivity. Fick's first law predicts the flux of reactants to the substrate and of products away from the substrate: formula_92 where formula_93 is the distance across the boundary layer and formula_94 is the differential concentration of species "i". By the ideal gas law formula_95, the concentration of the gas can be expressed by its partial pressure: formula_96 where formula_97 is the partial-pressure gradient of species "i" across the boundary layer of thickness "δ". As a result, Fick's first law tells us that we can use a partial-pressure gradient to control the diffusivity and thus the growth of thin films of semiconductors. In many realistic situations, the simple Fick's law is not an adequate formulation for the semiconductor problem. It only applies to certain conditions, for example, given the semiconductor boundary conditions: constant source concentration diffusion, limited source concentration, or moving-boundary diffusion (where the junction depth keeps moving into the substrate). Invalidity of Fickian diffusion. Even though Fickian diffusion was used to model diffusion processes in semiconductor manufacturing (including CVD reactors) in the early days, it often fails to describe diffusion in advanced semiconductor nodes (< 90 nm). This mostly stems from the inability of Fickian diffusion to model diffusion processes accurately at the molecular level and below. In advanced semiconductor manufacturing, it is important to understand movement at atomic scales, which continuum diffusion fails to capture. Today, most semiconductor manufacturers use random walks to study and model diffusion processes. This allows the effects of diffusion to be studied in a discrete manner, down to the movement of individual atoms, molecules, plasma, etc. In such a process, each diffusing species (atom, molecule, plasma particle, etc.) is treated as a discrete entity following a random walk through the CVD reactor, boundary layer, material structures, etc. Sometimes, the movements might follow a biased random walk, depending on the processing conditions.
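As a toy illustration of the random-walk picture described above (a sketch, not a production CVD model; the step size, time step, and walker count are arbitrary assumptions), the following simulates many independent unbiased 1D walkers and checks that the ensemble mean squared displacement matches 2"Dt" with "D" = "dx"2/(2"dt"):

```python
import random

dx, dt = 1e-9, 1e-9              # hop length and time step (arbitrary)
nsteps, nwalkers = 2_000, 1_000
positions = [0.0] * nwalkers
for _ in range(nsteps):
    positions = [p + random.choice((-dx, dx)) for p in positions]

msd = sum(p * p for p in positions) / nwalkers
D = dx**2 / (2 * dt)             # kinetic-theory diffusivity of this walk
print(msd, 2 * D * nsteps * dt)  # the two numbers agree up to sampling noise
```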
Statistical analysis is then done to understand the variation/stochasticity arising from the random walk of the species, which in turn affects the overall process and the electrical variations. Food production and cooking. The formulation of Fick's first law can explain a variety of complex phenomena in the context of food and cooking: diffusion of molecules such as ethylene promotes plant growth and ripening, diffusion of salt and sugar molecules promotes meat brining and marinating, and diffusion of water molecules promotes dehydration. Fick's first law can also be used to predict the changing moisture profile across a spaghetti noodle as it hydrates during cooking. These phenomena are all about the spontaneous movement of solute particles driven by the concentration gradient. Each situation has its own diffusivity, which is a constant for that system. By controlling the concentration gradient, the cooking time, the shape of the food, and the salting can be controlled.
[ { "math_id": 0, "text": "J = -D \\frac{d \\varphi}{d x} " }, { "math_id": 1, "text": " \\frac{d \\varphi}{d x} " }, { "math_id": 2, "text": " \\mathbf{J}=- D\\nabla \\varphi" }, { "math_id": 3, "text": "\\mathbf{J}_i = -\\frac{\\rho D}{M_i}\\nabla y_i " }, { "math_id": 4, "text": "\\rho" }, { "math_id": 5, "text": "y_i = \\frac{\\rho_{si}}{\\rho}" }, { "math_id": 6, "text": "J_i = - \\frac{D c_i}{RT} \\frac{\\partial \\mu_i}{\\partial x}" }, { "math_id": 7, "text": "J_i = - \\frac{D}{RT} \\frac{\\partial f_i}{\\partial x}" }, { "math_id": 8, "text": " f_i " }, { "math_id": 9, "text": " f_i^\\text{G} " }, { "math_id": 10, "text": " f_i^\\text{L} " }, { "math_id": 11, "text": " f_i^\\text{G} = f_i^\\text{L} " }, { "math_id": 12, "text": " \\mathbf{V_i}=- D\\nabla \\ln y_i ," }, { "math_id": 13, "text": "\\mathbf{J_i}=- \\frac{\\rho D}{M_i}\\nabla y_i . " }, { "math_id": 14, "text": " \\nabla \\rho = 0" }, { "math_id": 15, "text": " \\mathbf{J_i}=- D\\nabla \\varphi ." }, { "math_id": 16, "text": "\\mathbf{J_i}=- \\frac{\\rho D}{M_i}\\nabla x_i, " }, { "math_id": 17, "text": " x_i " }, { "math_id": 18, "text": "\\frac{\\partial \\varphi}{\\partial t} = D\\,\\frac{\\partial^2 \\varphi}{\\partial x^2}" }, { "math_id": 19, "text": "[\\mathsf{N}\\mathsf{L}^{-3}]" }, { "math_id": 20, "text": "[\\mathsf{L}^2\\mathsf{T}^{-1}]" }, { "math_id": 21, "text": "\\frac{\\partial \\varphi}{\\partial t} = D\\Delta \\varphi" }, { "math_id": 22, "text": "k" }, { "math_id": 23, "text": "D" }, { "math_id": 24, "text": "\\varphi(x,t)=\\frac{1}{\\sqrt{4\\pi Dt}}\\exp\\left(-\\frac{x^2}{4Dt}\\right)." }, { "math_id": 25, "text": "\\frac{\\partial \\varphi}{\\partial t} + \\frac{\\partial}{\\partial x}J = 0\n\\Rightarrow\\frac{\\partial \\varphi}{\\partial t} -\\frac{\\partial}{\\partial x}\\left(D\\frac{\\partial}{\\partial x}\\varphi\\right)\\,=0" }, { "math_id": 26, "text": "\\frac{\\partial}{\\partial x}\\left(D\\frac{\\partial}{\\partial x} \\varphi\\right) = D\\frac{\\partial}{\\partial x} \\frac{\\partial}{\\partial x} \\varphi = D\\frac{\\partial^2\\varphi}{\\partial x^2}" }, { "math_id": 27, "text": "\\frac{\\partial \\varphi}{\\partial t} = D\\,\\nabla^2\\varphi," }, { "math_id": 28, "text": "\\frac{\\partial \\varphi}{\\partial t} = \\nabla \\cdot (D\\,\\nabla\\varphi)." }, { "math_id": 29, "text": " \\nabla^2\\varphi = 0," }, { "math_id": 30, "text": " \\frac{\\partial \\varphi}{\\partial t} + \\nabla\\cdot\\mathbf{j} = R, " }, { "math_id": 31, "text": "\\mathbf{j}_{\\text{diffusion}} = -D \\nabla \\varphi" }, { "math_id": 32, "text": "\\frac{\\partial \\varphi}{\\partial t} = D\\frac{\\partial^2 \\varphi}{\\partial x^2}" }, { "math_id": 33, "text": "n \\left(x,t \\right)=n_0 \\operatorname{erfc} \\left( \\frac{x}{2\\sqrt{Dt}}\\right) ." }, { "math_id": 34, "text": "n(x,t)=n_0 \\left[ 1 - 2 \\left(\\frac{x}{2\\sqrt{Dt\\pi}}\\right) \\right] " }, { "math_id": 35, "text": " 2\\sqrt{\\int_0^t D( \\tau ) \\,d\\tau}. 
" }, { "math_id": 36, "text": "\\text{MSD} \\equiv \\langle (\\mathbf{x}-\\mathbf{x_0})^2\\rangle=2nDt" }, { "math_id": 37, "text": "n" }, { "math_id": 38, "text": "\\sqrt{2nDt}" }, { "math_id": 39, "text": "t" }, { "math_id": 40, "text": "\\frac{\\partial \\varphi(x,t)}{\\partial t}=\\nabla\\cdot \\bigl(D(x) \\nabla \\varphi(x,t)\\bigr)=D(x) \\Delta \\varphi(x,t)+\\sum_{i=1}^3 \\frac{\\partial D(x)}{\\partial x_i} \\frac{\\partial \\varphi(x,t)}{\\partial x_i}" }, { "math_id": 41, "text": "J=-D \\nabla \\varphi ," }, { "math_id": 42, "text": " J_i=-\\sum_{j=1}^3 D_{ij} \\frac{\\partial \\varphi}{\\partial x_j}." }, { "math_id": 43, "text": "\\frac{\\partial \\varphi(x,t)}{\\partial t}=\\nabla\\cdot \\bigl(D \\nabla \\varphi(x,t)\\bigr)=\\sum_{i=1}^3\\sum_{j=1}^3D_{ij} \\frac{\\partial^2 \\varphi(x,t)}{\\partial x_i \\partial x_j}. " }, { "math_id": 44, "text": "\\frac{\\partial \\varphi(x,t)}{\\partial t}=\\nabla\\cdot \\bigl(D(x) \\nabla \\varphi(x,t)\\bigr)=\\sum_{i,j=1}^3\\left(D_{ij}(x) \\frac{\\partial^2 \\varphi(x,t)}{\\partial x_i \\partial x_j}+ \\frac{\\partial D_{ij}(x)}{\\partial x_i } \\frac{\\partial \\varphi(x,t)}{\\partial x_i}\\right). " }, { "math_id": 45, "text": "\\frac{\\partial \\varphi_i}{\\partial t} = \\sum_j \\nabla\\cdot\\left(D_{ij} \\frac{\\varphi_i}{\\varphi_j} \\nabla \\, \\varphi_j\\right) ." }, { "math_id": 46, "text": " D(t) = \\mu \\, k_{\\rm B} T\\left(1-e^{-t/(m\\mu)}\\right) " }, { "math_id": 47, "text": " \\Gamma= 2AC_b\\sqrt{\\frac{Dt}{\\pi}}" }, { "math_id": 48, "text": "C_b" }, { "math_id": 49, "text": " \\Gamma " }, { "math_id": 50, "text": "t=0" }, { "math_id": 51, "text": "x=0" }, { "math_id": 52, "text": " \\frac{\\partial C}{\\partial x} = \\frac{C_b}{\\sqrt{\\pi Dt}}\\text{exp}(-\\frac{x^2}{4Dt}) " }, { "math_id": 53, "text": " x, t " }, { "math_id": 54, "text": "x = 0" }, { "math_id": 55, "text": " (\\frac{\\partial C}{\\partial x}) _{x = 0} = \\frac{C_b}{\\sqrt{\\pi Dt}} " }, { "math_id": 56, "text": "A" }, { "math_id": 57, "text": " (\\frac{\\partial \\Gamma }{\\partial t}) _{x = 0} = -\\frac{DAC_b}{\\sqrt{\\pi Dt}} " }, { "math_id": 58, "text": " \\Gamma = \\int_0^t (\\frac{\\partial \\Gamma}{\\partial t}) _{x = 0} = 2AC_b\\sqrt{\\frac{Dt}{\\pi}} " }, { "math_id": 59, "text": " \\Gamma= 2A{C_\\text{b}}\\sqrt{\\frac{Dt}{\\pi}} - A\\sqrt{\\frac{D}{\\pi}}\\int_0^\\sqrt{t}\\frac{C(\\tau)}{\\sqrt{t-\\tau}} \\, d\\tau " }, { "math_id": 60, "text": "C" }, { "math_id": 61, "text": "\\tau" }, { "math_id": 62, "text": " \\langle r \\rangle = \\frac{4}{\\pi}Ac_b^{4/3}D" }, { "math_id": 63, "text": " \\langle r \\rangle " }, { "math_id": 64, "text": " A " }, { "math_id": 65, "text": " C_b " }, { "math_id": 66, "text": " D " }, { "math_id": 67, "text": "L = \\sqrt{2Dt} " }, { "math_id": 68, "text": "L~=C_b^{-1/3} " }, { "math_id": 69, "text": " <r> =a/t= 2aC_b^{2/3}D " }, { "math_id": 70, "text": " a" }, { "math_id": 71, "text": "4 \\pi L^2 /4" }, { "math_id": 72, "text": " Z_{AB} = 4{\\pi}RD_rC_AC_B" }, { "math_id": 73, "text": "R" }, { "math_id": 74, "text": "D_r = D_A + D_B" }, { "math_id": 75, "text": "C_A" }, { "math_id": 76, "text": "C_B" }, { "math_id": 77, "text": " Z_{AB} = \\frac{8}{\\pi}{\\sigma} D_rC_AC_B\\sqrt[3]{C_A+C_B} " }, { "math_id": 78, "text": "{\\sigma}" }, { "math_id": 79, "text": "\\text{flux} = {-P \\left(c_2 - c_1\\right)}" }, { "math_id": 80, "text": " P= 2A_p\\eta_{tm} \\sqrt{ D/(\\pi t)}" }, { "math_id": 81, "text": "A_P" }, { "math_id": 82, "text": "\\eta_{tm}" }, { "math_id": 83, "text": "\\delta(x) = \\left( 
\\frac{5x}{\\mathrm{Re}^{1/2}} \\right) \\mathrm{Re}=\\frac{v\\rho L}{\\eta}" }, { "math_id": 84, "text": "\\delta" }, { "math_id": 85, "text": "\\mathrm{Re}" }, { "math_id": 86, "text": "\\eta" }, { "math_id": 87, "text": "\\delta = \\frac{10L}{3\\mathrm{Re}^{1/2}}" }, { "math_id": 88, "text": "D = D_0 \\left(\\frac{P_0}{P}\\right) \\left(\\frac{T}{T_0}\\right)^{3/2}" }, { "math_id": 89, "text": "P_0" }, { "math_id": 90, "text": "T_0" }, { "math_id": 91, "text": "D_0" }, { "math_id": 92, "text": "J = -D_i \\left ( \\frac{dc_i}{dx} \\right )" }, { "math_id": 93, "text": "x" }, { "math_id": 94, "text": "dc_i" }, { "math_id": 95, "text": "PV = nRT" }, { "math_id": 96, "text": "J = - D_i \\left ( \\frac{P_i-P_0}{\\delta RT} \\right )" }, { "math_id": 97, "text": "\\frac{P_i-P_0}{\\delta}" } ]
https://en.wikipedia.org/wiki?curid=11671
1167800
Nilpotent matrix
Mathematical concept in algebra In linear algebra, a nilpotent matrix is a square matrix "N" such that formula_0 for some positive integer formula_1. The smallest such formula_1 is called the index of formula_2, sometimes the degree of formula_2. More generally, a nilpotent transformation is a linear transformation formula_3 of a vector space such that formula_4 for some positive integer formula_1 (and thus, formula_5 for all formula_6). Both of these concepts are special cases of a more general concept of nilpotence that applies to elements of rings. Examples. Example 1. The matrix formula_7 is nilpotent with index 2, since formula_8. Example 2. More generally, any formula_9-dimensional triangular matrix with zeros along the main diagonal is nilpotent, with index formula_10. For example, the matrix formula_11 is nilpotent, with formula_12 The index of formula_13 is therefore 4. Example 3. Although the examples above have a large number of zero entries, a typical nilpotent matrix does not. For example, formula_14 although the matrix has no zero entries. Example 4. Additionally, any matrices of the form formula_15 such as formula_16 or formula_17 square to zero. Example 5. Perhaps some of the most striking examples of nilpotent matrices are formula_18 square matrices of the form: formula_19 The first few of which are: formula_20 These matrices are nilpotent, but there are no zero entries in any of their powers below the index. Example 6. Consider the linear space of polynomials of a bounded degree. The derivative operator is a linear map. We know that applying the derivative to a polynomial decreases its degree by one, so when applying it iteratively, we will eventually obtain zero. Therefore, on such a space, the derivative is representable by a nilpotent matrix. Characterization. For an formula_21 square matrix formula_2 with real (or complex) entries, the following are equivalent: formula_2 is nilpotent; the characteristic polynomial of formula_2 is formula_22; the minimal polynomial of formula_2 is formula_23 for some positive integer formula_24; the only complex eigenvalue of formula_2 is 0; the trace of every positive power of formula_2 is zero. The last criterion holds true for matrices over any field of characteristic 0 or sufficiently large characteristic. (cf. Newton's identities) This theorem has several consequences, including: the index of an formula_21 nilpotent matrix is always at most formula_9 (for example, every nonzero formula_25 nilpotent matrix has index exactly 2), and the determinant and trace of a nilpotent matrix are always zero, so a nilpotent matrix is never invertible. See also: Jordan–Chevalley decomposition#Nilpotency criterion. Classification. Consider the formula_21 (upper) shift matrix: formula_26 This matrix has 1s along the superdiagonal and 0s everywhere else. As a linear transformation, the shift matrix "shifts" the components of a vector one position to the left, with a zero appearing in the last position: formula_27 This matrix is nilpotent with degree formula_9, and is the canonical nilpotent matrix. Specifically, if formula_2 is any nilpotent matrix, then formula_2 is similar to a block diagonal matrix of the form formula_28 where each of the blocks formula_29 is a shift matrix (possibly of different sizes). This form is a special case of the Jordan canonical form for matrices. For example, any nonzero 2 × 2 nilpotent matrix is similar to the matrix formula_30 That is, if formula_2 is any nonzero 2 × 2 nilpotent matrix, then there exists a basis "b"1, "b"2 such that "Nb"1 = 0 and "Nb"2 = "b"1. This classification theorem holds for matrices over any field. (It is not necessary for the field to be algebraically closed.) Flag of subspaces. A nilpotent transformation formula_3 on formula_31 naturally determines a flag of subspaces formula_32 and a signature formula_33 The signature characterizes formula_3 up to an invertible linear transformation.
Furthermore, it satisfies the inequalities formula_34 Conversely, any sequence of natural numbers satisfying these inequalities is the signature of a nilpotent transformation. Generalizations. A linear operator formula_35 is locally nilpotent if for every vector formula_36, there exists a formula_37 such that formula_38 For operators on a finite-dimensional vector space, local nilpotence is equivalent to nilpotence.
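As a small numerical check (a sketch, not part of the article), the following computes the nilpotency index by repeated multiplication, using the matrices of Examples 2 and 3 above; by the characterization above, only powers up to formula_9 need to be tested.

```python
import numpy as np

def nilpotency_index(N, tol=1e-12):
    """Smallest k with N**k == 0, or None if N is not nilpotent.
    For an n x n nilpotent matrix the index is at most n, so k <= n suffices."""
    n = N.shape[0]
    P = np.eye(n)
    for k in range(1, n + 1):
        P = P @ N
        if np.all(np.abs(P) < tol):
            return k
    return None

B = np.array([[0, 2, 1, 6], [0, 0, 1, 2], [0, 0, 0, 3], [0, 0, 0, 0]], float)
C = np.array([[5, -3, 2], [15, -9, 6], [10, -6, 4]], float)
print(nilpotency_index(B))   # 4, as in Example 2
print(nilpotency_index(C))   # 2, as in Example 3
```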
[ { "math_id": 0, "text": "N^k = 0\\," }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "N" }, { "math_id": 3, "text": "L" }, { "math_id": 4, "text": "L^k = 0" }, { "math_id": 5, "text": "L^j = 0" }, { "math_id": 6, "text": "j \\geq k" }, { "math_id": 7, "text": " \nA = \\begin{bmatrix} \n0 & 1 \\\\\n0 & 0 \n\\end{bmatrix}\n" }, { "math_id": 8, "text": "A^2 = 0" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "\\le n" }, { "math_id": 11, "text": " \nB=\\begin{bmatrix} \n0 & 2 & 1 & 6\\\\\n0 & 0 & 1 & 2\\\\\n0 & 0 & 0 & 3\\\\\n0 & 0 & 0 & 0 \n\\end{bmatrix}\n" }, { "math_id": 12, "text": "\nB^2=\\begin{bmatrix} \n0 & 0 & 2 & 7\\\\\n0 & 0 & 0 & 3\\\\\n0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 \n\\end{bmatrix}\n;\\ \n\nB^3=\\begin{bmatrix} \n0 & 0 & 0 & 6\\\\\n0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 \n\\end{bmatrix}\n;\\ \n\nB^4=\\begin{bmatrix} \n0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 \n\\end{bmatrix}\n" }, { "math_id": 13, "text": "B" }, { "math_id": 14, "text": " \nC=\\begin{bmatrix} \n5 & -3 & 2 \\\\\n15 & -9 & 6 \\\\\n10 & -6 & 4\n\\end{bmatrix}\n\\qquad\nC^2=\\begin{bmatrix} \n0 & 0 & 0 \\\\\n0 & 0 & 0 \\\\\n0 & 0 & 0\n\\end{bmatrix}\n" }, { "math_id": 15, "text": "\n\\begin{bmatrix}\na_1 & a_1 & \\cdots & a_1 \\\\\na_2 & a_2 & \\cdots & a_2 \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n-a_1-a_2-\\ldots-a_{n-1} & -a_1-a_2-\\ldots-a_{n-1} & \\ldots & -a_1-a_2-\\ldots-a_{n-1} \n\\end{bmatrix}" }, { "math_id": 16, "text": "\n\\begin{bmatrix}\n5 & 5 & 5 \\\\\n6 & 6 & 6 \\\\\n-11 & -11 & -11\n\\end{bmatrix}\n" }, { "math_id": 17, "text": "\\begin{bmatrix}\n1 & 1 & 1 & 1 \\\\\n2 & 2 & 2 & 2 \\\\\n4 & 4 & 4 & 4 \\\\\n-7 & -7 & -7 & -7\n\\end{bmatrix}\n" }, { "math_id": 18, "text": "n\\times n" }, { "math_id": 19, "text": "\\begin{bmatrix}\n2 & 2 & 2 & \\cdots & 1-n \\\\\nn+2 & 1 & 1 & \\cdots & -n \\\\\n1 & n+2 & 1 & \\cdots & -n \\\\\n1 & 1 & n+2 & \\cdots & -n \\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots\n\\end{bmatrix}" }, { "math_id": 20, "text": "\\begin{bmatrix}\n2 & -1 \\\\\n4 & -2\n\\end{bmatrix}\n\\qquad\n\\begin{bmatrix}\n2 & 2 & -2 \\\\\n5 & 1 & -3 \\\\\n1 & 5 & -3\n\\end{bmatrix}\n\\qquad\n\\begin{bmatrix}\n2 & 2 & 2 & -3 \\\\\n6 & 1 & 1 & -4 \\\\\n1 & 6 & 1 & -4 \\\\\n1 & 1 & 6 & -4\n\\end{bmatrix}\n\\qquad\n\\begin{bmatrix}\n2 & 2 & 2 & 2 & -4 \\\\\n7 & 1 & 1 & 1 & -5 \\\\\n1 & 7 & 1 & 1 & -5 \\\\\n1 & 1 & 7 & 1 & -5 \\\\\n1 & 1 & 1 & 7 & -5\n\\end{bmatrix}\n\\qquad\n\\ldots\n" }, { "math_id": 21, "text": "n \\times n" }, { "math_id": 22, "text": "\\det \\left(xI - N\\right) = x^n" }, { "math_id": 23, "text": "x^k" }, { "math_id": 24, "text": "k \\leq n" }, { "math_id": 25, "text": "2 \\times 2" }, { "math_id": 26, "text": "S = \\begin{bmatrix} \n 0 & 1 & 0 & \\ldots & 0 \\\\\n 0 & 0 & 1 & \\ldots & 0 \\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n 0 & 0 & 0 & \\ldots & 1 \\\\\n 0 & 0 & 0 & \\ldots & 0\n\\end{bmatrix}." }, { "math_id": 27, "text": "S(x_1,x_2,\\ldots,x_n) = (x_2,\\ldots,x_n,0)." }, { "math_id": 28, "text": " \\begin{bmatrix} \n S_1 & 0 & \\ldots & 0 \\\\ \n 0 & S_2 & \\ldots & 0 \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n 0 & 0 & \\ldots & S_r \n\\end{bmatrix} " }, { "math_id": 29, "text": "S_1,S_2,\\ldots,S_r" }, { "math_id": 30, "text": " \\begin{bmatrix} \n 0 & 1 \\\\\n 0 & 0\n\\end{bmatrix}. 
" }, { "math_id": 31, "text": "\\mathbb{R}^n" }, { "math_id": 32, "text": " \\{0\\} \\subset \\ker L \\subset \\ker L^2 \\subset \\ldots \\subset \\ker L^{q-1} \\subset \\ker L^q = \\mathbb{R}^n" }, { "math_id": 33, "text": " 0 = n_0 < n_1 < n_2 < \\ldots < n_{q-1} < n_q = n,\\qquad n_i = \\dim \\ker L^i. " }, { "math_id": 34, "text": " n_{j+1} - n_j \\leq n_j - n_{j-1}, \\qquad \\mbox{for all } j = 1,\\ldots,q-1. " }, { "math_id": 35, "text": "T" }, { "math_id": 36, "text": "v" }, { "math_id": 37, "text": "k\\in\\mathbb{N}" }, { "math_id": 38, "text": "T^k(v) = 0.\\!\\," } ]
https://en.wikipedia.org/wiki?curid=1167800
11678446
List of representations of e
The mathematical constant "e" can be represented in a variety of ways as a real number. Since "e" is an irrational number (see proof that e is irrational), it cannot be represented as the quotient of two integers, but it can be represented as a continued fraction. Using calculus, "e" may also be represented as an infinite series, infinite product, or other types of limit of a sequence. As a continued fraction. Euler proved that the number "e" is represented as the infinite simple continued fraction (sequence in the OEIS): formula_0 Its convergence can be tripled by allowing just one fractional number: formula_1 Here are some infinite generalized continued fraction expansions of "e". The second is generated from the first by a simple equivalence transformation. formula_2 formula_3 This last, equivalent to [1; 0.5, 12, 5, 28, 9, ...], is a special case of a general formula for the exponential function: formula_4 As an infinite series. The number "e" can be expressed as the sum of the following infinite series: formula_5 for any real number "x". In the special case where "x" = 1 or −1, we have: formula_6, and formula_7 Other series include the following: formula_8 formula_9 formula_10 formula_11 formula_12 formula_13 formula_14 where formula_15 is the "n"th Bell number. formula_16 Consideration of how to put upper bounds on "e" leads to this descending series: formula_17 which gives at least one correct (or rounded-up) digit per term. That is, if 1 ≤ "n", then formula_18 More generally, if "x" is not in {2, 3, 4, 5, ...}, then formula_19 As a recursive function. The series representation of formula_20, given as formula_21 can also be expressed using a form of recursion. When formula_22 is iteratively factored from the original series, the result is the nested series formula_23 which equates to formula_24 This fraction is of the form formula_25, where formula_26 computes the sum of the terms from formula_27 to formula_28. As an infinite product. The number "e" is also given by several infinite product forms, including Pippenger's product formula_29 and Guillera's product formula_30 where the "n"th factor is the "n"th root of the product formula_31 as well as the infinite product formula_32 More generally, if 1 < "B" < "e"2 (which includes "B" = 2, 3, 4, 5, 6, or 7), then formula_33 Also formula_34 As the limit of a sequence. The number "e" is equal to the limit of several infinite sequences: formula_35 and formula_36 (both by Stirling's formula). The symmetric limit formula_37 may be obtained by manipulation of the basic limit definition of "e". The next two definitions are direct corollaries of the prime number theorem: formula_38 where formula_39 is the "n"th prime and formula_40 is the primorial of the "n"th prime, and formula_41 where formula_42 is the prime-counting function. Also: formula_43 In the special case that formula_44, the result is the famous statement: formula_45 The ratio of the factorial formula_46, which counts all permutations of an ordered set "S" with cardinality formula_47, and the subfactorial (a.k.a. the derangement function) formula_48, which counts the number of permutations where no element appears in its original position, tends to formula_20 as formula_47 grows: formula_49 As a ratio of ratios. A unique representation of "e" can be found within the structure of Pascal's triangle, as discovered by Harlan Brothers. Pascal's triangle is composed of binomial coefficients, which are traditionally summed to derive polynomial expansions.
However, Brothers identified a product-based relationship between these coefficients that links to "e". Specifically, the ratio of the products of binomial coefficients in adjacent rows of Pascal's triangle tends to "e" as the row number increases. This relationship and its proof are outlined in the discussion of the properties of the rows of Pascal's triangle. In trigonometry. Trigonometrically, "e" can be written in terms of the sum of two hyperbolic functions, formula_50 at "x" = 1.
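As a quick numerical sketch of several representations above (an illustration, not part of the article; the truncation depths are arbitrary choices): the factorial series converges fast, the limit definition slowly, and the nested recursive form matches the series.

```python
from math import factorial

# Factorial series: 18 terms already give full double precision
print(sum(1 / factorial(k) for k in range(18)))   # 2.718281828459045

# Limit definition converges slowly
print((1 + 1 / 100_000) ** 100_000)               # 2.71826..., off in the 5th decimal

# Nested recursive form f(n) = 1 + f(n+1)/n, evaluated bottom-up from depth 20
f = 1.0
for n in range(20, 0, -1):
    f = 1 + f / n
print(f)                                          # 2.718281828459045
```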
[ { "math_id": 0, "text": "e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, \\ldots, 1, 2n, 1, \\ldots]. " }, { "math_id": 1, "text": " e = [1; 1/2, 12, 5, 28, 9, 44, 13, 60, 17, \\ldots, 4(4n-1), 4n+1, \\ldots]. " }, { "math_id": 2, "text": "\ne= 2+\\cfrac{1}{1+\\cfrac{1}{2+\\cfrac{2}{3+\\cfrac{3}{4+\\cfrac{4}{5+\\ddots}}}}} = 2+\\cfrac{2}{2+\\cfrac{3}{3+\\cfrac{4}{4+\\cfrac{5}{5+\\cfrac{6}{6+\\ddots\\,}}}}}\n" }, { "math_id": 3, "text": "e = 2+\\cfrac{1}{1+\\cfrac{2}{5+\\cfrac{1}{10+\\cfrac{1}{14+\\cfrac{1}{18+\\ddots\\,}}}}} = 1+\\cfrac{2}{1+\\cfrac{1}{6+\\cfrac{1}{10+\\cfrac{1}{14+\\cfrac{1}{18+\\ddots\\,}}}}}" }, { "math_id": 4, "text": "e^{x/y} = 1+\\cfrac{2x} {2y-x+\\cfrac{x^2} {6y+\\cfrac{x^2} {10y+\\cfrac{x^2} {14y+\\cfrac{x^2} {18y+\\ddots}}}}}" }, { "math_id": 5, "text": "e^x = \\sum_{k=0}^\\infty \\frac{x^k}{k!} " }, { "math_id": 6, "text": "e = \\sum_{k=0}^\\infty \\frac{1}{k!}" }, { "math_id": 7, "text": "e^{-1} = \\sum_{k=0}^\\infty \\frac{(-1)^k}{k!}." }, { "math_id": 8, "text": "e = \\left [ \\sum_{k=0}^\\infty \\frac{1-2k}{(2k)!} \\right ]^{-1}" }, { "math_id": 9, "text": "e = \\frac{1}{2} \\sum_{k=0}^\\infty \\frac{k+1}{k!}" }, { "math_id": 10, "text": "e = 2 \\sum_{k=0}^\\infty \\frac{k+1}{(2k+1)!}" }, { "math_id": 11, "text": "e = \\sum_{k=0}^\\infty \\frac{3-4k^2}{(2k+1)!}" }, { "math_id": 12, "text": "e = \\sum_{k=0}^\\infty \\frac{(3k)^2+1}{(3k)!} = \\sum_{k=0}^\\infty \\frac{(3k+1)^2+1}{(3k+1)!} = \\sum_{k=0}^\\infty \\frac{(3k+2)^2+1}{(3k+2)!}" }, { "math_id": 13, "text": "e = \\left [ \\sum_{k=0}^\\infty \\frac{4k+3}{2^{2k+1}\\,(2k+1)!} \\right ]^2" }, { "math_id": 14, "text": "e = \\sum_{k=0}^\\infty \\frac{k^n}{B_n(k!)}" }, { "math_id": 15, "text": "B_n" }, { "math_id": 16, "text": "e = \\sum_{k=0}^\\infty \\frac{2k+3}{(k+2)!}" }, { "math_id": 17, "text": "e = 3 - \\sum_{k=2}^\\infty \\frac{1}{k! (k-1) k} = 3 - \\frac{1}{4} - \\frac{1}{36} - \\frac{1}{288} - \\frac{1}{2400} - \\frac{1}{21600} - \\frac{1}{211680} - \\frac{1}{2257920} - \\cdots " }, { "math_id": 18, "text": "e < 3 - \\sum_{k=2}^n \\frac{1}{k! (k-1) k} < e + 0.6 \\cdot 10^{1-n} \\,." }, { "math_id": 19, "text": "e^x = \\frac{2+x}{2-x} + \\sum_{k=2}^\\infty \\frac{- x^{k+1}}{k! (k-x) (k+1-x)} \\,." }, { "math_id": 20, "text": "e" }, { "math_id": 21, "text": "e = \\frac{1}{0!} + \\frac{1}{1!} + \\frac{1}{2!} + \\frac{1}{3!} \\cdots" }, { "math_id": 22, "text": "\\frac{1}{n}" }, { "math_id": 23, "text": "e = 1 + \\frac{1}{1}(1 + \\frac{1}{2}(1 + \\frac{1}{3}(1 + \\cdots )))" }, { "math_id": 24, "text": "e = 1 + \\cfrac{1 + \\cfrac{1 + \\cfrac{1 + \\cdots }{3}}{2}}{1}" }, { "math_id": 25, "text": "f(n) = 1 + \\frac{f(n + 1)}{n}" }, { "math_id": 26, "text": "f(1)" }, { "math_id": 27, "text": "1" }, { "math_id": 28, "text": "\\infty" }, { "math_id": 29, "text": " e= 2 \\left ( \\frac{2}{1} \\right )^{1/2} \\left ( \\frac{2}{3}\\; \\frac{4}{3} \\right )^{1/4} \\left ( \\frac{4}{5}\\; \\frac{6}{5}\\; \\frac{6}{7}\\; \\frac{8}{7} \\right )^{1/8} \\cdots " }, { "math_id": 30, "text": " e = \\left ( \\frac{2}{1} \\right )^{1/1} \\left (\\frac{2^2}{1 \\cdot 3} \\right )^{1/2} \\left (\\frac{2^3 \\cdot 4}{1 \\cdot 3^3} \\right )^{1/3} \n\\left (\\frac{2^4 \\cdot 4^4}{1 \\cdot 3^6 \\cdot 5} \\right )^{1/4} \\cdots ," }, { "math_id": 31, "text": "\\prod_{k=0}^n (k+1)^{(-1)^{k+1}{n \\choose k}}," }, { "math_id": 32, "text": " e = \\frac{2\\cdot 2^{(\\ln(2)-1)^2} \\cdots}{2^{\\ln(2)-1}\\cdot 2^{(\\ln(2)-1)^3}\\cdots }." 
}, { "math_id": 33, "text": " e = \\frac{B\\cdot B^{(\\ln(B)-1)^2} \\cdots}{B^{\\ln(B)-1}\\cdot B^{(\\ln(B)-1)^3}\\cdots }." }, { "math_id": 34, "text": " e = \\lim\\limits_{n\\rightarrow\\infty}\\prod_{k=0}^n{n \\choose k}^{2/{((n\n+\\alpha)(n+\\beta))}}\\ \\forall\\alpha,\\beta\\in\\Bbb R" }, { "math_id": 35, "text": " e= \\lim_{n \\to \\infty} n\\cdot\\left ( \\frac{\\sqrt{2 \\pi n}}{n!} \\right )^{1/n} " }, { "math_id": 36, "text": " e=\\lim_{n \\to \\infty} \\frac{n}{\\sqrt[n]{n!}} " }, { "math_id": 37, "text": "e=\\lim_{n \\to \\infty} \\left [ \\frac{(n+1)^{n+1}}{n^n}- \\frac{n^n}{(n-1)^{n-1}} \\right ]" }, { "math_id": 38, "text": "e= \\lim_{n \\to \\infty}(p_n \\#)^{1/p_n} " }, { "math_id": 39, "text": " p_n " }, { "math_id": 40, "text": " p_n \\# " }, { "math_id": 41, "text": "e= \\lim_{n \\to \\infty}n^{\\pi(n)/n} " }, { "math_id": 42, "text": " \\pi(n) " }, { "math_id": 43, "text": "e^x= \\lim_{n \\to \\infty}\\left (1+ \\frac{x}{n} \\right )^n." }, { "math_id": 44, "text": "x = 1" }, { "math_id": 45, "text": "e= \\lim_{n \\to \\infty}\\left (1+ \\frac{1}{n} \\right )^n." }, { "math_id": 46, "text": "n!" }, { "math_id": 47, "text": "n" }, { "math_id": 48, "text": "!n" }, { "math_id": 49, "text": "e= \\lim_{n \\to \\infty} \\frac{n!}{!n}." }, { "math_id": 50, "text": "e^x = \\sinh(x) + \\cosh(x) ," } ]
https://en.wikipedia.org/wiki?curid=11678446
11680428
Seawanhaka Corinthian Yacht Club
The Seawanhaka Corinthian Yacht Club is one of the older yacht clubs in the Western Hemisphere, ranking 18th after the Royal Nova Scotia Yacht Squadron, New York Yacht Club, Royal Bermuda Yacht Club, Mobile Yacht Club, Pass Christian Yacht Club, Southern Yacht Club, Biloxi Yacht Club, Royal Canadian Yacht Club, Buffalo Yacht Club, Neenah Nodaway Yacht Club, Raritan Yacht Club, Detroit Boat Club, Detroit Yacht Club, San Francisco Yacht Club, Portland Yacht Club, New Hamburg Yacht Club, Eastern Yacht Club, and Milwaukee Yacht Club. It is located in Centre Island, New York, with access to Long Island Sound. History. The Seawanhaka Corinthian Yacht Club was founded (as the "Seawanhaka Yacht Club") in September 1871 aboard the sloop "Glance", anchored off Centre Island. "Glance"'s captain, William L. Swan, was elected Seawanhaka's first Commodore. Charles E. Willis became the Vice Commodore, Frederic de P. Foster was assigned as the first Secretary, Gerard Beekman as the Treasurer, and William Foulke as the Measurer. For many years, club meetings were held aboard this flagship. In the 1880s the club maintained a clubhouse and anchorage at Stapleton, Staten Island, near the clubhouse of the New York Yacht Club. On February 1, 1887, it was incorporated under the latter name. In 1881 Seawanhaka held Cup races from New York harbor to Sandy Hook, NJ. Burgee. The club's triangular blue burgee has 12 white stars, eight in a horizontal direction and four others crossing vertically. The design was made to perpetuate the memory of the 12 founders. Clubhouses. In 1881, the club leased space on Centre Island, and the word "Corinthian" was incorporated into the club's name. In 1887 the organization leased a club house in Manhattan. Finally, in 1891–1892, the club returned to Centre Island, where a new club house was opened, and the club merged with the Oyster Bay Yacht Club. Recognizing its important history, the Seawanhaka Corinthian Yacht Club was listed on the National Register of Historic Places in 1974. Seawanhaka Rule. In 1882, the club adopted a rating rule that would govern all its races: formula_0 Simply known as the "Seawanhaka Rule", it served as a rating for all eastern-seaboard races from 1887 onwards, including the America's Cup from 1893 to 1903. The Load Waterline Length was usually placed under a class limit, where any amount beyond the limit was counted double. In the 1893 America's Cup the limit was set at 85 ft, so the Load Waterline Length of an 86 ft yacht would have counted as 87 ft. Junior Club. The Seawanhaka Corinthian Junior Yacht Club (SCJYC) was incorporated in 1936 as one of the first junior yacht clubs on Long Island Sound. The new organization built on decades of less formal junior sailing programs at Seawanhaka and was intended to give the juniors an independent club and clubhouse (also completed in 1936). Over its history SCJYC has produced many sailing champions, but its most central mission has always been to produce lifelong sailors. In 2017 US Sailing awarded SCJYC Sailing Director Tomas Ruiz DeLuque the Captain Joe Prosser Award for exceptional service to sailing.
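A small sketch of the Seawanhaka Rule above, including the doubling of waterline length beyond a class limit described in the text; the sail-area figure is an arbitrary illustrative value, not from the article.

```python
from math import sqrt

def seawanhaka_rating(lwl_ft, sail_area_sqft, class_limit_ft=None):
    """Rating = (LWL + sqrt(sail area)) / 2, with LWL beyond the class limit
    counted double, as in the 1893 America's Cup example in the text."""
    if class_limit_ft is not None and lwl_ft > class_limit_ft:
        lwl_ft = class_limit_ft + 2 * (lwl_ft - class_limit_ft)
    return (lwl_ft + sqrt(sail_area_sqft)) / 2

# An 86 ft waterline counts as 87 ft against the 85 ft limit:
print(seawanhaka_rating(86, 10_000, class_limit_ft=85))   # (87 + 100) / 2 = 93.5
```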
[ { "math_id": 0, "text": "Rating=\\frac{Load\\ Waterline\\ Length+\\sqrt{Sail\\ Area}}2" } ]
https://en.wikipedia.org/wiki?curid=11680428
11680645
Interval order
In mathematics, especially order theory, the interval order for a collection of intervals on the real line is the partial order corresponding to their left-to-right precedence relation: one interval, "I"1, is considered less than another, "I"2, if "I"1 is completely to the left of "I"2. More formally, a countable poset formula_1 is an interval order if and only if there exists a bijection from formula_2 to a set of real intervals, formula_3, such that for any formula_4 we have formula_5 in formula_6 exactly when formula_7. Such posets may be equivalently characterized as those with no induced subposet isomorphic to a pair of two-element chains, in other words as the formula_0-free posets. Fully written out, this means that for any two pairs of elements formula_8 and formula_9, one must have formula_10 or formula_11. The subclass of interval orders obtained by restricting the intervals to those of unit length, so that they all have the form formula_12, is precisely the semiorders. The complement of the comparability graph of an interval order (formula_2, ≤) is the interval graph formula_13. Interval orders should not be confused with the interval-containment orders, which are the inclusion orders on intervals on the real line (equivalently, the orders of dimension ≤ 2). Interval orders and dimension. An unsolved problem in mathematics: what is the complexity of determining the order dimension of an interval order? An important parameter of partial orders is order dimension: the dimension of a partial order formula_6 is the least number of linear orders whose intersection is formula_6. For interval orders, dimension can be arbitrarily large. And while the problem of determining the dimension of general partial orders is known to be NP-hard, determining the dimension of an interval order remains a problem of unknown computational complexity. A related parameter is interval dimension, which is defined analogously, but in terms of interval orders instead of linear orders. Thus, the interval dimension of a partially ordered set formula_1 is the least integer formula_14 for which there exist interval orders formula_15 on formula_2 with formula_16 exactly when formula_17 and formula_18. The interval dimension of an order is never greater than its order dimension. Combinatorics. In addition to being isomorphic to formula_0-free posets, unlabeled interval orders on formula_19 are also in bijection with a subset of fixed-point-free involutions on ordered sets with cardinality formula_20. These are the involutions with no so-called left- or right-neighbor nestings, where, for any involution formula_21 on formula_22, a left nesting is an formula_23 such that formula_24 and a right nesting is an formula_23 such that formula_25. Such involutions, according to semi-length, have the ordinary generating function formula_26 The coefficient of formula_27 in the expansion of formula_28 gives the number of unlabeled interval orders of size formula_29. The sequence of these numbers (sequence in the OEIS) begins 1, 2, 5, 15, 53, 217, 1014, 5335, 31240, 201608, 1422074, 10886503, 89903100, 796713190, 7541889195, 75955177642, …
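A short sketch (not part of the article) that expands the generating function formula_26 with truncated polynomial arithmetic and recovers the counts quoted above; the truncation order is an arbitrary choice.

```python
from math import comb

def interval_order_counts(N):
    """Coefficients of F(t) = sum_{n>=0} prod_{i=1..n} (1 - (1-t)^i), mod t^(N+1).
    counts[n] is the number of unlabeled interval orders on n elements."""
    def mul(a, b):                       # truncated polynomial product
        c = [0] * (N + 1)
        for i, ai in enumerate(a):
            if ai:
                for j, bj in enumerate(b):
                    if i + j <= N:
                        c[i + j] += ai * bj
        return c

    counts = [0] * (N + 1)
    prod = [1] + [0] * N                 # empty product, the n = 0 summand
    counts[0] += 1
    for i in range(1, N + 1):            # summands with n > N cannot reach degree <= N
        factor = [0] * (N + 1)           # 1 - (1-t)^i, truncated
        for k in range(1, min(i, N) + 1):
            factor[k] = (-1) ** (k + 1) * comb(i, k)
        prod = mul(prod, factor)
        counts = [x + y for x, y in zip(counts, prod)]
    return counts

print(interval_order_counts(8))   # [1, 1, 2, 5, 15, 53, 217, 1014, 5335]
```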
[ { "math_id": 0, "text": "(2+2)" }, { "math_id": 1, "text": "P = (X, \\leq)" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": " x_i \\mapsto (\\ell_i, r_i) " }, { "math_id": 4, "text": "x_i, x_j \\in X" }, { "math_id": 5, "text": " x_i < x_j " }, { "math_id": 6, "text": "P" }, { "math_id": 7, "text": " r_i < \\ell_j " }, { "math_id": 8, "text": "a > b" }, { "math_id": 9, "text": "c > d" }, { "math_id": 10, "text": "a > d" }, { "math_id": 11, "text": "c > b" }, { "math_id": 12, "text": "(\\ell_i, \\ell_i + 1)" }, { "math_id": 13, "text": "(X, \\cap)" }, { "math_id": 14, "text": "k" }, { "math_id": 15, "text": "\\preceq_1, \\ldots, \\preceq_k" }, { "math_id": 16, "text": "x \\leq y" }, { "math_id": 17, "text": "x \\preceq_1 y, \\ldots," }, { "math_id": 18, "text": "x \\preceq_k y" }, { "math_id": 19, "text": "[n]" }, { "math_id": 20, "text": "2n" }, { "math_id": 21, "text": "f" }, { "math_id": 22, "text": "[2n]" }, { "math_id": 23, "text": "i \\in [2n]" }, { "math_id": 24, "text": " i < i+1 < f(i+1) < f(i)\n" }, { "math_id": 25, "text": " f(i) < f(i+1) < i < i+1 " }, { "math_id": 26, "text": "F(t) = \\sum_{n \\geq 0} \\prod_{i = 1}^n (1-(1-t)^i). " }, { "math_id": 27, "text": "t^n" }, { "math_id": 28, "text": "F(t)" }, { "math_id": 29, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=11680645
11680848
Kelvin bridge
Resistance measuring instrument A Kelvin bridge, also called a Kelvin double bridge and in some countries a Thomson bridge, is a measuring instrument used to measure unknown electrical resistances below 1 ohm. It is specifically designed to measure resistors that are constructed as four-terminal resistors. Historically, Kelvin bridges were used to measure shunt resistors for ammeters and sub-one-ohm reference resistors in metrology laboratories. In the scientific community, the Kelvin bridge paired with a null detector was used to achieve the highest precision and allowed the early detection of superconductivity. Background. Resistors above about 1 ohm in value can be measured using a variety of techniques, such as an ohmmeter or a Wheatstone bridge. In such resistors, the resistance of the connecting wires or terminals is negligible compared to the resistance value. For resistors of less than an ohm, the resistance of the connecting wires or terminals becomes significant, and conventional measurement techniques will include them in the result. To overcome the problems of these undesirable resistances (known as 'parasitic resistance'), very low value resistors, and particularly precision resistors and high-current ammeter shunts, are constructed as four-terminal resistors. These resistors have a pair of current terminals and a pair of potential or voltage terminals. In use, a current is passed between the current terminals, but the voltage drop across the resistor is measured at the potential terminals. The voltage drop measured will be entirely due to the resistor itself, as the parasitic resistance of the leads carrying the current to and from the resistor is not included in the potential circuit. To measure such resistances requires a bridge circuit designed to work with four-terminal resistances. That bridge is the Kelvin bridge. Principle of operation. The operation of the Kelvin bridge is very similar to the Wheatstone bridge, but uses two additional resistors. Resistors "R"1 and "R"2 are connected to the outside potential terminals of the four-terminal known or standard resistor "R""s" and the unknown resistor "R""x" (identified as "P"1 and "P"′1 in the diagram). The resistors "R""s", "R""x", "R"1 and "R"2 essentially form a Wheatstone bridge. In this arrangement, the parasitic resistance of the upper part of "R""s" and the lower part of "R""x" is outside of the potential-measuring part of the bridge and therefore is not included in the measurement. However, the link between "R""s" and "R""x" ("R"par) "is" included in the potential-measuring part of the circuit and therefore can affect the accuracy of the result. To overcome this, a second pair of resistors "R"′1 and "R"′2 form a second pair of arms of the bridge (hence 'double bridge') and are connected to the inner potential terminals of "R""s" and "R""x" (identified as "P"2 and "P"′2 in the diagram). The detector D is connected between the junction of "R"1 and "R"2 and the junction of "R"′1 and "R"′2. The balance equation of this bridge is given by the equation formula_0 In a practical bridge circuit, the ratio of "R"′1 to "R"′2 is arranged to be the same as the ratio of "R"1 to "R"2 (and in most designs, "R"1 = "R"′1 and "R"2 = "R"′2). As a result, the last term of the above equation becomes zero and the balance equation becomes formula_1 Rearranging to make "R""x" the subject: formula_2 The parasitic resistance "R"par has been eliminated from the balance equation, and its presence does not affect the measurement result.
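A small numerical sketch of the balance equations above: it evaluates the full balance equation, first with perfectly matched ratio arms (where the parasitic term vanishes) and then with a slight mismatch; the resistance values are illustrative assumptions, not from the article.

```python
def kelvin_bridge_rx(rs, r1, r2, r1p, r2p, r_par):
    """Unknown resistance from the full balance equation above:
    Rx/Rs = R2/R1 + (Rpar/Rs) * R1'/(R1' + R2' + Rpar) * (R2/R1 - R2'/R1')."""
    ratio = r2 / r1 + (r_par / rs) * (r1p / (r1p + r2p + r_par)) * (r2 / r1 - r2p / r1p)
    return rs * ratio

# Hypothetical values: 100 micro-ohm standard, 1000:100 ratio arms, 10 micro-ohm link
rs, r_par = 100e-6, 10e-6
print(kelvin_bridge_rx(rs, 100.0, 1000.0, 100.0, 1000.0, r_par))  # exactly Rs*R2/R1 = 1e-3
print(kelvin_bridge_rx(rs, 100.0, 1000.0, 100.0, 1001.0, r_par))  # tiny ratio-mismatch error
```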
This simplified equation is the same as that of the functionally equivalent Wheatstone bridge. In practical use, the magnitude of the supply B can be arranged to provide current through "R""s" and "R""x" at or close to the rated operating current of the smaller-rated resistor. This contributes to smaller errors in measurement. This current does not flow through the measuring bridge itself. This bridge can also be used to measure resistors of the more conventional two-terminal design. The bridge potential connections are merely connected as close to the resistor terminals as possible. Any measurement will then exclude all circuit resistance not within the two potential connections. Accuracy. The accuracy of measurements made using this bridge depends on a number of factors. The accuracy of the standard resistor ("R""s") is of prime importance. Also of importance is how close the ratio of "R"1 to "R"2 is to the ratio of "R"′1 to "R"′2. As shown above, if the ratio is exactly the same, the error caused by the parasitic resistance ("R"par) is eliminated. In a practical bridge, the aim is to make this ratio as close as possible, but it is not possible to make it "exactly" the same. If the difference in ratio is small enough, then the last term of the balance equation above becomes small enough to be negligible. Measurement accuracy is also increased by setting the current flowing through "R""s" and "R""x" to be as large as the ratings of those resistors allow. This gives the greatest potential difference between the innermost potential connections ("R"2 and "R"′2) of those resistors and consequently sufficient voltage for the change in "R"′1 and "R"′2 to have its greatest effect. Commercial Kelvin bridges initially used galvanometers, later replaced by micro-ammeters, and this was the limiting factor of the precision as the voltage difference approaches zero. A further improvement in precision was achieved using null detectors with nanovolt sensitivity. Some commercial bridges reach accuracies of better than 2% for resistance ranges from 1 microohm to 25 ohms; one such type is illustrated above. Modern digital meters achieve better than 0.25%. Laboratory bridges are usually constructed with high-accuracy variable resistors in the two potential arms of the bridge and achieve accuracies suitable for calibrating standard resistors. In such an application, the 'standard' resistor ("R""s") will in reality be a sub-standard type (that is, a resistor having an accuracy some 10 times better than the required accuracy of the standard resistor being calibrated). For such use, the error introduced by the mismatch of the ratio in the two potential arms would mean that the presence of the parasitic resistance "R"par could have a significant impact on the very high accuracy required. To minimise this problem, the current connections to the standard resistor ("R""x"), the sub-standard resistor ("R""s"), and the connection between them ("R"par) are designed to have as low a resistance as possible, and the connections both in the resistors and the bridge resemble bus bars more than wire. Some ohmmeters include Kelvin bridges in order to obtain large measurement ranges. Instruments for measuring sub-ohm values are often referred to as low-resistance ohmmeters, milli-ohmmeters, micro-ohmmeters, etc.
[ { "math_id": 0, "text": "\\frac{R_x}{R_s}=\\frac{R_2}{R_1}+\\frac{R_\\text{par}}{R_s} \\cdot \\frac{R'_1}{R'_1+R'_2+R_\\text{par}} \\cdot \\left( \\frac{R_2}{R_1}-\\frac{R'_2}{R'_1} \\right) " }, { "math_id": 1, "text": "\\frac{R_x}{R_s}=\\frac{R_2}{R_1}" }, { "math_id": 2, "text": " R_x=R_2 \\cdot \\frac{R_s}{R_1}" } ]
https://en.wikipedia.org/wiki?curid=11680848
11683021
Segmented regression
Segmented regression, also known as piecewise regression or broken-stick regression, is a method in regression analysis in which the independent variable is partitioned into intervals and a separate line segment is fit to each interval. Segmented regression analysis can also be performed on multivariate data by partitioning the various independent variables. Segmented regression is useful when the independent variables, clustered into different groups, exhibit different relationships between the variables in these regions. The boundaries between the segments are "breakpoints". Segmented linear regression is segmented regression whereby the relations in the intervals are obtained by linear regression. Segmented linear regression, two segments. Segmented linear regression with two segments separated by a "breakpoint" can be useful to quantify an abrupt change of the response function (Yr) of a varying influential factor (x). The breakpoint can be interpreted as a "critical", "safe", or "threshold" value beyond or below which (un)desired effects occur. The breakpoint can be important in decision making. The figures illustrate some of the results and regression types obtainable. A segmented regression analysis is based on the presence of a set of (y, x) data, in which y is the dependent variable and x the independent variable. The least squares method applied separately to each segment, by which the two regression lines are made to fit the data set as closely as possible while minimizing the "sum of squares of the differences" (SSD) between observed (y) and calculated (Yr) values of the dependent variable, results in the following two equations: Yr = A1 · x + K1 for x ≤ BP (the breakpoint), and Yr = A2 · x + K2 for x > BP, where: Yr is the expected (predicted) value of y for a certain value of x; A1 and A2 are regression coefficients (indicating the slope of the line segments); K1 and K2 are "regression constants" (indicating the intercept at the y-axis). The data may show many types of trends, see the figures. The method also yields two correlation coefficients (R): formula_0 and formula_1 where formula_2 is the minimized SSD per segment and Ya1 and Ya2 are the average values of y in the respective segments. In the determination of the most suitable trend, statistical tests must be performed to ensure that this trend is reliable (significant). When no significant breakpoint can be detected, one must fall back on a regression without breakpoint. Example. For the blue figure at the right that gives the relation between yield of mustard (Yr = Ym, t/ha) and soil salinity (x = Ss, expressed as electric conductivity of the soil solution EC in dS/m) it is found that: BP = 4.93, A1 = 0, K1 = 1.74, A2 = −0.129, K2 = 2.38, R1² = 0.0035 (insignificant), R2² = 0.395 (significant), giving: Ym = 1.74 for Ss ≤ 4.93 and Ym = 2.38 − 0.129 Ss for Ss > 4.93, indicating that soil salinities < 4.93 dS/m are safe and soil salinities > 4.93 dS/m reduce the yield by 0.129 t/ha per unit increase of soil salinity. The figure also shows confidence intervals and uncertainty as elaborated hereunder. Test procedures. Statistical tests are used to determine the type of trend. In addition, use is made of the correlation coefficient of all data (Ra), the coefficient of determination or coefficient of explanation, confidence intervals of the regression functions, and ANOVA analysis.
The coefficient of determination for all data (Cd), which is to be maximized under the conditions set by the significance tests, is found from: formula_3 where Yr is the expected (predicted) value of y according to the former regression equations and Ya is the average of all y values. The Cd coefficient ranges from 0 (no explanation at all) to 1 (full explanation, perfect match). In a pure, unsegmented, linear regression, the values of Cd and Ra² are equal. In a segmented regression, Cd needs to be significantly larger than Ra² to justify the segmentation. The optimal value of the breakpoint may be found such that the Cd coefficient is maximum. No-effect range. Segmented regression is often used to detect over which range an explanatory variable (X) has no effect on the dependent variable (Y), while beyond that range there is a clear response, be it positive or negative. The range of no effect may be found at the initial part of the X domain or, conversely, at its final part. For the "no effect" analysis, application of the least squares method for the segmented regression analysis may not be the most appropriate technique, because the aim is rather to find the longest stretch over which the Y-X relation can be considered to possess zero slope, while beyond that range the slope is significantly different from zero; knowledge about the best value of this slope is not material. The method to find the no-effect range is progressive partial regression over the range, extending the range with small steps until the regression coefficient becomes significantly different from zero. In the next figure the breakpoint is found at X = 7.9, while for the same data (see blue figure above for mustard yield), the least squares method yields a breakpoint only at X = 4.9. The latter value is lower, but the fit of the data beyond the breakpoint is better. Hence, it will depend on the purpose of the analysis which method needs to be employed.
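The breakpoint search described above lends itself to a simple brute-force implementation. The following Python sketch (the function name and simulated data are illustrative, not from the source) fits ordinary least squares on each side of every candidate breakpoint and keeps the one that maximizes Cd; dedicated tools typically also estimate the breakpoint continuously and test its significance.

import numpy as np

def two_segment_fit(x, y):
    """Brute-force two-segment linear regression: try each candidate
    breakpoint, fit ordinary least squares on both segments, and keep
    the breakpoint that maximizes the overall coefficient Cd."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    best = None
    for bp in np.unique(x)[2:-2]:                 # keep a few points on each side
        left, right = x <= bp, x > bp
        a1, k1 = np.polyfit(x[left], y[left], 1)   # slope, intercept per segment
        a2, k2 = np.polyfit(x[right], y[right], 1)
        yr = np.where(left, a1 * x + k1, a2 * x + k2)
        cd = 1 - np.sum((y - yr) ** 2) / np.sum((y - y.mean()) ** 2)
        if best is None or cd > best[0]:
            best = (cd, bp, (a1, k1), (a2, k2))
    return best

# Simulated data shaped like the mustard-yield example: flat, then declining
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 60)
y = np.where(x <= 5, 1.74, 2.38 - 0.129 * x) + rng.normal(0, 0.02, x.size)
cd, bp, seg1, seg2 = two_segment_fit(x, y)
print(f"Cd = {cd:.3f}, breakpoint near x = {bp:.2f}")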
[ { "math_id": 0, "text": "R_1 ^ 2 = 1 - \\frac{\\sum (y - Y_r) ^ 2 }{ \\sum (y - Y_{a1})^2}" }, { "math_id": 1, "text": "R_2 ^ 2 = 1 - \\frac{\\sum (y - Y_r) ^ 2 }{ \\sum (y - Y_{a2})^2}" }, { "math_id": 2, "text": " \\sum (y - Y_r) ^2 " }, { "math_id": 3, "text": "C_d=1-{\\sum (y-Y_r)^2\\over\\sum (y-Y_a)^2}" } ]
https://en.wikipedia.org/wiki?curid=11683021
1168486
Plücker coordinates
Method of assigning coordinates to every line in projective 3-space In geometry, Plücker coordinates, introduced by Julius Plücker in the 19th century, are a way to assign six homogeneous coordinates to each line in projective 3-space, "P"3. Because they satisfy a quadratic constraint, they establish a one-to-one correspondence between the 4-dimensional space of lines in "P"3 and points on a quadric in "P"5 (projective 5-space). A predecessor and special case of Grassmann coordinates (which describe "k"-dimensional linear subspaces, or "flats", in an "n"-dimensional Euclidean space), Plücker coordinates arise naturally in geometric algebra. They have proved useful for computer graphics, and also can be extended to coordinates for the screws and wrenches in the theory of kinematics used for robot control. Geometric intuition. A line L in 3-dimensional Euclidean space is determined by two distinct points that it contains, or by two distinct planes that contain it. Consider the first case, with points formula_0 and formula_1 The vector displacement from x to y is nonzero because the points are distinct, and represents the "direction" of the line. That is, every displacement between points on L is a scalar multiple of "d" = "y" – "x". If a physical particle of unit mass were to move from x to y, it would have a moment about the origin. The geometric equivalent to this moment is a vector whose direction is perpendicular to the plane containing L and the origin, and whose length equals twice the area of the triangle formed by the displacement and the origin. Treating the points as displacements from the origin, the moment is m = x × y, where "×" denotes the vector cross product. For a fixed line, L, the area of the triangle is proportional to the length of the segment between x and y, considered as the base of the triangle; it is not changed by sliding the base along the line, parallel to itself. By definition the moment vector is perpendicular to every displacement along the line, so d ⋅ m = 0, where "⋅" denotes the vector dot product. Although neither d nor m alone is sufficient to determine L, together the pair does so uniquely, up to a common (nonzero) scalar multiple which depends on the distance between x and y. That is, the coordinates formula_2 may be considered homogeneous coordinates for L, in the sense that all pairs (λd : λm), for λ ≠ 0, can be produced by points on L and only L, and any such pair determines a unique line so long as d is not zero and d ⋅ m = 0. Furthermore, this approach extends to include points, lines, and a plane "at infinity", in the sense of projective geometry. In addition a point formula_3 lies on the line L if and only if formula_4. Example. Let x = (2, 3, 7) and y = (2, 1, 0). Then (d : m) = (0 : −2 : −7 : −7 : 14 : −4). Alternatively, let the equations for points x of two distinct planes containing L be formula_5 Then their respective planes are perpendicular to vectors a and b, and the direction of L must be perpendicular to both. Hence we may set d = a × b, which is nonzero because a, b are neither zero nor parallel (the planes being distinct and intersecting). If point x satisfies both plane equations, then it also satisfies the linear combination formula_6 That is, formula_7 is a vector perpendicular to displacements to points on L from the origin; it is, in fact, a moment consistent with the d previously defined from a and b. "Proof 1": We need to show that formula_8 where r is the vector from the origin to a point on L.
Without loss of generality, let formula_9 Point B is the origin. Line L passes through point D and is orthogonal to the plane of the picture. The two planes pass through CD and DE and are both orthogonal to the plane of the picture. Points C and E are the closest points on those planes to the origin B, therefore angles ∠ "BCD" and ∠ "BED" are right angles and so the points B, C, D, E lie on a circle (due to a corollary of Thales's theorem). BD is the diameter of that circle. formula_10 Angle ∠ "BHF" is a right angle due to the following argument. Let ε := ∠ "BEC". Since △ "BEC" ≅ △ "BFG" (by side-angle-side congruence), then ∠ "BFG" = ε. Since ∠ "BEC" + ∠ "CED" = 90°, let ε' := 90° – ε = ∠ "CED". By the inscribed angle theorem, ∠ "DEC" = ∠ "DBC", so ∠ "DBC" = ε'. ∠ "HBF" + ∠ "BFH" + ∠ "FHB" = 180°; ε' + ε + ∠ "FHB" = 180°, ε + ε' = 90°; therefore, ∠ "FHB" = 90°. Then ∠ "DHF" must be a right angle as well. Angles ∠ "DCF", ∠ "DHF" are right angles, so the four points C, D, H, F lie on a circle, and (by the intersecting secants theorem) formula_11 that is, formula_12 "Proof 2": Let formula_9 This implies that formula_13 According to the vector triple product formula, formula_14 Then formula_15 When formula_16 the line L passes through the origin with direction d. If formula_17 the line has direction d; the plane that includes the origin and the line L has normal vector m; the line is tangent to a circle on that plane (normal to m and perpendicular to the plane of the picture) centered at the origin and with radius formula_18 Example. Let "a"0 = 2, a = (−1, 0, 0) and "b"0 = −7, b = (0, 7, −2). Then (d : m) = (0 : −2 : −7 : −7 : 14 : −4). Although the usual algebraic definition tends to obscure the relationship, (d : m) are the Plücker coordinates of L. Algebraic definition. Primal coordinates. In a 3-dimensional projective space "P"3, let L be a line through distinct points x and y with homogeneous coordinates ("x"0 : "x"1 : "x"2 : "x"3) and ("y"0 : "y"1 : "y"2 : "y"3). The Plücker coordinates pij are defined as follows: formula_19 (the skew-symmetric matrix whose elements are pij is also called the Plücker matrix). This implies "pii" = 0 and "pij" = −"pji", reducing the possibilities to only six (4 choose 2) independent quantities. The sextuple formula_20 is uniquely determined by L up to a common nonzero scale factor. Furthermore, not all six components can be zero. Thus the Plücker coordinates of L may be considered as homogeneous coordinates of a point in a 5-dimensional projective space, as suggested by the colon notation. To see these facts, let M be the 4×2 matrix with the point coordinates as columns. formula_21 The Plücker coordinate pij is the determinant of rows i and j of M. Because x and y are distinct points, the columns of M are linearly independent; M has rank 2. Let M′ be a second matrix, whose columns x′, y′ are a different pair of distinct points on L. Then the columns of M′ are linear combinations of the columns of M; so for some 2×2 nonsingular matrix Λ, formula_22 In particular, rows i and j of M′ and M are related by formula_23 Therefore, the determinant of the left side 2×2 matrix equals the product of the determinants of the right side 2×2 matrices, the latter of which is a fixed scalar, det Λ. Furthermore, all six 2×2 subdeterminants in M cannot be zero because the rank of M is 2. Plücker map. Denote the set of all lines (linear images of "P"1) in "P"3 by "G"1,3.
We thus have a map: formula_24 where formula_25 Dual coordinates. Alternatively, a line can be described as the intersection of two planes. Let L be a line contained in distinct planes a and b with homogeneous coefficients ("a"0 : "a"1 : "a"2 : "a"3) and ("b"0 : "b"1 : "b"2 : "b"3), respectively. (The first plane equation is formula_26 for example.) The dual Plücker coordinate pij is formula_27 Dual coordinates are convenient in some computations, and they are equivalent to primal coordinates: formula_28 Here, equality between the two vectors in homogeneous coordinates means that the numbers on the right side are equal to the numbers on the left side up to some common scaling factor λ. Specifically, let ("i", "j", "k", "ℓ") be an even permutation of (0, 1, 2, 3); then formula_29 Geometry. To relate back to the geometric intuition, take "x"0 = 0 as the plane at infinity; thus the coordinates of points "not" at infinity can be normalized so that "x"0 = 1. Then M becomes formula_30 and setting formula_0 and formula_31, we have formula_32 and formula_33. Dually, we have formula_34 and formula_35 Bijection between lines and Klein quadric. Plane equations. If the point formula_36 lies on L, then the columns of formula_37 are linearly dependent, so that the rank of this larger matrix is still 2. This implies that all 3×3 submatrices have determinant zero, generating four (4 choose 3) plane equations, such as formula_38 The four possible planes obtained are as follows. formula_39 Using dual coordinates, and letting ("a"0 : "a"1 : "a"2 : "a"3) be the line coefficients, each of these is simply "ai" = "pij", or formula_40 Each Plücker coordinate appears in two of the four equations, each time multiplying a different variable; and as at least one of the coordinates is nonzero, we are guaranteed non-vacuous equations for two distinct planes intersecting in L. Thus the Plücker coordinates of a line determine that line uniquely, and the map α is an injection. Quadratic relation. The image of α is not the complete set of points in "P"5; the Plücker coordinates of a line L satisfy the quadratic Plücker relation formula_41 For proof, write this homogeneous polynomial as determinants and use Laplace expansion (in reverse). formula_42 Since both 3×3 determinants have duplicate columns, the right hand side is identically zero. Another proof may be done like this: Since vector formula_43 is perpendicular to vector formula_44 (see above), the scalar product of d and m must be zero. q.e.d. Point equations. Letting ("x"0 : "x"1 : "x"2 : "x"3) be the point coordinates, four possible points on a line each have coordinates "xi" = "pij", for "j" = 0, 1, 2, 3. Some of these possible points may be inadmissible because all coordinates are zero, but since at least one Plücker coordinate is nonzero, at least two distinct points are guaranteed. Bijectivity. If formula_45 are the homogeneous coordinates of a point in "P"5, without loss of generality assume that "q"01 is nonzero. Then the matrix formula_46 has rank 2, and so its columns are distinct points defining a line L. When the "P"5 coordinates, qij, satisfy the quadratic Plücker relation, they are the Plücker coordinates of L. To see this, first normalize "q"01 to 1. Then we immediately have that for the Plücker coordinates computed from M, "p"ij = "q"ij, except for formula_47 But if the qij satisfy the Plücker relation formula_48 then "p"23 = "q"23, completing the set of identities.
Consequently, α is a surjection onto the algebraic variety consisting of the set of zeros of the quadratic polynomial formula_49 And since α is also an injection, the lines in "P"3 are thus in bijective correspondence with the points of this quadric in "P"5, called the Plücker quadric or Klein quadric. Uses. Plücker coordinates allow concise solutions to problems of line geometry in 3-dimensional space, especially those involving incidence. Line-line crossing. Two lines in "P"3 are either skew or coplanar, and in the latter case they are either coincident or intersect in a unique point. If pij and p′ij are the Plücker coordinates of two lines, then they are coplanar precisely when formula_50 as shown by formula_51 When the lines are skew, the sign of the result indicates the sense of crossing: positive if a right-handed screw takes L into L′, else negative. The quadratic Plücker relation essentially states that a line is coplanar with itself. Line-line join. In the event that two lines are coplanar but not parallel, their common plane has equation formula_52 where formula_53 The slightest perturbation will destroy the existence of a common plane, and near-parallelism of the lines will cause numeric difficulties in finding such a plane even if it does exist. Line-line meet. Dually, two coplanar lines, neither of which contains the origin, have common point formula_54 To handle lines not meeting this restriction, see the references. Plane-line meet. Given a plane with equation formula_55 or, more concisely, formula_56 and given a line not in it with Plücker coordinates (d : m), then their point of intersection is formula_57 The point coordinates, ("x"0 : "x"1 : "x"2 : "x"3), can also be expressed in terms of Plücker coordinates as formula_58 Point-line join. Dually, given a point ("y"0 : y) and a line not containing it, their common plane has equation formula_59 The plane coordinates, ("a"0 : "a"1 : "a"2 : "a"3), can also be expressed in terms of dual Plücker coordinates as formula_60 Line families. Because the Klein quadric is in "P"5, it contains linear subspaces of dimensions one and two (but no higher). These correspond to one- and two-parameter families of lines in "P"3. For example, suppose L, L′ are distinct lines in "P"3 determined by points x, y and x′, y′, respectively. Linear combinations of their determining points give linear combinations of their Plücker coordinates, generating a one-parameter family of lines containing L and "L"′. This corresponds to a one-dimensional linear subspace belonging to the Klein quadric. Lines in plane. If three distinct and non-parallel lines are coplanar, their linear combinations generate a two-parameter family of lines, all the lines in the plane. This corresponds to a two-dimensional linear subspace belonging to the Klein quadric. Lines through point. If three distinct and non-coplanar lines intersect in a point, their linear combinations generate a two-parameter family of lines, all the lines through the point. This also corresponds to a two-dimensional linear subspace belonging to the Klein quadric. Ruled surface. A ruled surface is a family of lines that is not necessarily linear. It corresponds to a curve on the Klein quadric.
For example, a hyperboloid of one sheet is a quadric surface in "P"3 ruled by two different families of lines, one line of each passing through each point of the surface; each family corresponds under the Plücker map to a conic section within the Klein quadric in "P"5. Line geometry. During the nineteenth century, "line geometry" was studied intensively. In terms of the bijection given above, this is a description of the intrinsic geometry of the Klein quadric. Ray tracing. Line geometry is extensively used in ray tracing applications, where the geometry and intersections of rays need to be calculated in 3D. An implementation is described in Introduction to Plücker Coordinates written for the Ray Tracing forum by Thouis Jones.
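The d ⋅ m = 0 relation and the coplanarity test above are easy to verify numerically. The following Python sketch (helper names are illustrative) computes (d : m) from two points, checks the quadratic Plücker relation on the example from the text, and applies the line-line crossing test d ⋅ m′ + m ⋅ d′ = 0:

import numpy as np

def plucker_from_points(x, y):
    """Plücker coordinates (d : m) of the line through 3-D points x and y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    d = y - x            # direction
    m = np.cross(x, y)   # moment about the origin
    return d, m

def coplanar(line1, line2, tol=1e-9):
    """Two lines are coplanar iff d . m' + m . d' = 0."""
    d1, m1 = line1
    d2, m2 = line2
    return abs(np.dot(d1, m2) + np.dot(m1, d2)) < tol

# Example from the article: x = (2, 3, 7), y = (2, 1, 0)
d, m = plucker_from_points([2, 3, 7], [2, 1, 0])
print(d, m)                          # [ 0. -2. -7.] [-7. 14. -4.]
print(np.isclose(np.dot(d, m), 0))   # quadratic Plücker relation: d . m = 0

# Two lines sharing the point (2, 3, 7) are coplanar:
l1 = plucker_from_points([2, 3, 7], [2, 1, 0])
l2 = plucker_from_points([2, 3, 7], [1, 1, 1])
print(coplanar(l1, l2))              # True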
[ { "math_id": 0, "text": "x=(x_1,x_2,x_3)" }, { "math_id": 1, "text": "y=(y_1,y_2,y_3)." }, { "math_id": 2, "text": "(\\mathbf d : \\mathbf m ) = (d_1:d_2:d_3\\ :\\ m_1:m_2:m_3)" }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": "x \\times d = m" }, { "math_id": 5, "text": "\\begin{align}\n0 &= a + \\mathbf a \\cdot \\mathbf x, \\\\ \n0 &= b + \\mathbf b \\cdot \\mathbf x.\n\\end{align}" }, { "math_id": 6, "text": "\\begin{align}\n0 &= a (b + \\mathbf b \\cdot \\mathbf x) - b(a+ \\mathbf a \\cdot \\mathbf x) \\\\\n &= (a \\mathbf b - b \\mathbf a) \\cdot \\mathbf x\n\\end{align}" }, { "math_id": 7, "text": "\\mathbf m = a \\mathbf b - b \\mathbf a" }, { "math_id": 8, "text": "\\mathbf m = a \\mathbf b - b \\mathbf a = \\mathbf r \\times \\mathbf d = \\mathbf r \\times (\\mathbf a \\times \\mathbf b)." }, { "math_id": 9, "text": "\\mathbf a \\cdot \\mathbf a = \\mathbf b \\cdot \\mathbf b = 1." }, { "math_id": 10, "text": "\\begin{align}\n& \\mathbf a := \\frac{BE}{||BE||}, \\quad \\mathbf b := \\frac{BC}{||BC||}, \\quad \\mathbf r := BD; \\\\[4pt]\n& - \\! a = ||BE|| = ||BF||, \\quad -b = ||BC|| = ||BG||; \\\\[4pt]\n& \\mathbf m = a \\mathbf b - b \\mathbf a = FG \\\\[4pt]\n& || \\mathbf d || = || \\mathbf a \\times \\mathbf b || = \\sin\\angle FBG\n\\end{align}" }, { "math_id": 11, "text": "||BF|| \\, ||BC|| = ||BH|| \\, ||BD||" }, { "math_id": 12, "text": "\\begin{align}\n&ab \\sin\\angle FBG = ||BH|| \\, || \\mathbf r || \\sin\\angle FBG , \\\\[4pt]\n& 2 \\, \\text{Area}_{\\triangle BFG} = ab \\sin\\angle FBG = ||BH|| \\, ||FG|| = ||BH|| \\, || \\mathbf r || \\sin\\angle FBG, \\\\[4pt]\n& || \\mathbf m || = ||FG|| = || \\mathbf r || \\sin\\angle FBG = || \\mathbf r || \\, || \\mathbf d ||, \\\\[4pt]\n& \\mathbf m = \\mathbf r \\times \\mathbf d. \\blacksquare\n\\end{align}" }, { "math_id": 13, "text": "a = -||BE||, \\quad b = -||BC||." }, { "math_id": 14, "text": "\\mathbf r \\times (\\mathbf a \\times \\mathbf b) = (\\mathbf r \\cdot \\mathbf b) \\mathbf a - (\\mathbf r \\cdot \\mathbf a) \\mathbf b." }, { "math_id": 15, "text": "\\begin{align}\n\\mathbf r \\times (\\mathbf a \\times \\mathbf b) \n &= \\mathbf a \\, || \\mathbf r || \\, || \\mathbf b || \\cos\\angle DBC - \\mathbf b \\, ||\\mathbf r || \\, || \\mathbf a || \\cos\\angle DBE \\\\[4pt]\n &= \\mathbf a \\, || \\mathbf r || \\cos\\angle DBC - \\mathbf b \\, || \\mathbf r || \\cos\\angle DBE \\\\[4pt]\n &= \\mathbf a \\, || BC || - \\mathbf b \\, || BE || \\\\[4pt]\n &= -b \\mathbf a - (-a) \\mathbf b \\\\[4pt]\n &= a \\mathbf b - b \\mathbf a\\ \\ \\blacksquare\n\\end{align}" }, { "math_id": 16, "text": "|| \\mathbf r || = 0," }, { "math_id": 17, "text": "|| \\mathbf r || > 0," }, { "math_id": 18, "text": "|| \\mathbf r ||." }, { "math_id": 19, "text": "p_{ij} = \\begin{vmatrix} x_{i} & y_{i} \\\\ x_{j} & y_{j}\\end{vmatrix} = x_{i}y_{j}-x_{j}y_{i} . " }, { "math_id": 20, "text": "(p_{01}:p_{02}:p_{03}:p_{23}:p_{31}:p_{12}) " }, { "math_id": 21, "text": " M = \\begin{bmatrix} x_0 & y_0 \\\\ x_1 & y_1 \\\\ x_2 & y_2 \\\\ x_3 & y_3 \\end{bmatrix}" }, { "math_id": 22, "text": " M' = M\\Lambda . " }, { "math_id": 23, "text": " \\begin{bmatrix} x'_{i} & y'_{i}\\\\x'_{j}& y'_{j} \\end{bmatrix} = \\begin{bmatrix} x_{i} & y_{i}\\\\x_{j}& y_{j} \\end{bmatrix} \\begin{bmatrix} \\lambda_{00} & \\lambda_{01} \\\\ \\lambda_{10} & \\lambda_{11} \\end{bmatrix} . 
" }, { "math_id": 24, "text": "\\begin{align}\n\\alpha \\colon \\mathrm{G}_{1,3} & \\rightarrow \\mathbb P^5 \\\\\nL & \\mapsto L^{\\alpha},\n\\end{align}" }, { "math_id": 25, "text": " L^{\\alpha}=(p_{01}:p_{02}:p_{03}:p_{23}:p_{31}:p_{12}) . " }, { "math_id": 26, "text": "\\sum_k a^k x_k =0," }, { "math_id": 27, "text": "p^{ij} = \\begin{vmatrix} a^{i} & a^{j} \\\\ b^{i} & b^{j}\\end{vmatrix} = a^{i}b^{j}-a^{j}b^{i} . " }, { "math_id": 28, "text": "\n(p_{01}:p_{02}:p_{03}:p_{23}:p_{31}:p_{12})=\n(p^{23}:p^{31}:p^{12}:p^{01}:p^{02}:p^{03})\n" }, { "math_id": 29, "text": "p_{ij} = \\lambda p^{k\\ell} . " }, { "math_id": 30, "text": " M = \\begin{bmatrix} 1 & 1 \\\\ x_1 & y_1 \\\\ x_2& y_2 \\\\ x_3 & y_3 \\end{bmatrix} , " }, { "math_id": 31, "text": "y=(y_1,y_2,y_3)" }, { "math_id": 32, "text": "d=(p_{01},p_{02},p_{03})" }, { "math_id": 33, "text": "m=(p_{23},p_{31},p_{12})" }, { "math_id": 34, "text": "d=(p^{23},p^{31},p^{12})" }, { "math_id": 35, "text": "m=(p^{01},p^{02},p^{03})." }, { "math_id": 36, "text": "\\mathbf z = (z_0:z_1:z_2:z_3)" }, { "math_id": 37, "text": " \\begin{bmatrix} x_0 & y_0 & z_0 \\\\ x_1 & y_1 & z_1 \\\\ x_2 & y_2 & z_2 \\\\ x_3 & y_3 & z_3 \\end{bmatrix} " }, { "math_id": 38, "text": "\n\\begin{align}\n0 & = \\begin{vmatrix} x_0 & y_0 & z_0 \\\\ x_1 & y_1 & z_1 \\\\ x_2 & y_2 & z_2 \\end{vmatrix} \\\\[5pt]\n& = \\begin{vmatrix} x_1 & y_1 \\\\ x_2 & y_2 \\end{vmatrix} z_0 - \\begin{vmatrix} x_0 & y_0 \\\\ x_2 & y_2 \\end{vmatrix} z_1 + \\begin{vmatrix} x_0 & y_0 \\\\ x_1 & y_1 \\end{vmatrix} z_2 \\\\[5pt]\n& = p_{12} z_0 - p_{02} z_1 + p_{01} z_2 . \\\\[5pt]\n& = p^{03} z_0 + p^{13} z_1 + p^{23} z_2 .\n\\end{align}\n" }, { "math_id": 39, "text": " \\begin{matrix}\n0 & = & {}+ p_{12} z_0 & {}- p_{02} z_1 & {}+ p_{01} z_2 & \\\\\n0 & = & {}- p_{31} z_0 & {}- p_{03} z_1 & & {}+ p_{01} z_3 \\\\\n0 & = & {}+p_{23} z_0 & & {}- p_{03} z_2 & {}+ p_{02} z_3 \\\\\n0 & = & & {}+p_{23} z_1 & {}+ p_{31} z_2 & {}+ p_{12} z_3\n\\end{matrix} " }, { "math_id": 40, "text": " 0 = \\sum_{i=0}^3 p^{ij} z_i , \\qquad j = 0,\\ldots,3 . 
" }, { "math_id": 41, "text": "\n\\begin{align}\n0 & = p_{01}p^{01}+p_{02}p^{02}+p_{03}p^{03} \\\\\n& = p_{01}p_{23}+p_{02}p_{31}+p_{03}p_{12}.\n\\end{align}\n" }, { "math_id": 42, "text": "\n\\begin{align}\n0 & = \\begin{vmatrix}x_0&y_0\\\\x_1&y_1\\end{vmatrix}\\begin{vmatrix}x_2&y_2\\\\x_3&y_3\\end{vmatrix}+\n\\begin{vmatrix}x_0&y_0\\\\x_2&y_2\\end{vmatrix}\\begin{vmatrix}x_3&y_3\\\\x_1&y_1\\end{vmatrix}+\n\\begin{vmatrix}x_0&y_0\\\\x_3&y_3\\end{vmatrix}\\begin{vmatrix}x_1&y_1\\\\x_2&y_2\\end{vmatrix} \\\\[5pt]\n& = (x_0 y_1-y_0 x_1)\\begin{vmatrix}x_2&y_2\\\\x_3&y_3\\end{vmatrix}-\n(x_0 y_2-y_0 x_2)\\begin{vmatrix}x_1&y_1\\\\x_3&y_3\\end{vmatrix}+\n(x_0 y_3-y_0 x_3)\\begin{vmatrix}x_1&y_1\\\\x_2&y_2\\end{vmatrix} \\\\[5pt]\n& = x_0 \\left(y_1\\begin{vmatrix}x_2&y_2\\\\x_3&y_3\\end{vmatrix}-\ny_2\\begin{vmatrix}x_1&y_1\\\\x_3&y_3\\end{vmatrix}+\ny_3\\begin{vmatrix}x_1&y_1\\\\x_2&y_2\\end{vmatrix}\\right)\n-y_0 \\left(x_1\\begin{vmatrix}x_2&y_2\\\\x_3&y_3\\end{vmatrix}-\nx_2\\begin{vmatrix}x_1&y_1\\\\x_3&y_3\\end{vmatrix}+\nx_3\\begin{vmatrix}x_1&y_1\\\\x_2&y_2\\end{vmatrix}\\right) \\\\[5pt]\n& = x_0 \\begin{vmatrix}x_1&y_1&y_1\\\\x_2&y_2&y_2\\\\x_3&y_3&y_3\\end{vmatrix}\n-y_0 \\begin{vmatrix}x_1&x_1&y_1\\\\x_2&x_2&y_2\\\\x_3&x_3&y_3\\end{vmatrix}\n\\end{align}\n" }, { "math_id": 43, "text": " d = \\left( p_{01}, p_{02}, p_{03} \\right) " }, { "math_id": 44, "text": " m = \\left( p_{23}, p_{31}, p_{12} \\right) " }, { "math_id": 45, "text": "(q_{01}:q_{02}:q_{03}:q_{23}:q_{31}:q_{12})" }, { "math_id": 46, "text": " M = \\begin{bmatrix} q_{01} & 0 \\\\ 0 & q_{01} \\\\ -q_{12} & q_{02} \\\\ q_{31} & q_{03} \\end{bmatrix} " }, { "math_id": 47, "text": " p_{23} = - q_{03} q_{12} - q_{02} q_{31} . " }, { "math_id": 48, "text": "q_{23} + q_{02}q_{31} + q_{03}q_{12} = 0," }, { "math_id": 49, "text": " p_{01}p_{23}+p_{02}p_{31}+p_{03}p_{12} . " }, { "math_id": 50, "text": "\\mathbf d \\cdot \\mathbf m' + \\mathbf m \\cdot \\mathbf d' = 0," }, { "math_id": 51, "text": "\n\\begin{align}\n0 & = p_{01}p'_{23} + p_{02}p'_{31} + p_{03}p'_{12} + p_{23}p'_{01} + p_{31}p'_{02} + p_{12}p'_{03} \\\\[5pt]\n& = \\begin{vmatrix}x_0&y_0&x'_0&y'_0\\\\\nx_1&y_1&x'_1&y'_1\\\\\nx_2&y_2&x'_2&y'_2\\\\\nx_3&y_3&x'_3&y'_3\\end{vmatrix}.\n\\end{align}\n" }, { "math_id": 52, "text": "0 = (\\mathbf m \\cdot \\mathbf d')x_0 + (\\mathbf d \\times \\mathbf d')\\cdot \\mathbf x," }, { "math_id": 53, "text": "x=(x_1,x_2,x_3)." }, { "math_id": 54, "text": "(x_0:\\mathbf x) = (\\mathbf d \\cdot \\mathbf m': \\mathbf m \\times \\mathbf m')." }, { "math_id": 55, "text": " 0 = a^0x_0 + a^1x_1 + a^2x_2 + a^3x_3 , " }, { "math_id": 56, "text": "0 = a^0x_0 + \\mathbf a \\cdot \\mathbf x;" }, { "math_id": 57, "text": "(x_0 : \\mathbf x) = (\\mathbf a \\cdot \\mathbf d : \\mathbf a \\times \\mathbf m - a_0\\mathbf d) ." }, { "math_id": 58, "text": " x_i = \\sum_{j \\ne i} a^j p_{ij} , \\qquad i = 0 \\ldots 3 . " }, { "math_id": 59, "text": "0 = (\\mathbf y \\cdot \\mathbf m) x_0 + (\\mathbf y \\times \\mathbf d - y_0 \\mathbf m)\\cdot \\mathbf x." }, { "math_id": 60, "text": " a^i = \\sum_{j \\ne i} y_j p^{ij} , \\qquad i = 0 \\ldots 3 . " } ]
https://en.wikipedia.org/wiki?curid=1168486
11685115
Overlap–add method
In signal processing, the overlap–add method is an efficient way to evaluate the discrete convolution of a very long signal formula_0 with a finite impulse response (FIR) filter formula_1: y[n] = x[n] * h[n] = Σm h[m] · x[n − m] = Σ (m from 1 to M) h[m] · x[n − m],     (Eq.1) where formula_2 for formula_3 outside the region formula_4  This article uses common abstract notations, such as formula_5 or formula_6 in which it is understood that the functions should be thought of in their totality, rather than at specific instants formula_7 (see Convolution#Notation). The concept is to divide the problem into multiple convolutions of formula_1 with short segments of formula_0: formula_8 where formula_9 is an arbitrary segment length. Then: formula_10 and formula_11 can be written as a sum of short convolutions: formula_12 where the linear convolution formula_13 is zero outside the region formula_14 And for any parameter formula_15 it is equivalent to the formula_16-point circular convolution of formula_17 with formula_18 in the region formula_19  The advantage is that the circular convolution can be computed more efficiently than linear convolution, according to the circular convolution theorem: y_k[n] = IDFT_N(DFT_N(x_k[n]) · DFT_N(h[n])),     (Eq.2) where DFT_N and IDFT_N denote an N-point discrete Fourier transform and its inverse, and the segment length is chosen such that formula_20. Pseudocode. The following is pseudocode for the algorithm:
("Overlap-add algorithm for linear convolution")
h = FIR_filter
M = length(h)
Nx = length(x)
N = 8 × 2^ceiling( log2(M) )    (8 times the smallest power of two bigger than filter length M. See next section for a slightly better choice.)
step_size = N - (M-1)    (L in the text above)
H = DFT(h, N)
position = 0
y(1 : Nx + M-1) = 0
while position + step_size ≤ Nx do
    y(position+(1:N)) = y(position+(1:N)) + IDFT(DFT(x(position+(1:step_size)), N) × H)
    position = position + step_size
end
Efficiency considerations. When the DFT and IDFT are implemented by the FFT algorithm, the pseudocode above requires about N (log2(N) + 1) complex multiplications for the FFT, product of arrays, and IFFT. Each iteration produces N−M+1 output samples, so the number of complex multiplications per output sample is about: N (log2(N) + 1) / (N − M + 1).     (Eq.3) For example, when formula_21 and formula_22 Eq.3 equals formula_23 whereas direct evaluation of Eq.1 would require up to formula_24 complex multiplications per output sample, the worst case being when both formula_25 and formula_26 are complex-valued. Also note that for any given formula_27 Eq.3 has a minimum with respect to formula_28 Figure 2 is a graph of the values of formula_16 that minimize Eq.3 for a range of filter lengths (formula_29). Instead of Eq.1, we can also consider applying Eq.2 to a long sequence of length formula_30 samples. The total number of complex multiplications would be: formula_31 Comparatively, the number of complex multiplications required by the pseudocode algorithm is: formula_32 Hence the "cost" of the overlap–add method scales almost as formula_33 while the cost of a single, large circular convolution is almost formula_34. The two methods are also compared in Figure 3, created by Matlab simulation. The contours are lines of constant ratio of the times it takes to perform both methods. When the overlap-add method is faster, the ratio exceeds 1, and ratios as high as 3 are seen.
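A minimal NumPy sketch of the pseudocode above follows (the default block size N reuses the pseudocode's heuristic); it is verified against NumPy's direct linear convolution:

import numpy as np

def overlap_add(x, h, N=None):
    """Linear convolution of a long signal x with an FIR filter h
    via the overlap-add method, following the pseudocode above."""
    M = len(h)
    if N is None:
        N = 8 * 2 ** int(np.ceil(np.log2(M)))   # heuristic from the pseudocode
    L = N - (M - 1)                              # step size
    H = np.fft.fft(h, N)
    y = np.zeros(len(x) + M - 1)
    for pos in range(0, len(x), L):
        seg = x[pos:pos + L]                     # short segment x_k (zero-padded by fft)
        yk = np.fft.ifft(np.fft.fft(seg, N) * H).real   # N-point circular convolution
        end = min(pos + N, len(y))
        y[pos:end] += yk[:end - pos]             # overlap and add
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(10000)
h = rng.standard_normal(201)
assert np.allclose(overlap_add(x, h), np.convolve(x, h))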
[ { "math_id": 0, "text": "x[n]" }, { "math_id": 1, "text": "h[n]" }, { "math_id": 2, "text": "h[m] = 0" }, { "math_id": 3, "text": "m" }, { "math_id": 4, "text": "[1,M]." }, { "math_id": 5, "text": "y(t) = x(t) * h(t)," }, { "math_id": 6, "text": "y(t) = \\mathcal{H}\\{x(t)\\}," }, { "math_id": 7, "text": "t" }, { "math_id": 8, "text": "x_k[n]\\ \\triangleq\\ \\begin{cases}\n x[n + kL], & n = 1, 2, \\ldots, L\\\\\n 0, & \\text{otherwise},\n\\end{cases}\n" }, { "math_id": 9, "text": "L" }, { "math_id": 10, "text": "x[n] = \\sum_{k} x_k[n - kL],\\," }, { "math_id": 11, "text": "y[n]" }, { "math_id": 12, "text": "\\begin{align}\n y[n] = \\left(\\sum_{k} x_k[n - kL]\\right) * h[n]\n &= \\sum_{k} \\left(x_k[n - kL] * h[n]\\right)\\\\\n &= \\sum_{k} y_k[n - kL],\n\\end{align}" }, { "math_id": 13, "text": "y_k[n]\\ \\triangleq\\ x_k[n] * h[n]\\," }, { "math_id": 14, "text": "[1,L+M-1]." }, { "math_id": 15, "text": "N \\ge L + M - 1,\\," }, { "math_id": 16, "text": "N" }, { "math_id": 17, "text": "x_k[n]\\," }, { "math_id": 18, "text": "h[n]\\," }, { "math_id": 19, "text": "[1,N]." }, { "math_id": 20, "text": "N=L+M-1" }, { "math_id": 21, "text": "M=201" }, { "math_id": 22, "text": "N=1024," }, { "math_id": 23, "text": "13.67," }, { "math_id": 24, "text": "201" }, { "math_id": 25, "text": "x" }, { "math_id": 26, "text": "h" }, { "math_id": 27, "text": "M," }, { "math_id": 28, "text": "N." }, { "math_id": 29, "text": "M" }, { "math_id": 30, "text": "N_x" }, { "math_id": 31, "text": "N_x\\cdot (\\log_2(N_x) + 1)." }, { "math_id": 32, "text": "N_x\\cdot (\\log_2(N) + 1)\\cdot \\frac{N}{N-M+1}." }, { "math_id": 33, "text": "O\\left(N_x\\log_2 N\\right)" }, { "math_id": 34, "text": "O\\left(N_x\\log_2 N_x \\right)" } ]
https://en.wikipedia.org/wiki?curid=11685115
1168608
Adelic algebraic group
Semitopological group in abstract algebra In abstract algebra, an adelic algebraic group is a semitopological group defined by an algebraic group "G" over a number field "K", and the adele ring "A" = "A"("K") of "K". It consists of the points of "G" having values in "A"; the definition of the appropriate topology is straightforward only in case "G" is a linear algebraic group. In the case of "G" being an abelian variety, it presents a technical obstacle, though it is known that the concept is potentially useful in connection with Tamagawa numbers. Adelic algebraic groups are widely used in number theory, particularly for the theory of automorphic representations, and the arithmetic of quadratic forms. In case "G" is a linear algebraic group, it is an affine algebraic variety in affine "N"-space. The topology on the adelic algebraic group formula_0 is taken to be the subspace topology in "A""N", the Cartesian product of "N" copies of the adele ring. In this case, formula_0 is a topological group. History of the terminology. Historically the "idèles" were introduced by Chevalley (1936) under the name "élément idéal", which is "ideal element" in French, and were then abbreviated to "idèle" following a suggestion of Hasse. (In these papers he also gave the ideles a non-Hausdorff topology.) This was to formulate class field theory for infinite extensions in terms of topological groups. The ring of adeles in the function field case was defined (though not under that name) soon afterwards, with the observation that Chevalley's group of "Idealelemente" was the group of invertible elements of this ring. The ring of adeles was later defined as a restricted direct product, though its elements were at first called "valuation vectors" rather than adeles; in the function field case the ring was also introduced under the name "repartitions". The contemporary term "adèle" stands for 'additive idèles', and can also be a French woman's name. The term adèle was in use shortly afterwards and may have been introduced by André Weil. The general construction of adelic algebraic groups followed the algebraic group theory founded by Armand Borel and Harish-Chandra. Ideles. An important example, the idele group (ideal element group) "I"("K"), is the case of formula_1. Here the set of ideles consists of the invertible adeles; but the topology on the idele group is "not" their topology as a subset of the adeles. Instead, considering that formula_2 lies in two-dimensional affine space as the 'hyperbola' defined parametrically by formula_3, the topology correctly assigned to the idele group is that induced by inclusion in "A"2; composing with a projection, it follows that the ideles carry a finer topology than the subspace topology from "A". Inside "A""N", the product "K""N" lies as a discrete subgroup. This means that "G"("K") is a discrete subgroup of "G"("A"), also. In the case of the idele group, the quotient group formula_4 is the idele class group. It is closely related to (though larger than) the ideal class group. The idele class group is not itself compact; the ideles must first be replaced by the ideles of norm 1, and then the image of those in the idele class group is a compact group; the proof of this is essentially equivalent to the finiteness of the class number. The study of the Galois cohomology of idele class groups is a central matter in class field theory. Characters of the idele class group, now usually called Hecke characters or Größencharacters, give rise to the most basic class of L-functions.
For more general "G", the Tamagawa number is defined (or indirectly computed) as the measure of "G"("A")/"G"("K"). Tsuneo Tamagawa's observation was that, starting from an invariant differential form ω on "G", defined "over K", the measure involved was well-defined: while ω could be replaced by "c"ω with "c" a non-zero element of "K", the product formula for valuations in "K" is reflected by the independence from "c" of the measure of the quotient, for the product measure constructed from ω on each effective factor. The computation of Tamagawa numbers for semisimple groups contains important parts of classical quadratic form theory.
[ { "math_id": 0, "text": "G(A)" }, { "math_id": 1, "text": "G = GL_1" }, { "math_id": 2, "text": "GL_1" }, { "math_id": 3, "text": " \\{(t,t^{-1})\\}, " }, { "math_id": 4, "text": " I(K)/K^\\times \\, " } ]
https://en.wikipedia.org/wiki?curid=1168608
11686201
D'Agostino's K-squared test
Goodness-of-fit measure in statistics In statistics, D'Agostino's "K"2 test, named for Ralph D'Agostino, is a goodness-of-fit measure of departure from normality; that is, the test aims to gauge the compatibility of given data with the null hypothesis that the data is a realization of independent, identically distributed Gaussian random variables. The test is based on transformations of the sample kurtosis and skewness, and has power only against the alternatives that the distribution is skewed and/or kurtic. Skewness and kurtosis. In the following, { "xi" } denotes a sample of "n" observations, "g"1 and "g"2 are the sample skewness and kurtosis, "mj"’s are the "j"-th sample central moments, and formula_0 is the sample mean. Frequently in the literature related to normality testing, the skewness and kurtosis are denoted as √"β"1 and "β"2 respectively. Such notation can be inconvenient since, for example, √"β"1 can be a negative quantity. The sample skewness and kurtosis are defined as formula_1 These quantities consistently estimate the theoretical skewness and kurtosis of the distribution, respectively. Moreover, if the sample indeed comes from a normal population, then the exact finite sample distributions of the skewness and kurtosis can themselves be analysed in terms of their means "μ"1, variances "μ"2, skewnesses "γ"1, and kurtosis "γ"2. This has been done by Pearson, who derived the following expressions: formula_2 and formula_3 For example, a sample of size "n" = 1000 drawn from a normally distributed population can be expected to have a skewness of 0, SD 0.08 and a kurtosis of 0, SD 0.15, where SD indicates the standard deviation. Transformed sample skewness and kurtosis. The sample skewness "g"1 and kurtosis "g"2 are both asymptotically normal. However, the rate of their convergence to the distribution limit is frustratingly slow, especially for "g"2. For example, even with "n" = 5000 observations the sample kurtosis "g"2 has both the skewness and the kurtosis of approximately 0.3, which is not negligible. In order to remedy this situation, it has been suggested to transform the quantities "g"1 and "g"2 in a way that makes their distribution as close to standard normal as possible. In particular, the following transformation for the sample skewness has been suggested: formula_4 where constants "α" and "δ" are computed as formula_5 and where "μ"2 = "μ"2("g"1) is the variance of "g"1, and "γ"2 = "γ"2("g"1) is the kurtosis — the expressions given in the previous section. Similarly, a transformation has been suggested for "g"2 that works reasonably well for sample sizes of 20 or greater: formula_6 where formula_7 and "μ"1 = "μ"1("g"2), "μ"2 = "μ"2("g"2), "γ"1 = "γ"1("g"2) are the quantities computed by Pearson. Omnibus "K"2 statistic. Statistics "Z"1 and "Z"2 can be combined to produce an omnibus test, able to detect deviations from normality due to either skewness or kurtosis: formula_8 If the null hypothesis of normality is true, then "K"2 is approximately "χ"2-distributed with 2 degrees of freedom. Note that the statistics "g"1, "g"2 are not independent, only uncorrelated. Therefore, their transforms "Z"1, "Z"2 will also be dependent, rendering the validity of the "χ"2 approximation questionable. Simulations show that under the null hypothesis the "K"2 test statistic is characterized by
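In practice the test is readily available: SciPy's scipy.stats.normaltest implements the D'Agostino–Pearson omnibus statistic, and the two component tests are exposed separately. A minimal Python sketch, assuming a SciPy installation:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=1000)

# Omnibus K^2 test (D'Agostino and Pearson): statistic and p-value
k2, p = stats.normaltest(x)

# The two transformed component statistics are also available:
z1, p_skew = stats.skewtest(x)       # transformed sample skewness Z1
z2, p_kurt = stats.kurtosistest(x)   # transformed sample kurtosis Z2

# K^2 = Z1^2 + Z2^2, approximately chi-squared with 2 df under H0
assert np.isclose(k2, z1**2 + z2**2)
print(f"K2 = {k2:.3f}, p = {p:.3f}")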
[ { "math_id": 0, "text": "\\bar{x}" }, { "math_id": 1, "text": "\\begin{align}\n & g_1 = \\frac{ m_3 }{ m_2^{3/2} } = \\frac{\\frac{1}{n} \\sum_{i=1}^n \\left( x_i - \\bar{x} \\right)^3}{\\left( \\frac{1}{n} \\sum_{i=1}^n \\left( x_i - \\bar{x} \\right)^2 \\right)^{3/2}}\\ , \\\\\n & g_2 = \\frac{ m_4 }{ m_2^{2} }-3 = \\frac{\\frac{1}{n} \\sum_{i=1}^n \\left( x_i - \\bar{x} \\right)^4}{\\left( \\frac{1}{n} \\sum_{i=1}^n \\left( x_i - \\bar{x} \\right)^2 \\right)^2} - 3\\ .\n \\end{align}" }, { "math_id": 2, "text": "\\begin{align}\n & \\mu_1(g_1) = 0, \\\\\n & \\mu_2(g_1) = \\frac{ 6(n-2) }{ (n+1)(n+3) }, \\\\\n & \\gamma_1(g_1) \\equiv \\frac{\\mu_3(g_1)}{\\mu_2(g_1)^{3/2}} = 0, \\\\\n & \\gamma_2(g_1) \\equiv \\frac{\\mu_4(g_1)}{\\mu_2(g_1)^{2}}-3 = \\frac{ 36(n-7)(n^2+2n-5) }{ (n-2)(n+5)(n+7)(n+9) }.\n \\end{align}" }, { "math_id": 3, "text": "\\begin{align}\n & \\mu_1(g_2) = - \\frac{6}{n+1}, \\\\\n & \\mu_2(g_2) = \\frac{ 24n(n-2)(n-3) }{ (n+1)^2(n+3)(n+5) }, \\\\\n & \\gamma_1(g_2) \\equiv \\frac{\\mu_3(g_2)}{\\mu_2(g_2)^{3/2}} = \\frac{6(n^2-5n+2)}{(n+7)(n+9)} \\sqrt{\\frac{6(n+3)(n+5)}{n(n-2)(n-3)}}, \\\\\n & \\gamma_2(g_2) \\equiv \\frac{\\mu_4(g_2)}{\\mu_2(g_2)^{2}}-3 = \\frac{ 36(15n^6-36n^5-628n^4+982n^3+5777n^2-6402n+900) }{ n(n-3)(n-2)(n+7)(n+9)(n+11)(n+13) }.\n \\end{align}" }, { "math_id": 4, "text": "\n Z_1(g_1) = \\delta \\operatorname{asinh}\\left( \\frac{g_1}{\\alpha\\sqrt{\\mu_2}} \\right),\n " }, { "math_id": 5, "text": "\\begin{align}\n & W^2 = \\sqrt{2\\gamma_2 + 4} - 1, \\\\\n & \\delta = 1 / \\sqrt{\\ln W}, \\\\\n & \\alpha^2 = 2 / (W^2-1),\n \\end{align}" }, { "math_id": 6, "text": "\n Z_2(g_2) = \\sqrt{\\frac{9A}{2}} \\left\\{1 - \\frac{2}{9A} - \\left(\\frac{ 1-2/A }{ 1+\\frac{g_2-\\mu_1}{\\sqrt{\\mu_2}}\\sqrt{2/(A-4)} }\\right)^{\\!1/3}\\right\\},\n " }, { "math_id": 7, "text": "\n A = 6 + \\frac{8}{\\gamma_1} \\left( \\frac{2}{\\gamma_1} + \\sqrt{1+4/\\gamma_1^2}\\right),\n " }, { "math_id": 8, "text": "\n K^2 = Z_1(g_1)^2 + Z_2(g_2)^2\\,\n " } ]
https://en.wikipedia.org/wiki?curid=11686201
1168653
Tamagawa number
In mathematics, the Tamagawa number formula_0 of a semisimple algebraic group defined over a global field "k" is the measure of formula_1, where formula_2 is the adele ring of "k". Tamagawa numbers were introduced by Tamagawa (1966), and named after him by Weil (1959). Tsuneo Tamagawa's observation was that, starting from an invariant differential form ω on "G", defined over "k", the measure involved was well-defined: while "ω" could be replaced by "cω" with "c" a non-zero element of formula_3, the product formula for valuations in "k" is reflected by the independence from "c" of the measure of the quotient, for the product measure constructed from "ω" on each effective factor. The computation of Tamagawa numbers for semisimple groups contains important parts of classical quadratic form theory. Definition. Let "k" be a global field, "A" its ring of adeles, and "G" a semisimple algebraic group defined over "k". Choose Haar measures on the completions "k""v" of "k" such that "O""v" has volume 1 for all but finitely many places "v". These then induce a Haar measure on "A", which we further assume is normalized so that "A"/"k" has volume 1 with respect to the induced quotient measure. The Tamagawa measure on the adelic algebraic group "G"("A") is now defined as follows. Take a left-invariant "n"-form "ω" on "G"("k") defined over "k", where "n" is the dimension of "G". This, together with the above choices of Haar measure on the "k""v", induces Haar measures on "G"("k""v") for all places "v". As "G" is semisimple, the product of these measures yields a Haar measure on "G"("A"), called the "Tamagawa measure". The Tamagawa measure does not depend on the choice of ω, nor on the choice of measures on the "k""v", because multiplying "ω" by an element of "k"* multiplies the Haar measure on "G"("A") by 1, using the product formula for valuations. The Tamagawa number "τ"("G") is defined to be the Tamagawa measure of "G"("A")/"G"("k"). Weil's conjecture on Tamagawa numbers. "Weil's conjecture on Tamagawa numbers" states that the Tamagawa number "τ"("G") of a simply connected (i.e. not having a proper "algebraic" covering) simple algebraic group defined over a number field is 1. Weil (1959) calculated the Tamagawa number in many cases of classical groups and observed that it is an integer in all considered cases and that it was equal to 1 in the cases when the group is simply connected. Examples were subsequently found in which the Tamagawa number is not an integer, but the conjecture about the Tamagawa number of simply connected groups was proven in general by several works culminating in a paper by Kottwitz (1988); the analogue for function fields over finite fields was established later.
[ { "math_id": 0, "text": "\\tau(G)" }, { "math_id": 1, "text": "G(\\mathbb{A})/G(k)" }, { "math_id": 2, "text": "\\mathbb{A}" }, { "math_id": 3, "text": "k" } ]
https://en.wikipedia.org/wiki?curid=1168653
1168888
Zero matrix
In mathematics, particularly linear algebra, a zero matrix or null matrix is a matrix all of whose entries are zero. It also serves as the additive identity of the additive group of formula_0 matrices, and is denoted by the symbol formula_1 or formula_2 followed by subscripts corresponding to the dimension of the matrix as the context requires. Some examples of zero matrices are formula_3 Properties. The set of formula_0 matrices with entries in a ring K forms a ring formula_4. The zero matrix formula_5 in formula_6 is the matrix with all entries equal to formula_7, where formula_8 is the additive identity in K. formula_9 The zero matrix is the additive identity in formula_6. That is, for all formula_10 it satisfies the equation formula_11 There is exactly one zero matrix of any given dimension "m"×"n" (with entries from a given ring), so when the context is clear, one often refers to "the" zero matrix. In general, the zero element of a ring is unique, and is typically denoted by 0 without any subscript indicating the parent ring. Hence the examples above represent zero matrices over any ring. The zero matrix also represents the linear transformation which sends all the vectors to the zero vector. It is idempotent, meaning that when it is multiplied by itself, the result is itself. The zero matrix is the only matrix whose rank is 0. Occurrences. In ordinary least squares regression, if there is a perfect fit to the data, the annihilator matrix is the zero matrix.
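These properties are trivial to check numerically; a minimal NumPy sketch:

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
Z = np.zeros_like(A)           # the 2x3 zero matrix

# Additive identity: A + 0 = 0 + A = A
assert np.array_equal(A + Z, A) and np.array_equal(Z + A, A)

# The only matrix of rank 0, and the square zero matrix is idempotent
assert np.linalg.matrix_rank(Z) == 0
Z3 = np.zeros((3, 3))
assert np.array_equal(Z3 @ Z3, Z3)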
[ { "math_id": 0, "text": "m \\times n" }, { "math_id": 1, "text": "O" }, { "math_id": 2, "text": "0" }, { "math_id": 3, "text": "\n0_{1,1} = \\begin{bmatrix}\n0 \\end{bmatrix}\n,\\ \n0_{2,2} = \\begin{bmatrix}\n0 & 0 \\\\\n0 & 0 \\end{bmatrix}\n,\\ \n0_{2,3} = \\begin{bmatrix}\n0 & 0 & 0 \\\\\n0 & 0 & 0 \\end{bmatrix}\n.\\ \n" }, { "math_id": 4, "text": "K_{m,n}" }, { "math_id": 5, "text": "0_{K_{m,n}} \\, " }, { "math_id": 6, "text": "K_{m,n} \\, " }, { "math_id": 7, "text": "0_K \\, " }, { "math_id": 8, "text": "0_K " }, { "math_id": 9, "text": "\n0_{K_{m,n}} = \\begin{bmatrix}\n0_K & 0_K & \\cdots & 0_K \\\\\n0_K & 0_K & \\cdots & 0_K \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n0_K & 0_K & \\cdots & 0_K \\end{bmatrix}_{m \\times n}\n" }, { "math_id": 10, "text": "A \\in K_{m,n} \\, " }, { "math_id": 11, "text": "0_{K_{m,n}}+A = A + 0_{K_{m,n}} = A." } ]
https://en.wikipedia.org/wiki?curid=1168888
11691
Functional decomposition
Expression of a function as the composition of two functions In engineering, functional decomposition is the process of resolving a functional relationship into its constituent parts in such a way that the original function can be reconstructed (i.e., recomposed) from those parts. This process of decomposition may be undertaken to gain insight into the identity of the constituent components, which may reflect individual physical processes of interest. Also, functional decomposition may result in a compressed representation of the global function, a task which is feasible only when the constituent processes possess a certain level of "modularity" (i.e., independence or non-interaction). Interactions between the components are critical to the function of the collection. Not all interactions may be directly observable, but they can possibly be deduced through repetitive analysis, synthesis, validation and verification of composite behavior. Motivation for decomposition. Decomposition of a function into non-interacting components generally permits more economical representations of the function. Intuitively, this reduction in representation size is achieved simply because each variable depends only on a subset of the other variables. Thus, variable formula_0 only depends directly on variable formula_1, rather than depending on the "entire set" of variables. We would say that variable formula_1 "screens off" variable formula_0 from the rest of the world. Practical examples of this phenomenon surround us. Consider the particular case of "northbound traffic on the West Side Highway." Let us assume this variable (formula_2) takes on three possible values of {"moving slow", "moving deadly slow", "not moving at all"}. Now, let's say the variable formula_2 depends on two other variables, "weather" with values of {"sun", "rain", "snow"}, and "GW Bridge traffic" with values {"10mph", "5mph", "1mph"}. The point here is that while there are certainly many secondary variables that affect the weather variable (e.g., low pressure system over Canada, butterfly flapping in Japan, etc.) and the Bridge traffic variable (e.g., an accident on I-95, presidential motorcade, etc.), all these other secondary variables are not directly relevant to the West Side Highway traffic. All we need (hypothetically) in order to predict the West Side Highway traffic is the weather and the GW Bridge traffic, because these two variables "screen off" West Side Highway traffic from all other potential influences. That is, all other influences act "through" them. Applications. Practical applications of functional decomposition are found in Bayesian networks, structural equation modeling, linear systems, and database systems. Knowledge representation. Processes related to functional decomposition are prevalent throughout the fields of knowledge representation and machine learning. Hierarchical model induction techniques such as logic circuit minimization, decision trees, grammatical inference, hierarchical clustering, and quadtree decomposition are all examples of function decomposition. Many statistical inference methods can be thought of as implementing a function decomposition process in the presence of noise; that is, where functional dependencies are only expected to hold "approximately". Among such models are mixture models and the recently popular methods referred to as "causal decompositions" or Bayesian networks. Database theory. See database normalization. Machine learning.
In practical scientific applications, it is almost never possible to achieve perfect functional decomposition because of the incredible complexity of the systems under study. This complexity is manifested in the presence of "noise," which is just a designation for all the unwanted and untraceable influences on our observations. However, while perfect functional decomposition is usually impossible, the spirit lives on in a large number of statistical methods that are equipped to deal with noisy systems. When a natural or artificial system is intrinsically hierarchical, the joint distribution on system variables should provide evidence of this hierarchical structure. The task of an observer who seeks to understand the system is then to infer the hierarchical structure from observations of these variables. This is the notion behind the hierarchical decomposition of a joint distribution, the attempt to recover something of the intrinsic hierarchical structure which generated that joint distribution. As an example, Bayesian network methods attempt to decompose a joint distribution along its causal fault lines, thus "cutting nature at its seams". The essential motivation behind these methods is again that within most systems (natural or artificial), relatively few components/events interact with one another directly on equal footing. Rather, one observes pockets of dense connections (direct interactions) among small subsets of components, but only loose connections between these densely connected subsets. There is thus a notion of "causal proximity" in physical systems under which variables naturally precipitate into small clusters. Identifying these clusters and using them to represent the joint provides the basis for great efficiency of storage (relative to the full joint distribution) as well as for potent inference algorithms. Software architecture. Functional decomposition is a design method intended to produce a non-implementation, architectural description of a computer program. The software architect first establishes a series of functions and types that accomplishes the main processing problem of the computer program, decomposes each to reveal common functions and types, and finally derives modules from this activity. Signal processing. Functional decomposition is used in the analysis of many signal processing systems, such as LTI systems. The input signal to an LTI system can be expressed as a function, formula_3. Then formula_3 can be decomposed into a linear combination of other functions, called component signals: formula_4 Here, formula_5 are the component signals. Note that formula_6 are constants. This decomposition aids in analysis, because now the output of the system can be expressed in terms of the components of the input. If we let formula_7 represent the effect of the system, then the output signal is formula_8, which can be expressed as: formula_9 formula_10 In other words, the system can be seen as acting separately on each of the components of the input signal. Commonly used examples of this type of decomposition are the Fourier series and the Fourier transform. Systems engineering. Functional decomposition in systems engineering refers to the process of defining a system in functional terms, then defining lower-level functions and sequencing relationships from these higher level systems functions. The basic idea is to try to divide a system in such a way that each block of a block diagram can be described without an "and" or "or" in the description.
This exercise forces each part of the system to have a pure function. When a system is designed as pure functions, they can be reused, or replaced. A usual side effect is that the interfaces between blocks become simple and generic. Since the interfaces usually become simple, it is easier to replace a pure function with a related, similar function. For example, say that one needs to make a stereo system. One might functionally decompose this into speakers, amplifier, a tape deck and a front panel. Later, when a different model needs an audio CD, it can probably fit the same interfaces.
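The linearity property described in the signal-processing section above is easy to demonstrate numerically. In the following Python sketch, the system T is a stand-in LTI operation (a moving-average FIR filter, chosen purely for illustration), and the assertion checks that T acts separately on each component of the input:

import numpy as np

# Decompose an input into component signals, apply the system T to each
# component, and recompose the output.
t = np.linspace(0, 1, 500, endpoint=False)
g1 = np.sin(2 * np.pi * 5 * t)      # component signal g1(t)
g2 = np.sin(2 * np.pi * 40 * t)     # component signal g2(t)
a1, a2 = 2.0, 0.5                   # constants
f = a1 * g1 + a2 * g2               # f(t) = a1*g1(t) + a2*g2(t)

def T(signal):
    """A stand-in LTI system: a simple moving-average FIR filter."""
    h = np.ones(9) / 9
    return np.convolve(signal, h, mode="same")

# Linearity: T{a1*g1 + a2*g2} == a1*T{g1} + a2*T{g2}
assert np.allclose(T(f), a1 * T(g1) + a2 * T(g2))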
[ { "math_id": 0, "text": "x_1" }, { "math_id": 1, "text": "x_2" }, { "math_id": 2, "text": "{x_1}" }, { "math_id": 3, "text": "f(t)" }, { "math_id": 4, "text": " f(t) = a_1 \\cdot g_1(t) + a_2 \\cdot g_2(t) + a_3 \\cdot g_3(t) + \\dots + a_n \\cdot g_n(t) " }, { "math_id": 5, "text": " \\{g_1(t), g_2(t), g_3(t), \\dots , g_n(t)\\} " }, { "math_id": 6, "text": " \\{a_1, a_2, a_3, \\dots , a_n\\} " }, { "math_id": 7, "text": "T\\{\\}" }, { "math_id": 8, "text": "T\\{f(t)\\}" }, { "math_id": 9, "text": " T\\{f(t)\\} = T\\{ a_1 \\cdot g_1(t) + a_2 \\cdot g_2(t) + a_3 \\cdot g_3(t) + \\dots + a_n \\cdot g_n(t)\\}" }, { "math_id": 10, "text": " = a_1 \\cdot T\\{g_1(t)\\} + a_2 \\cdot T\\{g_2(t)\\} + a_3 \\cdot T\\{g_3(t)\\} + \\dots + a_n \\cdot T\\{g_n(t)\\}" } ]
https://en.wikipedia.org/wiki?curid=11691
11692451
Centimorgan
Unit for measuring genetic linkage In genetics, a centimorgan (abbreviated cM) or map unit (m.u.) is a unit for measuring genetic linkage. It is defined as the distance between chromosome positions (also termed loci or markers) for which the expected average number of intervening chromosomal crossovers in a single generation is 0.01. It is often used to infer distance along a chromosome. However, it is not a true physical distance. Relation to physical distance. The number of base pairs to which it corresponds varies widely across the genome (different regions of a chromosome have different propensities towards crossover), and it also depends on whether the meiosis in which the crossing-over takes place is a part of oogenesis (formation of female gametes) or spermatogenesis (formation of male gametes). One centimorgan corresponds to about 1 million base pairs in humans on average. The relationship is only rough, as the physical chromosomal distance corresponding to one centimorgan varies from place to place in the genome, and also varies between males and females, since recombination during gamete formation is significantly more frequent in females than in males. Kong et al. calculated that the female genome is 4460 cM long, while the male genome is only 2590 cM long. "Plasmodium falciparum" has an average recombination distance of ~15 kb per centimorgan: markers separated by 15 kb of DNA (15,000 nucleotides) have an expected rate of chromosomal crossovers of 0.01 per generation. Note that non-syntenic genes (genes residing on different chromosomes) are inherently unlinked, and cM distances are not applicable to them. Relation to the probability of recombination. Because genetic recombination between two markers is detected only if there is an odd number of chromosomal crossovers between the two markers, the distance in centimorgans does not correspond exactly to the probability of genetic recombination. Under the Haldane mapping function, devised by J. B. S. Haldane, the number of chromosomal crossovers is assumed to follow a Poisson distribution; a genetic distance of "d" centimorgans then leads to an odd number of chromosomal crossovers, and hence to a detectable genetic recombination, with probability formula_0 formula_1 where sinh is the hyperbolic sine function. The probability of recombination is approximately "d"/100 for small values of "d" and approaches 50% as "d" goes to infinity. The formula can be inverted, giving the distance in centimorgans as a function of the recombination probability: formula_2 Etymology. The centimorgan was named in honor of geneticist Thomas Hunt Morgan by J. B. S. Haldane. However, its parent unit, the morgan, is rarely used today.
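The forward and inverse Haldane relations above are simple to compute. The Python sketch below is illustrative (the function names are mine, not from the literature) and checks the limiting behaviour stated in the text.

```python
# Haldane mapping: P(recombination) = (1 - exp(-2d/100)) / 2 for d in cM,
# and its inverse d = 50 * ln(1 / (1 - 2P)).
import math

def recombination_probability(d_cm):
    return (1.0 - math.exp(-2.0 * d_cm / 100.0)) / 2.0

def map_distance_cm(p_recomb):
    return 50.0 * math.log(1.0 / (1.0 - 2.0 * p_recomb))

print(recombination_probability(1.0))     # ~0.0099, about d/100 for small d
print(recombination_probability(1000.0))  # ~0.5, approaching 50% for large d
print(map_distance_cm(recombination_probability(30.0)))  # recovers 30.0
```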
[ { "math_id": 0, "text": "\\ P(\\text{recombination}|\\text{linkage of }d\\text{ cM}) = \\sum_{k=0}^{\\infty} \\ P(2k + 1 \\text{ crossovers}|\\text{linkage of }d\\text{ cM})" }, { "math_id": 1, "text": "{} = \\sum_{k=0}^{\\infty} e^{-d/100} \\frac{(d/100)^{2\\,k+1}}{(2\\,k+1)!} = e^{-d/100} \\sinh(d/100) = \\frac{1 - e^{-2d/100}}{2}\\,," }, { "math_id": 2, "text": "d=50 \\ln\\left({\\frac{1}{1 - 2 \\ P(\\text{recombination})}}\\right)\\,." } ]
https://en.wikipedia.org/wiki?curid=11692451
11694610
Two-body problem in general relativity
The two-body problem in general relativity (or relativistic two-body problem) is the determination of the motion and gravitational field of two bodies as described by the field equations of general relativity. Solving the Kepler problem is essential to calculate the bending of light by gravity and the motion of a planet orbiting its sun. Solutions are also used to describe the motion of binary stars around each other, and to estimate their gradual loss of energy through gravitational radiation. General relativity describes the gravitational field by curved space-time; the field equations governing this curvature are nonlinear and therefore difficult to solve in closed form. No exact solutions of the Kepler problem have been found, but an approximate solution is known: the Schwarzschild solution. This solution pertains when the mass "M" of one body is overwhelmingly greater than the mass "m" of the other. If so, the larger mass may be taken as stationary and the sole contributor to the gravitational field. This is a good approximation for a photon passing a star and for a planet orbiting its sun. The motion of the lighter body (called the "particle" below) can then be determined from the Schwarzschild solution; the motion is a geodesic ("shortest path between two points") in the curved space-time. Such geodesic solutions account for the anomalous precession of the planet Mercury, which is a key piece of evidence supporting the theory of general relativity. They also describe the bending of light in a gravitational field, another prediction famously used as evidence for general relativity. If both masses are considered to contribute to the gravitational field, as in binary stars, the Kepler problem can be solved only approximately. The earliest approximation method to be developed was the post-Newtonian expansion, an iterative method in which an initial solution is gradually corrected. More recently, it has become possible to solve Einstein's field equation using a computer instead of mathematical formulae. As the two bodies orbit each other, they will emit gravitational radiation; this causes them to lose energy and angular momentum gradually, as illustrated by the binary pulsar PSR B1913+16. For binary black holes, the numerical solution of the two-body problem was achieved in 2005, after four decades of research, when three groups devised breakthrough techniques. Historical context. Classical Kepler problem. The Kepler problem derives its name from Johannes Kepler, who worked as an assistant to the Danish astronomer Tycho Brahe. Brahe took extraordinarily accurate measurements of the motion of the planets of the Solar System. From these measurements, Kepler was able to formulate Kepler's laws, the first modern description of planetary motion: planetary orbits are ellipses with the Sun at one focus; the line joining a planet and the Sun sweeps out equal areas in equal times; and the square of a planet's orbital period is proportional to the cube of the semi-major axis of its orbit. Kepler published the first two laws in 1609 and the third law in 1619. They supplanted earlier models of the Solar System, such as those of Ptolemy and Copernicus. Kepler's laws apply only in the limited case of the two-body problem. Voltaire and Émilie du Châtelet were the first to call them "Kepler's laws". Nearly a century later, Isaac Newton formulated his three laws of motion. In particular, Newton's second law states that a force "F" applied to a mass "m" produces an acceleration "a" given by the equation "F"="ma". Newton then posed the question: what must the force be that produces the elliptical orbits seen by Kepler? 
His answer came in his law of universal gravitation, which states that the force between a mass "M" and another mass "m" is given by the formula formula_0 where "r" is the distance between the masses and "G" is the gravitational constant. Given this force law and his equations of motion, Newton was able to show that two point masses attracting each other would each follow perfectly elliptical orbits. The ratio of sizes of these ellipses is "m"/"M", with the larger mass moving on a smaller ellipse. If "M" is much larger than "m", then the larger mass will appear to be stationary at the focus of the elliptical orbit of the lighter mass "m". This model can be applied approximately to the Solar System. Since the mass of the Sun is much larger than those of the planets, the force acting on each planet is principally due to the Sun; the gravitational attraction of the planets for one another can be neglected to first approximation. Apsidal precession. If the potential energy between the two bodies is not exactly the 1/"r" potential of Newton's gravitational law but differs only slightly, then the ellipse of the orbit gradually rotates (among other possible effects). This apsidal precession is observed for all the planets orbiting the Sun, primarily due to the oblateness of the Sun (it is not perfectly spherical) and the attractions of the other planets to one another. The apsides are the two points of closest and furthest distance of the orbit (the periapsis and apoapsis, respectively); apsidal precession corresponds to the rotation of the line joining the apsides. It also corresponds to the rotation of the Laplace–Runge–Lenz vector, which points along the line of apsides. Newton's law of gravitation soon became accepted because it gave very accurate predictions of the motion of all the planets. These calculations were carried out initially by Pierre-Simon Laplace in the late 18th century, and refined by Félix Tisserand in the later 19th century. Conversely, if Newton's law of gravitation did "not" predict the apsidal precessions of the planets accurately, it would have to be discarded as a theory of gravitation. Such an anomalous precession was observed in the second half of the 19th century. Anomalous precession of Mercury. In 1859, Urbain Le Verrier discovered that the orbital precession of the planet Mercury was not quite what it should be; the ellipse of its orbit was rotating (precessing) slightly faster than predicted by the traditional theory of Newtonian gravity, even after all the effects of the other planets had been accounted for. The effect is small (roughly 43 arcseconds of rotation per century), but well above the measurement error (roughly 0.1 arcseconds per century). Le Verrier realized the importance of his discovery immediately, and challenged astronomers and physicists alike to account for it. Several classical explanations were proposed, such as interplanetary dust, unobserved oblateness of the Sun, an undetected moon of Mercury, or a new planet named Vulcan. After these explanations were discounted, some physicists were driven to the more radical hypothesis that Newton's inverse-square law of gravitation was incorrect. For example, some physicists proposed a power law with an exponent that was slightly different from 2. Others argued that Newton's law should be supplemented with a velocity-dependent potential. However, this implied a conflict with Newtonian celestial dynamics. 
In his treatise on celestial mechanics, Laplace had shown that if the gravitational influence does not act instantaneously, then the motions of the planets themselves will not exactly conserve momentum (and consequently some of the momentum would have to be ascribed to the mediator of the gravitational interaction, analogous to ascribing momentum to the mediator of the electromagnetic interaction). As seen from a Newtonian point of view, if gravitational influence does propagate at a finite speed, then at all points in time a planet is attracted to a point where the Sun was some time before, and not towards the instantaneous position of the Sun. Working from classical assumptions, Laplace had shown that if gravity propagated at a velocity on the order of the speed of light, then the solar system would be unstable and would not survive for long. The observation that the solar system is old enough allowed him to put a lower limit on the speed of gravity, which turned out to be many orders of magnitude faster than the speed of light. Laplace's estimate for the speed of gravity is not correct in a field theory which respects the principle of relativity. Since electric and magnetic fields combine, the attraction of a point charge which is moving at a constant velocity is towards the extrapolated instantaneous position, not to the apparent position it seems to occupy when looked at. To avoid those problems, between 1870 and 1900 many scientists used the electrodynamic laws of Wilhelm Eduard Weber, Carl Friedrich Gauss, and Bernhard Riemann to produce stable orbits and to explain the perihelion shift of Mercury's orbit. In 1890, Maurice Lévy succeeded in doing so by combining the laws of Weber and Riemann, whereby the speed of gravity is equal to the speed of light in his theory. In another attempt, Paul Gerber (1898) even succeeded in deriving the correct formula for the perihelion shift (which was identical to the formula later used by Einstein). However, because the basic laws of Weber and others were wrong (for example, Weber's law was superseded by Maxwell's theory), those hypotheses were rejected. Another attempt by Hendrik Lorentz (1900), who already used Maxwell's theory, produced a perihelion shift which was too low. Einstein's theory of general relativity. Around 1904–1905, the works of Hendrik Lorentz and Henri Poincaré, and finally Albert Einstein's special theory of relativity, excluded the possibility of propagation of any effects faster than the speed of light. It followed that Newton's law of gravitation would have to be replaced with another law, compatible with the principle of relativity, while still obtaining the Newtonian limit for circumstances where relativistic effects are negligible. Such attempts were made by Henri Poincaré (1905), Hermann Minkowski (1907) and Arnold Sommerfeld (1910). In 1907 Einstein came to the conclusion that to achieve this a successor to special relativity was needed. From 1907 to 1915, Einstein worked towards a new theory, using his equivalence principle as a key concept to guide his way. According to this principle, a uniform gravitational field acts equally on everything within it and, therefore, cannot be detected by a free-falling observer. Conversely, all local gravitational effects should be reproducible in a linearly accelerating reference frame, and vice versa. 
Thus, gravity acts like a fictitious force such as the centrifugal force or the Coriolis force, which result from being in an accelerated reference frame; all fictitious forces are proportional to the inertial mass, just as gravity is. To effect the reconciliation of gravity and special relativity and to incorporate the equivalence principle, something had to be sacrificed; that something was the long-held classical assumption that our space obeys the laws of Euclidean geometry, e.g., that the Pythagorean theorem is true experimentally. Einstein used a more general geometry, pseudo-Riemannian geometry, to allow for the curvature of space and time that was necessary for the reconciliation; after eight years of work (1907–1915), he succeeded in discovering the precise way in which space-time should be curved in order to reproduce the physical laws observed in Nature, particularly gravitation. Gravity is distinct from the fictitious centrifugal and Coriolis forces in the sense that the curvature of spacetime is regarded as physically real, whereas the fictitious forces are not regarded as forces. The very first solutions of his field equations explained the anomalous precession of Mercury and predicted an unusual bending of light, which was confirmed "after" his theory was published. These solutions are explained below. General relativity, special relativity and geometry. In normal Euclidean geometry, triangles obey the Pythagorean theorem, which states that the squared distance "ds"2 between two points in space is the sum of the squares of its perpendicular components formula_1 where "dx", "dy" and "dz" represent the infinitesimal differences between the "x", "y" and "z" coordinates of two points in a Cartesian coordinate system. Now imagine a world in which this is not quite true; a world where the distance is instead given by formula_2 where "F", "G" and "H" are arbitrary functions of position. It is not hard to imagine such a world; we live on one. The surface of the earth is curved, which is why it is impossible to make a perfectly accurate flat map of the earth. Non-Cartesian coordinate systems illustrate this well; for example, in the spherical coordinates ("r", "θ", "φ"), the Euclidean distance can be written formula_3 Another illustration would be a world in which the rulers used to measure length were untrustworthy, rulers that changed their length with their position and even their orientation. In the most general case, one must allow for cross-terms when calculating the distance "ds" formula_4 where the nine functions "g"xx, "g"xy, ..., "g"zz constitute the metric tensor, which defines the geometry of the space in Riemannian geometry. In the spherical-coordinates example above, there are no cross-terms; the only nonzero metric tensor components are "g"rr = 1, "g"θθ = "r"2 and "g"φφ = "r"2 sin2 θ. In his special theory of relativity, Albert Einstein showed that the distance "ds" between two spatial points is not constant, but depends on the motion of the observer. However, there is a measure of separation between two points in space-time, called "proper time" and denoted with the symbol dτ, that "is" invariant; in other words, it does not depend on the motion of the observer. It is given by formula_5 which may be written in spherical coordinates as formula_6 This formula is the natural extension of the Pythagorean theorem and similarly holds only when there is no curvature in space-time. 
In general relativity, however, space and time may have curvature, so this distance formula must be modified to a more general form formula_7 just as we generalized the formula to measure distance on the surface of the Earth. The exact form of the metric "g""μν" depends on the gravitating mass, momentum and energy, as described by the Einstein field equations. Einstein developed those field equations to match the then known laws of Nature; however, they predicted never-before-seen phenomena (such as the bending of light by gravity) that were confirmed later. Geodesic equation. According to Einstein's theory of general relativity, particles of negligible mass travel along geodesics in the space-time. In uncurved space-time, far from a source of gravity, these geodesics correspond to straight lines; however, they may deviate from straight lines when the space-time is curved. The equation for the geodesic lines is formula_8 where Γ represents the Christoffel symbol and the variable "q" parametrizes the particle's path through space-time, its so-called world line. The Christoffel symbol depends only on the metric tensor "g"μν, or rather on how it changes with position. The variable "q" is a constant multiple of the proper time "τ" for timelike orbits (which are traveled by massive particles), and is usually taken to be equal to it. For lightlike (or null) orbits (which are traveled by massless particles such as the photon), the proper time is zero and, strictly speaking, cannot be used as the variable "q". Nevertheless, lightlike orbits can be derived as the ultrarelativistic limit of timelike orbits, that is, the limit as the particle mass "m" goes to zero while holding its total energy fixed. Schwarzschild solution. An exact solution to the Einstein field equations is the Schwarzschild metric, which corresponds to the external gravitational field of a stationary, uncharged, non-rotating, spherically symmetric body of mass "M". It is characterized by a length scale "r"s, known as the Schwarzschild radius, which is defined by the formula formula_9 where "G" is the gravitational constant. The classical Newtonian theory of gravity is recovered in the limit as the ratio "r"s/"r" goes to zero. In that limit, the metric returns to that defined by special relativity. In practice, this ratio is almost always extremely small. For example, the Schwarzschild radius "r"s of the Earth is roughly 9 mm (3⁄8 inch); at the surface of the Earth, the corrections to Newtonian gravity are only one part in a billion. The Schwarzschild radius of the Sun is much larger, roughly 2953 meters, but at its surface, the ratio "r"s/"r" is roughly 4 parts in a million. A white dwarf star is much denser, but even here the ratio at its surface is roughly 250 parts in a million. The ratio only becomes large close to ultra-dense objects such as neutron stars (where the ratio is roughly 50%) and black holes. Orbits about the central mass. The orbits of a test particle of infinitesimal mass formula_10 about the central mass formula_11 are given by the equation of motion formula_12 where formula_13 is the specific relative angular momentum, formula_14, and formula_15 is the reduced mass. This can be converted into an equation for the orbit formula_16 where, for brevity, two length-scales, formula_17 and formula_18, have been introduced. They are constants of the motion and depend on the initial conditions (position and velocity) of the test particle. 
Hence, the solution of the orbit equation is formula_19 Effective radial potential energy. The equation of motion for the particle derived above formula_20 can be rewritten using the definition of the Schwarzschild radius "r"s as formula_21 which is equivalent to a particle moving in a one-dimensional effective potential formula_22 The first two terms are well-known classical energies, the first being the attractive Newtonian gravitational potential energy and the second corresponding to the repulsive "centrifugal" potential energy; however, the third term is an attractive energy unique to general relativity. As shown below and elsewhere, this inverse-cubic energy causes elliptical orbits to precess gradually by an angle δφ per revolution formula_23 where "A" is the semi-major axis and "e" is the eccentricity. Here "δφ" is "not" the change in the "φ"-coordinate in ("t", "r", "θ", "φ") coordinates but the change in the argument of periapsis of the classical closed orbit. The third term is attractive and dominates at small "r" values, giving a critical inner radius "r"inner at which a particle is drawn inexorably inwards to "r" = 0; this inner radius is a function of the particle's angular momentum per unit mass or, equivalently, the "a" length-scale defined above. Circular orbits and their stability. The effective potential "V" can be re-written in terms of the length "a" = "h"/"c": formula_24 Circular orbits are possible when the effective force is zero: formula_25 i.e., when the two attractive forces—Newtonian gravity (first term) and the attraction unique to general relativity (third term)—are exactly balanced by the repulsive centrifugal force (second term). There are two radii at which this balancing can occur, denoted here as "r"inner and "r"outer: formula_26 which are obtained using the quadratic formula. The inner radius "r"inner is unstable, because the attractive third force strengthens much faster than the other two forces when "r" becomes small; if the particle slips slightly inwards from "r"inner (where all three forces are in balance), the third force dominates the other two and draws the particle inexorably inwards to "r" = 0. At the outer radius, however, the circular orbits are stable; the third term is less important and the system behaves more like the non-relativistic Kepler problem. When "a" is much greater than "r"s (the classical case), these formulae become approximately formula_27 Substituting the definitions of "a" and "r"s into "r"outer yields the classical formula for a particle of mass "m" orbiting a body of mass "M". The following equation formula_28 where "ω""φ" is the orbital angular speed of the particle, is obtained in non-relativistic mechanics by setting the centrifugal force equal to the Newtonian gravitational force: formula_29 where formula_15 is the reduced mass. In our notation, the classical orbital angular speed equals formula_30 At the other extreme, when "a"2 approaches 3"r"s2 from above, the two radii converge to a single value formula_31 The quadratic solutions above ensure that "r"outer is always greater than 3"r"s, whereas "r"inner lies between 3⁄2 "r"s and 3"r"s. Circular orbits smaller than 3⁄2 "r"s are not possible. For massless particles, "a" goes to infinity, implying that there is a circular orbit for photons at "r"inner = 3⁄2 "r"s. 
The sphere of this radius is sometimes known as the photon sphere. Precession of elliptical orbits. The orbital precession rate may be derived using this radial effective potential "V". A small radial deviation from a circular orbit of radius "r"outer will oscillate in a stable manner with an angular frequency formula_32 which equals formula_33 Taking the square root of both sides and expanding using the binomial theorem yields the formula formula_34 Multiplying by the period "T" of one revolution gives the precession of the orbit per revolution formula_35 where we have used "ω""φ""T" = 2π and the definition of the length-scale "a". Substituting the definition of the Schwarzschild radius "r"s gives formula_36 This may be simplified using the elliptical orbit's semi-major axis "A" and eccentricity "e" related by the formula formula_37 to give the precession angle formula_38 Since the closed classical orbit is an ellipse in general, the quantity "A"(1 − "e"2) is the semi-latus rectum "l" of the ellipse. Hence, the final formula of angular apsidal precession for a unit complete revolution is formula_39 Beyond the Schwarzschild solution. Post-Newtonian expansion. In the Schwarzschild solution, it is assumed that the larger mass "M" is stationary and it alone determines the gravitational field (i.e., the geometry of space-time) and, hence, the lesser mass "m" follows a geodesic path through that fixed space-time. This is a reasonable approximation for photons and the orbit of Mercury, which is roughly 6 million times lighter than the Sun. However, it is inadequate for binary stars, in which the masses may be of similar magnitude. The metric for the case of two comparable masses cannot be solved in closed form and therefore one has to resort to approximation techniques such as the post-Newtonian approximation or numerical approximations. In passing, we mention one particular exception in lower dimensions (see "R" = "T" model for details). In (1+1) dimensions, i.e. a space made of one spatial dimension and one time dimension, the metric for two bodies of equal masses can be solved analytically in terms of the Lambert W function. However, the gravitational energy between the two bodies is exchanged via dilatons rather than gravitons, which require three-space in which to propagate. The post-Newtonian expansion is a calculational method that provides a series of ever more accurate solutions to a given problem. The method is iterative; an initial solution for particle motions is used to calculate the gravitational fields; from these derived fields, new particle motions can be calculated, from which even more accurate estimates of the fields can be computed, and so on. This approach is called "post-Newtonian" because the Newtonian solution for the particle orbits is often used as the initial solution. The theory can be divided into two parts: first, one finds the two-body effective potential that captures the GR corrections to the Newtonian potential; second, one solves the resulting equations of motion. Modern computational approaches. Einstein's equations can also be solved on a computer using sophisticated numerical methods. Given sufficient computer power, such solutions can be more accurate than post-Newtonian solutions. However, such calculations are demanding because the equations must generally be solved in a four-dimensional space. 
Nevertheless, beginning in the late 1990s, it became possible to solve difficult problems such as the merger of two black holes, an especially challenging version of the Kepler problem in general relativity. Gravitational radiation. According to general relativity, two bodies orbiting one another will emit gravitational radiation (assuming there is no incoming gravitational radiation), causing the orbits to gradually lose energy. The formulae describing the loss of energy and angular momentum due to gravitational radiation from the two bodies of the Kepler problem have been calculated. The rate of energy loss (averaged over a complete orbit) is given by formula_40 where "e" is the orbital eccentricity and "a" is the semimajor axis of the elliptical orbit. The angular brackets on the left-hand side of the equation represent the averaging over a single orbit. Similarly, the average rate of angular momentum loss equals formula_41 The rate of period decrease is given by formula_42 where "P""b" is the orbital period. The losses in energy and angular momentum increase significantly as the eccentricity approaches one, i.e., as the ellipse of the orbit becomes ever more elongated. The radiation losses also increase significantly with a decreasing size "a" of the orbit.
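The Schwarzschild radius and the apsidal precession formula quoted above can be verified numerically. In the Python sketch below, the physical constants and Mercury's orbital elements are standard reference values rather than figures taken from this article; it reproduces the radii quoted earlier and the roughly 43 arcseconds per century of anomalous perihelion precession.

```python
# r_s = 2GM/c^2 and delta_phi ~ 6*pi*G(M + m) / (c^2 * A * (1 - e^2)).
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_SUN, M_EARTH = 1.989e30, 5.972e24    # kg

def schwarzschild_radius(mass):
    return 2.0 * G * mass / c**2

print(schwarzschild_radius(M_SUN))     # ~2.95e3 m ("roughly 2953 meters")
print(schwarzschild_radius(M_EARTH))   # ~8.9e-3 m ("roughly 9 mm")

def precession_per_orbit(M, m, A, e):
    """Precession angle in radians per revolution."""
    return 6.0 * math.pi * G * (M + m) / (c**2 * A * (1.0 - e**2))

# Mercury: semi-major axis ~5.79e10 m, eccentricity ~0.2056, period ~87.97 d.
d_phi = precession_per_orbit(M_SUN, 3.30e23, 5.79e10, 0.2056)
orbits_per_century = 100.0 * 365.25 / 87.97
print(math.degrees(d_phi * orbits_per_century) * 3600.0)  # ~43 arcsec/century
```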
[ { "math_id": 0, "text": "F = G \\frac{M m}{r^2}," }, { "math_id": 1, "text": "ds^2 = dx^2 + dy^2 + dz^2 " }, { "math_id": 2, "text": "ds^2 = F(x, y, z) \\,dx^2 + G(x, y, z) \\,dy^2 + H(x, y, z)\\,dz^2 " }, { "math_id": 3, "text": "ds^2 = dr^2 + r^2 \\, d\\theta^2 + r^2 \\sin^2 \\theta \\, d\\varphi^2 " }, { "math_id": 4, "text": "ds^2 = g_{xx} \\,dx^2 + g_{xy} \\, dx \\, dy + g_{xz} \\, dx \\, dz + \\cdots + g_{zy} \\, dz \\, dy + g_{zz} \\, dz^2 " }, { "math_id": 5, "text": "c^2 \\, d\\tau^2 = c^2 \\, dt^2 - dx^2 - dy^2 - dz^2 " }, { "math_id": 6, "text": "c^2 \\, d\\tau^2 = c^2 \\, dt^2 - dr^2 - r^2 \\, d\\theta^2 - r^2 \\sin^2 \\theta \\, d\\varphi^2 " }, { "math_id": 7, "text": "c^2 \\, d\\tau^2 = g_{\\mu\\nu} dx^\\mu \\, dx^\\nu " }, { "math_id": 8, "text": "\\frac{d^2x^\\mu}{d q^2} + \\Gamma^\\mu_{\\nu\\lambda} \\frac{dx^\\nu}{d q} \\frac{dx^\\lambda}{dq} = 0" }, { "math_id": 9, "text": "\nr_s = \\frac{2GM}{c^2}\n" }, { "math_id": 10, "text": "m" }, { "math_id": 11, "text": "M" }, { "math_id": 12, "text": "\n\\left( \\frac{dr}{d\\tau} \\right)^2 = \\frac{E^2}{m^2 c^2} - \\left( 1 - \\frac{r_s}{r} \\right) \\left( c^2 + \\frac{h^2}{r^2} \\right).\n" }, { "math_id": 13, "text": "h" }, { "math_id": 14, "text": "h = r \\times v = { L \\over \\mu }" }, { "math_id": 15, "text": "\\mu" }, { "math_id": 16, "text": "\n\\left( \\frac{dr}{d\\varphi} \\right)^2 = \\frac{r^4}{b^2} - \\left( 1 - \\frac{r_s}{r} \\right) \\left( \\frac{r^4}{a^2} + r^2 \\right),\n" }, { "math_id": 17, "text": "a = \\frac{h}{c}" }, { "math_id": 18, "text": "b = \\frac{Lc}{E}" }, { "math_id": 19, "text": "\n\\varphi = \\int \\frac{1}{r^2} \\left[\\frac{1}{b^2} - \\left(1 - \\frac{r_\\mathrm{s}}{r}\\right) \\left(\\frac{1}{a^2} + \\frac{1}{r^2} \\right)\\right]^{-1/2} \\, dr.\n" }, { "math_id": 20, "text": "\n\\left( \\frac{dr}{d\\tau} \\right)^2 = \\frac{E^2}{m^2 c^2} - c^2 + \\frac{r_s c^2}{r} - \\frac{h^2}{r^2} + \\frac{r_s h^2}{r^3}\n" }, { "math_id": 21, "text": "\n\\frac{1}{2} m \\left( \\frac{dr}{d\\tau} \\right)^2 = \\left[ \\frac{E^2}{2 m c^2} - \\frac{1}{2} m c^2 \\right] + \\frac{GMm}{r} - \\frac{ L^2 }{ 2 \\mu r^2 } + \\frac{ G(M+m) L^2 }{c^2 \\mu r^3}\n" }, { "math_id": 22, "text": "\nV(r) = -\\frac{GMm}{r} + \\frac{ L^2 }{ 2 \\mu r^2 } - \\frac{ G(M+m) L^2 }{ c^2 \\mu r^3 }\n" }, { "math_id": 23, "text": "\n\\delta \\varphi \\approx \\frac{ 6\\pi G(M + m) }{c^2 A \\left( 1 - e^2 \\right)}\n" }, { "math_id": 24, "text": "\nV(r) = \\frac{mc^2}{2} \\left[ - \\frac{r_s}{r} + \\frac{a^2}{r^2} - \\frac{r_{s} a^2}{r^3} \\right].\n" }, { "math_id": 25, "text": "\nF = -\\frac{dV}{dr} = -\\frac{mc^2}{2r^4} \\left[ r_{s} r^2 - 2a^2 r + 3r_s a^2 \\right] = 0;\n" }, { "math_id": 26, "text": "\\begin{align}\n r_{\\mathrm{outer}} &= \\frac{a^2}{r_s} \\left( 1 + \\sqrt{1 - \\frac{3r_s^2}{a^2}} \\right) \\\\\n r_{\\mathrm{inner}} &= \\frac{a^2}{r_s} \\left( 1 - \\sqrt{1 - \\frac{3r_s^2}{a^2}} \\right) = \\frac{3a^2}{r_{\\mathrm{outer}}},\n\\end{align}" }, { "math_id": 27, "text": "\\begin{align}\n r_{\\mathrm{outer}} &\\approx \\frac{2a^2}{r_s} \\\\\n r_{\\mathrm{inner}} &\\approx \\frac{3}{2} r_s\n\\end{align}" }, { "math_id": 28, "text": "r_{\\text{outer}}^3 = \\frac{G(M+m)}{\\omega_\\varphi^2}" }, { "math_id": 29, "text": " \\frac{GMm}{r^2} = \\mu \\omega_\\varphi^2 r " }, { "math_id": 30, "text": "\n\\omega_\\varphi^2 \\approx \\frac{GM}{r_{\\mathrm{outer}}^3} = \\left( \\frac{r_s c^2}{2r_{\\mathrm{outer}}^3} \\right) = \\left( \\frac{r_s c^2}{2} \\right) \\left( \\frac{r_s^3}{8a^6}\\right) = \\frac{c^2 r_s^4}{16 
a^6}\n" }, { "math_id": 31, "text": " r_{\\text{outer}} \\approx r_{\\text{inner}} \\approx 3 r_s " }, { "math_id": 32, "text": "\\omega_r^2 = \\frac{1}{m} \\left[ \\frac{d^2V}{dr^2} \\right]_{r=r_{\\text{outer}}}" }, { "math_id": 33, "text": "\n\\omega_r^2 = \\left( \\frac{c^2 r_s}{2 r_{\\text{outer}}^4} \\right) \\left( r_{\\text{outer}} - r_{\\text{inner}} \\right) = \\omega_\\varphi^2 \\sqrt{1 - \\frac{3r_s^2}{a^2}} \n" }, { "math_id": 34, "text": "\\omega_r = \\omega_\\varphi \\left( 1 - \\frac{3r_s^2}{4a^2} + \\cdots \\right)" }, { "math_id": 35, "text": "\n\\delta \\varphi = T(\\omega_\\varphi - \\omega_r) \\approx 2\\pi \\left( \\frac{3r_s^2}{4a^2} \\right) = \n\\frac{3\\pi m^2 c^2}{2L^2} r_s^2\n" }, { "math_id": 36, "text": "\n\\delta \\varphi \\approx \\frac{3\\pi m^2 c^2}{2L^2} \\left( \\frac{4G^2 M^2}{c^4} \\right) = \\frac{6\\pi G^2 M^2 m^2}{c^2 L^2}\n" }, { "math_id": 37, "text": "\n\\frac{ h^2 }{ G(M + m) } = A\\left(1 - e^2\\right)\n" }, { "math_id": 38, "text": "\n\\delta \\varphi \\approx \\frac{6\\pi G(M + m)}{c^2 A\\left(1 - e^2\\right)}\n" }, { "math_id": 39, "text": "\\delta \\varphi \\approx \\frac{6\\pi G(M + m)}{c^2 l}" }, { "math_id": 40, "text": "\n-\\left\\langle \\frac{dE}{dt} \\right\\rangle = \n\\frac{32G^4 m_1^2 m_2^2(m_1 + m_2)}{5c^5 a^5 \\left(1 - e^2\\right)^{7/2}} \\left( 1 + \\frac{73}{24} e^2 + \\frac{37}{96} e^4 \\right)\n" }, { "math_id": 41, "text": "\n-\\left\\langle \\frac{dL_z}{dt} \\right\\rangle = \\frac{32G^{7/2} m_1^2 m_2^2 \\sqrt{m_1 + m_2}}{5c^5 a^{7/2} \\left(1 - e^2\\right)^2} \n\\left( 1 + \\frac{7}{8} e^2 \\right)\n" }, { "math_id": 42, "text": "\n-\\left\\langle \\frac{dP_b}{dt} \\right\\rangle = \n\\frac{192 \\pi G^{5/3} m_1 m_2 (m_1 + m_2)^{-1/3}}{5c^5 \\left(1 - e^2\\right)^{7/2}} \n\\left( 1 + \\frac{73}{24} e^2 + \\frac{37}{96} e^4 \\right) \\left(\\frac{P_b}{2 \\pi}\\right)^{-{5/3}}\n" } ]
https://en.wikipedia.org/wiki?curid=11694610
11695296
Stimpmeter
Golf putting green measurement device The Stimpmeter is a device used to measure the speed of a golf course putting green by applying a known velocity to a golf ball and measuring the distance traveled in feet. History. It was designed in 1935 by golfer Edward S. Stimpson, Sr. (1904–1985). The Massachusetts state amateur champion and former Harvard golf team captain, Stimpson was a spectator at the 1935 U.S. Open at Oakmont near Pittsburgh, where the winning score was 299 (+11). After witnessing a putt by a top professional (Gene Sarazen, a two-time champion) roll off a green, Stimpson was convinced the greens were unreasonably fast, but wondered how he could prove it. He developed a device, made of wood, now known as the Stimpmeter, which is an angled track that releases a ball at a known velocity so that the distance it rolls on a green's surface can be measured. In 1976, it was redesigned in aluminum by Frank Thomas of the United States Golf Association (USGA). It was first used by the USGA during the 1976 U.S. Open at Atlanta and made available to golf course superintendents in 1978. The 1976 version is painted green. In January 2013, the USGA announced a third-generation device based on work by Steven Quintavalla, a senior research engineer at the USGA labs. A second hole in this version enables the option of a shorter run-out. This version is painted blue, and is manufactured to a higher engineering tolerance to improve accuracy and precision. Description. The 1976 device is an extruded aluminum bar with a 145° V-shaped groove extending along its entire length, supporting the ball at two points. It is tapered at one end by removing metal from its underside to reduce the bounce of the ball as it rolls onto the green. It has a notch at a right angle to the length of the bar, a fixed distance from the lower tapered end, where the ball is placed. The notch may be a hole completely through the bar or just a depression in it. The ball is pulled out of the notch by gravity when the device is slowly raised to an angle of about 20°, rolling onto the green at a repeatable velocity. The distance travelled by the ball in feet is the 'speed' of the putting green. Six distances, three in each of two opposite directions, should be averaged on a flat section of the putting green. The three balls in each direction must finish within a short distance of each other for USGA validation of the test. Sloped greens. One problem is finding a near level surface as required in the USGA handbook. Many greens cannot be correctly measured: there may not be an area where the measured distances (or green speeds) in opposing directions differ by less than a foot, particularly when greens are very fast, since fast greens require a very long level surface. A formula, based on the work of Isaac Newton, as derived and extensively tested by A. Douglas Brede, solves that problem. The formula is: formula_0 (where S↑ is speed up the slope and S↓ is speed down the slope on the same path). This eliminates the effect of the slope and provides a true green speed even on severely sloped greens. Recommendations. The USGA stimpmetered putting greens across the country to produce recommended speed ranges for regular play, with faster ranges recommended for the U.S. Open. The greens at Oakmont Country Club (where the device was conceived) are some of the fastest in the world.
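Brede's formula is just the harmonic mean of the two directional readings, which a few lines of Python make concrete; the readings used here are hypothetical.

```python
# True green speed from Stimpmeter readings taken up and down the same slope:
# speed = 2 * S_up * S_down / (S_up + S_down).
def true_green_speed(s_up, s_down):
    return 2.0 * s_up * s_down / (s_up + s_down)

# Hypothetical readings in feet: 7.5 ft rolling uphill, 13.0 ft downhill.
print(true_green_speed(7.5, 13.0))   # ~9.51 ft
# A naive arithmetic mean would give 10.25 ft, overstating the true speed.
```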
[ { "math_id": 0, "text": "\\frac{2\\times S\\uparrow \\times\\ S\\downarrow}{S\\uparrow +\\ S\\downarrow}" } ]
https://en.wikipedia.org/wiki?curid=11695296
11699089
Kirkpatrick–Seidel algorithm
The Kirkpatrick–Seidel algorithm, proposed by its authors as a potential "ultimate planar convex hull algorithm", is an algorithm for computing the convex hull of a set of points in the plane, with formula_0 time complexity, where formula_1 is the number of input points and formula_2 is the number of points (non-dominated or maximal points, as they are called in some texts) in the hull. Thus, the algorithm is output-sensitive: its running time depends on both the input size and the output size. Another output-sensitive algorithm, the gift wrapping algorithm, was known much earlier, but the Kirkpatrick–Seidel algorithm has an asymptotic running time that is significantly smaller and that always improves on the formula_3 bounds of non-output-sensitive algorithms. The Kirkpatrick–Seidel algorithm is named after its inventors, David G. Kirkpatrick and Raimund Seidel. Although the algorithm is asymptotically optimal, it is not very practical for moderate-sized problems. Algorithm. The basic idea of the algorithm is a kind of reversal of the divide-and-conquer algorithm for convex hulls of Preparata and Hong, dubbed "marriage-before-conquest" by the authors. The traditional divide-and-conquer algorithm splits the input points into two equal parts, e.g., by a vertical line, recursively finds convex hulls for the left and right subsets of the input, and then merges the two hulls into one by finding the "bridge edges", bitangents that connect the two hulls from above and below. The Kirkpatrick–Seidel algorithm splits the input as before, by finding the median of the "x"-coordinates of the input points. However, the algorithm reverses the order of the subsequent steps: its next step is to find the edges of the convex hull that intersect the vertical line defined by this median x-coordinate, which turns out to require linear time. The points on the left and right sides of the splitting line that cannot contribute to the eventual hull are discarded, and the algorithm proceeds recursively on the remaining points. In more detail, the algorithm performs a separate recursion for the upper and lower parts of the convex hull; in the recursion for the upper hull, the noncontributing points to be discarded are those below the bridge edge vertically, while in the recursion for the lower hull the points above the bridge edge vertically are discarded. At the formula_4th level of the recursion, the algorithm solves at most formula_5 subproblems, each of size at most formula_6. The total number of subproblems considered is at most formula_2, since each subproblem finds a new convex hull edge. The worst case occurs when no points can be discarded and the subproblems are as large as possible; that is, when there are exactly formula_5 subproblems in each level of recursion up to level formula_7. For this worst case, there are formula_8 levels of recursion and formula_9 points considered within each level, so the total running time is formula_0 as stated.
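To make the recursion concrete, here is an illustrative Python sketch of the upper-hull half of the computation (the lower hull is symmetric). It is not the authors' implementation: it assumes points in general position (distinct x-coordinates), and for clarity it finds the bridge by brute force in quadratic time, whereas Kirkpatrick and Seidel find it in linear time, which is what yields the overall O(n log h) bound.

```python
# Marriage-before-conquest sketch for the upper convex hull.
from statistics import median_low

def cross(o, a, b):
    """Positive if b lies to the left of (above) the directed line o -> a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def bridge(points, x_mid):
    """Upper-hull edge (p, q) spanning the line x = x_mid (brute force)."""
    for p in points:
        for q in points:
            if p[0] <= x_mid < q[0] and all(cross(p, q, r) <= 0 for r in points):
                return p, q

def upper_hull(points):
    points = sorted(points)
    if len(points) <= 2:
        return points
    x_mid = median_low(p[0] for p in points)   # median x-coordinate split
    p, q = bridge(points, x_mid)
    # "Conquest": points strictly between p and q in x lie below the bridge
    # and are discarded before recursing on the two sides.
    left = [r for r in points if r[0] <= p[0]]
    right = [r for r in points if r[0] >= q[0]]
    return upper_hull(left)[:-1] + [p, q] + upper_hull(right)[1:]

print(upper_hull([(0, 0), (1, 3), (2, 1), (3, 4), (4, 0)]))
# [(0, 0), (1, 3), (3, 4), (4, 0)]
```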
[ { "math_id": 0, "text": "\\mathcal{O}(n \\log h)" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "h" }, { "math_id": 3, "text": "\\mathcal{O}(n \\log n)" }, { "math_id": 4, "text": "i" }, { "math_id": 5, "text": "2^i" }, { "math_id": 6, "text": "\\frac{n}{2^i}" }, { "math_id": 7, "text": "\\log_2 h" }, { "math_id": 8, "text": "\\mathcal{O}(\\log h)" }, { "math_id": 9, "text": "\\mathcal{O}(n)" } ]
https://en.wikipedia.org/wiki?curid=11699089
1169924
Returns to scale
Microeconomic concept In economics, the concept of returns to scale arises in the context of a firm's production function. It describes the long-run relationship between a proportional increase in all inputs (factors of production) and the associated increase in output (production). In the long run, all factors of production are variable and subject to change in response to a given increase in production scale. In other words, returns-to-scale analysis is a long-run theory, because a company can only change the scale of production in the long run, by changing its factors of production, for example by building new facilities, investing in new machinery, or improving technology. There are three possible types of returns to scale: increasing returns to scale, constant returns to scale, and decreasing (or diminishing) returns to scale. A firm's production function could exhibit different types of returns to scale in different ranges of output. Typically, there could be increasing returns at relatively low output levels, decreasing returns at relatively high output levels, and constant returns at some range of output levels between those extremes. In mainstream microeconomics, the returns to scale faced by a firm are purely technologically imposed and are not influenced by economic decisions or by market conditions (i.e., conclusions about returns to scale are derived from the specific mathematical structure of the production function "in isolation"). As production scales up, companies can use more advanced and sophisticated technologies, resulting in more streamlined and specialised production within the company. Example. When the usages of all inputs increase by a factor of 2, the new value of output will be: twice the previous output if there are constant returns to scale; less than twice the previous output if there are decreasing returns to scale; and more than twice the previous output if there are increasing returns to scale. Assuming that the factor costs are constant (that is, that the firm is a perfect competitor in all input markets) and the production function is homothetic, a firm experiencing constant returns will have constant long-run average costs, a firm experiencing decreasing returns will have increasing long-run average costs, and a firm experiencing increasing returns will have decreasing long-run average costs. However, this relationship breaks down if the firm does not face perfectly competitive factor markets (i.e., in this context, the price one pays for a good does depend on the amount purchased). For example, if there are increasing returns to scale in some range of output levels, but the firm is so big in one or more input markets that increasing its purchases of an input drives up the input's per-unit cost, then the firm could have diseconomies of scale in that range of output levels. Conversely, if the firm is able to get bulk discounts of an input, then it could have economies of scale in some range of output levels even if it has decreasing returns in production in that output range. Formal definitions. Formally, a production function formula_0 is defined to have constant returns to scale if, for any constant "a" greater than 0, formula_1 (in this case, the function formula_2 is homogeneous of degree 1); decreasing returns to scale if, for any constant "a" greater than 1, formula_3; and increasing returns to scale if, for any constant "a" greater than 1, formula_4, where "K" and "L" are factors of production (capital and labor, respectively). In a more general set-up, for a multi-input-multi-output production processes, one may assume technology can be represented via some technology set, call it formula_5, which must satisfy some regularity conditions of production theory. In this case, the property of constant returns to scale is equivalent to saying that the technology set formula_5 is a cone, i.e., satisfies the property formula_6. In turn, if there is a production function that will describe the technology set formula_5 it will have to be homogeneous of degree 1. Formal example. 
If the Cobb–Douglas production function has its general form formula_7 with formula_8 and formula_9 then formula_10 and, for "a" &gt; 1, there are increasing returns if "b" + "c" &gt; 1, constant returns if "b" + "c" = 1, and decreasing returns if "b" + "c" &lt; 1.
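This scaling behaviour is easy to confirm numerically; in the Python sketch below, the parameter values are arbitrary illustrative choices.

```python
# Scaling both Cobb-Douglas inputs by a multiplies output by a**(b + c),
# so b + c determines the type of returns to scale.
def cobb_douglas(K, L, A=1.0, b=0.3, c=0.5):
    return A * K**b * L**c

K, L, a = 4.0, 9.0, 2.0
ratio = cobb_douglas(a * K, a * L) / cobb_douglas(K, L)

print(ratio)             # ~1.741
print(a ** (0.3 + 0.5))  # matches: b + c = 0.8 < 1, so decreasing returns
```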
[ { "math_id": 0, "text": "\\ F(K,L)" }, { "math_id": 1, "text": "\\ F(aK,aL)=aF(K,L) " }, { "math_id": 2, "text": "F" }, { "math_id": 3, "text": "\\ F(aK,aL)<aF(K,L) " }, { "math_id": 4, "text": "\\ F(aK,aL)>aF(K,L) " }, { "math_id": 5, "text": "\\ T " }, { "math_id": 6, "text": "\\ aT=T, \\forall a>0 " }, { "math_id": 7, "text": "\\ F(K,L)=AK^{b}L^{c}" }, { "math_id": 8, "text": "0<b<1" }, { "math_id": 9, "text": "0<c<1," }, { "math_id": 10, "text": "\\ F(aK,aL)=A(aK)^{b}(aL)^{c}=Aa^{b}a^{c}K^{b}L^{c}=a^{b+c}AK^{b}L^{c}=a^{b+c}F(K,L)," } ]
https://en.wikipedia.org/wiki?curid=1169924
1169984
Markov blanket
Subset of variables that contains all the useful information In statistics and machine learning, when one wants to infer a random variable from a set of variables, a subset of the variables is usually sufficient, and the remaining variables are uninformative. Such a subset that contains all the useful information is called a Markov blanket. If a Markov blanket is minimal, meaning that no variable can be dropped from it without losing information, it is called a Markov boundary. Identifying a Markov blanket or a Markov boundary helps to extract useful features. The terms Markov blanket and Markov boundary were coined by Judea Pearl in 1988. A Markov blanket can be constituted by a set of Markov chains. Markov blanket. A Markov blanket of a random variable formula_0 in a random variable set formula_1 is any subset formula_2 of formula_3, conditioned on which the other variables are independent of formula_0: formula_4 It means that formula_2 contains at least all the information one needs to infer formula_0; the variables in formula_5 are redundant. In general, a given Markov blanket is not unique. Any set in formula_3 that contains a Markov blanket is also a Markov blanket itself. Specifically, formula_3 is a Markov blanket of formula_0 in formula_3. Markov boundary. A Markov boundary of formula_0 in formula_3 is a subset formula_6 of formula_3, such that formula_6 itself is a Markov blanket of formula_0, but any proper subset of formula_6 is not a Markov blanket of formula_0. In other words, a Markov boundary is a minimal Markov blanket. The Markov boundary of a node formula_7 in a Bayesian network is the set of nodes composed of formula_7's parents, formula_7's children, and formula_7's children's other parents; a minimal sketch computing this set appears below. In a Markov random field, the Markov boundary for a node is the set of its neighboring nodes. In a dependency network, the Markov boundary for a node is the set of its parents. Uniqueness of Markov boundary. The Markov boundary always exists. Under some mild conditions, the Markov boundary is unique. However, for most practical and theoretical scenarios multiple Markov boundaries may provide alternative solutions. When there are multiple Markov boundaries, quantities measuring causal effect could fail to be well defined.
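For the Bayesian-network case, the Markov boundary can be read directly off the directed graph. The Python sketch below is illustrative (the example DAG and function names are mine): it collects a node's parents, its children, and its children's other parents.

```python
def markov_boundary(node, parents):
    """Markov boundary of `node` in a DAG given as a node -> set-of-parents map."""
    children = {v for v, ps in parents.items() if node in ps}
    coparents = {p for child in children for p in parents[child]}
    return (parents[node] | children | coparents) - {node}

# Example DAG: A -> C, B -> C, C -> D, E -> D
dag = {"A": set(), "B": set(), "C": {"A", "B"}, "D": {"C", "E"}, "E": set()}
print(markov_boundary("C", dag))   # {'A', 'B', 'D', 'E'}
```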
[ { "math_id": 0, "text": "Y" }, { "math_id": 1, "text": "\\mathcal{S}=\\{X_1,\\ldots,X_n\\}" }, { "math_id": 2, "text": "\\mathcal{S}_1" }, { "math_id": 3, "text": "\\mathcal{S}" }, { "math_id": 4, "text": "Y\\perp \\!\\!\\! \\perp\\mathcal{S}\\backslash\\mathcal{S}_1 \\mid \\mathcal{S}_1." }, { "math_id": 5, "text": "\\mathcal{S}\\backslash\\mathcal{S}_1" }, { "math_id": 6, "text": "\\mathcal{S}_2" }, { "math_id": 7, "text": "A" } ]
https://en.wikipedia.org/wiki?curid=1169984
11700418
Bifolium
Quartic plane curve A bifolium is a quartic plane curve with equation in Cartesian coordinates: formula_0 Construction and equations. Given a circle C through a point O, and a line L tangent to the circle at point O: for each point Q on C, define the point P such that PQ is parallel to the tangent line L, and PQ = OQ. The collection of points P forms the bifolium. In polar coordinates, the bifolium's equation is formula_1 For "a" = 1, the total area enclosed by the two leaves is π/32 ≈ 0.098.
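The area statement can be checked by numerically integrating the polar form with the standard polar area formula ½∫ρ² dθ; the following Python sketch is illustrative and not part of the original article.

```python
# Midpoint-rule integration of (1/2) * rho(t)**2 over t in [0, pi],
# where rho = a * sin(t) * cos(t)**2 traces both leaves of the bifolium.
import math

def bifolium_area(a=1.0, n=100000):
    total = 0.0
    for k in range(n):
        t = math.pi * (k + 0.5) / n
        rho = a * math.sin(t) * math.cos(t) ** 2
        total += 0.5 * rho * rho * (math.pi / n)
    return total

print(bifolium_area())   # ~0.0982
print(math.pi / 32)      # exact value for a = 1
```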
[ { "math_id": 0, "text": "(x^2 + y^2)^2 = ax^2y." }, { "math_id": 1, "text": "\\rho = a \\sin\\theta \\cos^2\\theta." } ]
https://en.wikipedia.org/wiki?curid=11700418
11700432
Convex hull algorithms
Class of algorithms in computational geometry Algorithms that construct convex hulls of various objects have a broad range of applications in mathematics and computer science. In computational geometry, numerous algorithms have been proposed for computing the convex hull of a finite set of points, with various computational complexities. Computing the convex hull means constructing an unambiguous, efficient representation of the required convex shape. The complexity of the corresponding algorithms is usually estimated in terms of "n", the number of input points, and sometimes also in terms of "h", the number of points on the convex hull. Planar case. Consider the general case when the input to the algorithm is a finite unordered set of points on a Cartesian plane. An important special case, in which the points are given in the order of traversal of a simple polygon's boundary, is described later in a separate subsection. If not all points are on the same line, then their convex hull is a convex polygon whose vertices are some of the points in the input set. Its most common representation is the list of its vertices ordered along its boundary clockwise or counterclockwise. In some applications it is convenient to represent a convex polygon as an intersection of a set of half-planes. Lower bound on computational complexity. For a finite set of points in the plane, the lower bound on the computational complexity of finding the convex hull represented as a convex polygon is easily shown to be the same as for sorting, using the following reduction. For the set formula_0 of numbers to sort, consider the set formula_1 of points in the plane. Since they lie on a parabola, which is a convex curve, it is easy to see that the vertices of the convex hull, when traversed along the boundary, produce the sorted order of the numbers formula_0. Clearly, linear time is required for the described transformation of numbers into points and then extracting their sorted order. Therefore, in the general case the convex hull of "n" points cannot be computed more quickly than sorting. The standard Ω("n" log "n") lower bound for sorting is proven in the decision tree model of computing, in which only numerical comparisons but not arithmetic operations can be performed; however, in this model, convex hulls cannot be computed at all. Sorting also requires Ω("n" log "n") time in the algebraic decision tree model of computation, a model that is more suitable for convex hulls, and in this model convex hulls also require Ω("n" log "n") time. However, in models of computer arithmetic that allow numbers to be sorted more quickly than "O"("n" log "n") time, for instance by using integer sorting algorithms, planar convex hulls can also be computed more quickly: the Graham scan algorithm for convex hulls consists of a single sorting step followed by a linear amount of additional work. Optimal output-sensitive algorithms. As stated above, the complexity of finding a convex hull as a function of the input size "n" is lower bounded by Ω("n" log "n"). However, the complexity of some convex hull algorithms can be characterized in terms of both input size "n" and the output size "h" (the number of points in the hull). Such algorithms are called output-sensitive algorithms. They may be asymptotically more efficient than Θ("n" log "n") algorithms in cases when "h" = "o"("n"). The lower bound on worst-case running time of output-sensitive convex hull algorithms was established to be Ω("n" log "h") in the planar case. 
There are several algorithms which attain this optimal time complexity. The earliest one was introduced by Kirkpatrick and Seidel in 1986 (who called it "the ultimate convex hull algorithm"). A much simpler algorithm was developed by Chan in 1996, and is called Chan's algorithm. Algorithms. Known convex hull algorithms are usually compared by their dates of first publication and by their time complexities, stated in terms of the number of input points "n" and the number of points on the hull "h". Note that in the worst case "h" may be as large as "n". Akl–Toussaint heuristic. The following simple heuristic is often used as the first step in implementations of convex hull algorithms to improve their performance. It is based on the efficient convex hull algorithm by Selim Akl and G. T. Toussaint, 1978. The idea is to quickly exclude many points that would not be part of the convex hull anyway. This method is based on the following idea. Find the two points with the lowest and highest x-coordinates, and the two points with the lowest and highest y-coordinates. (Each of these operations takes O("n").) These four points form a convex quadrilateral, and all points that lie in this quadrilateral (except for the four initially chosen vertices) are not part of the convex hull. Finding all of these points that lie in this quadrilateral is also O("n"), and thus, the entire operation is O("n"). Optionally, the points with smallest and largest sums of x- and y-coordinates as well as those with smallest and largest differences of x- and y-coordinates can also be added to the quadrilateral, thus forming an irregular convex octagon, whose insides can be safely discarded. If the points are random variables, then for a narrow but commonly encountered class of probability density functions, this "throw-away" pre-processing step will make a convex hull algorithm run in linear expected time, even if the worst-case complexity of the convex hull algorithm is quadratic in "n". On-line and dynamic convex hull problems. The discussion above considers the case when all input points are known in advance. One may consider two other settings: an online setting, in which the input points are given one by one and the hull must be updated after each arrival, and a fully dynamic setting, in which points may be both inserted and deleted. Insertion of a point may increase the number of vertices of a convex hull at most by 1, while deletion may convert an "n"-vertex convex hull into an "n-1"-vertex one. The online version may be handled with O(log "n") per point, which is asymptotically optimal. The dynamic version may be handled with O(log2 "n") per operation. Simple polygon. The convex hull of a simple polygon is divided by the polygon into pieces, one of which is the polygon itself and the rest are "pockets" bounded by a piece of the polygon boundary and a single hull edge. Although many algorithms have been published for the problem of constructing the convex hull of a simple polygon, nearly half of them are incorrect. McCallum and Avis provided the first correct algorithm. A later simplification uses only a single stack data structure. That algorithm traverses the polygon clockwise, starting from its leftmost vertex. As it does, it stores a convex sequence of vertices on the stack, the ones that have not yet been identified as being within pockets. At each step, the algorithm follows a path along the polygon from the stack top to the next vertex that is not in one of the two pockets adjacent to the stack top. Then, while the top two vertices on the stack together with this new vertex are not in convex position, it pops the stack, before finally pushing the new vertex onto the stack. 
Higher dimensions. A number of algorithms are known for the three-dimensional case, as well as for arbitrary dimensions. Chan's algorithm is used for dimensions 2 and 3, and Quickhull is used for computation of the convex hull in higher dimensions. For a finite set of points, the convex hull is a convex polyhedron in three dimensions, or in general a convex polytope for any number of dimensions, whose vertices are some of the points in the input set. Its representation, however, is not as simple as in the planar case. In higher dimensions, even if the vertices of a convex polytope are known, construction of its faces is a non-trivial task, as is the dual problem of constructing the vertices given the faces. The size of the output face information may be exponentially larger than the size of the input vertices, and even in cases where the input and output are both of comparable size the known algorithms for high-dimensional convex hulls are not output-sensitive, due both to issues with degenerate inputs and to intermediate results of high complexity. References.
[ { "math_id": 0, "text": "x_1,\\dots,x_n" }, { "math_id": 1, "text": "(x_1, x^2_1),\\dots,(x_n, x^2_n)" } ]
https://en.wikipedia.org/wiki?curid=11700432
1170097
Hopfield network
Form of artificial neural network A Hopfield network (associative memory, Ising–Lenz–Little model, or Nakano–Amari–Hopfield network) is a spin glass system used to model neural networks, based on Ernst Ising's work with Wilhelm Lenz on the Ising model of magnetic materials. Hopfield networks were first described with respect to recurrent neural networks independently by Kaoru Nakano in 1971 and Shun'ichi Amari in 1972, and with respect to biological neural networks by William Little in 1974, and were popularised by John Hopfield in 1982. Hopfield networks serve as content-addressable ("associative") memory systems with binary threshold nodes, or with continuous variables. Hopfield networks also provide a model for understanding human memory. History. The Ising model itself was published in the 1920s as a model of magnetism; however, it was studied at thermal equilibrium, which does not change with time. Glauber in 1963 studied the Ising model evolving in time, as a process towards thermal equilibrium (Glauber dynamics), thereby adding the component of time. The second component to be added was adaptation to stimulus. Shun'ichi Amari in 1972 proposed to modify the weights of an Ising model by the Hebbian learning rule as a model of associative memory. The same idea was published by William A. Little in 1974, who was acknowledged by Hopfield in his 1982 paper. The Sherrington–Kirkpatrick model of spin glass, published in 1975, is the Hopfield network with random initialization. Sherrington and Kirkpatrick found that it is highly likely for the energy function of the SK model to have many local minima. Networks with continuous dynamics were developed by Hopfield in his 1984 paper. A major advance in memory storage capacity was developed by Krotov and Hopfield in 2016 through a change in network dynamics and energy function. This idea was further extended by Demircigil and collaborators in 2017. The continuous dynamics of large memory capacity models was developed in a series of papers between 2016 and 2020. Large memory storage capacity Hopfield networks are now called Dense Associative Memories or modern Hopfield networks. Structure. The units in Hopfield nets are binary threshold units, i.e. the units only take on two different values for their states, and the value is determined by whether or not the unit's input exceeds its threshold formula_0. Discrete Hopfield nets describe relationships between binary (firing or not-firing) neurons formula_1. At a certain time, the state of the neural net is described by a vector formula_2, which records which neurons are firing in a binary word of formula_3 bits. The interactions formula_4 connect units that usually take on values of 1 or −1, and this convention will be used throughout this article; however, other literature might use units that take values of 0 and 1. These interactions are "learned" via Hebb's law of association, such that, for a certain state formula_5 and distinct nodes formula_6: formula_7, but formula_8. (For units taking values in formula_10, the corresponding rule is formula_9.) Once the network is trained, the formula_4 no longer evolve. If a new state of neurons formula_11 is introduced to the neural network, the net acts on neurons such that formula_12 if formula_13, and formula_14 if formula_15, where formula_16 is the threshold value of the i'th neuron (often taken to be 0). In this way, Hopfield networks have the ability to "remember" states stored in the interaction matrix, because if a new state formula_17 is subjected to the interaction matrix, each neuron will change until it matches the original state formula_18 (see the Updates section below).
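A minimal sketch of storage and recall in this convention follows (±1 units, zero thresholds, and weights summed over patterns as in the Hebbian rule discussed later; the unnormalized sum does not change sign-based updates). The pattern count, network size, and corruption level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store patterns with the Hebb rule: w_ij = sum over patterns of V_i V_j,
# with zero self-connections (w_ii = 0)
patterns = rng.choice([-1, 1], size=(3, 100))    # 3 patterns, 100 neurons
W = patterns.T @ patterns
np.fill_diagonal(W, 0)

# Recall: start from a corrupted pattern and update neurons asynchronously
state = patterns[0] * rng.choice([1, -1], size=100, p=[0.9, 0.1])
for _ in range(5):                               # a few full sweeps
    for i in rng.permutation(100):
        state[i] = 1 if W[i] @ state >= 0 else -1

print((state == patterns[0]).mean())             # typically 1.0
```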
The connections in a Hopfield net typically have the following restrictions: no unit has a connection with itself (formula_19), and connections are symmetric (formula_20). The constraint that weights are symmetric guarantees that the energy function decreases monotonically while following the activation rules. A network with asymmetric weights may exhibit some periodic or chaotic behaviour; however, Hopfield found that this behavior is confined to relatively small parts of the phase space and does not impair the network's ability to act as a content-addressable associative memory system. Hopfield also modeled neural nets for continuous values, in which the electric output of each neuron is not binary but some value between 0 and 1. He found that this type of network was also able to store and reproduce memorized states. Notice that every pair of units "i" and "j" in a Hopfield network has a connection that is described by the connectivity weight formula_4. In this sense, the Hopfield network can be formally described as a complete undirected graph formula_21, where formula_22 is a set of McCulloch–Pitts neurons and formula_23 is a function that links pairs of units to a real value, the connectivity weight. Updating. Updating one unit (node in the graph simulating the artificial neuron) in the Hopfield network is performed using the following rule: formula_24 where formula_25 is the strength of the connection weight from unit j to unit i, formula_26 is the state of unit i, and formula_27 is the threshold of unit i. Updates in the Hopfield network can be performed in two different ways: asynchronously, in which only one unit is updated at a time (the unit can be picked at random, or a pre-defined order can be imposed), or synchronously, in which all units are updated at the same time (this requires a central clock and is often regarded as less biologically realistic). Neurons "attract or repel each other" in state space. The weight between two units has a powerful impact upon the values of the neurons. Consider the connection weight formula_25 between two neurons i and j. If formula_28, the updating rule implies that when formula_29, the contribution of j to the weighted sum is positive, and formula_30 is pulled by j towards the value formula_31; and when formula_32, the contribution of j to the weighted sum is negative, and formula_30 is pushed by j towards the value formula_33. Thus, the values of neurons "i" and "j" will converge if the weight between them is positive. Similarly, they will diverge if the weight is negative. Working principles of discrete and continuous Hopfield networks. Bruck shed light on the behavior of a neuron in the discrete Hopfield network when proving its convergence in his paper in 1990. A subsequent paper further investigated the behavior of any neuron in both discrete-time and continuous-time Hopfield networks when the corresponding energy function is minimized during an optimization process. Bruck shows that neuron "j" changes its state "if and only if" it further decreases the following biased pseudo-cut. The discrete Hopfield network minimizes the following biased pseudo-cut for the synaptic weight matrix of the Hopfield net: formula_34 where formula_35 and formula_36 represent the set of neurons which are −1 and +1, respectively, at time formula_37. For further details, see the recent paper. The discrete-time Hopfield network always minimizes exactly the following pseudo-cut: formula_38 The continuous-time Hopfield network always minimizes an upper bound to the following weighted cut: formula_39 where formula_40 is a zero-centered sigmoid function. The complex Hopfield network, on the other hand, generally tends to minimize the so-called shadow-cut of the complex weight matrix of the net. Energy. Hopfield nets have a scalar value associated with each state of the network, referred to as the "energy", "E", of the network, where: formula_41 This quantity is called "energy" because it either decreases or stays the same upon network units being updated. Furthermore, under repeated updating the network will eventually converge to a state which is a local minimum in the energy function (which is considered to be a Lyapunov function). Thus, if a state is a local minimum in the energy function, it is a stable state for the network.
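To make the monotonicity claim concrete, the following sketch checks numerically that one asynchronous sweep of the update rule never increases the energy formula_41. Thresholds are set to zero here, which the text notes is a common choice, and the random symmetric weights are an illustrative assumption.

```python
import numpy as np

def energy(W, s, theta):
    # E = -1/2 * sum_ij w_ij s_i s_j - sum_i theta_i s_i
    return -0.5 * s @ W @ s - theta @ s

rng = np.random.default_rng(1)
n = 50
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                 # symmetric weights: w_ij = w_ji
np.fill_diagonal(W, 0)            # no self-connections: w_ii = 0
theta = np.zeros(n)               # zero thresholds
s = rng.choice([-1, 1], size=n)

E = energy(W, s, theta)
for i in rng.permutation(n):      # one asynchronous sweep
    s[i] = 1 if W[i] @ s >= theta[i] else -1
    assert energy(W, s, theta) <= E + 1e-9   # energy never increases
    E = energy(W, s, theta)
```

With symmetric weights and zero diagonal, flipping a single unit to match the sign of its input can only lower (or preserve) the energy formula_41, which is what the assertion verifies sweep by sweep.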
Note that this energy function belongs to a general class of models in physics under the name of Ising models; these in turn are a special case of Markov networks, since the associated probability measure, the Gibbs measure, has the Markov property. Hopfield network in optimization. Hopfield and Tank presented the Hopfield network application in solving the classical traveling-salesman problem in 1985. Since then, the Hopfield network has been widely used for optimization. The idea of using the Hopfield network in optimization problems is straightforward: if a constrained/unconstrained cost function can be written in the form of the Hopfield energy function E, then there exists a Hopfield network whose equilibrium points represent solutions to the constrained/unconstrained optimization problem. Minimizing the Hopfield energy function both minimizes the objective function and satisfies the constraints, since the constraints are "embedded" into the synaptic weights of the network. Although including the optimization constraints into the synaptic weights in the best possible way is a challenging task, many difficult optimization problems with constraints in different disciplines have been converted to the Hopfield energy function: associative memory systems, analog-to-digital conversion, the job-shop scheduling problem, quadratic assignment and other related NP-complete problems, the channel allocation problem in wireless networks, the mobile ad-hoc network routing problem, image restoration, system identification, combinatorial optimization, etc., to name a few. Further details can be found in e.g. the paper. Initialization and running. Initialization of the Hopfield networks is done by setting the values of the units to the desired start pattern. Repeated updates are then performed until the network converges to an attractor pattern. Convergence is generally assured, as Hopfield proved that the attractors of this nonlinear dynamical system are stable, not periodic or chaotic as in some other systems. Therefore, in the context of Hopfield networks, an attractor pattern is a final stable state, a pattern that cannot change any value within it under updating. Training. Training a Hopfield net involves lowering the energy of states that the net should "remember". This allows the net to serve as a content-addressable memory system, that is to say, the network will converge to a "remembered" state if it is given only part of the state. The net can be used to recover from a distorted input to the trained state that is most similar to that input. This is called associative memory because it recovers memories on the basis of similarity. For example, if we train a Hopfield net with five units so that the state (1, −1, 1, −1, 1) is an energy minimum, and we give the network the state (1, −1, −1, −1, 1), it will converge to (1, −1, 1, −1, 1). Thus, the network is properly trained when the energies of the states which the network should remember are local minima. Note that, in contrast to perceptron training, the thresholds of the neurons are never updated. Learning rules. There are various different learning rules that can be used to store information in the memory of the Hopfield network. It is desirable for a learning rule to have both of the following two properties: it should be "local", meaning that each weight is updated using only information available to the neurons on either side of the connection, and it should be "incremental", meaning that new patterns can be learned without reference to the patterns used for training previously. These properties are desirable, since a learning rule satisfying them is more biologically plausible. For example, since the human brain is always learning new concepts, one can reason that human learning is incremental.
A learning system that was not incremental would generally be trained only once, with a huge batch of training data. Hebbian learning rule for Hopfield networks. Hebbian theory was introduced by Donald Hebb in 1949 in order to explain "associative learning", in which simultaneous activation of neuron cells leads to pronounced increases in synaptic strength between those cells. It is often summarized as "Neurons that fire together wire together. Neurons that fire out of sync fail to link." The Hebbian rule is both local and incremental. For Hopfield networks, it is implemented in the following manner when learning formula_42 binary patterns: formula_43 where formula_44 represents bit i from pattern formula_45. If the bits corresponding to neurons i and j are equal in pattern formula_45, then the product formula_46 will be positive. This would, in turn, have a positive effect on the weight formula_47, and the values of i and j will tend to become equal. The opposite happens if the bits corresponding to neurons i and j are different. Storkey learning rule. This rule was introduced by Amos Storkey in 1997 and is both local and incremental. Storkey also showed that a Hopfield network trained using this rule has a greater capacity than a corresponding network trained using the Hebbian rule. The weight matrix of an attractor neural network is said to follow the Storkey learning rule if it obeys: formula_48 where formula_49 is a form of "local field" at neuron i. This learning rule is local, since the synapses take into account only neurons at their sides. The rule makes use of more information from the patterns and weights than the generalized Hebbian rule, due to the effect of the local field. Spurious patterns. Patterns that the network uses for training (called "retrieval states") become attractors of the system. Repeated updates would eventually lead to convergence to one of the retrieval states. However, sometimes the network will converge to spurious patterns (different from the training patterns). The energy in these spurious patterns is also a local minimum. For each stored pattern x, the negation −x is also a spurious pattern. A spurious state can also be a linear combination of an odd number of retrieval states. For example, when using 3 patterns formula_50, one can get the following spurious state: formula_51 Spurious patterns that have an even number of states cannot exist, since they might sum up to zero. Capacity. The network capacity of the Hopfield network model is determined by the numbers of neurons and connections within a given network. Therefore, the number of memories that are able to be stored is dependent on neurons and connections. Furthermore, it was shown that the recall accuracy between vectors and nodes was 0.138 (approximately 138 vectors can be recalled from storage for every 1000 nodes) (Hertz et al., 1991). Therefore, it is evident that many mistakes will occur if one tries to store a large number of vectors. When the Hopfield model does not recall the right pattern, it is possible that an intrusion has taken place, since semantically related items tend to confuse the individual, and recollection of the wrong pattern occurs. Therefore, the Hopfield network model is shown to confuse one stored item with that of another upon retrieval. Perfect recall and high capacity (>0.14) can be achieved in the network with the Storkey learning method; ETAM experiments report this as well.
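The Storkey rule just described can be sketched as follows; `storkey_update` is an illustrative name, and the vectorized local-field computation implements the convention above, in which formula_49 excludes the terms k = i and k = j. Learning patterns one at a time, as the loop does, is exactly the incrementality the rule is prized for.

```python
import numpy as np

def storkey_update(W, eps):
    # One incremental Storkey step for a new +/-1 pattern eps (length n).
    # h[i, j] = sum over k (k != i and k != j) of W[i, k] * eps[k]
    n = len(eps)
    base = W @ eps                                  # sum over all k
    h = base[:, None] - W * eps[None, :]            # remove the k = j term
    h -= (np.diag(W) * eps)[:, None]                # remove the k = i term
    np.fill_diagonal(h, base - np.diag(W) * eps)    # i = j: remove k = i once
    # dW_ij = (eps_i eps_j - eps_i h_ji - eps_j h_ij) / n
    return W + (np.outer(eps, eps) - eps[:, None] * h.T - eps[None, :] * h) / n

rng = np.random.default_rng(2)
n = 100
W = np.zeros((n, n))
for eps in rng.choice([-1, 1], size=(10, n)):       # 10 patterns, one at a time
    W = storkey_update(W, eps)
# W can now be used with the usual sign-based update rule for recall
```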
Later models inspired by the Hopfield network were devised to raise the storage limit and reduce the retrieval error rate, with some being capable of one-shot learning. The storage capacity can be given as formula_52 where formula_42 is the number of neurons in the net. Human memory. The Hopfield model accounts for associative memory through the incorporation of memory vectors. Memory vectors can be partially cued, which sparks the retrieval of the most similar vector in the network. However, we will find out that due to this process, intrusions can occur. In associative memory for the Hopfield network, there are two types of operations: auto-association and hetero-association. The former occurs when a vector is associated with itself, and the latter when two different vectors are associated in storage. Furthermore, both types of operations are possible to store within a single memory matrix, but only if that given representation matrix is not one or the other of the operations, but rather the combination (auto-associative and hetero-associative) of the two. Hopfield's network model utilizes the same learning rule as Hebb's (1949) learning rule, which characterised learning as being a result of the strengthening of the weights in cases of neuronal activity. Rizzuto and Kahana (2001) were able to show that the neural network model can account for the effect of repetition on recall accuracy by incorporating a probabilistic-learning algorithm. During the retrieval process, no learning occurs. As a result, the weights of the network remain fixed, showing that the model is able to switch from a learning stage to a recall stage. By adding contextual drift, they were able to show the rapid forgetting that occurs in a Hopfield model during a cued-recall task. The entire network contributes to the change in the activation of any single node. McCulloch and Pitts' (1943) dynamical rule, which describes the behavior of neurons, does so in a way that shows how the activations of multiple neurons map onto the activation of a new neuron's firing rate, and how the weights of the neurons strengthen the synaptic connections between the newly activated neuron and those that activated it. Hopfield used McCulloch–Pitts's dynamical rule to show how retrieval is possible in the Hopfield network; however, he applied it in a repetitious fashion and used a nonlinear activation function instead of a linear one. This yields the Hopfield dynamical rule, and with it, Hopfield was able to show that with the nonlinear activation function the dynamical rule will always modify the values of the state vector in the direction of one of the stored patterns. Dense associative memory or modern Hopfield network. Hopfield networks are recurrent neural networks with dynamical trajectories converging to fixed point attractor states and described by an energy function. The state of each model neuron formula_53 is defined by a time-dependent variable formula_54, which can be chosen to be either discrete or continuous. A complete model describes the mathematics of how the future state of activity of each neuron depends on the known present or previous activity of all the neurons. In the original Hopfield model of associative memory, the variables were binary, and the dynamics were described by a one-at-a-time update of the state of the neurons.
An energy function quadratic in the formula_54 was defined, and the dynamics consisted of changing the activity of each single neuron formula_55 only if doing so would lower the total energy of the system. This same idea was extended to the case of formula_54 being a continuous variable representing the output of neuron formula_55, and formula_54 being a monotonic function of an input current. The dynamics became expressed as a set of first-order differential equations for which the "energy" of the system always decreased. The energy in the continuous case has one term which is quadratic in the formula_54 (as in the binary model), and a second term which depends on the gain function (neuron's activation function). While having many desirable properties of associative memory, both of these classical systems suffer from a small memory storage capacity, which scales linearly with the number of input features. In contrast, by increasing the number of parameters in the model so that there are not just pair-wise but also higher-order interactions between the neurons, one can increase the memory storage capacity. Dense Associative Memories (also known as the modern Hopfield networks) are generalizations of the classical Hopfield networks that break the linear scaling relationship between the number of input features and the number of stored memories. This is achieved by introducing stronger non-linearities (either in the energy function or neurons' activation functions), leading to super-linear (even exponential) memory storage capacity as a function of the number of feature neurons, in effect increasing the order of interactions between the neurons. The network still requires a sufficient number of hidden neurons. The key theoretical idea behind dense associative memory networks is to use an energy function and an update rule that is more sharply peaked around the stored memories in the space of neurons' configurations compared to the classical model, as demonstrated when the higher-order interactions and subsequent energy landscapes are explicitly modelled. Discrete variables. A simple example of the modern Hopfield network can be written in terms of binary variables formula_54 that represent the active formula_56 and inactive formula_57 state of the model neuron formula_55: formula_58 In this formula the weights formula_59 represent the matrix of memory vectors (index formula_60 enumerates different memories, and index formula_61 enumerates the content of each memory corresponding to the formula_55-th feature neuron), and the function formula_62 is a rapidly growing non-linear function. The update rule for individual neurons (in the asynchronous case) can be written in the following form: formula_63 This states that in order to calculate the updated state of the formula_64-th neuron the network compares two energies: the energy of the network with the formula_55-th neuron in the ON state and the energy of the network with the formula_55-th neuron in the OFF state, given the states of the remaining neurons. The updated state of the formula_55-th neuron selects the state that has the lower of the two energies. In the limiting case when the non-linear energy function is quadratic, formula_65, these equations reduce to the familiar energy function and the update rule for the classical binary Hopfield network.
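The discrete update rule above can be sketched directly. The function `F`, the pattern shapes, the cubic nonlinearity, and the corruption level are illustrative choices, not prescriptions from the literature.

```python
import numpy as np

def dense_update(xi, V, F):
    # Asynchronous update: each neuron picks the state (+1 or -1) whose
    # total memory overlap sum_mu F(xi_mu . V) is larger, i.e. whose
    # energy E = -sum_mu F(xi_mu . V) is lower
    V = V.copy()
    for i in range(len(V)):
        rest = xi @ V - xi[:, i] * V[i]          # overlaps excluding neuron i
        on = F(rest + xi[:, i]).sum()            # with V_i = +1
        off = F(rest - xi[:, i]).sum()           # with V_i = -1
        V[i] = 1 if on >= off else -1
    return V

rng = np.random.default_rng(3)
N_mem, N_f = 20, 64
xi = rng.choice([-1, 1], size=(N_mem, N_f))      # stored memories
F = lambda x: x ** 3                             # a rapidly growing F

probe = xi[0] * rng.choice([1, -1], size=N_f, p=[0.8, 0.2])
for _ in range(3):                               # a few asynchronous sweeps
    probe = dense_update(xi, probe, F)
print((probe == xi[0]).mean())                   # typically 1.0
```

With `F = lambda x: x ** 2` the same code reduces to the classical binary Hopfield update, as the limiting case in the text describes.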
The memory storage capacity of these networks can be calculated for random binary patterns. For the power energy function formula_66, the maximal number of memories that can be stored and retrieved from this network without errors is given by formula_67 For an exponential energy function formula_68, the memory storage capacity is exponential in the number of feature neurons: formula_69 Continuous variables. Modern Hopfield networks or dense associative memories can be best understood in continuous variables and continuous time. Consider the network architecture shown in Fig.1 and the equations for the evolution of the neurons' states, in which the currents of the feature neurons are denoted by formula_70 and the currents of the memory neurons are denoted by formula_71 (formula_72 stands for hidden neurons). There are no synaptic connections among the feature neurons or the memory neurons. A matrix formula_73 denotes the strength of synapses from a feature neuron formula_55 to the memory neuron formula_45. The synapses are assumed to be symmetric, so that the same value characterizes a different physical synapse from the memory neuron formula_45 to the feature neuron formula_55. The outputs of the memory neurons and the feature neurons are denoted by formula_74 and formula_75, which are non-linear functions of the corresponding currents. In general these outputs can depend on the currents of all the neurons in that layer, so that formula_76 and formula_77. It is convenient to define these activation functions as derivatives of the Lagrangian functions for the two groups of neurons. This way, the specific form of the equations for the neurons' states is completely defined once the Lagrangian functions are specified. Finally, the time constants for the two groups of neurons are denoted by formula_78 and formula_79, and formula_80 is the input current to the network that can be driven by the presented data. General systems of non-linear differential equations can have many complicated behaviors that can depend on the choice of the non-linearities and the initial conditions. For Hopfield networks, however, this is not the case: the dynamical trajectories always converge to a fixed point attractor state. This property is achieved because these equations are specifically engineered so that they have an underlying energy function. The terms grouped into square brackets represent a Legendre transform of the Lagrangian function with respect to the states of the neurons. If the Hessian matrices of the Lagrangian functions are positive semi-definite, the energy function is guaranteed to decrease on the dynamical trajectory. This property makes it possible to prove that the system of dynamical equations describing temporal evolution of neurons' activities will eventually reach a fixed point attractor state. In certain situations one can assume that the dynamics of hidden neurons equilibrates at a much faster time scale compared to the feature neurons, formula_81. In this case the steady state solution of the second equation in the system (1) can be used to express the currents of the hidden units through the outputs of the feature neurons. This makes it possible to reduce the general theory (1) to an effective theory for feature neurons only. The resulting effective update rules and the energies for various common choices of the Lagrangian functions are shown in Fig.2. In the case of the log-sum-exponential Lagrangian function, the update rule (if applied once) for the states of the feature neurons is the attention mechanism commonly used in many modern AI systems (see the references for the derivation of this result from the continuous-time formulation).
Relationship to classical Hopfield network with continuous variables. The classical formulation of continuous Hopfield networks can be understood as a special limiting case of the modern Hopfield networks with one hidden layer. Continuous Hopfield networks for neurons with graded response are typically described by dynamical equations and an energy function in which formula_82, and formula_83 is the inverse of the activation function formula_84. This model is a special limit of the class of models that is called models A, with the following choice of the Lagrangian functions that, according to the definition (2), leads to the activation functions. If we integrate out the hidden neurons, the system of equations (1) reduces to the equations on the feature neurons (5) with formula_85, and the general expression for the energy (3) reduces to the effective energy. While the first two terms in equation (6) are the same as those in equation (9), the third terms look superficially different. In equation (9) it is a Legendre transform of the Lagrangian for the feature neurons, while in (6) the third term is an integral of the inverse activation function. Nevertheless, these two expressions are in fact equivalent, since the derivatives of a function and its Legendre transform are inverse functions of each other. The easiest way to see that these two terms are equal explicitly is to differentiate each one with respect to formula_86. The results of these differentiations for both expressions are equal to formula_87. Thus, the two expressions are equal up to an additive constant. This completes the proof that the classical Hopfield network with continuous states is a special limiting case of the modern Hopfield network (1) with energy (3). General formulation of the modern Hopfield network. Biological neural networks have a large degree of heterogeneity in terms of different cell types. This section describes a mathematical model of a fully connected modern Hopfield network assuming the extreme degree of heterogeneity: every single neuron is different. Specifically, an energy function and the corresponding dynamical equations are described assuming that each neuron has its own activation function and kinetic time scale. The network is assumed to be fully connected, so that every neuron is connected to every other neuron using a symmetric matrix of weights formula_88; indices formula_89 and formula_90 enumerate different neurons in the network (see Fig.3). The easiest way to mathematically formulate this problem is to define the architecture through a Lagrangian function formula_91 that depends on the activities of all the neurons in the network. The activation function for each neuron is defined as a partial derivative of the Lagrangian with respect to that neuron's activity. From the biological perspective one can think about formula_92 as an axonal output of the neuron formula_89. In the simplest case, when the Lagrangian is additive for different neurons, this definition results in an activation that is a non-linear function of that neuron's activity. For non-additive Lagrangians this activation function can depend on the activities of a group of neurons. For instance, it can contain contrastive (softmax) or divisive normalization. The dynamical equation describing the temporal evolution of a given neuron belongs to the class of models called firing rate models in neuroscience.
Each neuron formula_89 collects the axonal outputs formula_93 from all the neurons, weights them with the synaptic coefficients formula_88, and produces its own time-dependent activity formula_94. The temporal evolution has a time constant formula_95, which in general can be different for every neuron. This network has a global energy function in which the first two terms represent the Legendre transform of the Lagrangian function with respect to the neurons' currents formula_94. The temporal derivative of this energy function can be computed on the dynamical trajectories (see the references for details); it is non-positive provided that the matrix formula_96 (or its symmetric part) is positive semi-definite. If, in addition to this, the energy function is bounded from below, the non-linear dynamical equations are guaranteed to converge to a fixed point attractor state. The advantage of formulating this network in terms of the Lagrangian functions is that it makes it possible to easily experiment with different choices of the activation functions and different architectural arrangements of neurons. For all those flexible choices the conditions of convergence are determined by the properties of the matrix formula_97 and the existence of the lower bound on the energy function. Hierarchical associative memory network. The neurons can be organized in layers so that every neuron in a given layer has the same activation function and the same dynamic time scale. If we assume that there are no horizontal connections between the neurons within the layer (lateral connections) and there are no skip-layer connections, the general fully connected network (11), (12) reduces to the architecture shown in Fig.4. It has formula_98 layers of recurrently connected neurons with the states described by continuous variables formula_99 and the activation functions formula_100; index formula_101 enumerates the layers of the network, and index formula_55 enumerates individual neurons in that layer. The activation functions can depend on the activities of all the neurons in the layer. Every layer can have a different number of neurons formula_102. These neurons are recurrently connected with the neurons in the preceding and the subsequent layers. The matrices of weights that connect neurons in layers formula_101 and formula_103 are denoted by formula_104 (the order of the upper indices for weights is the same as the order of the lower indices; in the example above this means that the index formula_55 enumerates neurons in the layer formula_101, and index formula_105 enumerates neurons in the layer formula_103). The feedforward weights and the feedback weights are equal. The dynamical equations for the neurons' states can be written down together with their boundary conditions. The main difference between these equations and those from the conventional feedforward networks is the presence of the second term, which is responsible for the feedback from higher layers. These top-down signals help neurons in lower layers to decide on their response to the presented stimuli. Following the general recipe, it is convenient to introduce a Lagrangian function formula_106 for the formula_101-th hidden layer, which depends on the activities of all the neurons in that layer.
The activation functions in that layer can be defined as partial derivatives of the Lagrangian. With these definitions, an energy (Lyapunov) function can be written down. If the Lagrangian functions, or equivalently the activation functions, are chosen in such a way that the Hessians for each layer are positive semi-definite and the overall energy is bounded from below, this system is guaranteed to converge to a fixed point attractor state. The temporal derivative of this energy function is non-increasing along the dynamical trajectories. Thus, the hierarchical layered network is indeed an attractor network with the global energy function. This network is described by a hierarchical set of synaptic weights that can be learned for each specific problem. References.
[ { "math_id": 0, "text": " U_i " }, { "math_id": 1, "text": "1,2,\\ldots,i,j,\\ldots,N" }, { "math_id": 2, "text": " V " }, { "math_id": 3, "text": " N " }, { "math_id": 4, "text": " w_{ij} " }, { "math_id": 5, "text": " V^s " }, { "math_id": 6, "text": "i,j" }, { "math_id": 7, "text": " w_{ij} = V_i^s V_j^s " }, { "math_id": 8, "text": " w_{ii} = 0 " }, { "math_id": 9, "text": " w_{ij} = (2V_i^s - 1)(2V_j^s -1) " }, { "math_id": 10, "text": " \\{0, 1\\} " }, { "math_id": 11, "text": " V^{s'} " }, { "math_id": 12, "text": "V^{s'}_i \\rightarrow 1 " }, { "math_id": 13, "text": " \\sum_j w_{ij} V^{s'}_j > U_i " }, { "math_id": 14, "text": "V^{s'}_i \\rightarrow -1 " }, { "math_id": 15, "text": " \\sum_j w_{ij} V^{s'}_j < U_i " }, { "math_id": 16, "text": "U_i" }, { "math_id": 17, "text": "V^{s'} " }, { "math_id": 18, "text": "V^{s} " }, { "math_id": 19, "text": "w_{ii}=0, \\forall i" }, { "math_id": 20, "text": "w_{ij} = w_{ji}, \\forall i,j" }, { "math_id": 21, "text": " G = \\langle V, f\\rangle " }, { "math_id": 22, "text": "V" }, { "math_id": 23, "text": "f:V^2 \\rightarrow \\mathbb R" }, { "math_id": 24, "text": "s_i \\leftarrow \\left\\{\\begin{array}{ll} +1 & \\text{if }\\sum_{j}{w_{ij}s_j}\\geq\\theta_i, \\\\\n -1 & \\text{otherwise.}\\end{array}\\right." }, { "math_id": 25, "text": "w_{ij}" }, { "math_id": 26, "text": "s_i" }, { "math_id": 27, "text": "\\theta_i" }, { "math_id": 28, "text": "w_{ij} > 0 " }, { "math_id": 29, "text": "s_j = 1" }, { "math_id": 30, "text": "s_{i}" }, { "math_id": 31, "text": "s_{i} = 1" }, { "math_id": 32, "text": "s_j = -1" }, { "math_id": 33, "text": "s_i = -1" }, { "math_id": 34, "text": " J_{pseudo-cut}(k) = \n\\sum_{i \\in C_1(k)} \\sum_{j \\in C_2(k)} w_{ij} + \\sum_{j \\in C_1(k)} {\\theta_j} " }, { "math_id": 35, "text": " C_1(k) " }, { "math_id": 36, "text": " C_2(k) " }, { "math_id": 37, "text": " k " }, { "math_id": 38, "text": " U(k) = \\sum_{i=1}^N \\sum_{j=1}^{N} w_{ij} ( s_i(k) - s_j(k) )^2 + 2 \\sum_{j=1}^N \\theta_j s_j(k) " }, { "math_id": 39, "text": " V(t) = \\sum_{i=1}^N \\sum_{j=1}^N w_{ij} ( f(s_i(t)) - f(s_j(t) )^2 + 2 \\sum_{j=1}^N \\theta_j f(s_j(t)) " }, { "math_id": 40, "text": " f(\\cdot) " }, { "math_id": 41, "text": "E = -\\frac12\\sum_{i,j} w_{ij} s_i s_j -\\sum_i \\theta_i s_i" }, { "math_id": 42, "text": "n" }, { "math_id": 43, "text": " w_{ij}=\\frac{1}{n}\\sum_{\\mu=1}^{n}\\epsilon_{i}^\\mu \\epsilon_{j}^\\mu " }, { "math_id": 44, "text": "\\epsilon_i^\\mu" }, { "math_id": 45, "text": "\\mu" }, { "math_id": 46, "text": " \\epsilon_{i}^\\mu \\epsilon_{j}^\\mu " }, { "math_id": 47, "text": "w_{ij} " }, { "math_id": 48, "text": " w_{ij}^{\\nu} = w_{ij}^{\\nu-1}\n\t\t +\\frac{1}{n}\\epsilon_{i}^{\\nu} \\epsilon_{j}^{\\nu} \n\t\t -\\frac{1}{n}\\epsilon_{i}^{\\nu} h_{ji}^{\\nu}\n\t\t -\\frac{1}{n}\\epsilon_{j}^{\\nu} h_{ij}^{\\nu}\n\t\t " }, { "math_id": 49, "text": " h_{ij}^{\\nu} = \\sum_{k=1~:~i\\neq k\\neq j}^{n} w_{ik}^{\\nu-1}\\epsilon_{k}^{\\nu} " }, { "math_id": 50, "text": " \\mu_1, \\mu_2, \\mu_3" }, { "math_id": 51, "text": " \\epsilon_{i}^{\\rm{mix}} = \\pm \\sgn(\\pm \\epsilon_{i}^{\\mu_{1}} \n\t\t\t \\pm \\epsilon_{i}^{\\mu_{2}}\n\t\t\t \\pm \\epsilon_{i}^{\\mu_{3}})\n" }, { "math_id": 52, "text": "C \\cong \\frac{n}{2\\log_2n}" }, { "math_id": 53, "text": "i " }, { "math_id": 54, "text": "V_i" }, { "math_id": 55, "text": "i" }, { "math_id": 56, "text": "V_i=+1" }, { "math_id": 57, "text": "V_i=-1" }, { "math_id": 58, "text": "E = - \\sum\\limits_{\\mu = 1}^{N_\\text{mem}} 
F\\Big(\\sum\\limits_{i=1}^{N_f}\\xi_{\\mu i} V_i\\Big)" }, { "math_id": 59, "text": "\\xi_{\\mu i}" }, { "math_id": 60, "text": "\\mu = 1...N_\\text{mem}" }, { "math_id": 61, "text": "i=1...N_f" }, { "math_id": 62, "text": "F(x)" }, { "math_id": 63, "text": "V^{(t+1)}_i = Sign\\bigg[ \\sum\\limits_{\\mu=1}^{N_\\text{mem}} \\bigg(F\\Big(\\xi_{\\mu i} + \\sum\\limits_{j\\neq i}\\xi_{\\mu j} V^{(t)}_j\\Big) - F\\Big(-\\xi_{\\mu i} + \\sum\\limits_{j\\neq i}\\xi_{\\mu j} V^{(t)}_j\\Big) \\bigg)\\bigg]" }, { "math_id": 64, "text": "i" }, { "math_id": 65, "text": "F(x) = x^2" }, { "math_id": 66, "text": "F(x)=x^n" }, { "math_id": 67, "text": "N^{max}_{\\text{mem}}\\approx \\frac{1}{2 (2n-3)!!} \\frac{N_f^{n-1}}{\\ln(N_f)}" }, { "math_id": 68, "text": "F(x)=e^x" }, { "math_id": 69, "text": "N^{max}_{\\text{mem}}\\approx 2^{N_f/2}" }, { "math_id": 70, "text": "x_i" }, { "math_id": 71, "text": "h_\\mu" }, { "math_id": 72, "text": "h" }, { "math_id": 73, "text": "\\xi_{\\mu i}" }, { "math_id": 74, "text": "f_\\mu" }, { "math_id": 75, "text": "g_i" }, { "math_id": 76, "text": "f_\\mu = f(\\{h_\\mu\\})" }, { "math_id": 77, "text": "g_i = g(\\{x_i\\})" }, { "math_id": 78, "text": "\\tau_f" }, { "math_id": 79, "text": "\\tau_h" }, { "math_id": 80, "text": "I_i" }, { "math_id": 81, "text": "\\tau_h\\ll\\tau_f" }, { "math_id": 82, "text": "V_i = g(x_i)" }, { "math_id": 83, "text": "g^{-1}(z)" }, { "math_id": 84, "text": "g(x)" }, { "math_id": 85, "text": "T_{ij} = \\sum\\limits_{\\mu=1}^{N_h} \\xi_{\\mu i }\\xi_{\\mu j}" }, { "math_id": 86, "text": "x_i" }, { "math_id": 87, "text": "x_i g(x_i)'" }, { "math_id": 88, "text": "W_{IJ}" }, { "math_id": 89, "text": "I" }, { "math_id": 90, "text": "J" }, { "math_id": 91, "text": "L(\\{x_I\\})" }, { "math_id": 92, "text": "g_I" }, { "math_id": 93, "text": "g_J" }, { "math_id": 94, "text": "x_I" }, { "math_id": 95, "text": "\\tau_I" }, { "math_id": 96, "text": "M_{IK}" }, { "math_id": 97, "text": "M_{IJ}" }, { "math_id": 98, "text": "N_\\text{layer}" }, { "math_id": 99, "text": "x_i^{A}" }, { "math_id": 100, "text": "g_i^{A}" }, { "math_id": 101, "text": "A" }, { "math_id": 102, "text": "N_A" }, { "math_id": 103, "text": "B" }, { "math_id": 104, "text": "\\xi^{(A,B)}_{ij}" }, { "math_id": 105, "text": "j" }, { "math_id": 106, "text": "L^A(\\{x^A_i\\})" } ]
https://en.wikipedia.org/wiki?curid=1170097
1170160
Chirality (mathematics)
Property of an object that is not congruent to its mirror image In geometry, a figure is chiral (and said to have chirality) if it is not identical to its mirror image, or, more precisely, if it cannot be mapped to its mirror image by rotations and translations alone. An object that is not chiral is said to be "achiral". A chiral object and its mirror image are said to be enantiomorphs. The word "chirality" is derived from the Greek χείρ (cheir), the hand, the most familiar chiral object; the word "enantiomorph" stems from the Greek ἐναντίος (enantios) 'opposite' + μορφή (morphe) 'form'. Examples. Some chiral three-dimensional objects, such as the helix, can be assigned a right or left handedness, according to the right-hand rule. Many other familiar objects exhibit the same chiral symmetry of the human body, such as gloves and shoes. Right shoes differ from left shoes only by being mirror images of each other. In contrast, thin gloves may not be considered chiral if one can wear them inside-out. The J-, L-, S- and Z-shaped "tetrominoes" of the popular video game Tetris also exhibit chirality, but only in a two-dimensional space. Individually they contain no mirror symmetry in the plane. Chirality and symmetry group. A figure is achiral if and only if its symmetry group contains at least one "orientation-reversing" isometry. (In Euclidean geometry any isometry can be written as formula_0 with an orthogonal matrix formula_1 and a vector formula_2. The determinant of formula_1 is then either 1 or −1; if it is −1 the isometry is orientation-reversing, otherwise it is orientation-preserving.) A general definition of chirality based on group theory exists. It does not refer to any orientation concept: an isometry is direct if and only if it is a product of squares of isometries, and if not, it is an indirect isometry. The resulting chirality definition works in spacetime. Chirality in two dimensions. In two dimensions, every figure which possesses an axis of symmetry is achiral, and it can be shown that every "bounded" achiral figure must have an axis of symmetry. (An "axis of symmetry" of a figure formula_3 is a line formula_4, such that formula_3 is invariant under the mapping formula_5, when formula_4 is chosen to be the formula_6-axis of the coordinate system.) For that reason, a triangle is achiral if it is equilateral or isosceles, and is chiral if it is scalene. Consider the following pattern. This figure is chiral, as it is not identical to its mirror image. But if one prolongs the pattern in both directions to infinity, one obtains an (unbounded) achiral figure which has no axis of symmetry. Its symmetry group is a frieze group generated by a single glide reflection. Chirality in three dimensions. In three dimensions, every figure that possesses a mirror plane of symmetry "S1", an inversion center of symmetry "S2", or a higher improper rotation (rotoreflection) "Sn" axis of symmetry is achiral. (A "plane of symmetry" of a figure formula_3 is a plane formula_7, such that formula_3 is invariant under the mapping formula_8, when formula_7 is chosen to be the formula_6-formula_9-plane of the coordinate system. A "center of symmetry" of a figure formula_3 is a point formula_10, such that formula_3 is invariant under the mapping formula_11, when formula_10 is chosen to be the origin of the coordinate system.) Note, however, that there are achiral figures lacking both plane and center of symmetry. An example is the figure formula_12 which is invariant under the orientation-reversing isometry formula_13 and thus achiral, but it has neither plane nor center of symmetry. The figure formula_14 is also achiral, as the origin is a center of symmetry, but it lacks a plane of symmetry.
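The first example can be checked mechanically. This small sketch (illustrative, not from the source) verifies that the isometry formula_13 is orientation-reversing and maps the point set formula_12 onto itself.

```python
import numpy as np

# The figure F0 from the text, as a set of points
F0 = {(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0),
      (2, 1, 1), (-1, 2, -1), (-2, -1, 1), (1, -2, -1)}

# The isometry (x, y, z) -> (-y, x, -z) as an orthogonal matrix
A = np.array([[0, -1,  0],
              [1,  0,  0],
              [0,  0, -1]])

# det(A) = -1, so the isometry is orientation-reversing
print(round(np.linalg.det(A)))   # -1

# F0 is mapped onto itself, hence the figure is achiral
image = {tuple(int(round(c)) for c in A @ np.array(p)) for p in F0}
print(image == F0)               # True
```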
Achiral figures can have a center axis. Knot theory. A knot is called achiral if it can be continuously deformed into its mirror image, otherwise it is called a chiral knot. For example, the unknot and the figure-eight knot are achiral, whereas the trefoil knot is chiral. References.
[ { "math_id": 0, "text": "v\\mapsto Av+b" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "b" }, { "math_id": 3, "text": "F" }, { "math_id": 4, "text": "L" }, { "math_id": 5, "text": "(x,y)\\mapsto(x,-y)" }, { "math_id": 6, "text": "x" }, { "math_id": 7, "text": "P" }, { "math_id": 8, "text": "(x,y,z)\\mapsto(x,y,-z)" }, { "math_id": 9, "text": "y" }, { "math_id": 10, "text": "C" }, { "math_id": 11, "text": "(x,y,z)\\mapsto(-x,-y,-z)" }, { "math_id": 12, "text": "F_0=\\left\\{(1,0,0),(0,1,0),(-1,0,0),(0,-1,0),(2,1,1),(-1,2,-1),(-2,-1,1),(1,-2,-1)\\right\\}" }, { "math_id": 13, "text": "(x,y,z)\\mapsto(-y,x,-z)" }, { "math_id": 14, "text": "F_1=\\left\\{(1,0,0),(-1,0,0),(0,2,0),(0,-2,0),(1,1,1),(-1,-1,-1)\\right\\}" } ]
https://en.wikipedia.org/wiki?curid=1170160
1170169
Chirality (physics)
Property of particles related to spin A chiral phenomenon is one that is not identical to its mirror image (see the article on mathematical chirality). The spin of a particle may be used to define a handedness, or helicity, for that particle, which, in the case of a massless particle, is the same as chirality. A symmetry transformation between the two is called parity transformation. Invariance under parity transformation by a Dirac fermion is called chiral symmetry. Chirality and helicity. The helicity of a particle is positive ("right-handed") if the direction of its spin is the same as the direction of its motion. It is negative ("left-handed") if the directions of spin and motion are opposite. So a standard clock, with its spin vector defined by the rotation of its hands, has left-handed helicity if tossed with its face directed forwards. Mathematically, "helicity" is the sign of the projection of the spin vector onto the momentum vector: "left" is negative, "right" is positive. The chirality of a particle is more abstract: It is determined by whether the particle transforms in a right- or left-handed representation of the Poincaré group. For massless particles – photons, gluons, and (hypothetical) gravitons – chirality is the same as helicity; a given massless particle appears to spin in the same direction along its axis of motion regardless of the point of view of the observer. For massive particles – such as electrons, quarks, and neutrinos – chirality and helicity must be distinguished: In the case of these particles, it is possible for an observer to change to a reference frame moving faster than the spinning particle, in which case the particle will then appear to move backwards, and its helicity (which may be thought of as "apparent chirality") will be reversed. That is, helicity is a constant of motion, but it is not Lorentz invariant. Chirality is Lorentz invariant, but is not a constant of motion: a massive left-handed spinor, when propagating, will evolve into a right-handed spinor over time, and vice versa. A "massless" particle moves with the speed of light, so no real observer (who must always travel at less than the speed of light) can be in any reference frame where the particle appears to reverse its relative direction of spin, meaning that all real observers see the same helicity. Because of this, the direction of spin of massless particles is not affected by a change of inertial reference frame (a Lorentz boost) in the direction of motion of the particle, and the sign of the projection (helicity) is fixed for all reference frames: The helicity of massless particles is a "relativistic invariant" (a quantity whose value is the same in all inertial reference frames) which always matches the massless particle's chirality. The discovery of neutrino oscillation implies that neutrinos have mass, so the photon is the only confirmed massless particle; gluons are expected to also be massless, although this has not been conclusively tested. Hence, these are the only two particles now known for which helicity could be identical to chirality, and only the photon has been confirmed by measurement. All other observed particles have mass and thus may have different helicities in different reference frames. Chiral theories. Particle physicists have only observed or inferred left-chiral fermions and right-chiral antifermions engaging in the charged weak interaction.
In the case of the weak interaction, which can in principle engage with both left- and right-chiral fermions, only left-handed fermions interact. Interactions involving right-handed or opposite-handed fermions have not been shown to occur, implying that the universe has a preference for left-handed chirality. This preferential treatment of one chiral realization over another violates parity, as first noted by Chien-Shiung Wu in her famous experiment known as the Wu experiment. This is a striking observation, since parity is a symmetry that holds for all other fundamental interactions. Chirality for a Dirac fermion ψ is defined through the operator "γ"5, which has eigenvalues ±1; the eigenvalue's sign is equal to the particle's chirality: +1 for right-handed, −1 for left-handed. Any Dirac field can thus be projected into its left- or right-handed component by acting with the projection operators ½(1 − "γ"5) or ½(1 + "γ"5) on ψ. The coupling of the charged weak interaction to fermions is proportional to the first projection operator, which is responsible for this interaction's parity symmetry violation. A common source of confusion is due to conflating the "γ"5 chirality operator with the helicity operator. Since the helicity of massive particles is frame-dependent, it might seem that the same particle would interact with the weak force according to one frame of reference, but not another. The resolution to this paradox is that the chirality operator is equivalent to helicity for massless fields only, for which helicity is not frame-dependent. By contrast, for massive particles, chirality is not the same as helicity, or, alternatively, helicity is not Lorentz invariant, so there is no frame dependence of the weak interaction: a particle that couples to the weak force in one frame does so in every frame. A theory that is asymmetric with respect to chiralities is called a chiral theory, while a non-chiral (i.e., parity-symmetric) theory is sometimes called a vector theory. Many pieces of the Standard Model of physics are non-chiral, which is traceable to anomaly cancellation in chiral theories. Quantum chromodynamics is an example of a vector theory, since both chiralities of all quarks appear in the theory, and couple to gluons in the same way. The electroweak theory, developed in the mid-20th century, is an example of a chiral theory. Originally, it assumed that neutrinos were massless, and assumed the existence of only left-handed neutrinos and right-handed antineutrinos. After the observation of neutrino oscillations, which imply that neutrinos are massive (like all other fermions), the revised theories of the electroweak interaction now include both right- and left-handed neutrinos. However, it is still a chiral theory, as it does not respect parity symmetry. The exact nature of the neutrino is still unsettled and so the electroweak theories that have been proposed are somewhat different, but most accommodate the chirality of neutrinos in the same way as was already done for all other fermions. Chiral symmetry. Vector gauge theories with massless Dirac fermion fields ψ exhibit chiral symmetry, i.e., rotating the left-handed and the right-handed components independently makes no difference to the theory. We can write this as the action of rotation on the fields: formula_0  and  formula_1 or formula_2  and   formula_3 With N flavors, we have unitary rotations instead: U("N")L × U("N")R.
More generally, we write the right-handed and left-handed states as a projection operator acting on a spinor. The right-handed and left-handed projection operators are formula_4 and formula_5 Massive fermions do not exhibit chiral symmetry, as the mass term in the Lagrangian, "m"ψ̄"ψ", breaks chiral symmetry explicitly. Spontaneous chiral symmetry breaking may also occur in some theories, as it most notably does in quantum chromodynamics. The chiral symmetry transformation can be divided into a component that treats the left-handed and the right-handed parts equally, known as vector symmetry, and a component that actually treats them differently, known as axial symmetry. (cf. "Current algebra".) A scalar field model encoding chiral symmetry and its breaking is the chiral model. The most common application is expressed as equal treatment of clockwise and counter-clockwise rotations from a fixed frame of reference. The general principle is often referred to by the name chiral symmetry. The rule is absolutely valid in the classical mechanics of Newton and Einstein, but results from quantum mechanical experiments show a difference in the behavior of left-chiral versus right-chiral subatomic particles. Example: u and d quarks in QCD. Consider quantum chromodynamics (QCD) with two "massless" quarks u and d (massive fermions do not exhibit chiral symmetry). The Lagrangian reads formula_6 In terms of left-handed and right-handed spinors, it reads formula_7 (Here, formula_8 denotes the gauge covariant derivative contracted with the Dirac matrices.) Defining formula_9 it can be written as formula_10 The Lagrangian is unchanged under a rotation of "q"L by any 2×2 unitary matrix L, and of "q"R by any 2×2 unitary matrix R. This symmetry of the Lagrangian is called "flavor chiral symmetry", and denoted as U(2)L × U(2)R. It decomposes into formula_11 The singlet vector symmetry, U(1)"V", acts as formula_12 and leaves the Lagrangian invariant. This corresponds to baryon number conservation. The singlet axial group U(1)"A" transforms as the following global transformation formula_13 However, it does not correspond to a conserved quantity, because the associated axial current is not conserved. It is explicitly violated by a quantum anomaly. The remaining chiral symmetry SU(2)L × SU(2)R turns out to be spontaneously broken by a quark condensate formula_14 formed through the nonperturbative action of QCD gluons, into the diagonal vector subgroup SU(2)"V" known as isospin. The Goldstone bosons corresponding to the three broken generators are the three pions. As a consequence, the effective theory of QCD bound states like the baryons must now include mass terms for them, ostensibly disallowed by unbroken chiral symmetry. Thus, this chiral symmetry breaking induces the bulk of hadron masses, such as those for the nucleons — in effect, the bulk of the mass of all visible matter. In the real world, because of the nonvanishing and differing masses of the quarks, SU(2)L × SU(2)R is only an approximate symmetry to begin with, and therefore the pions are not massless, but have small masses: they are pseudo-Goldstone bosons. More flavors. For more "light" quark species, N flavors in general, the corresponding chiral symmetries are U("N")L × U("N")R, decomposing into formula_15 and exhibiting a very analogous chiral symmetry breaking pattern. Most usually, "N" = 3 is taken, with the u, d, and s quarks taken to be light (the eightfold way) and thus approximately massless, so that the symmetry is meaningful to lowest order, while the other three quarks are sufficiently heavy that any residual chiral symmetry is barely visible for practical purposes.
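As a quick consistency check on the projection operators formula_4 and formula_5 defined above, the following sketch uses the Dirac representation of the gamma matrices (one conventional basis choice; the projector algebra itself is basis-independent) and verifies idempotence, orthogonality, and completeness numerically.

```python
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),   # Pauli matrices
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

g0 = np.block([[I2, Z2], [Z2, -I2]])                  # gamma^0
g = [np.block([[Z2, s], [-s, Z2]]) for s in sigma]    # gamma^1..gamma^3

g5 = 1j * g0 @ g[0] @ g[1] @ g[2]        # gamma5 = i g0 g1 g2 g3
PL = (np.eye(4) - g5) / 2                # left-handed projector
PR = (np.eye(4) + g5) / 2                # right-handed projector

assert np.allclose(PL @ PL, PL) and np.allclose(PR @ PR, PR)  # idempotent
assert np.allclose(PL @ PR, np.zeros((4, 4)))                 # orthogonal
assert np.allclose(PL + PR, np.eye(4))                        # complete
```

In this basis "γ"5 comes out block off-diagonal, so PL and PR each project onto a two-dimensional chiral subspace of the four-component spinor.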
Most usually, "N" = 3 is taken, the u, d, and s quarks taken to be light (the eightfold way), so then approximately massless for the symmetry to be meaningful to a lowest order, while the other three quarks are sufficiently heavy to barely have a residual chiral symmetry be visible for practical purposes. An application in particle physics. In theoretical physics, the electroweak model breaks parity maximally. All its fermions are chiral Weyl fermions, which means that the charged weak gauge bosons W+ and W− only couple to left-handed quarks and leptons. Some theorists found this objectionable, and so conjectured a GUT extension of the weak force which has new, high energy W′ and Z′ bosons, which "do" couple with right handed quarks and leptons: formula_16 to formula_17 Here, SU(2)L (pronounced "SU(2) left") is SU(2)W from above, while "B−L" is the baryon number minus the lepton number. The electric charge formula in this model is given by formula_18 where formula_19 and formula_20 are the left and right weak isospin values of the fields in the theory. There is also the chromodynamic SU(3)C. The idea was to restore parity by introducing a left-right symmetry. This is a group extension of formula_21 (the left-right symmetry) by formula_22 to the semidirect product formula_23 This has two connected components where formula_21 acts as an automorphism, which is the composition of an involutive outer automorphism of SU(3)C with the interchange of the left and right copies of SU(2) with the reversal of U(1)"B−L". It was shown by Mohapatra &amp; Senjanovic (1975) that left-right symmetry can be spontaneously broken to give a chiral low energy theory, which is the Standard Model of Glashow, Weinberg, and Salam, and also connects the small observed neutrino masses to the breaking of left-right symmetry via the seesaw mechanism. In this setting, the chiral quarks formula_24 and formula_25 are unified into an irreducible representation ("irrep") formula_26 The leptons are also unified into an irreducible representation formula_27 The Higgs bosons needed to implement the breaking of left-right symmetry down to the Standard Model are formula_28 This then provides three sterile neutrinos which are perfectly consistent with current[ [update]] neutrino oscillation data. Within the seesaw mechanism, the sterile neutrinos become superheavy without affecting physics at low energies. Because the left–right symmetry is spontaneously broken, left–right models predict domain walls. This left-right symmetry idea first appeared in the Pati–Salam model (1974) and Mohapatra–Pati models (1975). See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\psi_{\\rm L}\\rightarrow e^{i\\theta_{\\rm L}}\\psi_{\\rm L}" }, { "math_id": 1, "text": "\\psi_{\\rm R}\\rightarrow \\psi_{\\rm R}" }, { "math_id": 2, "text": "\\psi_{\\rm L}\\rightarrow \\psi_{\\rm L}" }, { "math_id": 3, "text": "\\psi_{\\rm R}\\rightarrow e^{i\\theta_{\\rm R}}\\psi_{\\rm R}." }, { "math_id": 4, "text": " P_{\\rm R} = \\frac{1 + \\gamma^5}{2}" }, { "math_id": 5, "text": " P_{\\rm L} = \\frac{1 - \\gamma^5}{2}" }, { "math_id": 6, "text": "\\mathcal{L} = \\overline{u}\\,i\\displaystyle{\\not}D \\,u + \\overline{d}\\,i\\displaystyle{\\not}D\\, d + \\mathcal{L}_\\mathrm{gluons}~." }, { "math_id": 7, "text": "\\mathcal{L} = \\overline{u}_{\\rm L}\\,i\\displaystyle{\\not}D \\,u_{\\rm L} + \\overline{u}_{\\rm R}\\,i\\displaystyle{\\not}D \\,u_{\\rm R} + \\overline{d}_{\\rm L}\\,i\\displaystyle{\\not}D \\,d_{\\rm L} + \\overline{d}_{\\rm R}\\,i\\displaystyle{\\not}D \\,d_{\\rm R} + \\mathcal{L}_\\mathrm{gluons} ~." }, { "math_id": 8, "text": "\\displaystyle{\\not}D" }, { "math_id": 9, "text": "q = \\begin{bmatrix} u \\\\ d \\end{bmatrix} ," }, { "math_id": 10, "text": "\\mathcal{L} = \\overline{q}_{\\rm L}\\,i\\displaystyle{\\not}D \\,q_{\\rm L} + \\overline{q}_{\\rm R}\\,i\\displaystyle{\\not}D\\, q_{\\rm R} + \\mathcal{L}_\\mathrm{gluons} ~." }, { "math_id": 11, "text": "\\mathrm{SU}(2)_\\text{L} \\times \\mathrm{SU}(2)_\\text{R} \\times \\mathrm{U}(1)_V \\times \\mathrm{U}(1)_A ~." }, { "math_id": 12, "text": "\nq_\\text{L} \\rightarrow e^{i\\theta(x)} q_\\text{L} \\qquad\nq_\\text{R} \\rightarrow e^{i\\theta(x)} q_\\text{R} ~,\n" }, { "math_id": 13, "text": "\nq_\\text{L} \\rightarrow e^{i\\theta} q_\\text{L} \\qquad\nq_\\text{R} \\rightarrow e^{-i\\theta} q_\\text{R} ~.\n" }, { "math_id": 14, "text": "\\textstyle \\langle \\bar{q}^a_\\text{R} q^b_\\text{L} \\rangle = v \\delta^{ab}" }, { "math_id": 15, "text": "\\mathrm{SU}(N)_\\text{L} \\times \\mathrm{SU}(N)_\\text{R} \\times \\mathrm{U}(1)_V \\times \\mathrm{U}(1)_A ~," }, { "math_id": 16, "text": "\\frac{ \\mathrm{SU}(2)_\\text{W}\\times \\mathrm{U}(1)_Y }{ \\mathbb{Z}_2 }" }, { "math_id": 17, "text": "\\frac{ \\mathrm{SU}(2)_\\text{L}\\times \\mathrm{SU}(2)_\\text{R}\\times \\mathrm{U}(1)_{B-L} }{ \\mathbb{Z}_2 }." }, { "math_id": 18, "text": "Q = T_{\\rm 3L} + T_{\\rm 3R} + \\frac{B-L}{2}\\,;" }, { "math_id": 19, "text": "\\ T_{\\rm 3L}\\ " }, { "math_id": 20, "text": "\\ T_{\\rm 3R}\\ " }, { "math_id": 21, "text": " \\mathbb{Z}_2 " }, { "math_id": 22, "text": "\\frac{ \\mathrm{SU}(3)_\\text{C}\\times \\mathrm{SU}(2)_\\text{L} \\times \\mathrm{SU}(2)_\\text{R} \\times \\mathrm{U}(1)_{B-L} }{ \\mathbb{Z}_6}" }, { "math_id": 23, "text": "\\frac{ \\mathrm{SU}(3)_\\text{C} \\times \\mathrm{SU}(2)_\\text{L} \\times \\mathrm{SU}(2)_\\text{R} \\times \\mathrm{U}(1)_{B-L} }{ \\mathbb{Z}_6 } \\rtimes \\mathbb{Z}_2\\ ." }, { "math_id": 24, "text": "(3,2,1)_{+{1 \\over 3}}" }, { "math_id": 25, "text": "\\left(\\bar{3},1,2\\right)_{-{1 \\over 3}}" }, { "math_id": 26, "text": "(3,2,1)_{+{1 \\over 3}} \\oplus \\left(\\bar{3},1,2\\right)_{-{1 \\over 3}}\\ ." }, { "math_id": 27, "text": "(1,2,1)_{-1} \\oplus (1,1,2)_{+1}\\ ." }, { "math_id": 28, "text": "(1,3,1)_2 \\oplus (1,1,3)_2\\ ." } ]
https://en.wikipedia.org/wiki?curid=1170169
1170258
Disphenocingulum
90th Johnson solid (24 faces) In geometry, the disphenocingulum is a Johnson solid with 20 equilateral triangles and 4 squares as its faces. Properties. The disphenocingulum was named by Johnson (1966). The prefix "dispheno-" refers to two wedgelike complexes, each formed by two adjacent lunes—a figure of two equilateral triangles at the opposite sides of a square. The suffix "-cingulum", literally 'belt', refers to a band of 12 triangles joining the two wedges. The resulting polyhedron has 20 equilateral triangles and 4 squares, making 24 faces in total. All of the faces are regular, categorizing the disphenocingulum as a Johnson solid—a convex polyhedron in which all of its faces are regular polygons—enumerated as the 90th Johnson solid, formula_0. It is an elementary polyhedron, meaning that it cannot be separated by a plane into two smaller regular-faced polyhedra. The surface area of a disphenocingulum with edge length formula_1 can be determined by adding the areas of all its faces, 20 equilateral triangles and 4 squares, giving formula_2, and its volume is formula_3. Cartesian coordinates. Let formula_4 be the second smallest positive root of the polynomial formula_5 and let formula_6 and formula_7. Then, the Cartesian coordinates of a disphenocingulum with edge length 2 are given by the union of the orbits of the points formula_8 under the action of the group generated by reflections about the xz-plane and the yz-plane.
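The constant formula_4 and the quoted numerical values can be checked directly. The Python sketch below, given only as an illustration of the definitions above, finds the second smallest positive real root of the polynomial and evaluates the surface-area coefficient:

```python
import numpy as np

# Coefficients of the degree-12 polynomial above, highest power first
coeffs = [256, -512, -1664, 3712, 1552, -6592,
          1248, 4352, -2024, -944, 672, -24, -23]
roots = np.roots(coeffs)
positive_real = sorted(r.real for r in roots
                       if abs(r.imag) < 1e-9 and r.real > 0)
a = positive_real[1]               # second smallest positive root
print(a)                           # ~0.76713

h = np.sqrt(2 + 8*a - 8*a**2)      # auxiliary constants used in the coordinates
c = np.sqrt(1 - a**2)

# Surface-area coefficient: 20 unit triangles + 4 unit squares
print(4 + 5*np.sqrt(3))            # ~12.6603
```

References.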
[ { "math_id": 0, "text": " J_{90} " }, { "math_id": 1, "text": " a " }, { "math_id": 2, "text": " (4 + 5\\sqrt{3})a^2 \\approx 12.6603a^2 " }, { "math_id": 3, "text": " 3.7776a^3 " }, { "math_id": 4, "text": " a \\approx 0.76713 " }, { "math_id": 5, "text": " \\begin{align} &256x^{12} - 512x^{11} - 1664x^{10} + 3712x^9 + 1552x^8 - 6592x^7 \\\\ &\\quad{} + 1248x^6 + 4352x^5 - 2024x^4 - 944x^3 + 672x^2 - 24x - 23 \\end{align}" }, { "math_id": 6, "text": "h = \\sqrt{2+8a-8a^2}" }, { "math_id": 7, "text": "c = \\sqrt{1-a^2}" }, { "math_id": 8, "text": "\\left(1,2a,\\frac{h}{2}\\right),\\ \\left(1,0,2c+\\frac{h}{2}\\right),\\ \\left(1+\\frac{\\sqrt{3-4a^2}}{c},0,2c-\\frac{1}{c}+\\frac{h}{2}\\right)" } ]
https://en.wikipedia.org/wiki?curid=1170258
1170286
Bilunabirotunda
91st Johnson solid (14 faces) In geometry, the bilunabirotunda is a Johnson solid with faces of 8 equilateral triangles, 2 squares, and 4 regular pentagons. Properties. The bilunabirotunda is named from the "lune", a figure featuring two triangles adjacent to opposite sides of a square. Accordingly, the faces of a bilunabirotunda comprise 8 equilateral triangles, 2 squares, and 4 regular pentagons. It is one of the Johnson solids—a convex polyhedron in which all of the faces are regular polygons—enumerated as the 91st Johnson solid, formula_0. It is an elementary polyhedron, meaning that it cannot be separated by a plane into two smaller regular-faced polyhedra. The surface area of a bilunabirotunda with edge length formula_1 is: formula_2 and the volume of a bilunabirotunda is: formula_3 Cartesian coordinates. One way to construct a bilunabirotunda with edge length formula_4 is by the union of the orbits of the coordinates formula_5 under the group's action (of order 8) generated by reflections about the coordinate planes. Applications. The bilunabirotunda has been discussed as a shape that could be used in architecture. Related polyhedra and honeycombs. Six bilunabirotundae can be augmented around a cube with pyritohedral symmetry. B. M. Stewart labeled this six-bilunabirotunda model as 6J91(P4). Such clusters combine with regular dodecahedra to form a space-filling honeycomb.
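The closed-form expressions above are easy to evaluate numerically; the short Python sketch below, included only as a check of the stated approximations, computes both coefficients for unit edge length:

```python
import math

a = 1.0  # edge length

# Surface area: (2 + 2*sqrt(3) + sqrt(5*(5 + 2*sqrt(5)))) * a^2
area = (2 + 2*math.sqrt(3) + math.sqrt(5*(5 + 2*math.sqrt(5)))) * a**2
print(area)    # ~12.346

# Volume: (17 + 9*sqrt(5))/12 * a^3
volume = (17 + 9*math.sqrt(5)) / 12 * a**3
print(volume)  # ~3.0937
```

References.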
[ { "math_id": 0, "text": " J_{91} " }, { "math_id": 1, "text": " a " }, { "math_id": 2, "text": " \\left(2 + 2\\sqrt{3} + \\sqrt{5(5 + 2\\sqrt{5})}\\right)a^2 \\approx 12.346a^2, " }, { "math_id": 3, "text": " \\frac{17 + 9\\sqrt{5}}{12}a^3 \\approx 3.0937a^3. " }, { "math_id": 4, "text": " \\sqrt{5} - 1 " }, { "math_id": 5, "text": " (0, 0, 1), \\left( \\frac{\\sqrt{5} - 1}{2}, 1, \\frac{\\sqrt{5} - 1}{2} \\right), \\left( \\frac{\\sqrt{5} - 1}{2}, \\frac{\\sqrt{5} + 1}{2} \\right). " } ]
https://en.wikipedia.org/wiki?curid=1170286
1170291
Paul Sabatier (chemist)
French chemist (1854–1941) Prof Paul Sabatier FRS(For) HFRSE (5 November 1854 – 14 August 1941) was a French chemist, born in Carcassonne. In 1912, Sabatier was awarded the Nobel Prize in Chemistry along with Victor Grignard. Sabatier was honoured for his work improving the hydrogenation of organic species in the presence of metals. Education. Sabatier studied at the École Normale Supérieure, starting in 1874. Three years later, he graduated at the top of his class. In 1880, he was awarded a Doctor of Science degree from the Collège de France. In 1883 Sabatier succeeded Édouard Filhol at the Faculty of Science, and began a long collaboration with Jean-Baptiste Senderens, so close that it was impossible to distinguish the work of either man. They jointly published 34 notes in the "Accounts of the Academy of Science", 11 memoirs in the "Bulletin of the French Chemical Society" and 2 joint memoirs to the "Annals of Chemistry and Physics". The methanation reactions of COx were first discovered by Sabatier and Senderens in 1902. Sabatier and Senderens shared the Academy of Science's Jecker Prize in 1905 for their discovery of the Sabatier–Senderens Process. After 1905–06 Senderens and Sabatier published few joint works, perhaps due to the classic problem of recognition of the merit of contributions to joint work. Sabatier taught science classes most of his life before he became Dean of the Faculty of Science at the University of Toulouse in 1905. Research. Sabatier's earliest research concerned the thermochemistry of sulfur and metallic sulfates, the subject for the thesis leading to his doctorate. In Toulouse, he extended his physical and chemical investigations to sulfides, chlorides, chromates and copper compounds. He also studied the oxides of nitrogen and nitrosodisulfonic acid and its salts and carried out fundamental research on partition coefficients and absorption spectra. Sabatier greatly facilitated the industrial use of hydrogenation. In 1897, building on the recent biochemical work of the American chemist James Boyce, he discovered that the introduction of a trace amount of nickel (as a catalyst) facilitated the addition of hydrogen to molecules of most carbon compounds. Sabatier reaction. Sabatier is best known for the Sabatier process and his works such as "La Catalyse en Chimie Organique" (Catalysis in organic chemistry), which was published in 1913. He won the Nobel Prize in Chemistry jointly with fellow Frenchman Victor Grignard in 1912. The reduction of carbon dioxide using hydrogen at high temperature and pressure is another use of a nickel catalyst to produce methane: formula_0 ∆"H" = −165.0 kJ/mol (some initial energy/heat is required to start the reaction) Sabatier principle. He is also known for the Sabatier principle of catalysis. Personal life. Sabatier was married and had four daughters, one of whom wed the Italian chemist Emilio Pomilio. The Paul Sabatier University in Toulouse, France, is named in honour of Paul Sabatier, as is one of Carcassonne's high schools. Paul Sabatier was a co-founder of the Annales de la Faculté des Sciences de Toulouse, together with the mathematician Thomas Joannes Stieltjes. Sabatier died on 14 August 1941 in Toulouse at the age of 86. References.
[ { "math_id": 0, "text": "\\begin{matrix}{}\\\\\n\\ce{{CO2} + 4H2 ->[Catalyst + 400\\ ^\\circ \\ce{C}][\\ce{pressure}] {CH4} + 2H2O}\\\\{}\n\\end{matrix}" } ]
https://en.wikipedia.org/wiki?curid=1170291
1170312
Triangular hebesphenorotunda
92nd Johnson solid (20 faces) In geometry, the triangular hebesphenorotunda is a Johnson solid with 13 equilateral triangles, 3 squares, 3 regular pentagons, and 1 regular hexagon, making a total of 20 faces. Properties. The triangular hebesphenorotunda was named by Johnson (1966), with the prefix "hebespheno-" referring to a blunt wedge-like complex formed by three adjacent "lunes"—a figure where two equilateral triangles are attached at the opposite sides of a square. The suffix (triangular) "-rotunda" refers to the complex of three equilateral triangles and three regular pentagons surrounding another equilateral triangle, which bears a structural resemblance to the pentagonal rotunda. Therefore, the triangular hebesphenorotunda has 20 faces: 13 equilateral triangles, 3 squares, 3 regular pentagons, and 1 regular hexagon. The faces are all regular polygons, categorizing the triangular hebesphenorotunda as a Johnson solid, enumerated as the last one, formula_0. It is an elementary polyhedron, meaning that it cannot be separated by a plane into two smaller regular-faced polyhedra. The surface area of a triangular hebesphenorotunda of edge length formula_1 is: formula_2 and its volume is: formula_3 Cartesian coordinates. The triangular hebesphenorotunda with edge length formula_4 can be constructed by the union of the orbits of the Cartesian coordinates: formula_5 under the action of the group generated by rotation by 120° around the z-axis and the reflection about the yz-plane. Here, formula_6 denotes the golden ratio.
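As with the other Johnson solids above, the stated approximations follow directly from the closed forms; the Python sketch below, given purely as a numerical check, evaluates both coefficients:

```python
import math

a = 1.0  # edge length

# Surface-area coefficient from the closed form above
area = (3 + 0.25 * math.sqrt(1308 + 90*math.sqrt(5)
                             + 114*math.sqrt(75 + 30*math.sqrt(5)))) * a**2
print(area)    # ~16.389

# Volume coefficient: (15 + 7*sqrt(5))/6
volume = (15 + 7*math.sqrt(5)) / 6 * a**3
print(volume)  # ~5.10875
```

References.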
[ { "math_id": 0, "text": " J_{92} " }, { "math_id": 1, "text": " a " }, { "math_id": 2, "text": " A = \\left(3+\\frac{1}{4}\\sqrt{1308+90\\sqrt{5}+114\\sqrt{75+30\\sqrt{5}}}\\right)a^2 \\approx 16.389a^2, " }, { "math_id": 3, "text": " V = \\frac{1}{6}\\left(15+7\\sqrt{5}\\right)a^3\\approx5.10875a^3. " }, { "math_id": 4, "text": " \\sqrt{5} - 1 " }, { "math_id": 5, "text": " \\begin{align}\n \\left( 0,-\\frac{2}{\\tau\\sqrt{3}},\\frac{2\\tau}{\\sqrt{3}} \\right), \\qquad &\\left( \\tau,\\frac{1}{\\sqrt{3}\\tau^2},\\frac{2}{\\sqrt{3}} \\right) \\\\\n \\left( \\tau,-\\frac{\\tau}{\\sqrt{3}},\\frac{2}{\\sqrt{3}\\tau} \\right), \\qquad &\\left(\\frac{2}{\\tau},0,0\\right),\n\\end{align} " }, { "math_id": 6, "text": " \\tau " } ]
https://en.wikipedia.org/wiki?curid=1170312
1170314
Wingtip vortices
Turbulence caused by difference in air pressure on either side of wing Wingtip vortices are circular patterns of rotating air left behind a wing as it generates lift. The name is a misnomer because the cores of the vortices are slightly inboard of the wing tips. Wingtip vortices are sometimes named "trailing" or "lift-induced vortices" because they also occur at points other than at the wing tips. Indeed, vorticity is trailed at any point on the wing where the lift varies span-wise (a fact described and quantified by the lifting-line theory); it eventually rolls up into large vortices near the wingtip, at the edge of flap devices, or at other abrupt changes in wing planform. Wingtip vortices are associated with induced drag, the imparting of downwash, and are a fundamental consequence of three-dimensional lift generation. Careful selection of wing geometry (in particular, wingspan), as well as of cruise conditions, is a design and operational method to minimize induced drag. Wingtip vortices form the primary component of wake turbulence. Depending on ambient atmospheric humidity as well as the geometry and wing loading of aircraft, water may condense or freeze in the core of the vortices, making the vortices visible. Generation of trailing vortices. When a wing generates aerodynamic lift, it results in a region of downwash between the two vortices. Three-dimensional lift and the occurrence of wingtip vortices can be approached with the concept of horseshoe vortex and described accurately with the Lanchester–Prandtl theory. In this view, the trailing vortex is a continuation of the "wing-bound vortex" inherent to the lift generation. Effects and mitigation. Wingtip vortices are associated with induced drag, an unavoidable consequence of three-dimensional lift generation. The rotary motion of the air within the shed wingtip vortices (sometimes described as a "leakage") reduces the effective angle of attack of the air on the wing. The lifting-line theory describes the shedding of trailing vortices as span-wise changes in lift distribution. For a given wing span and surface, minimal induced drag is obtained with an elliptical lift distribution. For a given lift distribution and wing planform area, induced drag is reduced with increasing aspect ratio. As a consequence, aircraft for which a high lift-to-drag ratio is desirable, such as gliders or long-range airliners, typically have high aspect ratio wings. Such wings however have disadvantages with respect to structural constraints and maneuverability, as evidenced by combat and aerobatic planes which usually feature short, stubby wings despite the efficiency losses. Another method of reducing induced drag is the use of winglets, as seen on most modern airliners. Winglets increase the effective aspect ratio of the wing, changing the pattern and magnitude of the vorticity in the vortex pattern. A reduction is achieved in the kinetic energy in the circular air flow, which reduces the amount of fuel expended to perform work upon the spinning air. After NASA became concerned about the increasing density of air traffic potentially causing vortex-related accidents at airports, wind tunnel testing at the NASA Ames Research Center with a 747 model found that the configuration of the flaps could be changed on existing aircraft to break the vortex into three smaller and less disturbing vortices. 
This primarily involved changing the settings of the outboard flaps, and could theoretically be retrofitted to existing aircraft. Visibility of vortices. The cores of the vortices can sometimes be visible when the water present in them condenses from gas (vapor) to liquid. This water can sometimes even freeze, forming ice particles. Condensation of water vapor in wing tip vortices is most common on aircraft flying at high angles of attack, such as fighter aircraft in high "g" maneuvers, or airliners taking off and landing on humid days. Aerodynamic condensation and freezing. The cores of vortices spin at very high speed and are regions of very low pressure. To first approximation, these low-pressure regions form with little exchange of heat with the neighboring regions (i.e., adiabatically), so the local temperature in the low-pressure regions drops, too. If it drops below the local dew point, the water vapor present in the cores of the wingtip vortices condenses, making them visible. The temperature may even drop below the local freezing point, in which case ice crystals will form inside the cores. The phase of water (i.e., whether it assumes the form of a solid, liquid, or gas) is determined by its temperature and pressure. For example, in the case of the liquid–gas transition, at each pressure there is a special "transition temperature" formula_0 such that if the sample temperature is even a little above formula_0, the sample will be a gas, but, if the sample temperature is even a little below formula_0, the sample will be a liquid; see phase transition. For example, at the standard atmospheric pressure, formula_0 is 100 °C = 212 °F. The transition temperature formula_0 decreases with decreasing pressure (which explains why water boils at lower temperatures at higher altitudes and at higher temperatures in a pressure cooker). In the case of water vapor in air, the formula_0 corresponding to the partial pressure of water vapor is called the dew point. (The solid–liquid transition also happens around a specific transition temperature called the melting point. For most substances, the melting point also decreases with decreasing pressure, although water ice in particular, in its Ih form, which is the most familiar one, is a prominent exception to this rule.) Vortex cores are regions of low pressure. As a vortex core begins to form, the water in the air (in the region that is about to become the core) is in vapor phase, which means that the local temperature is above the local dew point. After the vortex core forms, the pressure inside it has decreased from the ambient value, and so the local dew point (formula_0) has dropped from the ambient value. Thus, "in and of itself", a drop in pressure would tend to keep water in vapor form: The initial dew point was already below the ambient air temperature, and the formation of the vortex has made the local dew point even lower. However, as the vortex core forms, its pressure (and so its dew point) is not the only property that is dropping: The vortex-core temperature is dropping also, and in fact it can drop by much more than the dew point does. To first approximation, the formation of vortex cores is thermodynamically an adiabatic process, i.e., one with no exchange of heat. 
In such a process, the drop in pressure is accompanied by a drop in temperature, according to the equation formula_1 Here formula_2 and formula_3 are the absolute temperature and pressure at the beginning of the process (here equal to the ambient air temperature and pressure), formula_4 and formula_5 are the absolute temperature and pressure in the vortex core (which is the end result of the process), and the constant formula_6 is about 7/5 = 1.4 for air. Thus, even though the local dew point inside the vortex cores is even lower than in the ambient air, the water vapor may nevertheless condense — if the formation of the vortex brings the local temperature below the new local dew point. For a typical transport aircraft landing at an airport, these conditions are as follows: formula_2 and formula_3 have values corresponding to the so-called standard conditions, i.e., formula_3 = 1 atm = 1013.25 mb = 101 325 Pa and formula_2 = 293.15 K (which is 20 °C = 68 °F). The relative humidity is a comfortable 35% (dew point of 4.1 °C = 39.4 °F). This corresponds to a partial pressure of water vapor of 820 Pa = 8.2 mb. In a vortex core, the pressure (formula_5) drops to about 80% of the ambient pressure, i.e., to about 80 000 Pa. The temperature in the vortex core is given by the equation above as formula_8 or 0.86 °C = 33.5 °F. Next, the partial pressure of water in the vortex core drops in proportion to the drop in the total pressure (i.e., by the same percentage), to about 650 Pa = 6.5 mb. According to a dew point calculator, that partial pressure results in a local dew point of about 0.86 °C; in other words, the new local dew point is about equal to the new local temperature. Therefore, this is a marginal case; if the relative humidity of the ambient air were even a bit higher (with the total pressure and temperature remaining as above), then the local dew point inside the vortices would rise, while the local temperature would remain the same. Thus, the local temperature would now be "lower" than the local dew point, and so the water vapor inside the vortices would indeed condense. Under the right conditions, the local temperature in vortex cores may drop below the local freezing point, in which case ice particles will form inside the vortex cores. The water-vapor condensation mechanism in wingtip vortices is thus driven by local changes in air pressure and temperature. This is to be contrasted to what happens in another well-known case of water condensation related to airplanes: the contrails from airplane engine exhausts. In the case of contrails, the local air pressure and temperature do not change significantly; what matters instead is that the exhaust contains both water vapor (which increases the local water-vapor concentration and so its partial pressure, resulting in elevated dew point and freezing point) as well as aerosols (which provide nucleation centers for the condensation and freezing). Formation flight. One theory on migrating bird flight states that many larger bird species fly in a V formation so that all but the leader bird can take advantage of the upwash part of the wingtip vortex of the bird ahead. Hazards. Wingtip vortices can pose a hazard to aircraft, especially during the landing and takeoff phases of flight. The intensity or strength of the vortex is a function of aircraft size, speed, and configuration (flap setting, etc.). 
The strongest vortices are produced by heavy aircraft, flying slowly, with wing flaps and landing gear retracted ("heavy, slow and clean"). Large jet aircraft can generate vortices that can persist for many minutes, drifting with the wind. The hazardous aspects of wingtip vortices are most often discussed in the context of wake turbulence. If a light aircraft immediately follows a heavy aircraft, wake turbulence from the heavy aircraft can roll the light aircraft faster than can be resisted by use of ailerons. At low altitudes, in particular during takeoff and landing, this can lead to an upset from which recovery is not possible. ("Light" and "heavy" are relative terms, and even smaller jets have been rolled by this effect.) Air traffic controllers attempt to ensure an adequate separation between departing and arriving aircraft by issuing wake turbulence warnings to pilots. In general, to avoid vortices an aircraft is safer if its takeoff is before the rotation point of the airplane that took off before it. However, care must be taken to stay upwind (or otherwise away) from any vortices that were generated by the previous aircraft. On landing behind an airplane, the aircraft should stay above the earlier one's flight path and touch down further along the runway. Glider pilots routinely practice flying in wingtip vortices when they do a maneuver called "boxing the wake". This involves descending from the higher to lower position behind a tow plane, then making a rectangular figure by holding the glider at high and low points away from the towing plane before coming back up through the vortices. (For safety this is not done below 1500 feet above the ground, and usually with an instructor present.) Given the relatively slow speeds and lightness of both aircraft, the procedure is safe but does instill a sense of how strong the turbulence is and where it is located.
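Returning to the condensation analysis above, the adiabatic estimate is simple enough to verify numerically. The Python sketch below merely re-runs the worked example with the same assumed values (standard sea-level conditions, 35% relative humidity, core pressure of about 80 000 Pa):

```python
# Adiabatic estimate of vortex-core temperature (worked example above)
gamma = 1.4          # heat-capacity ratio of air
T_i = 293.15         # ambient temperature, K (20 degrees C)
p_i = 101325.0       # ambient pressure, Pa
p_f = 80000.0        # assumed core pressure, ~80% of ambient

T_f = T_i * (p_f / p_i) ** ((gamma - 1) / gamma)
print(T_f, T_f - 273.15)  # ~274 K, i.e. ~0.86 degrees C

# The water-vapor partial pressure drops by the same factor
e_i = 820.0               # ambient vapor pressure at 35% relative humidity, Pa
print(e_i * p_f / p_i)    # ~650 Pa, whose dew point is ~0.86 degrees C
```

References.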
[ { "math_id": 0, "text": "T_{c}" }, { "math_id": 1, "text": "\\frac{T_{\\text{f}}}{T_{\\text{i}}}=\\left(\\frac{p_{\\text{f}}}{p_{\\text{i}}}\\right)^{\\frac{\\gamma -1}{\\gamma}}." }, { "math_id": 2, "text": "T_{\\text{i}}" }, { "math_id": 3, "text": "p_{\\text{i}}" }, { "math_id": 4, "text": "T_{\\text{f}}" }, { "math_id": 5, "text": "p_{\\text{f}}" }, { "math_id": 6, "text": "\\gamma" }, { "math_id": 7, "text": "\\," }, { "math_id": 8, "text": "T_{\\text{f}}=\\left(\\frac{\\scriptstyle 80\\,000}{\\scriptstyle 101\\,325}\\right)^{\\scriptscriptstyle 0.4/1.4}\\,T_{\\text{i}}= 0.935\\,\\times\\,293.15=274\\;\\text{K}," } ]
https://en.wikipedia.org/wiki?curid=1170314
11708087
Minimum deviation
Condition when the angle of deviation is minimal in a prism In a prism, the angle of deviation (δ) decreases with increase in the angle of incidence (i) up to a particular angle. This angle of incidence, where the angle of deviation in a prism is minimum, is called the minimum deviation position of the prism, and that deviation angle is known as the minimum angle of deviation (denoted by "δ"min, Dλ, or Dm). The angle of minimum deviation is related to the refractive index as: formula_0 This is useful for calculating the refractive index of a material. Rainbows and halos occur at minimum deviation. Also, a thin prism is always set at minimum deviation. Formula. At minimum deviation, the refracted ray in the prism is parallel to its base. In other words, the light ray is symmetrical about the axis of symmetry of the prism. Also, the angles of refraction are equal, i.e. "r"1 = "r"2, and the angle of incidence and angle of emergence equal each other ("i" = "e"). The formula for minimum deviation can be derived by exploiting the geometry in the prism. The approach involves rewriting the variables in Snell's law in terms of the deviation and prism angles, making use of the above properties. From the angle sum of formula_1, formula_2 formula_3 formula_4 Using the exterior angle theorem in formula_5, formula_6 formula_7 formula_8 formula_9 formula_10 This can also be derived by putting "i" = "e" in the prism formula: "i" + "e" = "A" + "δ" From Snell's law, formula_11 formula_12 formula_13 This is a convenient way to measure the refractive index of a material (liquid or gas): a light ray is directed at minimum deviation through a thin-walled prism filled with the material, or through a glass prism dipped in it. Also, the variation of the angle of deviation with an arbitrary angle of incidence can be encapsulated into a single equation by expressing δ in terms of i in the prism formula using Snell's law: formula_14 Finding the minima of this equation gives the same relation for minimum deviation as above. Setting formula_15 gives formula_16, and solving this equation yields the angle of incidence, for a given prism angle and relative refractive index, at which the minimum angle of deviation is obtained. For thin prism. In a thin or small-angle prism, as the angles become very small, the sine of the angle nearly equals the angle itself, and this yields many useful results. Because Dm and A are very small, formula_17 formula_18 Applying Snell's law and the prism formula to a general thin prism yields the same result for the deviation angle. Because i, e and r are small, formula_19 From the prism formula, formula_20 Thus, it can be said that a thin prism is always in minimum deviation. Experimental determination. Minimum deviation can be found manually or with a spectrometer. Either the prism is kept fixed and the incidence angle is adjusted, or the prism is rotated keeping the light source fixed. Minimum angle of dispersion. The minimum angle of dispersion for white light is the difference between the minimum deviation angles of the red and violet rays of a light beam passing through a prism. For a thin prism, the deviation of violet light, formula_21 is formula_22 and that of red light, formula_23 is formula_24. 
The difference in the deviation between red and violet light, formula_25, is called the angular dispersion produced by the prism. Applications. One of the factors that causes a rainbow is the bunching of light rays at the minimum deviation angle, which is close to the rainbow angle (42°). Minimum deviation is also responsible for phenomena like halos and sundogs, produced by the deviation of sunlight in tiny prisms of hexagonal ice crystals in the air, which bend light with a minimum deviation of 22°.
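As an illustration of the relation between Dm and the refractive index, the Python sketch below computes the minimum deviation for an assumed glass prism (n = 1.5, A = 60°; both values are illustrative, not taken from the article) and inverts the formula to recover n:

```python
import math

n = 1.5               # assumed refractive index
A = math.radians(60)  # assumed apex angle of the prism

# Minimum deviation: D_m = 2*asin(n*sin(A/2)) - A
D_m = 2 * math.asin(n * math.sin(A / 2)) - A
print(math.degrees(D_m))  # ~37.18 degrees

# Inverting: n = sin((A + D_m)/2) / sin(A/2)
n_back = math.sin((A + D_m) / 2) / math.sin(A / 2)
print(n_back)             # 1.5, recovering the assumed index

# Thin-prism approximation D_m ~ A*(n - 1); crude for a 60-degree prism
print(math.degrees(A * (n - 1)))  # 30 degrees
```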
[ { "math_id": 0, "text": " \nn_{21} = \\dfrac{\\sin \\left(\\dfrac{A + D_{m}}{2}\\right)}{\\sin \\left(\\dfrac{A}{2}\\right)}\n" }, { "math_id": 1, "text": "\\triangle OPQ" }, { "math_id": 2, "text": "A + \\angle OPQ + \\angle OQP = 180^\\circ" }, { "math_id": 3, "text": "\\implies A = 180^\\circ - (90 - r) - (90 - r)" }, { "math_id": 4, "text": "\\implies r = \\frac{A}{2}" }, { "math_id": 5, "text": " \\triangle PQR" }, { "math_id": 6, "text": " D_{m} = \\angle RPQ + \\angle RQP " }, { "math_id": 7, "text": " \\implies D_{m} = i - r + i - r " }, { "math_id": 8, "text": " \\implies 2r + D_{m}= 2i " }, { "math_id": 9, "text": " \\implies A + D_{m} = 2i " }, { "math_id": 10, "text": " \\implies i = \\frac{A + D_{m}} {2} " }, { "math_id": 11, "text": "n_{21} = \\dfrac{\\sin i}{\\sin r}" }, { "math_id": 12, "text": " \\therefore n_{21} = \\dfrac{\\sin \\left(\\dfrac{A + D_{m}}{2}\\right)}{\\sin \\left(\\dfrac{A}{2}\\right)}\n" }, { "math_id": 13, "text": "\\therefore D_m = 2 \\sin^{-1} \\left(n \\sin \\left(\\frac{A}{2}\\right)\\right) - A " }, { "math_id": 14, "text": "\\delta = i - A + \\sin^{-1} \\left(n \\cdot \\sin\\left(A - \\sin^{-1}\\left(\\frac{\\sin i}{n}\\right)\\right)\\right)=f(i)(say)" }, { "math_id": 15, "text": "f'(i)=0" }, { "math_id": 16, "text": "\\frac{\\cos\\left(A-\\sin^{-1}\\left(\\frac{\\sin i}{u}\\right)\\right)\\cos i}{\\sqrt{\\left(1-u^{2}\\sin^{2}\\left(A-\\sin^{-1}\\left(\\frac{\\sin i}{u}\\right)\\right)\\right)\\left(1-\\frac{\\sin^{2}i}{u^{2}}\\right)}}=1" }, { "math_id": 17, "text": "\n\n\\begin{align}\nn & \\approx \\dfrac{\\frac{A + D_{m}}{2}}{\\frac{A}{2}}\\\\\nn & = \\frac{A + D_m}{A}\\\\\nD_m & = An - A\n\\end{align}\n" }, { "math_id": 18, "text": " \\therefore D_{m} = A(n - 1) " }, { "math_id": 19, "text": " n \\approx \\frac{i}{r_1}, n \\approx \\frac{e}{r_2} " }, { "math_id": 20, "text": "\n\\begin{align}\n\\delta & = n r_1 + n r_2 - A \\\\\n& = n(r_1 + r_2) - A \\\\\n& = nA - A \\\\\n& = A(n - 1)\n\\end{align}\n" }, { "math_id": 21, "text": "\\delta_v" }, { "math_id": 22, "text": "(n_v-1)A" }, { "math_id": 23, "text": "\\delta_r" }, { "math_id": 24, "text": "(n_r-1)A" }, { "math_id": 25, "text": "(\\delta_v-\\delta_r)=(n_v-n_r)A" } ]
https://en.wikipedia.org/wiki?curid=11708087
11709017
Global language system
Connections between language groups The global language system is the "ingenious pattern of connections between language groups". Dutch sociologist Abram de Swaan developed this theory in 2001 in his book "Words of the World: The Global Language System" and according to him, "the multilingual connections between language groups do not occur haphazardly, but, on the contrary, they constitute a surprisingly strong and efficient network that ties together – directly or indirectly – the six billion inhabitants of the earth." The global language system draws upon the world system theory to account for the relationships between the world's languages and divides them into a hierarchy consisting of four levels, namely the peripheral, central, supercentral and hypercentral languages. Theory. Background. According to de Swaan, the global language system has been constantly evolving since the time period of the early 'military-agrarian' regimes. Under these regimes, the rulers imposed their own language and so the first 'central' languages emerged, linking the peripheral languages of the agrarian communities via bilingual speakers to the language of the conquerors. Then came the formation of empires, which resulted in the next stage of integration of the world language system. Firstly, Latin emerged from Rome. Under the rule of the Roman Empire, which ruled an extensive group of states, the usage of Latin stretched along the Mediterranean coast, the southern half of Europe, and more sparsely to the North and then into the Germanic and Celtic lands. Thus, Latin evolved to become a central language in Europe from 27 BC to 476 AD. Secondly, there was the widespread usage of the pre-classical version of Han Chinese in contemporary China due to the unification of China in 221 BC by Qin Shi Huang. Thirdly, Sanskrit became widely spoken in South Asia through the widespread teaching of Hinduism and Buddhism in South Asian countries. Fourthly, the expansion of the Arabic empire also led to the increased usage of Arabic as a language in the Afro-Eurasian land mass. Military conquests of preceding centuries generally determine the distribution of languages today. Supercentral languages spread by land and sea. Land-bound languages spread via marching empires: German, Russian, Arabic, Hindi, Chinese and Japanese. Languages like Bengali, Tamil, Italian and Turkish are less commonly considered land-bound languages. However, when the conquerors were defeated and were forced to move out of the territory, the spread of the languages receded. As a result, some of these languages are currently barely supercentral languages and are instead confined to their remaining state territories, as is evident from German, Russian and Japanese. On the other hand, sea-bound languages spread by conquests overseas: English, French, Portuguese, Spanish. Consequently, these languages became widespread in areas settled by European colonisers and relegated the indigenous people and their languages to peripheral positions. The world-systems theory also helps to explain the further expansion of the global language system. It focuses on the existence of the core, semi-peripheral and peripheral nations. The core countries are the most economically powerful and the wealthiest countries. They also have a strong governmental system, which oversees the bureaucracies in the governmental departments. 
There is also the prevalent existence of the bourgeoisie, and core nations have significant influence over the non-core, smaller nations. Historically, the core countries were found in northwestern Europe and include countries such as England, France and the Netherlands. They were the dominant countries that had colonized many other nations from the early 15th century to the early 19th century. Then there are the periphery countries, the countries with the slowest economic growth. They also have relatively weak governments and a poor social structure and often depend on primary industries as the main source of economic activity for the country. The extracting and exporting of raw materials from the peripheral nations to core nations is the activity bringing about the most economic benefits to the country. Much of the population is poor and uneducated, and the countries are also extensively influenced by core nations and the multinational corporations found there. Historically, peripheral nations were found outside Europe, the continent of the colonial masters. Many countries in Latin America were peripheral nations during the period of colonization, and today peripheral countries are in sub-Saharan Africa. Lastly, there are the semi-periphery countries, those in between the core and the periphery. They tend to be those which started out as peripheral nations and are currently moving towards industrialization and the development of more diversified labour markets and economies. They can also come about from declining core countries. They are not dominant players in the international trade market. As compared to the peripheral nations, semi-peripheries are not as susceptible to manipulation by the core countries. However, most of these nations have economic or political relations with the core. Semi-peripheries also tend to exert influence and control over peripheries and can serve as a buffer between the core and peripheral nations and ease political tensions. Historically, Spain and Portugal were semi-peripheral nations after they fell from their dominant core positions. As they still maintained a certain level of influence and dominance over their colonies in Latin America, they could maintain their semi-peripheral position. According to Immanuel Wallerstein, one of the most well-known theorists who developed the world-systems approach, a core nation is dominant over the non-core nations as a result of its economic and trade dominance. The abundance of cheap and unskilled labour in the peripheral nations leads many large multinational corporations (MNCs) from core countries to outsource their production to the peripheral countries to cut costs, by employing cheap labour. Hence, the languages of the core countries can penetrate into the peripheries through the setting up of the foreign MNCs in the peripheries. A significant percentage of the population of the peripheral countries has also migrated to the core countries in search of jobs with higher wages. The gradual expansion of this migrant population brings the languages used in their home countries into the core countries, thus allowing for further integration and expansion of the world language system. The semi-peripheries also maintain economic and financial trade with the peripheries and core countries. 
That allows for the penetration of languages used in the semi-peripheries into the core and peripheral nations, with the flow of migrants moving out of the semi-peripheral nations to the core and periphery for trade purposes. Thus, the global language system examines rivalries and accommodations using a global perspective and establishes that the linguistic dimension of the world system goes hand in hand with the political, economic, cultural and ecological aspects. Specifically, the present global constellation of languages is the product of prior conquest and domination and of ongoing relations of power and exchange. Q-value. formula_0 is the communicative value of a language "i", its potential to connect a speaker with other speakers of a constellation or subconstellation, "S". It is defined as follows: formula_1 The prevalence formula_2 of language "i" means the number of competent speakers of "i", formula_3, divided by all the speakers, formula_4, of constellation "S". Centrality, formula_5, is the number of multilingual speakers formula_6 who speak language "i" divided by all the multilingual speakers in constellation "S", formula_7. Thus, the Q-value or communication value is the product of the prevalence formula_2 and the centrality formula_5 of language "i" in constellation "S". For example, if 60 per cent of all speakers in a constellation are competent in language "i" (a prevalence of 0.6) and half of all multilinguals speak it (a centrality of 0.5), then its Q-value is 0.6 × 0.5 = 0.3. Consequently, a peripheral language has a low Q-value, and the Q-values increase along the sociological classification of languages, with the Q-value of the hypercentral language being the highest. De Swaan has been calculating the Q-values of the official European Union (EU) languages since 1957 to explain the acquisition of languages by EU citizens in different phases. In 1970, when the constellation comprised only four languages, the Q-value decreased in the order of French, German, Italian, Dutch. In 1975, the European Community enlarged to include Britain, Denmark and Ireland. English had the highest Q-value, followed by French and German. In the following years, the European Community grew, with the addition of countries like Austria, Finland and Sweden. The Q-value of English still remained the highest, but French and German swapped places. In EU23, which refers to the 23 official languages spoken in the European Union, the Q-values for English, German and French were 0.194, 0.045 and 0.036 respectively. Theoretical framework. De Swaan likens the global language system to contemporary political macrosociology and states that language constellations are a social phenomenon, which can be understood by using social science theories. In his theory, de Swaan uses the Political Sociology of Language and Political Economy of Language to explain the rivalry and accommodation between language groups. Political sociology. This theoretical perspective centres on the interconnections among the state, nation and citizenship. Accordingly, bilingual elite groups try to take control of the opportunities for mediation between the monolingual group and the state. Subsequently, they use the official language to dominate the sectors of government and administration and the higher levels of employment. It assumes that both the established and outsider groups are able to communicate in a shared vernacular, but the latter groups lack the literacy skills that could allow them to learn the written form of the central or supercentral language, which would, in turn, allow them to move up the social ladder. Political economy. This perspective centres on the inclinations that people have towards learning one language over the other. 
The presumption is that, given a chance, people will learn the language that gives them the greater communication advantage, in other words, the higher Q-value. Certain languages such as English or Chinese have high Q-values since they are spoken in many countries across the globe and would thus be more economically useful than less widely spoken languages, such as Romanian or Hungarian. From an economic perspective, languages are 'hypercollective' goods since they exhibit properties of collective goods and produce external network effects. Thus, the more speakers a language has, the higher its communication value for each speaker. The hypercollective nature and Q-value of languages thus help to explain the dilemma that a speaker of a peripheral language faces when deciding whether to learn the central or hypercentral language. The hypercollective nature and Q-value also help to explain the accelerating spread and abandonment of various languages. In that sense, when people feel that a language is gaining new speakers, they assign a greater Q-value to this language and abandon their own native language in favour of the more central language. The hypercollective nature and Q-value also explain, in an economic sense, the ethnic and cultural movements for language conservation. Specifically, a minimal Q-value of a language is guaranteed when there is a critical mass of speakers committed to protecting it, thus preventing the language from being forsaken. Characteristics. The global language system theorises that language groups are engaged in unequal competition on different levels globally. Using the notions of a periphery, semi-periphery and a core, which are concepts of the world system theory, de Swaan relates them to the four levels present in the hierarchy of the global language system: peripheral, central, supercentral and hypercentral. De Swaan also argues that the greater the range of potential uses and users of a language, the higher the tendency of an individual to move up the hierarchy in the global language system and learn a more "central" language. Thus, de Swaan views the learning of second languages as proceeding up rather than down the hierarchy, in the sense that they learn a language that is on the next level up. For instance, speakers of Catalan, a peripheral language, have to learn Spanish, a central language, to function in their own society, Spain. Meanwhile, speakers of Persian, a central language, have to learn Arabic, a supercentral language, to function in their region. On the other hand, speakers of a supercentral language have to learn the hypercentral language to function globally, as is evident from the huge number of non-native English speakers. According to de Swaan, languages exist in "constellations" and the global language system comprises a sociological classification of languages based on their social role for their speakers. The world's languages and multilinguals are connected in a strongly ordered, hierarchical pattern. There are thousands of peripheral or minority languages in the world, each of which is connected to one of a hundred central languages. The connections and patterns among the languages are what make up the global language system. The four levels of language are the peripheral, central, supercentral and hypercentral languages. Peripheral languages. 
At the lowest level, peripheral languages, or minority languages, form the majority of languages spoken in the world; 98% of the world's languages are peripheral languages and spoken by less than 10% of the world's population. Unlike central languages, these are "languages of conversation and narration rather than reading and writing, of memory and remembrance rather than record". They are used by native speakers within a particular area and are in danger of becoming extinct with increasing globalisation, which sees more and more speakers of peripheral languages acquiring more central languages in order to communicate with others. Central languages. The next level constitutes about 100 central languages, spoken by 95% of the world's population and generally used in education, media and administration. Typically, they are the 'national' and official languages of the ruling state. These are the languages of record, and much of what has been said and written in those languages is saved in newspaper reports, minutes and proceedings, stored in archives, included in history books, collections of the 'classics', of folk tales and folk ways, increasingly recorded on electronic media and thus conserved for posterity. Many speakers of central languages are multilingual because they are either native speakers of a peripheral language and have acquired the central language, or they are native speakers of the central language and have learned a supercentral language. Supercentral languages. At the second highest level, 12 supercentral languages are very widely spoken languages that serve as connectors between speakers of central languages: Arabic, Chinese, English, French, German, Hindi, Japanese, Malay, Portuguese, Russian, Spanish and Swahili. These languages often have colonial traces and "were once imposed by a colonial power and after independence continued to be used in politics, administration, law, big business, technology and higher education". Hypercentral languages. At the highest level is the language that connects speakers of the supercentral languages. Today, English is the only example of a hypercentral language, as the standard for science, literature, business, and law, as well as being the most widely spoken second language. Applications. Pyramid of languages of the world. According to David Graddol (1997), in his book titled "The Future of English", the languages of the world comprise a "hierarchical pyramid". Translation systems. The global language system is also seen in the international translation process, as explained by Johan Heilbron, a historical sociologist: "translations and the manifold activities these imply are embedded in and dependent on a world system of translation, including both the source and the target cultures". The hierarchical relationship between global languages is reflected in the global system for translations. The more "central" a language, the greater is its capability to function as a bridge or vehicular language to facilitate communication between peripheral and semi-central languages. Heilbron's version of the global system of language in translations has four levels: Level 1: Hypercentral position — English currently holds the largest market share of the global market for translations; 55–60% of all book translations are from English. It strongly dominates the hierarchical nature of the book translation system. Level 2: Central position — German and French each hold 10% of the global translation market. 
Level 3: Semi-central position — There are 7 or 8 languages "neither very central on a global level nor very peripheral", each making up 1 to 3% of the world market (like Spanish, Italian and Russian). Level 4: Peripheral position — Languages from which "less than 1% of the book translations worldwide are made", including Chinese, Hindi, Japanese, Malay, Swahili, Turkish and Arabic. Despite having large populations of speakers, "their role in the translation economy is peripheral as compared to more central languages". Acceptance. According to the Google Scholar website, de Swaan's book, "Words of the world: The global language system", has been cited by 2990 other papers, as of 25 August 2021. However, there have also been several concerns regarding the global language system: Importance of Q-value. Van Parijs (2004) claimed that 'frequency' or likelihood of contact is adequate as an indicator of language learning and language spread. However, de Swaan (2007) argued that it alone is not sufficient. Rather, the Q-value, which comprises both frequency (better known as prevalence) and 'centrality', helps to explain the spread of (super)central languages, especially former colonial languages in newly independent countries in which only the elite minority spoke the language initially. Frequency alone would not be able to explain the spread of such languages, but the Q-value, which includes centrality, would be able to. In another paper, Cook and Li (2009) examined the ways to categorise language users into various groups. They suggested two theories: one by Siegel (2006), who used 'sociolinguistic settings', which is based on the notion of dominant language, and another by de Swaan (2001) that used the concept of hierarchy in the global language system. According to them, de Swaan's hierarchy is more appropriate, as it does not imply dominance in power terms. Rather, de Swaan applies the concepts of geography and function to group languages and hence language users according to the global language system. De Swaan (2001) views the acquisition of second languages (L2) as typically going up the hierarchy. However, Cook and Li argue that this analysis is not adequate in accounting for the many groups of L2 users to whom the two areas of territory and function hardly apply. The two areas of territory and function can be associated respectively with the prevalence and centrality of the Q-value. This group of L2 users typically does not acquire an L2 going up the hierarchy, such as users in an intercultural marriage or users who come from a particular cultural or ethnic group and wish to learn its language for identity purposes. Thus, Cook and Li argue that de Swaan's theory, though highly relevant, still has its drawbacks in that the concept behind the Q-value is insufficient in accounting for some L2 users. Choice of supercentral languages. There is disagreement as to which languages should be considered more central. The theory states that a language is central if it connects speakers of "a series of central languages". Robert Phillipson questioned why Japanese is included as one of the supercentral languages but Bengali, which has more speakers, is not on the list. Inadequate evidence for a system. 
Michael Morris argued that while it is clear that there is a language hierarchy arising from the "ongoing interstate competition and power politics", little evidence is provided to show that the "global language interaction is so intense and systematic that it constitutes a global language system, and that the entire system is held together by one global language, English". He claimed that de Swaan's case studies demonstrated the hierarchy in different regions of the world but did not show the existence of a system within a region or across regions. The global language system is supposed to be part of the international system, but it is "notoriously vague and lacking in operational importance" and therefore cannot be shown to exist. However, Morris believes that this lack of evidence could stem from the lack of global language data and not negligence on de Swaan's part. Morris also believes that any theory on a global system, if later proved, would be much more complex than what is proposed by de Swaan. Questions on how the hypercentral language English holds together the system must also be answered by such a global language system. Theory built on inadequate foundations. Robert Phillipson states that the theory is based on selective theoretical foundations. He claimed that there is a lack of consideration about the effects of globalization, which is especially important when the theory is about a global system: "De Swaan nods occasionally in the direction of linguistic and cultural capital, but does not link this to class or linguistically defined social stratification (linguicism) or linguistic inequality" and that "key concepts in the sociology of language, language maintenance and shift, and language spread are scarcely mentioned". On the other hand, de Swaan's work in the field of sociolinguistics has been noted by other scholars to be focused on "issues of economic and political sociology" and "politic and economic patterns", which may explain why he makes only "cautious references to socio-linguistic parameters". References.
[ { "math_id": 0, "text": "Q_i" }, { "math_id": 1, "text": "Q_i=p_i \\times c_i = \\left ( \\frac{P_i}{N^S} \\right ) \\times \\left ( \\frac{C_i}{M^S} \\right ) " }, { "math_id": 2, "text": "p_i" }, { "math_id": 3, "text": "P_i" }, { "math_id": 4, "text": "N^S" }, { "math_id": 5, "text": "c_i" }, { "math_id": 6, "text": "C_i" }, { "math_id": 7, "text": "M^S" } ]
https://en.wikipedia.org/wiki?curid=11709017
11709182
Radiosity (radiometry)
Physical quantity in radiometry In radiometry, radiosity is the radiant flux leaving (emitted, reflected and transmitted by) a surface per unit area, and spectral radiosity is the radiosity of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. The SI unit of radiosity is the watt per square metre (W/m2), while that of spectral radiosity in frequency is the watt per square metre per hertz (W·m−2·Hz−1) and that of spectral radiosity in wavelength is the watt per square metre per metre (W·m−3)—commonly the watt per square metre per nanometre (W·m−2·nm−1). The CGS unit erg per square centimeter per second (erg·cm−2·s−1) is often used in astronomy. Radiosity is often called intensity in branches of physics other than radiometry, but in radiometry this usage leads to confusion with radiant intensity. Mathematical definitions. Radiosity. Radiosity of a "surface", denoted "J"e ("e" for "energetic", to avoid confusion with photometric quantities), is defined as formula_0 where formula_1 is the radiant flux leaving the surface, formula_2 is the area, formula_3 is the emitted component of the radiosity (equal to the radiant exitance "M"e), formula_4 is the reflected component, and formula_5 is the transmitted component. For an "opaque" surface, the "transmitted" component of radiosity "J"e,tr vanishes and only two components remain: formula_6 In heat transfer, combining these two factors into one radiosity term helps in determining the net energy exchange between multiple surfaces. Spectral radiosity. Spectral radiosity in frequency of a "surface", denoted "J"e,ν, is defined as formula_7 where "ν" is the frequency. Spectral radiosity in wavelength of a "surface", denoted "J"e,λ, is defined as formula_8 where "λ" is the wavelength. Radiosity method. The radiosity of an "opaque", gray and diffuse surface is given by formula_9 where "ε" is the emissivity of the surface, σ is the Stefan–Boltzmann constant, "T" is the surface temperature, and "E"e is the irradiance of the surface. Normally, "E"e is the unknown variable and will depend on the surrounding surfaces. So, if some surface "i" is being hit by radiation from some other surface "j", then the radiation energy incident on surface "i" is "E"e,"ji" "A""i" = "F""ji" "A""j" "J"e,"j" where "F""ji" is the "view factor" or "shape factor", from surface "j" to surface "i". So, the irradiance of surface "i" is the sum of radiation energy from all other surfaces per unit surface of area "A""i": formula_10 Now, employing the reciprocity relation for view factors "F""ji" "A""j" = "F""ij" "A""i", formula_11 and substituting the irradiance into the equation for radiosity, produces formula_12 For an "N" surface enclosure, this summation for each surface will generate "N" linear equations with "N" unknown radiosities, and "N" unknown temperatures. For an enclosure with only a few surfaces, this can be done by hand. But, for a room with many surfaces, linear algebra and a computer are necessary. Once the radiosities have been calculated, the net heat transfer formula_13 at a surface can be determined by finding the difference between the incoming and outgoing energy: formula_14 Using the equation for radiosity "J"e,"i" = "ε""i"σ"T""i"4 + (1 − "ε""i")"E"e,"i", the irradiance can be eliminated from the above to obtain formula_15 where "M"e,"i"° is the radiant exitance of a black body. Circuit analogy. For an enclosure consisting of only a few surfaces, it is often easier to represent the system with an analogous circuit rather than solve the set of linear radiosity equations. To do this, the heat transfer at each surface is expressed as formula_16 where "R""i" = (1 − "ε""i")/("A""i""ε""i") is the resistance of the surface. Likewise, "M"e,"i"° − "J"e,"i" is the blackbody exitance minus the radiosity and serves as the 'potential difference'. 
These quantities are formulated to resemble those from an electrical circuit "V" = "IR". Now performing a similar analysis for the heat transfer from surface "i" to surface "j", formula_17 where "R""ij" = 1/("A""i" "F""ij"). Because the above is "between" surfaces, "R""ij" is the resistance of the space between the surfaces and "J"e,"i" − "J"e,"j" serves as the potential difference. Combining the surface elements and space elements, a circuit is formed. The heat transfer is found by using the appropriate potential difference and equivalent resistances, similar to the process used in analyzing electrical circuits. Other methods. In the radiosity method and circuit analogy, several assumptions were made to simplify the model. The most significant is that the surface is a diffuse emitter. In such a case, the radiosity does not depend on the angle of incidence of reflecting radiation and this information is lost on a diffuse surface. In reality, however, the radiosity will have a specular component from the reflected radiation. So, the heat transfer between two surfaces relies on both the view factor and the angle of reflected radiation. It was also assumed that the surface is a gray body, that is to say its emissivity is independent of radiation frequency or wavelength. However, if the range of radiation spectrum is large, this will not be the case. In such an application, the radiosity must be calculated spectrally and then integrated over the range of radiation spectrum. Yet another assumption is that the surface is isothermal. If it is not, then the radiosity will vary as a function of position along the surface. However, this problem is solved by simply subdividing the surface into smaller elements until the desired accuracy is obtained. SI radiometry units. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
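The linear system above is straightforward to assemble and solve numerically. The following Python sketch is a minimal illustration of the radiosity method for opaque, gray, diffuse surfaces at known temperatures; the two-surface enclosure at the end (large parallel plates with unit view factors) and all input values are hypothetical:

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def solve_radiosity(F, eps, T, A):
    """Radiosities J_i (W/m^2) and net heat transfers Q_i (W) for an
    enclosure of N opaque, gray, diffuse surfaces (requires eps_i < 1)."""
    F, eps, T, A = (np.asarray(v, dtype=float) for v in (F, eps, T, A))
    Mb = SIGMA * T**4                              # blackbody exitances
    # J_i = eps_i*Mb_i + (1 - eps_i) * sum_j F_ij J_j  ->  linear system in J
    system = np.eye(len(eps)) - (1.0 - eps)[:, None] * F
    J = np.linalg.solve(system, eps * Mb)
    Q = A * eps / (1.0 - eps) * (Mb - J)           # net heat transfer per surface
    return J, Q

# Hypothetical two-surface enclosure (large parallel plates): F12 = F21 = 1.
J, Q = solve_radiosity(F=[[0.0, 1.0], [1.0, 0.0]],
                       eps=[0.8, 0.6], T=[500.0, 300.0], A=[1.0, 1.0])
print(J)   # radiosities in W/m^2
print(Q)   # Q[0] is about -Q[1]: the hot plate loses what the cold plate gains
```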
[ { "math_id": 0, "text": "J_\\mathrm{e} = \\frac{\\partial \\Phi_\\mathrm{e}}{\\partial A} = J_\\mathrm{e,em} + J_\\mathrm{e,r} + J_\\mathrm{e,tr}," }, { "math_id": 1, "text": "\\Phi_e" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "J_{e,em} = M_e" }, { "math_id": 4, "text": "J_{e,r}" }, { "math_id": 5, "text": "J_{e,tr}" }, { "math_id": 6, "text": "J_\\mathrm{e} = M_\\mathrm{e} + J_\\mathrm{e,r}." }, { "math_id": 7, "text": "J_{\\mathrm{e},\\nu} = \\frac{\\partial J_\\mathrm{e}}{\\partial \\nu}," }, { "math_id": 8, "text": "J_{\\mathrm{e},\\lambda} = \\frac{\\partial J_\\mathrm{e}}{\\partial \\lambda}," }, { "math_id": 9, "text": "J_\\mathrm{e} = M_\\mathrm{e} + J_\\mathrm{e,r} = \\varepsilon \\sigma T^4 + (1 - \\varepsilon) E_\\mathrm{e}," }, { "math_id": 10, "text": "E_{\\mathrm{e},i} = \\frac{\\sum_{j = 1}^N F_{ji}A_j J_{\\mathrm{e},j}}{A_i}." }, { "math_id": 11, "text": "E_{\\mathrm{e},i} = \\sum_{j = 1}^N F_{ij} J_{\\mathrm{e},j}," }, { "math_id": 12, "text": "J_{\\mathrm{e},i} = \\varepsilon_i \\sigma T_i^4 + (1 - \\varepsilon_i)\\sum_{j = 1}^N F_{ij} J_{\\mathrm{e},j}." }, { "math_id": 13, "text": "\\dot Q_i" }, { "math_id": 14, "text": "\\dot Q_i = A_i\\left(J_{\\mathrm{e},i} - E_{\\mathrm{e},i}\\right)." }, { "math_id": 15, "text": "\\dot Q_i = \\frac{A_i \\varepsilon_i}{1 - \\varepsilon_i}\\left(\\sigma T_i^4 - J_{\\mathrm{e},i}\\right) = \\frac{A_i \\varepsilon_i}{1 - \\varepsilon_i}\\left(M_{\\mathrm{e},i}^\\circ - J_{\\mathrm{e},i}\\right)," }, { "math_id": 16, "text": "\\dot{Q_i} = \\frac{M_{\\mathrm{e},i}^\\circ - J_{\\mathrm{e},i}}{R_i}," }, { "math_id": 17, "text": "\\dot Q_{ij} = A_i F_{ij} (J_{\\mathrm{e},i} - J_{\\mathrm{e},j}) = \\frac{J_{\\mathrm{e},i} - J_{\\mathrm{e},j}}{R_{ij}}," } ]
https://en.wikipedia.org/wiki?curid=11709182
1170930
Augmented sphenocorona
87th Johnson solid (17 faces) In geometry, the augmented sphenocorona is the Johnson solid that can be constructed by attaching an equilateral square pyramid to one of the square faces of the sphenocorona. It is the only Johnson solid arising from "cut and paste" manipulations where the components are not all prisms, antiprisms or sections of Platonic or Archimedean solids. Construction. The augmented sphenocorona is constructed by attaching an equilateral square pyramid to the sphenocorona, a process known as augmentation. This pyramid covers one square face of the sphenocorona, replacing it with four equilateral triangles. As a result, the augmented sphenocorona has 16 equilateral triangles and 1 square as its faces. A convex polyhedron in which all of the faces are regular is a Johnson solid; the augmented sphenocorona is one of them, enumerated as formula_0, the 87th Johnson solid. Properties. For edge length formula_1, the surface area of an augmented sphenocorona is obtained by summing the areas of 16 equilateral triangles and 1 square: formula_2 Its volume can be calculated by slicing it into a sphenocorona and an equilateral square pyramid, and adding their volumes: formula_3 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
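As a quick numerical check, the two closed forms above can be evaluated directly; the following Python sketch (function names are illustrative) reproduces the quoted decimal approximations for unit edge length:

```python
import math

def surface_area(a):
    # 16 equilateral triangles (each sqrt(3)/4 * a^2) plus 1 square (a^2)
    return (1 + 4 * math.sqrt(3)) * a**2

def volume(a):
    # sphenocorona volume plus that of an equilateral square pyramid, a^3/(3*sqrt(2))
    sphenocorona = 0.5 * math.sqrt(1 + 3 * math.sqrt(3 / 2) + math.sqrt(13 + 3 * math.sqrt(6)))
    return (sphenocorona + 1 / (3 * math.sqrt(2))) * a**3

print(surface_area(1.0))  # ~7.92820
print(volume(1.0))        # ~1.75105
```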
[ { "math_id": 0, "text": " J_{87} " }, { "math_id": 1, "text": " a " }, { "math_id": 2, "text": " \\left(1+4\\sqrt{3}\\right)a^2\\approx7.92820a^2," }, { "math_id": 3, "text": " \\left(\\frac{1}{2}\\sqrt{1 + 3 \\sqrt{\\frac{3}{2}} + \\sqrt{13 + 3 \\sqrt{6}}}+\\frac{1}{3\\sqrt{2}}\\right)a^3\\approx1.75105a^3." } ]
https://en.wikipedia.org/wiki?curid=1170930
1171044
High-energy nuclear physics
Intersection of nuclear physics and high-energy physics High-energy nuclear physics studies the behavior of nuclear matter in energy regimes typical of high-energy physics. The primary focus of this field is the study of heavy-ion collisions, as compared to the lighter atoms studied in other particle accelerators. At sufficient collision energies, these types of collisions are theorized to produce the quark–gluon plasma. In peripheral nuclear collisions at high energies one expects to obtain information on the electromagnetic production of leptons and mesons that are not accessible in electron–positron colliders due to their much smaller luminosities. Previous high-energy nuclear accelerator experiments have studied heavy-ion collisions using projectile energies of 1 GeV/nucleon at JINR and LBNL-Bevalac up to 158 GeV/nucleon at CERN-SPS. Experiments of this type, called "fixed-target" experiments, primarily accelerate a "bunch" of ions (typically around 10^6 to 10^8 ions per bunch) to speeds approaching the speed of light (0.999"c") and smash them into a target of similar heavy ions. While all collision systems are interesting, great focus was applied in the late 1990s to symmetric collision systems of gold beams on gold targets at Brookhaven National Laboratory's Alternating Gradient Synchrotron (AGS) and lead beams on lead targets at CERN's Super Proton Synchrotron. High-energy nuclear physics experiments are continued at the Brookhaven National Laboratory's Relativistic Heavy Ion Collider (RHIC) and at the CERN Large Hadron Collider. At RHIC the programme began with four experiments— PHENIX, STAR, PHOBOS, and BRAHMS—all dedicated to studying collisions of highly relativistic nuclei. Unlike fixed-target experiments, collider experiments steer two accelerated beams of ions toward each other at (in the case of RHIC) six interaction regions. At RHIC, ions can be accelerated (depending on the ion size) from 100 GeV/nucleon to 250 GeV/nucleon. Since each colliding ion possesses this energy moving in opposite directions, the collisions can achieve a center-of-mass energy of 200 GeV/nucleon for gold and 500 GeV/nucleon for protons. The ALICE (A Large Ion Collider Experiment) detector at the LHC at CERN is specialized in studying Pb–Pb nuclei collisions at a center-of-mass energy of 2.76 TeV per nucleon pair. All major LHC detectors—ALICE, ATLAS, CMS and LHCb—participate in the heavy-ion programme. History. The exploration of hot hadron matter and of multiparticle production has a long history initiated by theoretical work on multiparticle production by Enrico Fermi in the US and Lev Landau in the USSR. These efforts paved the way to the development in the early 1960s of the thermal description of multiparticle production and the statistical bootstrap model by Rolf Hagedorn. These developments led to the search for and discovery of the quark–gluon plasma. Onset of the production of this new form of matter remains under active investigation. First collisions. The first heavy-ion collisions at modestly relativistic conditions were undertaken at the Lawrence Berkeley National Laboratory (LBNL, formerly LBL) at Berkeley, California, U.S.A., and at the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast, USSR. At the LBL, a transport line was built to carry heavy ions from the heavy-ion accelerator HILAC to the Bevatron. The energy scale at the level of 1–2 GeV per nucleon attained initially yields compressed nuclear matter at a few times normal nuclear density. 
The demonstration of the possibility of studying the properties of compressed and excited nuclear matter motivated research programs at much higher energies in accelerators available at BNL and CERN, with relativistic beams striking fixed targets in the laboratory. The first collider experiments started in 1999 at RHIC, and the LHC began colliding heavy ions at an order of magnitude higher energy in 2010. CERN operation. The LHC collider at CERN operates one month a year in the nuclear-collision mode, with Pb nuclei colliding at 2.76 TeV per nucleon pair, about 1500 times the energy equivalent of the rest mass. Overall 1250 valence quarks collide, generating a hot quark–gluon soup. Heavy atomic nuclei stripped of their electron cloud are called heavy ions, and one speaks of (ultra)relativistic heavy ions when the kinetic energy significantly exceeds the rest energy, as is the case at the LHC. The outcome of such collisions is production of very many strongly interacting particles. In August 2012 ALICE scientists announced that their experiments produced quark–gluon plasma with a temperature of around 5.5 trillion kelvins, the highest temperature achieved in any physical experiments thus far. This temperature is about 38% higher than the previous record of about 4 trillion kelvins, achieved in the 2010 experiments at the Brookhaven National Laboratory. The ALICE results were announced at the August 13 "Quark Matter 2012" conference in Washington, D.C. The quark–gluon plasma produced by these experiments approximates the conditions in the universe that existed microseconds after the Big Bang, before the matter coalesced into atoms. Objectives. There are several scientific objectives of this international research program: Experimental program. This experimental program follows on a decade of research at the RHIC collider at BNL and almost two decades of studies using fixed targets at the SPS at CERN and the AGS at BNL. This experimental program has already confirmed that the extreme conditions of matter necessary to reach the QGP phase can be reached. A typical temperature achieved in the QGP, formula_0, is more than 100,000 times greater than in the center of the Sun. This corresponds to an energy density formula_1. The corresponding relativistic-matter pressure is formula_2 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
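The temperature figure above is just the quoted energy scale divided by the Boltzmann constant; a minimal Python sketch of the conversion (the solar-core temperature used for comparison is an approximate textbook value):

```python
MeV = 1.602176634e-13   # one megaelectronvolt in joules
k_B = 1.380649e-23      # Boltzmann constant, J/K

T_qgp = 300 * MeV / k_B        # ~3.5e12 K, the order of magnitude quoted above
T_sun_core = 1.57e7            # K, approximate temperature of the solar core

print(T_qgp)                   # ~3.5e12
print(T_qgp / T_sun_core)      # ~2e5: more than 100,000 times hotter
```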
[ { "math_id": 0, "text": "\nT = 300~\\text{MeV}/k_\\text{B} = 3.3 \\times 10^{12}~\\text{K}\n" }, { "math_id": 1, "text": "\n\\epsilon = 10~\\text{GeV/fm}^3 = 1.8 \\times 10^{16}~\\text{g/cm}^3\n" }, { "math_id": 2, "text": "P \\simeq \\frac{1}{3} \\epsilon = 0.52 \\times 10^{31}~\\text{bar}.\n" } ]
https://en.wikipedia.org/wiki?curid=1171044
11710718
Well drainage
Drainage of agricultural lands by wells Well drainage means drainage of agricultural lands by wells. Agricultural land is drained by pumped wells (vertical drainage) to improve the soils by controlling water table levels and soil salinity. Introduction. Subsurface (groundwater) drainage for water table and soil salinity control in agricultural land can be done by horizontal and vertical drainage systems.&lt;br&gt; "Horizontal drainage" systems are drainage systems using open ditches (trenches) or buried pipe drains.&lt;br&gt; "Vertical drainage" systems are drainage systems using pumped wells, either open dug wells or tube wells.&lt;br&gt; Both systems serve the same "purposes", namely water table control and soil salinity control.&lt;br&gt; Both systems can facilitate the "reuse of drainage water" (e.g. for irrigation), but wells offer more flexibility.&lt;br&gt; Reuse is only feasible if the quality of the groundwater is acceptable and the salinity is low. Design. Although one well may be sufficient to solve groundwater and soil salinity problems in a few hectares, one usually needs a number of wells, because the problems may be widely spread.&lt;br&gt; The wells may be arranged in a triangular, square or rectangular pattern.&lt;br&gt; The design of the well field concerns depth, capacity, discharge, and spacing of the wells. The determination of the optimum depth of the water table is the realm of drainage research. Flow to wells. The basic, steady state, equation for flow to "fully penetrating" wells (i.e. wells reaching the impermeable base) in a regularly spaced well field in a uniform unconfined (phreatic) aquifer with a hydraulic conductivity that is isotropic is: formula_0 where Q = safe well discharge - i.e. the steady state discharge at which no overdraught or groundwater depletion occurs - (m3/day), K = uniform hydraulic conductivity of the soil (m/day), D = depth below soil surface, formula_1 = depth of the bottom of the well equal to the depth of the impermeable base (m), formula_2 = depth of the watertable midway between the wells (m), formula_3 is the depth of the water level inside the well (m), formula_4 = radius of influence of the well (m) and formula_5 is the radius of the well (m). The radius of influence of the wells depends on the pattern of the well field, which may be triangular, square, or rectangular. It can be found as: formula_6 where formula_7 = total surface area of the well field (m2) and N = number of wells in the well field. The safe well discharge (Q) can also be found from: formula_8 where q is the safe yield or drainable surplus of the aquifer (m/day) and formula_9 is the operation intensity of the wells (hours/24 per day). Thus the basic equation can also be written as: formula_10 Well spacing. With a well spacing equation one can calculate various design "alternatives" to arrive at the most attractive or economical solution for watertable control in agricultural land. The basic flow equation cannot be used for determining the well spacing in a "partially penetrating" well-field in a non-uniform and anisotropic aquifer; instead, one needs a numerical solution of more complicated equations. The costs of the "most attractive solution" can be compared with the costs of a horizontal drainage system - for which the drain spacing can be calculated with a drainage equation - serving the same purpose, to decide which system deserves preference. The well design proper is described in the literature. An illustration of the "parameters" involved is shown in the figure. 
The hydraulic conductivity can be found from an aquifer test. Software. The numerical computer program WellDrain for well spacing calculations takes into account fully and partially penetrating wells, layered aquifers, anisotropy (different vertical and horizontal hydraulic conductivity or permeability) and entrance resistance. Modelling. With a groundwater model that includes the possibility to introduce wells, one can study the impact of a well drainage system on the hydrology of the project area. There are also models that give the opportunity to evaluate the water quality. SahysMod is such a polygonal groundwater model, permitting assessment of the use of well water for irrigation and of the effects on soil salinity and on the depth of the water table.
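For illustration, the rewritten basic equation above can be evaluated directly once the well-field parameters are chosen. The following Python sketch computes the required head difference between the well and the midpoint between wells; all input values are hypothetical:

```python
import math

def head_difference(q, At, K, Db, Dm, N, Fw, Rw):
    """Head difference Dw - Dm (m) for a regularly spaced field of N fully
    penetrating wells in a uniform unconfined aquifer, at steady state."""
    Ri = math.sqrt(At / (math.pi * N))   # radius of influence of each well, m
    return q * At * math.log(Ri / Rw) / (2 * math.pi * K * (Db - Dm) * N * Fw)

# Hypothetical 100 ha field: drainable surplus 2 mm/day, 25 wells pumped
# continuously (Fw = 1), K = 1 m/day, impermeable base at 20 m depth.
print(head_difference(q=0.002, At=1_000_000, K=1.0, Db=20.0, Dm=2.0,
                      N=25, Fw=1.0, Rw=0.1))   # ~5 m of required drawdown
```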
[ { "math_id": 0, "text": "Q = 2\\pi K \\frac{\\left(D_b - D_m\\right) \\left(D_w - D_m\\right) }{\\ln \\frac{R_i}{R_w} }" }, { "math_id": 1, "text": "D_b" }, { "math_id": 2, "text": "D_m" }, { "math_id": 3, "text": "D_w" }, { "math_id": 4, "text": "R_i" }, { "math_id": 5, "text": "R_w" }, { "math_id": 6, "text": "R_i = \\sqrt{\\left( \\frac{A_t}{\\pi N} \\right)}" }, { "math_id": 7, "text": "A_t" }, { "math_id": 8, "text": "Q = q \\frac{A_t}{N F_w}" }, { "math_id": 9, "text": "F_w" }, { "math_id": 10, "text": "D_w - D_m = \\frac{q A_t}{2\\pi K (D_b - D_m) N F_w} \\ln \\left( \\frac{R_i}{R_w} \\right)" } ]
https://en.wikipedia.org/wiki?curid=11710718
11713215
Vladimir Mazya
Swedish mathematician Vladimir Gilelevich Maz'ya (born 31 December 1937; the family name is sometimes transliterated as Mazya, Maz'ja or Mazja) is a Russian-born Swedish mathematician, hailed as "one of the most distinguished analysts of our time" and as "an outstanding mathematician of worldwide reputation", who strongly influenced the development of mathematical analysis and the theory of partial differential equations. Mazya's early achievements include: his work on Sobolev spaces, in particular the discovery of the equivalence between Sobolev and isoperimetric/isocapacitary inequalities (1960), his counterexamples related to Hilbert's 19th and Hilbert's 20th problem (1968), his solution, together with Yuri Burago, of a problem in harmonic potential theory (1967) posed by , and his extension of the Wiener regularity test to the p-Laplacian together with the proof of its sufficiency for boundary regularity. Maz'ya solved Vladimir Arnol'd's problem for the oblique derivative boundary value problem (1970) and Fritz John's problem on the oscillations of a fluid in the presence of an immersed body (1977). In recent years, he proved a Wiener-type criterion for higher order elliptic equations, together with Mikhail Shubin solved a problem in the spectral theory of the Schrödinger operator formulated by Israel Gelfand in 1953, found necessary and sufficient conditions for the validity of maximum principles for elliptic and parabolic systems of PDEs, and introduced the so-called approximate approximations. He also contributed to the development of the theory of capacities, nonlinear potential theory, the asymptotic and qualitative theory of arbitrary order elliptic equations, the theory of ill-posed problems, and the theory of boundary value problems in domains with piecewise smooth boundary. Biography. Life and academic career. Vladimir Maz'ya was born on 31 December 1937 into a Jewish family. His father died in December 1941 at the World War II front, and all four of his grandparents died during the siege of Leningrad. His mother, a state accountant, chose not to remarry and dedicated her life to him: they lived on her meager salary in a 9-square-meter room in a big communal apartment, shared with four other families. As a secondary school student, he repeatedly won the city's mathematics and physics olympiads and graduated with a gold medal. In 1955, at the age of 18, Maz'ya entered the Mathematics and Mechanics Department of Leningrad University. Taking part in the traditional mathematical olympiad of the faculty, he solved the problems for both first-year and second-year students and, since he did not make this a secret, the other participants did not submit their solutions, so the jury invalidated the contest and did not award the prize. However, he attracted the attention of Solomon Mikhlin, who invited him to his home, thus starting their lifelong friendship, which had a great influence on him, helping him develop his mathematical style more than anyone else. According to , in the years to come, "Maz'ya was never a formal student of Mikhlin, but Mikhlin was more than a teacher for him. Maz'ya had found the topics of his dissertations by himself, while Mikhlin taught him mathematical ethics and rules of writing, referring and reviewing". More details on the life of Vladimir Maz'ya, from his birth to the year 1968, can be found in his autobiography. Maz'ya graduated from Leningrad University in 1960. 
The same year he gave two talks at Smirnov's seminar: their contents were published as a short report in the Proceedings of the USSR Academy of Sciences and later evolved into his "kandidat nauk" thesis, "Classes of sets and embedding theorems for function spaces", which was defended in 1962. In 1965 he earned the Doktor nauk degree, again from Leningrad University, defending the dissertation "Dirichlet and Neumann problems in domains with irregular boundaries", when he was only 27. Neither his first nor his second thesis was written under the guidance of an advisor: Vladimir Maz'ya never had a formal scientific adviser, choosing the research problems he worked on by himself. From 1960 to 1986, he worked as a "research fellow" at the Research Institute of Mathematics and Mechanics of Leningrad University (RIMM), being promoted from junior to senior research fellow in 1965. From 1968 to 1978 he taught at the Leningrad Shipbuilding Institute, where he was awarded the title of "professor" in 1976. From 1986 to 1990 he worked at the Leningrad Section of the Blagonravov Research Institute of Mechanical Engineering of the USSR Academy of Sciences, where he created and directed the Laboratory of Mathematical Models in Mechanics and the Consulting Center in Mathematics for Engineers. In 1978 he married Tatyana Shaposhnikova, a former doctoral student of Solomon Mikhlin, and they have a son, Michael. In 1990, they left the USSR for Sweden, where Prof. Maz'ya obtained Swedish citizenship and started to work at Linköping University. Currently, he is honorary Senior Fellow of Liverpool University and Professor Emeritus at Linköping University: he is also a member of the editorial board of several mathematical journals. Honors. In 1962 Maz'ya was awarded the "Young Mathematician" prize by the Leningrad Mathematical Society, for his results on Sobolev spaces: he was the first winner of the prize. In 1990 he was awarded an honorary doctorate from Rostock University. In 1999, Maz'ya received the Humboldt Prize. He was elected a member of the Royal Society of Edinburgh in 2000, and of the Swedish Academy of Science in 2002. In March 2003, he, jointly with Tatyana Shaposhnikova, was awarded the Verdaguer Prize by the French Academy of Sciences. On 31 August 2004 he was awarded the Celsius Gold Medal, the Royal Society of Sciences in Uppsala's top award, "for his outstanding research on partial differential equations and hydrodynamics". He was awarded the Senior Whitehead Prize by the London Mathematical Society on 20 November 2009. In 2012 he was elected a fellow of the American Mathematical Society. On 30 October 2013 he was elected a foreign member of the Georgian National Academy of Sciences. Starting from 1993, several conferences have been held to honor him: the first one, held in that year at the University of Kyoto, was a conference on Sobolev spaces. On the occasion of his 60th birthday in 1998, two international conferences were held in his honor: the one at the University of Rostock was on Sobolev spaces, while the other, at the École Polytechnique in Paris, was on the boundary element method. He was an invited speaker at the International Congress of Mathematicians held in Beijing in 2002: his talk is an exposition of his work on Wiener-type criteria for higher order elliptic equations. 
Two other conferences were held on the occasion of his 70th birthday: "Analysis, PDEs and Applications on the occasion of the 70th birthday of Vladimir Maz'ya" was held in Rome, while the "Nordic–Russian Symposium in honour of Vladimir Maz'ya on the occasion of his 70th birthday" was held in Stockholm. On the same occasion, a volume of the Proceedings of Symposia in Pure Mathematics was also dedicated to him. On the occasion of his 80th birthday, a "Workshop on Sobolev Spaces and Partial Differential Equations" was held on 17–18 May 2018 at the Accademia Nazionale dei Lincei to honor him. On 26–31 May 2019, the international conference "Harmonic Analysis and PDE" was held in his honor at the Holon Institute of Technology. Work. Research activity. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; Because of Maz'ya's ability to give complete solutions to problems which are generally considered unsolvable, Fichera once compared Maz'ya with Santa Rita, the 14th century Italian nun who is the Patron Saint of Impossible Causes. Maz'ya authored or coauthored more than 500 publications, including 20 research monographs. Several survey articles describing his work can be found in the book , and the paper by Dorina and Marius Mitrea (2008) also describes his research achievements extensively, so these references are the main ones in this section: in particular, the classification of the research work of Vladimir Maz'ya is the one proposed by the authors of these two references. He is also the author of "Seventy Five (Thousand) Unsolved Problems in Analysis and Partial Differential Equations", which collects problems he considers to be important research directions in the field. Theory of boundary value problems in nonsmooth domains. In one of his early papers, Maz'ya considers the Dirichlet problem for the following linear elliptic equation: formula_0 (1) He proves the following a priori estimate formula_1 (2) for the weak solution "u" of equation (1), where "K" is a constant depending on "n", "s", "r", κ and other parameters but not depending on the moduli of continuity of the coefficients. The integrability exponents of the "Lp" norms in estimate (2) are subject to relations, the first one of which answers positively a conjecture proposed by Guido Stampacchia (1958, p. 237). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal{L} u = \\nabla(A(x)\\nabla)u+\\mathbf{b}(x)\\nabla u + c(x)u=f\\qquad x\\in\\Omega\\subset\\mathbf{R}^n" }, { "math_id": 1, "text": "\\Vert u \\Vert_{L_s(\\Omega)} \\leq K \\left[ \\Vert f \\Vert_{L_r(\\Omega)} + \\Vert u \\Vert_{L(\\Omega)} \\right]" } ]
https://en.wikipedia.org/wiki?curid=11713215
11716020
Electrode array
An electrode array is a configuration of electrodes used for measuring either an electric current or a voltage. Some electrode arrays can operate in a bidirectional fashion, in that they can also be used to provide a stimulating pattern of electric current or voltage. Common arrays include: Resistivity. Resistivity measurement of bulk materials is a frequent application of electrode arrays. The figure shows a Wenner array, one of the possible ways of achieving this. Injecting the current through electrodes separate from those being used for measurement of potential has the advantage of eliminating any inaccuracies caused by the injecting circuit resistance, particularly the contact resistance between the probe and the surface, which can be high. Assuming the material is homogeneous, the resistivity in the Wenner array is given by: formula_0 where formula_1 is the distance between probes. Electrode arrays are widely used to measure resistivity in geophysics applications. The technique is also used in the semiconductor industry to measure the bulk resistivity of silicon wafers, which in turn can be taken as a measure of the doping that has been applied to the wafer, before further manufacturing processes are undertaken. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
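The Wenner formula above is a one-line computation; a minimal Python sketch (the spacing, voltage and current values are hypothetical):

```python
import math

def wenner_resistivity(a, V, I):
    """Apparent resistivity (ohm-metres) of a homogeneous material from a
    Wenner array with probe spacing a (m), voltage V (V) and current I (A)."""
    return 2 * math.pi * a * V / I

print(wenner_resistivity(a=1.0, V=0.05, I=0.01))   # ~31.4 ohm-m
```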
[ { "math_id": 0, "text": "\\rho=2 \\pi a \\frac {V}{I} " }, { "math_id": 1, "text": "a" } ]
https://en.wikipedia.org/wiki?curid=11716020
11717900
Appell's equation of motion
Formulation of classical mechanics &lt;templatestyles src="Hlist/styles.css"/&gt; In classical mechanics, Appell's equation of motion (aka the Gibbs–Appell equation of motion) is an alternative general formulation of classical mechanics described by Josiah Willard Gibbs in 1879 and Paul Émile Appell in 1900. Statement. The Gibbs-Appell equation reads formula_0 where formula_1 is an arbitrary generalized acceleration, or the second time derivative of the generalized coordinates formula_2, and formula_3 is its corresponding generalized force. The generalized force gives the work done formula_4 where the index formula_5 runs over the formula_6 generalized coordinates formula_2, which usually correspond to the degrees of freedom of the system. The function formula_7 is defined as the mass-weighted sum of the particle accelerations squared, formula_8 where the index formula_9 runs over the formula_10 particles, and formula_11 is the acceleration of the formula_12-th particle, the second time derivative of its position vector formula_13. Each formula_13 is expressed in terms of generalized coordinates, and formula_14 is expressed in terms of the generalized accelerations. Relations to other formulations of classical mechanics. Appell's formulation does not introduce any new physics to classical mechanics and as such is equivalent to other reformulations of classical mechanics, such as Lagrangian mechanics, and Hamiltonian mechanics. All classical mechanics is contained within Newton's laws of motion. In some cases, Appell's equation of motion may be more convenient than the commonly used Lagrangian mechanics, particularly when nonholonomic constraints are involved. In fact, Appell's equation leads directly to Lagrange's equations of motion. Moreover, it can be used to derive Kane's equations, which are particularly suited for describing the motion of complex spacecraft. Appell's formulation is an application of Gauss' principle of least constraint. Derivation. The change in the particle positions r"k" for an infinitesimal change in the "D" generalized coordinates is formula_15 Taking two derivatives with respect to time yields an equivalent equation for the accelerations formula_16 The work done by an infinitesimal change "dqr" in the generalized coordinates is formula_17 where Newton's second law for the "k"th particle formula_18 has been used. Substituting the formula for "d"r"k" and swapping the order of the two summations yields the formulae formula_19 Therefore, the generalized forces are formula_20 This equals the derivative of "S" with respect to the generalized accelerations formula_21 yielding Appell's equation of motion formula_22 Examples. Euler's equations of rigid body dynamics. Euler's equations provide an excellent illustration of Appell's formulation. Consider a rigid body of "N" particles joined by rigid rods. The rotation of the body may be described by an angular velocity vector formula_23, and the corresponding angular acceleration vector formula_24 The generalized force for a rotation is the torque formula_25, since the work done for an infinitesimal rotation formula_26 is formula_27. The velocity of the formula_28-th particle is given by formula_29 where formula_30 is the particle's position in Cartesian coordinates; its corresponding acceleration is formula_31 Therefore, the function formula_32 may be written as formula_33 Setting the derivative of "S" with respect to formula_34 equal to the torque yields Euler's equations formula_35 formula_36 formula_37 References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
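As a concrete illustration, the following sympy sketch applies the Gibbs–Appell procedure to a plane pendulum (a standard textbook system chosen here for illustration, not one treated above): it forms the acceleration function "S", differentiates it with respect to the generalized acceleration, and equates the result to the generalized force of gravity.

```python
import sympy as sp

t = sp.symbols('t')
m, l, g = sp.symbols('m l g', positive=True)
theta = sp.Function('theta')(t)

# Cartesian position of the bob, with the pivot at the origin and y upward
x = l * sp.sin(theta)
y = -l * sp.cos(theta)

# S = (1/2) m |a|^2, expressed through the generalized coordinate theta
ax, ay = sp.diff(x, t, 2), sp.diff(y, t, 2)
S = sp.simplify(m * (ax**2 + ay**2) / 2)     # = m l^2 (theta''^2 + theta'^4) / 2

# Appell's equation: dS/d(alpha) = Q, with alpha = theta'' and Q = -dV/dtheta
alpha = sp.diff(theta, t, 2)
lhs = sp.simplify(sp.diff(S, alpha))
V = -m * g * l * sp.cos(theta)               # gravitational potential energy
Q = -sp.diff(V, theta)

print(sp.Eq(lhs, Q))   # m*l**2*theta'' == -g*l*m*sin(theta)
```

The printed result is the familiar pendulum equation; the point of the sketch is only that the same dynamics drops out of differentiating "S" with respect to the generalized acceleration.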
[ { "math_id": 0, "text": "Q_{r} = \\frac{\\partial S}{\\partial \\alpha_{r}}, " }, { "math_id": 1, "text": "\\alpha_r = \\ddot{q}_r" }, { "math_id": 2, "text": "q_r " }, { "math_id": 3, "text": "Q_r " }, { "math_id": 4, "text": "dW = \\sum_{r=1}^{D} Q_{r} dq_{r}, " }, { "math_id": 5, "text": "r " }, { "math_id": 6, "text": "D " }, { "math_id": 7, "text": "S " }, { "math_id": 8, "text": "S = \\frac{1}{2} \\sum_{k=1}^{N} m_{k} \\mathbf{a}_{k}^{2}\\,," }, { "math_id": 9, "text": "k " }, { "math_id": 10, "text": "K " }, { "math_id": 11, "text": "\\mathbf{a}_k = \\ddot{\\mathbf{r}}_k = \\frac{d^2 \\mathbf{r}_k}{dt^2} " }, { "math_id": 12, "text": "\nk\n" }, { "math_id": 13, "text": "\\mathbf{r}_k " }, { "math_id": 14, "text": "\\mathbf{a}_k " }, { "math_id": 15, "text": "\nd\\mathbf{r}_{k} = \\sum_{r=1}^{D} dq_{r} \\frac{\\partial \\mathbf{r}_{k}}{\\partial q_{r}}\n" }, { "math_id": 16, "text": "\n\\frac{\\partial \\mathbf{a}_{k}}{\\partial \\alpha_{r}} = \\frac{\\partial \\mathbf{r}_{k}}{\\partial q_{r}}\n" }, { "math_id": 17, "text": "\ndW = \\sum_{r=1}^{D} Q_{r} dq_{r} = \\sum_{k=1}^{N} \\mathbf{F}_{k} \\cdot d\\mathbf{r}_{k} = \\sum_{k=1}^{N} m_{k} \\mathbf{a}_{k} \\cdot d\\mathbf{r}_{k}\n" }, { "math_id": 18, "text": "\\mathbf{F}_k = m_k\\mathbf{a}_k" }, { "math_id": 19, "text": "\ndW = \\sum_{r=1}^{D} Q_{r} dq_{r} = \\sum_{k=1}^{N} m_{k} \\mathbf{a}_{k} \\cdot \\sum_{r=1}^{D} dq_{r} \\left( \\frac{\\partial \\mathbf{r}_{k}}{\\partial q_{r}} \\right) = \n\\sum_{r=1}^{D} dq_{r} \\sum_{k=1}^{N} m_{k} \\mathbf{a}_{k} \\cdot \\left( \\frac{\\partial \\mathbf{r}_{k}}{\\partial q_{r}} \\right)\n\n" }, { "math_id": 20, "text": "\nQ_{r} = \n\\sum_{k=1}^{N} m_{k} \\mathbf{a}_{k} \\cdot \\left( \\frac{\\partial \\mathbf{r}_{k}}{\\partial q_{r}} \\right) =\n\\sum_{k=1}^{N} m_{k} \\mathbf{a}_{k} \\cdot \\left( \\frac{\\partial \\mathbf{a}_{k}}{\\partial \\alpha_{r}} \\right)\n" }, { "math_id": 21, "text": "\n\\frac{\\partial S}{\\partial \\alpha_{r}} = \n\\frac{\\partial}{\\partial \\alpha_{r}} \\frac{1}{2} \\sum_{k=1}^{N} m_{k} \\left| \\mathbf{a}_{k} \\right|^{2} = \n\\sum_{k=1}^{N} m_{k} \\mathbf{a}_{k} \\cdot \\left( \\frac{\\partial \\mathbf{a}_{k}}{\\partial \\alpha_{r}} \\right)\n" }, { "math_id": 22, "text": "\n\\frac{\\partial S}{\\partial \\alpha_{r}} = Q_{r}.\n" }, { "math_id": 23, "text": "\\boldsymbol\\omega" }, { "math_id": 24, "text": "\n\\boldsymbol\\alpha = \\frac{d\\boldsymbol\\omega}{dt}\n" }, { "math_id": 25, "text": "\\textbf{N}" }, { "math_id": 26, "text": "\\delta \\boldsymbol\\phi" }, { "math_id": 27, "text": "dW = \\mathbf{N} \\cdot \\delta \\boldsymbol\\phi" }, { "math_id": 28, "text": "k" }, { "math_id": 29, "text": "\n\\mathbf{v}_{k} = \\boldsymbol\\omega \\times \\mathbf{r}_{k}\n" }, { "math_id": 30, "text": "\n\\mathbf{r}_{k}\n" }, { "math_id": 31, "text": "\n\\mathbf{a}_{k} = \\frac{d\\mathbf{v}_{k}}{dt} = \n\\boldsymbol\\alpha \\times \\mathbf{r}_{k} + \\boldsymbol\\omega \\times \\mathbf{v}_{k}\n" }, { "math_id": 32, "text": "\nS\n" }, { "math_id": 33, "text": "\nS = \\frac{1}{2} \\sum_{k=1}^{N} m_{k} \\left( \\mathbf{a}_{k} \\cdot \\mathbf{a}_{k} \\right)\n= \\frac{1}{2} \\sum_{k=1}^{N} m_{k} \\left\\{ \\left(\\boldsymbol\\alpha \\times \\mathbf{r}_{k} \\right)^{2} \n+ \\left( \\boldsymbol\\omega \\times \\mathbf{v}_{k} \\right)^{2} \n+ 2 \\left( \\boldsymbol\\alpha \\times \\mathbf{r}_{k} \\right) \\cdot \\left(\\boldsymbol\\omega \\times \\mathbf{v}_{k}\\right) \\right\\}\n" }, { "math_id": 34, "text": "\\boldsymbol\\alpha" }, { "math_id": 35, "text": "\nI_{xx} 
\\alpha_{x} - \\left( I_{yy} - I_{zz} \\right)\\omega_{y} \\omega_{z} = N_{x}\n" }, { "math_id": 36, "text": "\nI_{yy} \\alpha_{y} - \\left( I_{zz} - I_{xx} \\right)\\omega_{z} \\omega_{x} = N_{y}\n" }, { "math_id": 37, "text": "\nI_{zz} \\alpha_{z} - \\left( I_{xx} - I_{yy} \\right)\\omega_{x} \\omega_{y} = N_{z}\n" } ]
https://en.wikipedia.org/wiki?curid=11717900
11718631
Liouville dynamical system
In classical mechanics, a Liouville dynamical system is an exactly solvable dynamical system in which the kinetic energy "T" and potential energy "V" can be expressed in terms of the "s" generalized coordinates "q" as follows: formula_0 formula_1 The solution of this system consists of a set of separably integrable equations formula_2 where "E = T + V" is the conserved energy and the formula_3 are constants. As described below, the variables have been changed from "qs" to φs, and the functions "us" and "ws" substituted by their counterparts "χs" and "ωs". This solution has numerous applications, such as the orbit of a small planet about two fixed stars under the influence of Newtonian gravity. The Liouville dynamical system is one of several things named after Joseph Liouville, an eminent French mathematician. Example of bicentric orbits. In classical mechanics, Euler's three-body problem describes the motion of a particle in a plane under the influence of two fixed centers, each of which attract the particle with an inverse-square force such as Newtonian gravity or Coulomb's law. Examples of the bicenter problem include a planet moving around two slowly moving stars, or an electron moving in the electric field of two positively charged nuclei, such as the first ion of the hydrogen molecule H2, namely the hydrogen molecular ion or H2+. The strength of the two attractions need not be equal; thus, the two stars may have different masses or the nuclei two different charges. Solution. Let the fixed centers of attraction be located along the "x"-axis at ±"a". The potential energy of the moving particle is given by formula_4 The two centers of attraction can be considered as the foci of a set of ellipses. If either center were absent, the particle would move on one of these ellipses, as a solution of the Kepler problem. Therefore, according to Bonnet's theorem, the same ellipses are the solutions for the bicenter problem. Introducing elliptic coordinates, formula_5 formula_6 the potential energy can be written as formula_7 and the kinetic energy as formula_8 This is a Liouville dynamical system if ξ and η are taken as φ1 and φ2, respectively; thus, the function "Y" equals formula_9 and the function "W" equals formula_10 Using the general solution for a Liouville dynamical system below, one obtains formula_11 formula_12 Introducing a parameter "u" by the formula formula_13 gives the parametric solution formula_14 Since these are elliptic integrals, the coordinates ξ and η can be expressed as elliptic functions of "u". Constant of motion. The bicentric problem has a constant of motion, namely, formula_15 from which the problem can be solved using the method of the last multiplier. Derivation. New variables. To eliminate the "v" functions, the variables are changed to an equivalent set formula_16 giving the relation formula_17 which defines a new variable "F". Using the new variables, the u and w functions can be expressed by equivalent functions χ and ω. Denoting the sum of the χ functions by "Y", formula_18 the kinetic energy can be written as formula_19 Similarly, denoting the sum of the ω functions by "W" formula_20 the potential energy "V" can be written as formula_21 Lagrange equation. The Lagrange equation for the "r"th variable formula_22 is formula_23 Multiplying both sides by formula_24, re-arranging, and exploiting the relation 2"T = YF" yields the equation formula_25 which may be written as formula_26 where "E = T + V" is the (conserved) total energy. 
It follows that formula_27 which may be integrated once to yield formula_28 where the formula_29 are constants of integration subject to the energy conservation formula_30 Inverting, taking the square root and separating the variables yields a set of separably integrable equations: formula_31 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
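The usefulness of the elliptic coordinates above rests on the identity that the distances to the two centers of attraction reduce to a(cosh ξ − cos η) and a(cosh ξ + cos η), which is what makes the potential energy take the separable form given. A short sympy check of this identity (a verification sketch, not part of the original derivation):

```python
import sympy as sp

a = sp.symbols('a', positive=True)
xi, eta = sp.symbols('xi eta', real=True)

x = a * sp.cosh(xi) * sp.cos(eta)
y = a * sp.sinh(xi) * sp.sin(eta)

r1_sq = (x - a)**2 + y**2    # squared distance to the center at (+a, 0)
r2_sq = (x + a)**2 + y**2    # squared distance to the center at (-a, 0)

# Both reduce to [a (cosh xi -/+ cos eta)]^2, so V separates in (xi, eta)
print(sp.simplify(r1_sq - (a * (sp.cosh(xi) - sp.cos(eta)))**2))   # 0
print(sp.simplify(r2_sq - (a * (sp.cosh(xi) + sp.cos(eta)))**2))   # 0
```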
[ { "math_id": 0, "text": "\nT = \\frac{1}{2} \\left\\{ u_{1}(q_{1}) + u_{2}(q_{2}) + \\cdots + u_{s}(q_{s}) \\right\\}\n\\left\\{ v_{1}(q_{1}) \\dot{q}_{1}^{2} + v_{2}(q_{2}) \\dot{q}_{2}^{2} + \\cdots + v_{s}(q_{s}) \\dot{q}_{s}^{2} \\right\\}\n" }, { "math_id": 1, "text": "\nV = \\frac{w_{1}(q_{1}) + w_{2}(q_{2}) + \\cdots + w_{s}(q_{s}) }{u_{1}(q_{1}) + u_{2}(q_{2}) + \\cdots + u_{s}(q_{s}) }\n" }, { "math_id": 2, "text": "\n\\frac{\\sqrt{2}}{Y}\\, dt = \\frac{d\\varphi_{1}}{\\sqrt{E \\chi_{1} - \\omega_{1} + \\gamma_{1}}} = \n\\frac{d\\varphi_{2}}{\\sqrt{E \\chi_{2} - \\omega_{2} + \\gamma_{2}}} = \\cdots =\n\\frac{d\\varphi_{s}}{\\sqrt{E \\chi_{s} - \\omega_{s} + \\gamma_{s}}} \n" }, { "math_id": 3, "text": "\\gamma_{s}" }, { "math_id": 4, "text": "\nV(x, y) = \\frac{-\\mu_{1}}{\\sqrt{\\left( x - a \\right)^{2} + y^{2}}} - \\frac{\\mu_{2}}{\\sqrt{\\left( x + a \\right)^{2} + y^{2}}} .\n" }, { "math_id": 5, "text": "\nx = a \\cosh \\xi \\cos \\eta,\n" }, { "math_id": 6, "text": "\ny = a \\sinh \\xi \\sin \\eta,\n" }, { "math_id": 7, "text": "\nV(\\xi, \\eta) = \\frac{-\\mu_{1}}{a\\left( \\cosh \\xi - \\cos \\eta \\right)} - \\frac{\\mu_{2}}{a\\left( \\cosh \\xi + \\cos \\eta \\right)}\n= \\frac{-\\mu_{1} \\left( \\cosh \\xi + \\cos \\eta \\right) - \\mu_{2} \\left( \\cosh \\xi - \\cos \\eta \\right)}{a\\left( \\cosh^{2} \\xi - \\cos^{2} \\eta \\right)}," }, { "math_id": 8, "text": "\nT = \\frac{ma^{2}}{2} \\left( \\cosh^{2} \\xi - \\cos^{2} \\eta \\right) \\left( \\dot{\\xi}^{2} + \\dot{\\eta}^{2} \\right).\n" }, { "math_id": 9, "text": "\nY = \\cosh^{2} \\xi - \\cos^{2} \\eta\n" }, { "math_id": 10, "text": "\nW = -\\mu_{1} \\left( \\cosh \\xi + \\cos \\eta \\right) - \\mu_{2} \\left( \\cosh \\xi - \\cos \\eta \\right) \n" }, { "math_id": 11, "text": "\n\\frac{ma^{2}}{2} \\left( \\cosh^{2} \\xi - \\cos^{2} \\eta \\right)^{2} \\dot{\\xi}^{2} = E \\cosh^{2} \\xi + \\left( \\frac{\\mu_{1} + \\mu_{2}}{a} \\right) \\cosh \\xi - \\gamma\n" }, { "math_id": 12, "text": "\n\\frac{ma^{2}}{2} \\left( \\cosh^{2} \\xi - \\cos^{2} \\eta \\right)^{2} \\dot{\\eta}^{2} = -E \\cos^{2} \\eta + \\left( \\frac{\\mu_{1} - \\mu_{2}}{a} \\right) \\cos \\eta + \\gamma\n" }, { "math_id": 13, "text": "\ndu = \\frac{d\\xi}{\\sqrt{E \\cosh^{2} \\xi + \\left( \\frac{\\mu_{1} + \\mu_{2}}{a} \\right) \\cosh \\xi - \\gamma}} = \n\\frac{d\\eta}{\\sqrt{-E \\cos^{2} \\eta + \\left( \\frac{\\mu_{1} - \\mu_{2}}{a} \\right) \\cos \\eta + \\gamma}},\n" }, { "math_id": 14, "text": "\nu = \\int \\frac{d\\xi}{\\sqrt{E \\cosh^{2} \\xi + \\left( \\frac{\\mu_{1} + \\mu_{2}}{a} \\right) \\cosh \\xi - \\gamma}} = \n\\int \\frac{d\\eta}{\\sqrt{-E \\cos^{2} \\eta + \\left( \\frac{\\mu_{1} - \\mu_{2}}{a} \\right) \\cos \\eta + \\gamma}}.\n" }, { "math_id": 15, "text": "\nr_{1}^{2}\\,r_{2}^{2} \\frac{d\\theta_{1}}{dt} \\frac{d\\theta_{2}}{dt} + \n2\\,c \\left( \\mu_{1} \\cos \\theta_{1} - \\mu_{2} \\cos \\theta_{2} \\right),\n" }, { "math_id": 16, "text": "\n\\varphi_{r} = \\int dq_{r} \\sqrt{v_{r}(q_{r})},\n" }, { "math_id": 17, "text": "\nv_{1}(q_{1}) \\dot{q}_{1}^{2} + v_{2}(q_{2}) \\dot{q}_{2}^{2} + \\cdots + v_{s}(q_{s}) \\dot{q}_{s}^{2} =\n\\dot{\\varphi}_{1}^{2} + \\dot{\\varphi}_{2}^{2} + \\cdots + \\dot{\\varphi}_{s}^{2} = F,\n" }, { "math_id": 18, "text": "\nY = \\chi_{1}(\\varphi_{1}) + \\chi_{2}(\\varphi_{2}) + \\cdots + \\chi_{s}(\\varphi_{s}),\n" }, { "math_id": 19, "text": "\nT = \\frac{1}{2} Y F.\n" }, { "math_id": 20, "text": "\nW = \\omega_{1}(\\varphi_{1}) + \\omega_{2}(\\varphi_{2}) + \\cdots + 
\\omega_{s}(\\varphi_{s}),\n" }, { "math_id": 21, "text": "\nV = \\frac{W}{Y}.\n" }, { "math_id": 22, "text": "\\varphi_{r}" }, { "math_id": 23, "text": "\n\\frac{d}{dt} \\left( \\frac{\\partial T}{\\partial \\dot{\\varphi}_{r}} \\right) = \n\\frac{d}{dt} \\left( Y \\dot{\\varphi}_{r} \\right) = \\frac{1}{2} F \\frac{\\partial Y}{\\partial \\varphi_{r}} \n-\\frac{\\partial V}{\\partial \\varphi_{r}}.\n" }, { "math_id": 24, "text": "2 Y \\dot{\\varphi}_{r}" }, { "math_id": 25, "text": "\n2 Y \\dot{\\varphi}_{r} \\frac{d}{dt} \\left(Y \\dot{\\varphi}_{r}\\right) = \n2T\\dot{\\varphi}_{r} \\frac{\\partial Y}{\\partial \\varphi_{r}} - 2 Y \\dot{\\varphi}_{r} \\frac{\\partial V}{\\partial \\varphi_{r}} = \n2 \\dot{\\varphi}_{r} \\frac{\\partial}{\\partial \\varphi_{r}} \\left[ (E-V) Y \\right],\n" }, { "math_id": 26, "text": "\n\\frac{d}{dt} \\left(Y^{2} \\dot{\\varphi}_{r}^{2} \\right) = \n2 E \\dot{\\varphi}_{r} \\frac{\\partial Y}{\\partial \\varphi_{r}} - 2 \\dot{\\varphi}_{r} \\frac{\\partial W}{\\partial \\varphi_{r}} = \n2E \\dot{\\varphi}_{r} \\frac{d\\chi_{r} }{d\\varphi_{r}} - 2 \\dot{\\varphi}_{r} \\frac{d\\omega_{r}}{d\\varphi_{r}},\n" }, { "math_id": 27, "text": "\n\\frac{d}{dt} \\left(Y^{2} \\dot{\\varphi}_{r}^{2} \\right) = \n2\\frac{d}{dt} \\left( E \\chi_{r} - \\omega_{r} \\right),\n" }, { "math_id": 28, "text": "\n\\frac{1}{2} Y^{2} \\dot{\\varphi}_{r}^{2} = E \\chi_{r} - \\omega_{r} + \\gamma_{r},\n" }, { "math_id": 29, "text": "\\gamma_{r}" }, { "math_id": 30, "text": "\n\\sum_{r=1}^{s} \\gamma_{r} = 0.\n" }, { "math_id": 31, "text": "\n\\frac{\\sqrt{2}}{Y} dt = \\frac{d\\varphi_{1}}{\\sqrt{E \\chi_{1} - \\omega_{1} + \\gamma_{1}}} = \n\\frac{d\\varphi_{2}}{\\sqrt{E \\chi_{2} - \\omega_{2} + \\gamma_{2}}} = \\cdots =\n\\frac{d\\varphi_{s}}{\\sqrt{E \\chi_{s} - \\omega_{s} + \\gamma_{s}}}.\n" } ]
https://en.wikipedia.org/wiki?curid=11718631
1171957
Floating rate note
Bonds that have a variable coupon Floating rate notes (FRNs) are bonds that have a variable coupon, equal to a money market reference rate, like SOFR or federal funds rate, plus a quoted spread (also known as quoted margin). The spread is a rate that remains constant. Almost all FRNs have quarterly coupons, i.e. they pay out interest every three months. At the beginning of each coupon period, the coupon is calculated by taking the fixing of the reference rate for that day and adding the spread. A typical coupon would look like 3 months USD SOFR +0.20%. Issuers. In the United States, banks and financial service companies have been among the largest issuers of these securities. The U.S. Treasury began issuing them in 2014, and government sponsored enterprises (GSEs) such as the Federal Home Loan Banks, the Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac) are important issuers. In Europe, the main issuers are banks. Variations. Some FRNs have special features such as maximum or minimum coupons, called "capped FRNs" and "floored FRNs". Those with both minimum and maximum coupons are called "collared FRNs". "Perpetual FRNs" are another form of FRNs that are also called irredeemable or undated FRNs and are akin to a form of capital. FRNs can also be obtained synthetically by the combination of a fixed rate bond and an interest rate swap. This combination is known as an asset swap. A deleveraged floating-rate note is one bearing a coupon that is the product of the index and a leverage factor, where the leverage factor is between zero and one. A deleveraged floater, which gives the investor decreased exposure to the underlying index, can be replicated by buying a pure FRN and entering into a swap to pay floating and receive fixed, on a notional amount of less than the face value of the FRN. Deleveraged FRN = long pure FRN + short (1 - leverage factor) x swap A leveraged or super floater gives the investor increased exposure to an underlying index: the leverage factor is always greater than one. Leveraged floaters also require a floor, since the coupon rate can never be negative. Leveraged FRN = long pure FRN + long (leverage factor - 1) x swap + long (leverage factor) x floor Risks. Credit risk. Floating-rate notes issued by corporations, such as banks and financial firms, are subject to credit risk, depending on the credit-worthiness of the issuer. Those issued by the U.S. Treasury, which entered the market in 2014, are traditionally regarded as having minimal credit risk. Interest rate risk. Opinion is divided as to the efficacy of floating-rate notes in protecting the investor from interest rate risk. Some believe that these securities carry little interest rate risk because 1) a floating rate note's Macaulay Duration is approximately equal to the time remaining until the next interest rate adjustment; therefore its price shows very low sensitivity to changes in market rates; and 2) when market rates rise, the expected coupons of the FRN increase in line with the increase in forward rates, which means its price remains constant, as compared to fixed rate bonds, whose prices decline when market rates rise. This point of view holds that floating rate notes are conservative investments for investors who believe market rates will increase. A somewhat different view is held by author Dr. Annette Thau: "The rationale for floaters is that as interest rates change, resetting the coupon rate... 
will tend to maintain the price of the bond at or close to par. In practice this has tended not to work out quite as well as had been hoped, for a number of reasons. First, during times of extreme interest rate volatility, rates are not reset quickly enough to prevent price fluctuations. Second, the coupon rates of floaters are usually well below those of long-term bonds and often not very attractive when compared to shorter maturity bonds." Complexity. Commenting on the complexity of these securities, Richard S. Wilson of the credit rating firm Fitch Investors Services noted: "Financial engineers worked overtime on floating-rate securities and have created debt instruments with a variety of terms and features different from those of conventional fixed-coupon bonds...The major investment firms with their worldwide trading capabilities participate in these markets 24 hours a day. But floaters are complex instruments, and investors who don't understand them should stay away. This applies to individuals as well as institutional portfolio managers." Trading. Securities dealers make markets in FRNs. They are traded over-the-counter, instead of on a stock exchange. In Europe, most FRNs are liquid, as the biggest investors are banks. In the U.S., FRNs are mostly held to maturity, so the markets aren't as liquid. In the wholesale markets, FRNs are typically quoted as a spread over the reference rate. Example. Suppose a new 5 year FRN pays a coupon of 3 months SOFR +0.20%, and is issued at par (100.00). If the perception of the credit-worthiness of the issuer goes down, investors will demand a higher interest rate, say SOFR +0.25%. If a trade is agreed, the price is calculated. In this example, SOFR +0.25% would be roughly equivalent to a price of 99.75. This can be calculated as par, minus the difference between the newly agreed spread and the original quoted spread (0.05%), multiplied by the maturity (5 years). Yield measures. Metrics such as yield to maturity and internal rate of return cannot be used to estimate the potential return from a floating rate note. That is the case because it is impossible to forecast the stream of coupon payments with accuracy, since they are tied to a benchmark that is constantly subject to change. Instead, metrics known as the effective spread and the simple margin can be used. Effective spread. The effective spread is the average margin over the benchmark rate that is expected to be earned over the life of the security. For a floating rate note selling at par value, the effective margin is merely the contractual spread over the benchmark rate specified in the note's prospectus. For notes that sell at a discount or premium, finance scholar Dr. Frank Fabozzi outlines a present value approach: project the future coupon cash flows assuming that the benchmark rate does not change and find the discount rate that makes the present value of the future cash flows equal to the market price of the note. That discount rate is the effective spread. This approach takes into account the premium or discount to par value and the time value of money, but suffers from the simplifying assumption that holds the benchmark rate at a single value for the life of the note. Simple margin. A simpler approach begins with computing the sum of the quoted spread of the FRN and the capital gain (or loss) an investor will earn if the note is held to maturity: formula_0 Second, adjust the above for the fact that the note is bought at a discount or premium to the nominal value: formula_1 References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
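The simple margin computation above is easy to script; the following Python sketch (function name illustrative) reproduces the article's example, where a 5-year note with a 0.20% quoted spread trading at 99.75 works out to roughly a 0.25% margin:

```python
def simple_margin(clean_price, maturity_years, quoted_spread):
    """Simple margin in percent, for a price per 100 of nominal value
    and a quoted spread in percent."""
    gain = (100.0 - clean_price) / maturity_years + quoted_spread
    return (100.0 / clean_price) * gain

print(simple_margin(clean_price=99.75, maturity_years=5, quoted_spread=0.20))
# ~0.2506, i.e. roughly the SOFR +0.25% at which the note was repriced
```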
[ { "math_id": 0, "text": "\n\\frac{100 - \\text{Clean price}}{\\text{Maturity in years} } + \\text{Spread}.\n" }, { "math_id": 1, "text": "\n\\frac{100}{\\text{Clean price}} \\times \\left(\\frac{100 - \\text{Clean price}}{\\text{Maturity in years} } + \\text{Spread}\\right).\n" } ]
https://en.wikipedia.org/wiki?curid=1171957
11720017
Log-Laplace distribution
In probability theory and statistics, the log-Laplace distribution is the probability distribution of a random variable whose logarithm has a Laplace distribution. If "X" has a Laplace distribution with parameters "μ" and "b", then "Y" = "e""X" has a log-Laplace distribution. The distributional properties can be derived from the Laplace distribution. Characterization. A random variable has a log-Laplace("μ", "b") distribution if its probability density function is: formula_0 The cumulative distribution function for "Y" when "y" &gt; 0, is formula_1 Generalization. Versions of the log-Laplace distribution based on an asymmetric Laplace distribution also exist. Depending on the parameters, including asymmetry, the log-Laplace may or may not have a finite mean and a finite variance. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
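A direct transcription of the density and distribution function into Python (a minimal sketch; the function names are illustrative, and for μ = 0 the result should agree with scipy.stats.loglaplace with shape parameter "c" = 1/"b"):

```python
import numpy as np

def loglaplace_pdf(x, mu=0.0, b=1.0):
    # f(x | mu, b) = exp(-|ln x - mu| / b) / (2 b x), for x > 0
    x = np.asarray(x, dtype=float)
    return np.exp(-np.abs(np.log(x) - mu) / b) / (2.0 * b * x)

def loglaplace_cdf(y, mu=0.0, b=1.0):
    # F(y) = 0.5 [1 + sgn(ln y - mu) (1 - exp(-|ln y - mu| / b))]
    z = np.log(np.asarray(y, dtype=float)) - mu
    return 0.5 * (1.0 + np.sign(z) * (1.0 - np.exp(-np.abs(z) / b)))

print(loglaplace_cdf(1.0))   # 0.5: the median of log-Laplace(0, 1) is e^0 = 1
```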
[ { "math_id": 0, "text": "f(x|\\mu,b) = \\frac{1}{2bx} \\exp \\left( -\\frac{|\\ln x-\\mu|}{b} \\right) " }, { "math_id": 1, "text": "F(y) = 0.5\\,[1 + \\sgn(\\ln(y)-\\mu)\\,(1-\\exp(-|\\ln(y)-\\mu|/b))]." } ]
https://en.wikipedia.org/wiki?curid=11720017
11720315
Hilbert's theorem (differential geometry)
No complete regular surface of constant negative Gaussian curvature immerses in R3 In differential geometry, Hilbert's theorem (1901) states that there exists no complete regular surface formula_0 of constant negative Gaussian curvature formula_1 immersed in formula_2. This theorem answers, for the negative case, the question of which surfaces in formula_2 can be obtained by isometrically immersing complete manifolds with constant curvature. Proof. The proof of Hilbert's theorem is elaborate and requires several lemmas. The idea is to show the nonexistence of an isometric immersion formula_3 of a plane formula_4 to the real space formula_2. This proof is basically the same as in Hilbert's paper, although based on the books of Do Carmo and Spivak. "Observations": In order to have a more manageable treatment, but without loss of generality, the curvature may be considered equal to minus one, formula_5. There is no loss of generality, since the theorem deals with constant curvatures, and similarities of formula_2 multiply formula_1 by a constant. The exponential map formula_6 is a local diffeomorphism (in fact a covering map, by the Cartan–Hadamard theorem); therefore, it induces an inner product in the tangent space of formula_0 at formula_7: formula_8. Furthermore, formula_4 denotes the geometric surface formula_8 with this inner product. If formula_9 is an isometric immersion, the same holds for formula_10. The first lemma is independent from the other ones, and will be used at the end as the counter statement to reject the results from the other lemmas. Lemma 1: The area of formula_4 is infinite. "Proof sketch:" The idea of the proof is to create a global isometry between formula_11 and formula_4. Then, since formula_11 has an infinite area, formula_4 will have it too. The fact that the hyperbolic plane formula_11 has an infinite area comes from computing the surface integral with the corresponding coefficients of the first fundamental form. To obtain these, the hyperbolic plane can be defined as the plane with the following inner product around a point formula_12 with coordinates formula_13 formula_14 Since the hyperbolic plane is unbounded, the limits of the integral are infinite, and the area can be calculated through formula_15 Next, it is necessary to create a map which will show that the global information from the hyperbolic plane can be transferred to the surface formula_4, i.e. a global isometry. formula_16 will be the map, whose domain is the hyperbolic plane and whose image is the 2-dimensional manifold formula_4, which carries the inner product from the surface formula_0 with negative curvature. formula_17 will be defined via the exponential map, its inverse, and a linear isometry between their tangent spaces, formula_18. That is formula_19, where formula_20. That is to say, the starting point formula_21 goes to the tangent plane from formula_11 through the inverse of the exponential map. Then it travels from one tangent plane to the other through the isometry formula_22, and then down to the surface formula_4 with another exponential map. The following step involves the use of polar coordinates, formula_23 and formula_24, around formula_7 and formula_25 respectively. The requirement will be that the axes are mapped to each other, that is formula_26 goes to formula_27. Then formula_17 preserves the first fundamental form. In a geodesic polar system, the Gaussian curvature formula_1 can be expressed as formula_28. 
In addition, K is constant and fulfills the following differential equation formula_29 Since formula_11 and formula_4 have the same constant Gaussian curvature, they are locally isometric (Minding's theorem). That means that formula_17 is a local isometry between formula_11 and formula_4. Furthermore, from Hadamard's theorem it follows that formula_17 is also a covering map. Since formula_4 is simply connected, formula_17 is a homeomorphism, and hence a (global) isometry. Therefore, formula_11 and formula_4 are globally isometric, and because formula_11 has an infinite area, formula_30 has an infinite area as well. formula_31 Lemma 2: For each formula_32 there exists a parametrization formula_33, such that the coordinate curves of formula_34 are asymptotic curves of formula_35 and form a Tchebyshef net. Lemma 3: Let formula_36 be a coordinate neighborhood of formula_4 such that the coordinate curves are asymptotic curves in formula_37. Then the area A of any quadrilateral formed by the coordinate curves is smaller than formula_38. The next goal is to show that formula_34 is a parametrization of formula_4. Lemma 4: For a fixed formula_39, the curve formula_40 is an asymptotic curve with formula_41 as arc length. The following two lemmas, together with Lemma 8, will demonstrate the existence of a parametrization formula_42 Lemma 5: formula_34 is a local diffeomorphism. Lemma 6: formula_34 is surjective. Lemma 7: On formula_4 there are two differentiable linearly independent vector fields which are tangent to the asymptotic curves of formula_4. Lemma 8: formula_34 is injective. "Proof of Hilbert's theorem:" First, it will be assumed that an isometric immersion from a complete surface formula_0 with negative curvature exists: formula_9 As stated in the observations, the tangent plane formula_8 is endowed with the metric induced by the exponential map formula_6. Moreover, formula_43 is an isometric immersion and Lemmas 5, 6, and 8 show the existence of a parametrization formula_42 of the whole formula_4, such that the coordinate curves of formula_34 are the asymptotic curves of formula_4. This result was provided by Lemma 4. Therefore, formula_4 can be covered by a union of "coordinate" quadrilaterals formula_44 with formula_45. By Lemma 3, the area of each quadrilateral is smaller than formula_46. On the other hand, by Lemma 1, the area of formula_4 is infinite and therefore has no bound. This is a contradiction, and the proof is concluded. formula_31 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
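The geodesic polar relation used above can be checked symbolically. The following sympy sketch assumes the standard hyperbolic metric coefficient √G = sinh ρ in geodesic polar coordinates and verifies both the differential equation and the curvature formula:

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)

# Geodesic polar coordinates on the hyperbolic plane (K = -1): sqrt(G) = sinh(rho)
sqrtG = sp.sinh(rho)

# The metric coefficient satisfies (sqrt(G))'' + K*sqrt(G) = 0 with K = -1 ...
print(sp.simplify(sp.diff(sqrtG, rho, 2) - sqrtG))    # 0

# ... and the curvature formula K = -(sqrt(G))''/sqrt(G) recovers K = -1
print(sp.simplify(-sp.diff(sqrtG, rho, 2) / sqrtG))   # -1
```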
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "K" }, { "math_id": 2, "text": "\\mathbb{R}^{3}" }, { "math_id": 3, "text": "\\varphi = \\psi \\circ \\exp_p: S' \\longrightarrow \\mathbb{R}^{3}" }, { "math_id": 4, "text": "S'" }, { "math_id": 5, "text": "K=-1" }, { "math_id": 6, "text": "\\exp_p: T_p(S) \\longrightarrow S" }, { "math_id": 7, "text": "p" }, { "math_id": 8, "text": "T_p(S)" }, { "math_id": 9, "text": "\\psi:S \\longrightarrow \\mathbb{R}^{3}" }, { "math_id": 10, "text": "\\varphi = \\psi \\circ \\exp_o:S' \\longrightarrow \\mathbb{R}^{3}" }, { "math_id": 11, "text": "H" }, { "math_id": 12, "text": "q\\in \\mathbb{R}^{2}" }, { "math_id": 13, "text": "(u,v)" }, { "math_id": 14, "text": "E = \\left\\langle \\frac{\\partial}{\\partial u}, \\frac{\\partial}{\\partial u} \\right\\rangle = 1 \\qquad F = \\left\\langle \\frac{\\partial}{\\partial u}, \\frac{\\partial}{\\partial v} \\right\\rangle = \\left\\langle \\frac{\\partial}{\\partial v}, \\frac{\\partial}{\\partial u} \\right\\rangle = 0 \\qquad G = \\left\\langle \\frac{\\partial}{\\partial v}, \\frac{\\partial}{\\partial v} \\right\\rangle = e^{u} " }, { "math_id": 15, "text": "\\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty} e^{u} du dv = \\infty" }, { "math_id": 16, "text": "\\varphi: H \\rightarrow S'" }, { "math_id": 17, "text": "\\varphi" }, { "math_id": 18, "text": "\\psi:T_p(H) \\rightarrow T_{p'}(S')" }, { "math_id": 19, "text": "\\varphi = \\exp_{p'} \\circ \\psi \\circ \\exp_p^{-1}" }, { "math_id": 20, "text": "p\\in H, p' \\in S'" }, { "math_id": 21, "text": "p\\in H" }, { "math_id": 22, "text": "\\psi" }, { "math_id": 23, "text": "(\\rho, \\theta)" }, { "math_id": 24, "text": "(\\rho', \\theta')" }, { "math_id": 25, "text": "p'" }, { "math_id": 26, "text": "\\theta=0" }, { "math_id": 27, "text": "\\theta'=0" }, { "math_id": 28, "text": "K = - \\frac{(\\sqrt{G})_{\\rho \\rho}}{\\sqrt{G}}" }, { "math_id": 29, "text": "(\\sqrt{G})_{\\rho \\rho} + K\\cdot \\sqrt{G} = 0" }, { "math_id": 30, "text": "S'=T_p(S)" }, { "math_id": 31, "text": "\\square" }, { "math_id": 32, "text": "p\\in S'" }, { "math_id": 33, "text": "x:U \\subset \\mathbb{R}^{2} \\longrightarrow S', \\qquad p \\in x(U)" }, { "math_id": 34, "text": "x" }, { "math_id": 35, "text": " x(U) = V'" }, { "math_id": 36, "text": "V' \\subset S'" }, { "math_id": 37, "text": "V'" }, { "math_id": 38, "text": "2\\pi" }, { "math_id": 39, "text": "t" }, { "math_id": 40, "text": "x(s,t), -\\infty < s < +\\infty " }, { "math_id": 41, "text": "s" }, { "math_id": 42, "text": "x:\\mathbb{R}^{2} \\longrightarrow S'" }, { "math_id": 43, "text": "\\varphi = \\psi \\circ \\exp_p:S' \\longrightarrow \\mathbb{R}^{3}" }, { "math_id": 44, "text": "Q_{n}" }, { "math_id": 45, "text": " Q_{n} \\subset Q_{n+1}" }, { "math_id": 46, "text": "2 \\pi " } ]
https://en.wikipedia.org/wiki?curid=11720315
1172161
Biaxial nematic
A biaxial nematic is a spatially homogeneous liquid crystal with three distinct optical axes. This is to be contrasted with a simple nematic, which has a single preferred axis, around which the system is rotationally symmetric. The symmetry group of a biaxial nematic is formula_0, i.e. that of a rectangular right parallelepiped, having three orthogonal formula_1 axes and three orthogonal mirror planes. In a frame co-aligned with the optical axes, the second-rank order parameter tensor, the so-called Q tensor, of a biaxial nematic has the form formula_2 where formula_3 is the standard nematic scalar order parameter and formula_4 is a measure of the biaxiality. The first report of a thermotropic biaxial nematic appeared in 2004, based on a boomerang-shaped oxadiazole bent-core mesogen. The biaxial nematic phase for this particular compound only occurs at temperatures around 200 °C and is preceded by as-yet-unidentified smectic phases. It is also found that this material can segregate into chiral domains of opposite handedness. For this to happen, the boomerang-shaped molecules adopt a helical superstructure. In one azo bent-core mesogen, a thermal transition is found from a uniaxial Nu to a biaxial nematic Nb mesophase, as predicted by theory and simulation. This transition is observed on heating from the Nu phase with polarizing optical microscopy as a change in Schlieren texture and increased light transmittance, and from X-ray diffraction as the splitting of the nematic reflection. The transition is a second-order transition with low energy content and is therefore not observed in differential scanning calorimetry. The positional correlation length for the uniaxial nematic phase is 0.75 to 1.5 times the mesogen length, and for the biaxial nematic phase 2 to 3.3 times the mesogen length. Another strategy towards biaxial nematics is the use of mixtures of classical rodlike mesogens and disklike discotic mesogens. The biaxial nematic phase is expected to be located below the minimum in the rod-disk phase diagram. In one study, a miscible system of rods and disks was actually found, although the biaxial nematic phase remains elusive. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "D_{2h}" }, { "math_id": 1, "text": "C_2" }, { "math_id": 2, "text": "\n\\mathbf Q=\n\\begin{pmatrix}\n-\\frac{1}{2}(S+P) & 0 &0 \\\\\n0 &-\\frac{1}{2}(S-P) & 0 \\\\\n0 & 0& S\\\\\n\\end{pmatrix} \n" }, { "math_id": 3, "text": "S" }, { "math_id": 4, "text": "P" } ]
https://en.wikipedia.org/wiki?curid=1172161
1172230
Saponification value
Milligrams of a base required to saponify 1 g of fat Saponification value or saponification number (SV or SN) represents the number of milligrams of potassium hydroxide (KOH) or sodium hydroxide (NaOH) required to saponify one gram of fat under the conditions specified. It is a measure of the average molecular weight (or chain length) of all the fatty acids present in the sample in the form of triglycerides. The higher the saponification value, the shorter the average fatty acid chain length and the lower the mean molecular weight of the triglycerides, and vice versa. Practically, fats or oils with a high saponification value (such as coconut and palm oil) are more suitable for soap making. Determination. To determine saponification value, the sample is treated with an excess of alkali (usually an ethanolic solution of potassium hydroxide) for half an hour under reflux. The KOH is consumed by reaction with triglycerides, which consume three equivalents of base. Diglycerides consume two equivalents of KOH. Monoglycerides and free fatty acids, as well as other esters such as lactones, consume one equivalent of base. At the end of the reaction the quantity of remaining KOH is determined by titration using a standard solution of hydrochloric acid (HCl). Key to the method is the use of phenolphthalein indicator, which indicates the consumption of the strong base (KOH) by the acid, not the weak base (potassium carboxylates). The SV (mg KOH/g of sample) is calculated as follows: SV = (formula_0 − formula_1) × formula_2 × 56.1 / formula_3 (Eq. 1) where: formula_0 is the volume of HCl solution used for the blank run, in mL; formula_1 is the volume of HCl solution used for the tested sample, in mL; formula_2 is the molarity of the HCl solution, in mol/L; 56.1 is the molecular weight of KOH, in g/mol; formula_3 is the weight of the sample, in g. Standard methods for the determination of the SV of vegetable and animal fats are available from several standards bodies. The SV can also be calculated from the fatty acid composition as determined by gas chromatography (AOCS Cd 3a-94). Handmade soap makers who aim for bar soap use sodium hydroxide (NaOH), commonly known as lye, rather than KOH (caustic potash), which produces soft paste, gel or liquid soaps. In order to calculate the lye amount needed to make bar soap, KOH values of SV can be converted to NaOH values by dividing the KOH values by the ratio of the molecular weights of KOH and NaOH (1.403). Calculation of average molecular weight of fats and oils. The theoretical SV of a pure triglyceride molecule can be calculated by the following equation (where MW is its molecular weight): SV = 3 × 56.1 × 1000 / MW (Eq. 2) where: 3 is the number of fatty acid residues per triglyceride, 1000 is the conversion factor from grams to milligrams, and 56.1 is the molar mass of KOH. For instance, triolein, a triglyceride occurring in many fats and oils, has three oleic acid residues esterified to a molecule of glycerol with a total MW of 885.4 (g/mol). Therefore, its SV equals 190 mg KOH/g sample. In comparison, trilaurin, with three shorter fatty acid residues (lauric acid), has a MW of 639 and an SV of 263. As can be seen from equation (2), the SV of a given fat is inversely proportional to its molecular weight. Actually, as fats and oils contain a mix of different triglyceride species, the average MW can be calculated according to the following relation: MW = 3 × 56.1 × 1000 / SV (Eq. 3) This means that coconut oil, with an abundance of medium-chain fatty acids (mainly lauric acid), contains more fatty acids per unit of weight than, for example, olive oil (mainly oleic acid).
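The relations above (Eq. 2, Eq. 3 and the KOH-to-NaOH conversion) can be sketched numerically; this is only an illustration, using the molecular weights quoted in the text:

MW_KOH, MW_NAOH = 56.1, 40.0

def sv_from_mw(mw):
    # Eq. 2: SV (mg KOH/g) of a pure triglyceride of molecular weight mw
    return 3 * MW_KOH * 1000 / mw

def mean_mw_from_sv(sv):
    # Eq. 3: average triglyceride molecular weight from a measured SV
    return 3 * MW_KOH * 1000 / sv

def koh_to_naoh(sv_koh):
    # divide by the MW ratio of KOH to NaOH (about 1.403) for bar-soap lye
    return sv_koh / (MW_KOH / MW_NAOH)

print(round(sv_from_mw(885.4)))  # triolein  -> 190
print(round(sv_from_mw(639.0)))  # trilaurin -> 263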
Coconut oil therefore presents more saponifiable ester functions per gram, which means that more KOH is required to saponify the same amount of matter, and thus it has a higher SV. The calculated molecular weight (Eq. 3) is not applicable to fats and oils containing high amounts of unsaponifiable material, free fatty acids (&gt; 0.1%), or mono- and diacylglycerols (&gt; 0.1%). Unsaponifiables. Unsaponifiables are components of a fatty substance (oil, fat, wax) that fail to form soaps when treated with alkali and remain insoluble in water but soluble in organic solvents. For instance, typical soybean oil contains, by weight, 1.5–2.5% of unsaponifiable matter. Unsaponifiables include nonvolatile components: alkanes, sterols, triterpenes, fatty alcohols, tocopherols and carotenoids, as well as those that mainly result from the saponification of fatty esters (sterol esters, wax esters, tocopherol esters, ...). This fraction may also contain environmental contaminants and residues of plasticizers, pesticides, mineral oil hydrocarbons and aromatics. Unsaponifiable constituents are an important consideration when selecting oil mixtures for the manufacture of soaps. Unsaponifiables can be beneficial to a soap formula because they may have properties such as moisturization, conditioning, antioxidant activity, texturing, etc. On the other hand, when the proportion of unsaponifiables is too high (&gt; 3%), or when the specific unsaponifiables present do not provide significant benefits, a defective or inferior soap product can result. For example, shark oil is not suitable for soap making as it may contain more than 10% of unsaponifiable matter. For edible oils, the tolerated limit of unsaponifiable matter is 1.5% (olive, refined soybean), while inferior-quality crude or pomace oil could reach 3%. Determination of unsaponifiables involves a saponification step of the sample followed by extraction of the unsaponifiables using an organic solvent (e.g., diethyl ether). Official methods for animal and vegetable fats and oils are described by ASTM D1065-18, ISO 3596:2000 or ISO 18609:2000, and AOCS method Ca 6a-40. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\textrm{B}" }, { "math_id": 1, "text": "\\textrm{S}" }, { "math_id": 2, "text": "\\textrm{M}" }, { "math_id": 3, "text": "{\\textrm{W}_\\textrm{oil/fat}}" } ]
https://en.wikipedia.org/wiki?curid=1172230
11724178
1s Slater-type function
A normalized 1s Slater-type function is a function which is used in the description of atoms and, more broadly, in the description of atoms in molecules. It is particularly important as the accurate quantum theory description of the smallest free atom, hydrogen. It has the form formula_0 It is a particular case of a Slater-type orbital (STO) in which the principal quantum number n is 1. The parameter formula_1 is called the Slater orbital exponent. Related sets of functions can be used to construct STO-nG basis sets, which are used in quantum chemistry. Applications for hydrogen-like atomic systems. A hydrogen-like atom or a hydrogenic atom is an atom with one electron. Except for the hydrogen atom itself (which is neutral), these atoms carry the positive charge formula_2, where formula_3 is the atomic number of the atom. Because hydrogen-like atoms are two-particle systems with an interaction depending only on the distance between the two particles, their (non-relativistic) Schrödinger equation can be exactly solved in analytic form. The solutions are one-electron functions and are referred to as "hydrogen-like atomic orbitals". The electronic Hamiltonian (in atomic units) of a hydrogenic system is given by formula_4, where formula_3 is the nuclear charge of the hydrogenic atomic system. The 1s electron of a hydrogenic system can be accurately described by the corresponding Slater orbital: formula_5, where formula_6 is the Slater exponent. This state, the ground state, is the only state that can be described by a Slater orbital. Slater orbitals have no radial nodes, while the excited states of the hydrogen atom have radial nodes. Exact energy of a hydrogen-like atom. The energy of a hydrogenic system can be calculated exactly and analytically as follows: formula_7, where formula_8 formula_9 formula_10 formula_11. Using the expression for the Slater orbital, formula_5, the integrals can be solved exactly. Thus, formula_12 formula_13 The optimum value for formula_6 is obtained by setting the derivative of the energy with respect to formula_6 equal to zero: formula_14. Thus formula_15 Non-relativistic energy. The following energy values are thus calculated by using the expressions for the energy and for the Slater exponent. Hydrogen: H, with formula_16 and formula_17, has formula_18 −0.5 Eh = −13.60569850 eV = −313.75450000 kcal/mol. Gold: Au(78+), with formula_19 and formula_20, has formula_18 −3120.5 Eh = −84913.16433850 eV = −1958141.8345 kcal/mol. Relativistic energy of hydrogenic atomic systems. Hydrogenic atomic systems are suitable models to demonstrate the relativistic effects in atomic systems in a simple way. The energy expectation value can be calculated by using the Slater orbitals with or without considering the relativistic correction for the Slater exponent formula_21. The relativistically corrected Slater exponent formula_22 is given as formula_23. The relativistic energy of an electron in the 1s orbital of a hydrogenic atomic system is obtained by solving the Dirac equation: formula_24. Such calculations illustrate the relativistic corrections in energy, and show how the relativistic correction scales with the atomic number of the system. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\psi_{1s}(\\zeta, \\mathbf{r - R}) = \\left(\\frac{\\zeta^3}{\\pi}\\right)^{1 \\over 2} \\, e^{-\\zeta |\\mathbf{r - R}|}." }, { "math_id": 1, "text": "\\zeta" }, { "math_id": 2, "text": "e(\\mathbf Z-1)" }, { "math_id": 3, "text": "\\mathbf Z" }, { "math_id": 4, "text": "\\mathbf{\\hat{H}}_e = - \\frac{\\nabla^2}{2} - \\frac{\\mathbf Z}{r}" }, { "math_id": 5, "text": "\\mathbf \\psi_{1s} = \\left (\\frac{\\zeta^3}{\\pi} \\right ) ^{0.50}e^{-\\zeta r}" }, { "math_id": 6, "text": "\\mathbf \\zeta" }, { "math_id": 7, "text": "\\mathbf E_{1s} = \\frac{\\langle\\psi_{1s}|\\mathbf{\\hat{H}}_e|\\psi_{1s}\\rangle}{\\langle\\psi_{1s}|\\psi_{1s}\\rangle}" }, { "math_id": 8, "text": "\\mathbf{\\langle\\psi_{1s}|\\psi_{1s}\\rangle} = 1" }, { "math_id": 9, "text": "\\mathbf E_{1s} = \\langle\\psi_{1s}|\\mathbf - \\frac{\\nabla^2}{2} - \\frac{\\mathbf Z}{r}|\\psi_{1s}\\rangle" }, { "math_id": 10, "text": "\\mathbf E_{1s} = \\langle\\psi_{1s}|\\mathbf - \\frac{\\nabla^2}{2}|\\psi_{1s}\\rangle+\\langle\\psi_{1s}| - \\frac{\\mathbf Z}{r}|\\psi_{1s}\\rangle" }, { "math_id": 11, "text": "\\mathbf E_{1s} = \\langle\\psi_{1s}|\\mathbf - \\frac{1}{2r^2}\\frac{\\partial}{\\partial r}\\left (r^2 \\frac{\\partial}{\\partial r}\\right )|\\psi_{1s}\\rangle+\\langle\\psi_{1s}| - \\frac{\\mathbf Z}{r}|\\psi_{1s}\\rangle" }, { "math_id": 12, "text": "\\mathbf E_{1s} = \\left\\langle \\left(\\frac{\\zeta^3}{\\pi} \\right)^{0.50} e^{-\\zeta r} \\right|\\left. -\\left(\\frac{\\zeta^3}{\\pi} \\right)^{0.50}e^{-\\zeta r}\\left[\\frac{-2r\\zeta+r^2\\zeta^2}{2r^2}\\right]\\right\\rangle+\\langle\\psi_{1s}| - \\frac{\\mathbf Z}{r}|\\psi_{1s}\\rangle" }, { "math_id": 13, "text": "\\mathbf E_{1s} = \\frac{\\zeta^2}{2}-\\zeta \\mathbf Z." }, { "math_id": 14, "text": " \\frac{d\\mathbf E_{1s}}{d\\zeta}=\\zeta-\\mathbf Z=0" }, { "math_id": 15, "text": " \\mathbf \\zeta=\\mathbf Z." }, { "math_id": 16, "text": " \\mathbf Z=1" }, { "math_id": 17, "text": " \\mathbf \\zeta=1" }, { "math_id": 18, "text": " \\mathbf E_{1s}=" }, { "math_id": 19, "text": " \\mathbf Z=79" }, { "math_id": 20, "text": " \\mathbf \\zeta=79" }, { "math_id": 21, "text": " \\mathbf \\zeta " }, { "math_id": 22, "text": " \\mathbf \\zeta_{rel} " }, { "math_id": 23, "text": " \\mathbf \\zeta_{rel}= \\frac{\\mathbf Z}{\\sqrt {1-\\mathbf Z^2/c^2}}" }, { "math_id": 24, "text": "\\mathbf E_{1s}^{rel} = -(c^2+\\mathbf Z\\zeta)+\\sqrt{c^4+\\mathbf Z^2\\zeta^2}" } ]
https://en.wikipedia.org/wiki?curid=11724178
11724245
Hydrogen-like atom
Atoms with a single valence electron, so they behave like hydrogen A hydrogen-like atom (or hydrogenic atom) is any atom or ion with a single valence electron. These atoms are isoelectronic with hydrogen. Examples of hydrogen-like atoms include, but are not limited to, hydrogen itself, all alkali metals such as Rb and Cs, singly ionized alkaline earth metals such as Ca+ and Sr+ and other ions such as He+, Li2+, and Be3+ and isotopes of any of the above. A hydrogen-like atom includes a positively charged core consisting of the atomic nucleus and any core electrons as well as a single valence electron. Because helium is common in the universe, the spectroscopy of singly ionized helium is important in EUV astronomy, for example, of DO white dwarf stars. The non-relativistic Schrödinger equation and relativistic Dirac equation for the hydrogen atom can be solved analytically, owing to the simplicity of the two-particle physical system. The one-electron wave function solutions are referred to as "hydrogen-like atomic orbitals". Hydrogen-like atoms are of importance because their corresponding orbitals bear similarity to the hydrogen atomic orbitals. Other systems may also be referred to as "hydrogen-like atoms", such as muonium (an electron orbiting an antimuon), positronium (an electron and a positron), certain exotic atoms (formed with other particles), or Rydberg atoms (in which one electron is in such a high energy state that it sees the rest of the atom effectively as a point charge). Schrödinger solution. In the solution to the Schrödinger equation, which is non-relativistic, hydrogen-like atomic orbitals are eigenfunctions of the one-electron angular momentum operator L and its "z" component "L"z. A hydrogen-like atomic orbital is uniquely identified by the values of the principal quantum number "n", the angular momentum quantum number "l", and the magnetic quantum number "m". The energy eigenvalues do not depend on "l" or "m", but solely on "n". To these must be added the two-valued spin quantum number "ms" = ±&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2, setting the stage for the Aufbau principle. This principle restricts the allowed values of the four quantum numbers in electron configurations of more-electron atoms. In hydrogen-like atoms all degenerate orbitals of fixed "n" and "l", "m" and "s" varying between certain values (see below) form an atomic shell. The Schrödinger equation of atoms or ions with more than one electron has not been solved analytically, because of the computational difficulty imposed by the Coulomb interaction between the electrons. Numerical methods must be applied in order to obtain (approximate) wavefunctions or other properties from quantum mechanical calculations. Due to the spherical symmetry (of the Hamiltonian), the total angular momentum J of an atom is a conserved quantity. Many numerical procedures start from products of atomic orbitals that are eigenfunctions of the one-electron operators L and "L"z. The radial parts of these atomic orbitals are sometimes numerical tables or are sometimes Slater orbitals. By angular momentum coupling many-electron eigenfunctions of J2 (and possibly S2) are constructed. In quantum chemical calculations hydrogen-like atomic orbitals cannot serve as an expansion basis, because they are not complete. The non-square-integrable continuum (E &gt; 0) states must be included to obtain a complete set, i.e., to span all of one-electron Hilbert space. 
In the simplest model, the atomic orbitals of hydrogen-like atoms/ions are solutions to the Schrödinger equation in a spherically symmetric potential. In this case, the potential term is the potential given by Coulomb's law: formula_0 where ε0 is the permittivity of the vacuum, "Z" is the atomic number, "e" is the elementary charge, and "r" is the distance of the electron from the nucleus. After writing the wave function as a product of functions: formula_1 (in spherical coordinates), where formula_2 are spherical harmonics, we arrive at the following Schrödinger equation: formula_3 where formula_4 is, approximately, the mass of the electron (more accurately, it is the reduced mass of the system consisting of the electron and the nucleus), and formula_5 is the reduced Planck constant. Different values of "l" give solutions with different angular momentum, where "l" (a non-negative integer) is the quantum number of the orbital angular momentum. The magnetic quantum number "m" (satisfying formula_6) is the (quantized) projection of the orbital angular momentum on the "z"-axis. Non-relativistic wavefunction and energy. In addition to "l" and "m", a third integer "n" &gt; 0 emerges from the boundary conditions placed on "R". The functions "R" and "Y" that solve the equations above depend on the values of these integers, called "quantum numbers". It is customary to subscript the wave functions with the values of the quantum numbers they depend on. The final expression for the normalized wave function is: formula_7 formula_8 where: formula_9 are the generalized Laguerre polynomials; formula_10, where formula_11 is the fine-structure constant and formula_12 is the reduced mass of the electron–nucleus system, formula_13 being the mass of the nucleus (in general formula_14, since the nucleus is much heavier than the electron, while for positronium formula_15); formula_16 is the Bohr radius; the energy eigenvalues are formula_17; formula_18 is a spherical harmonic function, and the parity due to the angular wave function is formula_19. Quantum numbers. The quantum numbers formula_20, formula_21 and formula_22 are integers and can have the following values: formula_23 formula_24 formula_25 A group-theoretical interpretation of these quantum numbers gives, among other things, group-theoretical reasons why formula_26 and formula_27. Angular momentum. Each atomic orbital is associated with an angular momentum L. It is a vector operator, and the eigenvalues of its square "L"2 ≡ "L""x"2 + "L""y"2 + "L""z"2 are given by: formula_28 The projection of this vector onto an arbitrary direction is quantized. If the arbitrary direction is called "z", the quantization is given by: formula_29 where "m" is restricted as described above. Note that "L"2 and "L""z" commute and have a common eigenstate, which is in accordance with Heisenberg's uncertainty principle. Since "L""x" and "L""y" do not commute with "L""z", it is not possible to find a state that is an eigenstate of all three components simultaneously. Hence the values of the "x" and "y" components are not sharp, but are given by a probability function of finite width. The fact that the "x" and "y" components are not well-determined implies that the direction of the angular momentum vector is not well determined either, although its component along the "z"-axis is sharp. These relations do not give the total angular momentum of the electron. For that, electron spin must be included. This quantization of angular momentum closely parallels that proposed by Niels Bohr (see Bohr model) in 1913, with no knowledge of wavefunctions. Including spin–orbit interaction. In a real atom, the spin of a moving electron can interact with the electric field of the nucleus through relativistic effects, a phenomenon known as spin–orbit interaction. When one takes this coupling into account, the spin and the orbital angular momentum are no longer conserved, which can be pictured by the electron precessing.
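As an illustration (not part of the original text), the normalized radial factor formula_8 can be checked numerically. The sketch below assumes atomic units with formula_14, so that formula_10 reduces to the Bohr radius and the radial coordinate is in bohr; it uses SciPy's generalized Laguerre polynomials:

import math
import numpy as np
from scipy.special import genlaguerre
from scipy.integrate import quad

def R_nl(n, l, r, Z=1.0):
    # Normalized radial wavefunction in atomic units (a_mu = 1).
    rho = 2.0 * Z * r / n
    norm = math.sqrt((2.0 * Z / n) ** 3 * math.factorial(n - l - 1)
                     / (2.0 * n * math.factorial(n + l)))
    return norm * np.exp(-rho / 2.0) * rho ** l * genlaguerre(n - l - 1, 2 * l + 1)(rho)

# Normalization check: the integral of R_nl(r)^2 r^2 dr over [0, inf) is 1.
val, _ = quad(lambda r: R_nl(3, 1, r) ** 2 * r ** 2, 0.0, np.inf)
print(val)  # ~1.0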
To take this coupling into account, one has to replace the quantum numbers "l", "m" and the projection of the spin "ms" by quantum numbers that represent the total angular momentum (including spin), "j" and "mj", as well as the quantum number of parity. See the next section on the Dirac equation for a solution that includes the coupling. Solution to Dirac equation. In 1928, in England, Paul Dirac found an equation that was fully compatible with special relativity. The equation was solved for hydrogen-like atoms the same year (assuming a simple Coulomb potential around a point charge) by the German Walter Gordon. Instead of a single (possibly complex) function as in the Schrödinger equation, one must find four complex functions that make up a bispinor. The first and second functions (or components of the spinor) correspond (in the usual basis) to spin "up" and spin "down" states, as do the third and fourth components. The terms "spin up" and "spin down" are relative to a chosen direction, conventionally the z direction. An electron may be in a superposition of spin up and spin down, which corresponds to the spin axis pointing in some other direction. The spin state may depend on location. An electron in the vicinity of a nucleus necessarily has non-zero amplitudes for the third and fourth components. Far from the nucleus these may be small, but near the nucleus they become large. The eigenfunctions of the Hamiltonian, which means functions with a definite energy (and which therefore do not evolve except for a phase shift), have energies characterized not by the quantum number "n" only (as for the Schrödinger equation), but by "n" and a quantum number "j", the total angular momentum quantum number. The quantum number "j" determines the sum of the squares of the three angular momenta to be "j"("j"+1) (times "ħ"2, see Planck constant). These angular momenta include both orbital angular momentum (having to do with the angular dependence of ψ) and spin angular momentum (having to do with the spin state). The splitting of the energies of states of the same principal quantum number "n" due to differences in "j" is called fine structure. The total angular momentum quantum number "j" ranges from 1/2 to "n"−1/2. The orbitals for a given state can be written using two radial functions and two angle functions. The radial functions depend on both the principal quantum number "n" and an integer "k", defined as: formula_30 where ℓ is the azimuthal quantum number that ranges from 0 to "n"−1. The angle functions depend on "k" and on a quantum number "m" which ranges from −"j" to "j" by steps of 1. The states are labeled using the letters S, P, D, F et cetera to stand for states with ℓ equal to 0, 1, 2, 3 et cetera (see azimuthal quantum number), with a subscript giving "j". For instance, the states for "n"=4 are S1/2, P1/2, P3/2, D3/2, D5/2, F5/2 and F7/2 (these would be prefaced by "n", for example 4S1/2). These can be additionally labeled with a subscript giving "m". There are 2"n"2 states with principal quantum number "n", 4"j"+2 of them with any allowed "j" except the highest ("j"="n"−1/2) for which there are only 2"j"+1. Since the orbitals having given values of "n" and "j" have the same energy according to the Dirac equation, they form a basis for the space of functions having that energy. The energy, as a function of "n" and |"k"| (equal to "j"+1/2), is: formula_31 (The energy of course depends on the zero-point used.)
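A short numerical sketch of this closed-form energy follows (an illustration, not from the source; the two physical constants assumed below are standard values):

import math

ALPHA = 7.2973525693e-3   # fine-structure constant (assumed value)
MC2_EV = 510998.95        # electron rest energy in eV (assumed value)

def dirac_energy(n, j, Z=1):
    # E(n, |k|) in units of mu*c^2, with |k| = j + 1/2.
    k = j + 0.5
    gamma = math.sqrt(k * k - (Z * ALPHA) ** 2)
    x = Z * ALPHA / (n - k + gamma)
    return 1.0 / math.sqrt(1.0 + x * x)

# Fine-structure splitting of the n = 2 level of hydrogen:
# 2P3/2 versus the (Dirac-)degenerate 2S1/2 and 2P1/2 states.
split_eV = (dirac_energy(2, 1.5) - dirac_energy(2, 0.5)) * MC2_EV
print(split_eV)  # about 4.5e-5 eV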
Note that if Z were able to be more than 137 (higher than any known element), then we would have a negative value inside the square root for the S1/2 and P1/2 orbitals, which means they would not exist. The Schrödinger solution corresponds to replacing the inner bracket in the second expression by 1. The accuracy of the energy difference between the lowest two hydrogen states calculated from the Schrödinger solution is about 9 ppm (90 μeV too low, out of around 10 eV), whereas the accuracy of the Dirac equation for the same energy difference is about 3 ppm (too high). The Schrödinger solution always puts the states at slightly higher energies than the more accurate Dirac equation. The Dirac equation gives some levels of hydrogen quite accurately (for instance the 4P1/2 state is given an energy only about eV too high), others less so (for instance, the 2S1/2 level is about eV too low). The modification of the energy due to using the Dirac equation rather than the Schrödinger solution is of the order of α2, and for this reason α is called the fine-structure constant. The solution to the Dirac equation for quantum numbers "n", "k", and "m" is: formula_32 where the Ωs are two-component columns built from the spherical harmonic functions, as displayed in the expression. formula_33 signifies a spherical harmonic function: formula_34 in which formula_35 is an associated Legendre polynomial. (Note that the definition of Ω may involve a spherical harmonic that doesn't exist, like formula_36, but the coefficient on it will be zero.) Here is the behavior of some of these angular functions. The normalization factor is left out to simplify the expressions. formula_37 formula_38 formula_39 formula_40 From these we see that in the S1/2 orbital ("k" = −1), the top two components of Ψ have zero orbital angular momentum like Schrödinger S orbitals, but the bottom two components are orbitals like the Schrödinger P orbitals. In the P1/2 solution ("k" = 1), the situation is reversed. In both cases, the spin of each component compensates for its orbital angular momentum around the "z" axis to give the right value for the total angular momentum around the "z" axis. The two Ω spinors obey the relationship: formula_41 To write the functions formula_42 and formula_43, let us define a scaled radius ρ: formula_44 with formula_45 where E is the energy (formula_46) given above. We also define γ as: formula_47 When "k" = −"n" (which corresponds to the highest "j" possible for a given "n", such as 1S1/2, 2P3/2, 3D5/2...), then formula_42 and formula_43 are: formula_48 formula_49 where "A" is a normalization constant involving the gamma function: formula_50 Notice that because of the factor Zα, "f"("r") is small compared to "g"("r"). Also notice that in this case, the energy is given by formula_51 and the radial decay constant "C" by formula_52 In the general case (when "k" is not −"n"), formula_53 are based on two generalized Laguerre polynomials of order formula_54 and formula_55: formula_56 formula_57 with "A" now defined as formula_58 Again "f" is small compared to "g" (except at very small "r") because when "k" is positive the first terms dominate, and α is big compared to γ−"k", whereas when "k" is negative the second terms dominate and α is small compared to γ−"k". Note that the dominant term is quite similar to the corresponding Schrödinger solution – the upper index on the Laguerre polynomial is slightly less (2γ+1 or 2γ−1 rather than 2ℓ+1, which is the nearest integer), as is the power of ρ (γ or γ−1 instead of ℓ, the nearest integer).
The exponential decay is slightly faster than in the Schrödinger solution. The normalization factor makes the integral over all space of the square of the absolute value equal to 1. 1S orbital. Here is the 1S1/2 orbital, spin up, without normalization: formula_59 Note that γ is a little less than 1, so the top function is similar to an exponentially decreasing function of "r" except that at very small "r" it theoretically goes to infinity. But the value of the formula_60 only surpasses 10 at a value of "r" smaller than formula_61 which is a very small number (much less than the radius of a proton) unless Z is very large. The 1S1/2 orbital, spin down, without normalization, comes out as: formula_62 We can mix these in order to obtain orbitals with the spin oriented in some other direction, such as: formula_63 which corresponds to the spin and angular momentum axis pointing in the x direction. Adding "i" times the "down" spin to the "up" spin gives an orbital oriented in the y direction. 2P1/2 and 2S1/2 orbitals. To give another example, the 2P1/2 orbital, spin up, is proportional to: formula_64 Notice that when ρ is small compared to α (or "r" is small compared to formula_66) the "S" type orbital dominates (the third component of the bispinor). For the 2S1/2 spin up orbital, we have: formula_67 Now the first component is S-like and there is a radius near ρ = 2 where it goes to zero, whereas the bottom two-component part is P-like. Negative-energy solutions. In addition to bound states, in which the energy is less than that of an electron infinitely separated from the nucleus, there are solutions to the Dirac equation at higher energy, corresponding to an unbound electron interacting with the nucleus. These solutions are not normalizable, but solutions can be found which tend toward zero as r goes to infinity (which is not possible when formula_68 except at the above-mentioned bound-state values of E). There are similar solutions with formula_69 These negative-energy solutions are just like positive-energy solutions having the opposite energy but for a case in which the nucleus repels the electron instead of attracting it, except that the solutions for the top two components switch places with those for the bottom two. Negative-energy solutions to Dirac's equation exist even in the absence of a Coulomb force exerted by a nucleus. Dirac hypothesized that we can consider almost all of these states to be already filled. If one of these negative-energy states is not filled, this manifests itself as though there is an electron which is "repelled" by a positively-charged nucleus. This prompted Dirac to hypothesize the existence of positively-charged electrons, and his prediction was confirmed with the discovery of the positron. Beyond Gordon's solution to the Dirac equation. The Dirac equation with a simple Coulomb potential generated by a point-like non-magnetic nucleus was not the last word, and its predictions differ from experimental results as mentioned earlier. More accurate results include the Lamb shift (radiative corrections arising from quantum electrodynamics) and hyperfine structure. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "V(r) = -\\frac{1}{4 \\pi \\varepsilon_0} \\frac{Ze^2}{r}" }, { "math_id": 1, "text": "\\psi(r, \\theta, \\phi) = R_{nl}(r)Y_{\\ell m}(\\theta,\\phi)" }, { "math_id": 2, "text": "Y_{\\ell m}" }, { "math_id": 3, "text": "\n- \\frac{\\hbar^2}{2\\mu} \\left[\\frac{1}{r^2} \\frac{\\partial}{\\partial r}\\left(r^2 \\frac{\\partial R(r)}{\\partial r}\\right) - \\frac{l(l+1)R(r)}{r^2} \\right] + V(r)R(r) = E R(r),\n" }, { "math_id": 4, "text": "\\mu" }, { "math_id": 5, "text": "\\hbar" }, { "math_id": 6, "text": "-l\\le m\\le l" }, { "math_id": 7, "text": "\\psi_{n \\ell m} = R_{n \\ell}(r)\\, Y_{\\ell m}(\\theta,\\phi)" }, { "math_id": 8, "text": " R_{n \\ell} (r) = \\sqrt {{\\left ( \\frac{2 Z}{n a_{\\mu}} \\right ) }^3\\frac{(n-\\ell-1)!}{2n{(n+\\ell)!}} } e^{- Z r / {n a_{\\mu}}} \\left ( \\frac{2 Z r}{n a_{\\mu}} \\right )^{\\ell} L_{n-\\ell-1}^{(2\\ell+1)} \\left ( \\frac{2 Z r}{n a_{\\mu}} \\right ) " }, { "math_id": 9, "text": "L_{n-\\ell-1}^{(2 \\ell+1)}" }, { "math_id": 10, "text": "a_{\\mu} = \\frac{4\\pi\\varepsilon_0\\hbar^2}{\\mu e^2} = \\frac{\\hbar c}{\\alpha\\mu c^2} =\\frac{m_{\\mathrm{e}}}{\\mu} a_0" }, { "math_id": 11, "text": " \\alpha " }, { "math_id": 12, "text": "\\mu = {{m_{\\mathrm{N}} m_{\\mathrm{e}}}\\over{m_{\\mathrm{N}}+m_{\\mathrm{e}}}}" }, { "math_id": 13, "text": "m_{\\mathrm{N}}" }, { "math_id": 14, "text": "\\mu \\approx m_{\\mathrm{e}}" }, { "math_id": 15, "text": "\\mu=m_{\\mathrm{e}}/2" }, { "math_id": 16, "text": "a_0" }, { "math_id": 17, "text": " E_{n} = -\\left(\\frac{Z^2 \\mu e^4}{32 \\pi^2\\epsilon_0^2\\hbar^2}\\right)\\frac{1}{n^2} = -\\left(\\frac{Z^2\\hbar^2}{2\\mu a_{\\mu}^2}\\right)\\frac{1}{n^2} = -\\frac{\\mu c^2Z^2\\alpha^2}{2n^2}." }, { "math_id": 18, "text": "Y_{\\ell m} (\\theta,\\phi)\\," }, { "math_id": 19, "text": "{\\left ( {-1} \\right ) }^\\ell" }, { "math_id": 20, "text": "n" }, { "math_id": 21, "text": "\\ell" }, { "math_id": 22, "text": "m" }, { "math_id": 23, "text": "n=1,2,3,4, \\dots" }, { "math_id": 24, "text": "\\ell=0,1,2,\\dots,n-1" }, { "math_id": 25, "text": "m=-\\ell,-\\ell+1,\\ldots,0,\\ldots,\\ell-1,\\ell" }, { "math_id": 26, "text": "\\ell < n\\," }, { "math_id": 27, "text": "-\\ell \\le m \\le \\,\\ell " }, { "math_id": 28, "text": "\\hat{L}^2 Y_{\\ell m} = \\hbar^2 \\ell(\\ell+1) Y_{\\ell m} " }, { "math_id": 29, "text": "\\hat{L}_z Y_{\\ell m} = \\hbar m Y_{\\ell m}, " }, { "math_id": 30, "text": "k = \\begin{cases}\n-j-\\tfrac 1 2 & \\text{if }j=\\ell+\\tfrac 1 2 \\\\\nj+\\tfrac 1 2 & \\text{if }j=\\ell-\\tfrac 1 2\n\\end{cases}" }, { "math_id": 31, "text": "\\begin{array}{rl}\nE_{n\\,j} & = \\mu c^2\\left(1+\\left[\\dfrac{Z\\alpha}{n-|k|+\\sqrt{k^2-Z^2\\alpha^2}}\\right]^2\\right)^{-1/2}\\\\\n&\\\\\n& \\approx \\mu c^2\\left\\{1-\\dfrac{Z^2\\alpha^2}{2n^2} \\left[1 + \\dfrac{Z^2\\alpha^2}n\\left(\\dfrac 1{|k|} - \\dfrac 3{4n} \\right) \\right]\\right\\}\n\\end{array}" }, { "math_id": 32, "text": "\\Psi=\\begin{pmatrix}\ng_{n,k}(r)r^{-1}\\Omega_{k,m}(\\theta,\\phi)\\\\\nif_{n,k}(r)r^{-1}\\Omega_{-k,m}(\\theta,\\phi)\n\\end{pmatrix}=\\begin{pmatrix}\ng_{n,k}(r)r^{-1}\\sqrt{(k+\\tfrac 1 2 -m)/(2k+1)}Y_{k,m-1/2}(\\theta,\\phi)\\\\\n-g_{n,k}(r)r^{-1}\\sgn k\\sqrt{(k+\\tfrac 1 2 +m)/(2k+1)}Y_{k,m+1/2}(\\theta,\\phi)\\\\\nif_{n,k}(r)r^{-1}\\sqrt{(-k+\\tfrac 1 2 -m)/(-2k+1)}Y_{-k,m-1/2}(\\theta,\\phi)\\\\\n-if_{n,k}(r)r^{-1}\\sgn k\\sqrt{(-k+\\tfrac 1 2 +m)/(-2k+1)}Y_{-k,m+1/2}(\\theta,\\phi)\n\\end{pmatrix}" }, { "math_id": 33, "text": "Y_{a,b}(\\theta,\\phi)" }, { "math_id": 34, "text": 
"Y_{a,b}(\\theta,\\phi)= \\begin{cases}\n(-1)^b\\sqrt{\\frac{2a+1}{4\\pi}\\frac{(a-b)!}{(a+b)!}}P_a^b(\\cos\\theta)e^{ib\\phi} & \\text{if }a>0\\\\\nY_{-a-1,b}(\\theta,\\phi)& \\text{if }a<0\n\\end{cases}" }, { "math_id": 35, "text": "P_a^b" }, { "math_id": 36, "text": "Y_{0,1}" }, { "math_id": 37, "text": "\\Omega_{-1,-1/2}\\propto\\binom 0 1" }, { "math_id": 38, "text": "\\Omega_{-1,1/2}\\propto\\binom 1 0" }, { "math_id": 39, "text": "\\Omega_{1,-1/2}\\propto\\binom{(x-iy)/r}{z/r}" }, { "math_id": 40, "text": "\\Omega_{1,1/2}\\propto\\binom{z/r}{(x+iy)/r}" }, { "math_id": 41, "text": "\\Omega_{k,m}=\\begin{pmatrix}\nz/r & (x-iy)/r\\\\\n(x+iy)/r & -z/r\n\\end{pmatrix}\\Omega_{-k,m}" }, { "math_id": 42, "text": "g_{n,k}(r)" }, { "math_id": 43, "text": "f_{n,k}(r)" }, { "math_id": 44, "text": "\\rho\\equiv 2Cr" }, { "math_id": 45, "text": "C=\\frac{\\sqrt{\\mu^2c^4-E^2}}{\\hbar c}" }, { "math_id": 46, "text": "E_{n\\,j}" }, { "math_id": 47, "text": "\\gamma\\equiv\\sqrt{k^2-Z^2\\alpha^2}" }, { "math_id": 48, "text": "g_{n,-n}(r)=A(n+\\gamma)\\rho^\\gamma e^{-\\rho/2}" }, { "math_id": 49, "text": "f_{n,-n}(r)=AZ\\alpha\\rho^\\gamma e^{-\\rho/2}" }, { "math_id": 50, "text": "A=\\frac 1{\\sqrt{2n(n+\\gamma)}}\\sqrt\\frac C{\\gamma\\Gamma(2\\gamma)}" }, { "math_id": 51, "text": "E_{n,n-1/2}=\\frac\\gamma n\\mu c^2=\\sqrt{1-\\frac{Z^2\\alpha^2}{n^2}}\\,\\mu c^2" }, { "math_id": 52, "text": "C=\\frac{Z\\alpha}n\\frac{\\mu c^2}{\\hbar c}." }, { "math_id": 53, "text": "g_{n,k}(r)\\text{ and }f_{n,k}(r)" }, { "math_id": 54, "text": "n-|k|-1" }, { "math_id": 55, "text": "n-|k|" }, { "math_id": 56, "text": "g_{n,k}(r)=A\\rho^\\gamma e^{-\\rho/2}\\left(Z\\alpha\\rho L_{n-|k|-1}^{(2\\gamma+1)}(\\rho)+(\\gamma-k)\\frac{\\gamma\\mu c^2-kE}{\\hbar cC}L_{n-|k|}^{(2\\gamma-1)}(\\rho)\\right)" }, { "math_id": 57, "text": "f_{n,k}(r)=A\\rho^\\gamma e^{-\\rho/2}\\left((\\gamma-k)\\rho L_{n-|k|-1}^{(2\\gamma+1)}(\\rho)+Z\\alpha\\frac{\\gamma\\mu c^2-kE}{\\hbar cC}L_{n-|k|}^{(2\\gamma-1)}(\\rho)\\right)" }, { "math_id": 58, "text": "A=\\frac 1{\\sqrt{2k(k-\\gamma)}}\\sqrt{\\frac C{n-|k|+\\gamma}\\frac{(n-|k|-1)!}{\\Gamma(n-|k|+2\\gamma+1)}\\frac 1 2\\left(\\left(\\frac{Ek}{\\gamma\\mu c^2}\\right)^2+\\frac{Ek}{\\gamma\\mu c^2}\\right)}" }, { "math_id": 59, "text": "\\Psi\\propto\\begin{pmatrix}\n(1+\\gamma)r^{\\gamma-1}e^{-Cr}\\\\\n0\\\\\niZ\\alpha r^{\\gamma-1}e^{-Cr}z/r\\\\\niZ\\alpha r^{\\gamma-1}e^{-Cr}(x+iy)/r\n\\end{pmatrix}" }, { "math_id": 60, "text": "r^{\\gamma-1}" }, { "math_id": 61, "text": "10^{1/(\\gamma-1)}," }, { "math_id": 62, "text": "\\Psi\\propto\\begin{pmatrix}\n0\\\\\n(1+\\gamma)r^{\\gamma-1}e^{-Cr}\\\\\niZ\\alpha r^{\\gamma-1}e^{-Cr}(x-iy)/r\\\\\n-iZ\\alpha r^{\\gamma-1}e^{-Cr}z/r\n\\end{pmatrix}" }, { "math_id": 63, "text": "\\Psi\\propto\\begin{pmatrix}\n(1+\\gamma)r^{\\gamma-1}e^{-Cr}\\\\\n(1+\\gamma)r^{\\gamma-1}e^{-Cr}\\\\\niZ\\alpha r^{\\gamma-1}e^{-Cr}(x-iy+z)/r\\\\\niZ\\alpha r^{\\gamma-1}e^{-Cr}(x+iy-z)/r\n\\end{pmatrix}" }, { "math_id": 64, "text": "\\Psi\\propto\\begin{pmatrix}\n\\rho^{\\gamma-1} e^{-\\rho/2}\\left(Z\\alpha\\rho+(\\gamma-1)\\frac{\\gamma\\mu c^2-E}{\\hbar cC}(-\\rho+2\\gamma)\\right)z/r\\\\\n\\rho^{\\gamma-1} e^{-\\rho/2}\\left(Z\\alpha\\rho+(\\gamma-1)\\frac{\\gamma\\mu c^2-E}{\\hbar cC}(-\\rho+2\\gamma)\\right)(x+iy)/r\\\\\ni\\rho^{\\gamma-1}e^{-\\rho/2}\\left((\\gamma-1)\\rho+Z\\alpha\\frac{\\gamma\\mu c^2-E}{\\hbar cC}(-\\rho+2\\gamma)\\right)\\\\\n0\n\\end{pmatrix}" }, { "math_id": 65, "text": "\\rho=2rC" }, { "math_id": 66, "text": "\\hbar c/(\\mu c^2)" }, 
{ "math_id": 67, "text": "\\Psi\\propto\\begin{pmatrix}\n\\rho^{\\gamma-1} e^{-\\rho/2}\\left(Z\\alpha\\rho+(\\gamma+1)\\frac{\\gamma\\mu c^2+E}{\\hbar cC}(-\\rho+2\\gamma)\\right)\\\\\n0\\\\\ni\\rho^{\\gamma-1}e^{-\\rho/2}\\left((\\gamma+1)\\rho+Z\\alpha\\frac{\\gamma\\mu c^2+E}{\\hbar cC}(-\\rho+2\\gamma)\\right)z/r\\\\\ni\\rho^{\\gamma-1}e^{-\\rho/2}\\left((\\gamma+1)\\rho+Z\\alpha\\frac{\\gamma\\mu c^2+E}{\\hbar cC}(-\\rho+2\\gamma)\\right)(x+iy)/r\n\\end{pmatrix}" }, { "math_id": 68, "text": "|E|<\\mu c^2" }, { "math_id": 69, "text": "E<-\\mu c^2." } ]
https://en.wikipedia.org/wiki?curid=11724245
11724761
Trophic level
Position of an organism in a food chain The trophic level of an organism is the position it occupies in a food web. Within a food web, a food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finish with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a part of a wider food "web". Ecological communities with higher biodiversity form more complex trophic paths. The word "trophic" derives from the Greek τροφή (trophē) referring to food or nourishment. History. The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman). Overview. The three basic ways in which organisms get food are as producers, consumers, and decomposers. Trophic levels can be represented by numbers, starting at level 1 with plants. Further trophic levels are numbered subsequently according to how far the organism is along the food chain. In real-world ecosystems, there is more than one food chain for most organisms, since most organisms eat more than one kind of food or are eaten by more than one type of predator. A diagram that sets out the intricate network of intersecting and overlapping food chains for an ecosystem is called its food web. Decomposers are often left off food webs, but if included, they mark the end of a food chain. Thus food chains start with primary producers and end with decay and decomposers. Since decomposers recycle nutrients, leaving them so they can be reused by primary producers, they are sometimes regarded as occupying their own trophic level. The trophic level of a species may vary if it has a choice of diet. Virtually all plants and phytoplankton are purely phototrophic and are at exactly level 1.0. Many worms are at around 2.1; insects 2.2; jellyfish 3.0; birds 3.6. A 2013 study estimates the average trophic level of human beings at 2.21, similar to pigs or anchovies. This is only an average, and plainly both modern and ancient human eating habits are complex and vary greatly. For example, a traditional Inuit living on a diet consisting primarily of seals would have a trophic level of nearly 5. Biomass transfer efficiency. In general, each trophic level relates to the one below it by absorbing some of the energy it consumes, and in this way can be regarded as resting on, or supported by, the next lower trophic level. Food chains can be diagrammed to illustrate the amount of energy that moves from one feeding level to the next in a food chain. This is called an energy pyramid. The energy transferred between levels can also be thought of as approximating to a transfer in biomass, so energy pyramids can also be viewed as biomass pyramids, picturing the amount of biomass that results at higher levels from biomass consumed at lower levels. However, when primary producers grow rapidly and are consumed rapidly, the biomass at any one moment may be low; for example, phytoplankton (producer) biomass can be low compared to the zooplankton (consumer) biomass in the same area of ocean. The efficiency with which energy or biomass is transferred from one trophic level to the next is called the ecological efficiency. 
Consumers at each level convert on average only about 10% of the chemical energy in their food to their own organic tissue (the ten percent law). For this reason, food chains rarely extend for more than 5 or 6 levels. At the lowest trophic level (the bottom of the food chain), plants convert about 1% of the sunlight they receive into chemical energy. It follows from this that the total energy originally present in the incident sunlight that is finally embodied in a tertiary consumer is about 0.001%. Evolution. Both the number of trophic levels and the complexity of relationships between them evolve as life diversifies through time, the exception being intermittent mass extinction events. Fractional trophic levels. Food webs largely define ecosystems, and the trophic levels define the position of organisms within the webs. But these trophic levels are not always simple integers, because organisms often feed at more than one trophic level. For example, some carnivores also eat plants, and some plants are carnivores. A large carnivore may eat both smaller carnivores and herbivores; the bobcat eats rabbits, but the mountain lion eats both bobcats and rabbits. Animals can also eat each other; the bullfrog eats crayfish and crayfish eat young bullfrogs. The feeding habits of a juvenile animal, and, as a consequence, its trophic level, can change as it grows up. The fisheries scientist Daniel Pauly sets the values of trophic levels to one in plants and detritus, two in herbivores and detritivores (primary consumers), three in secondary consumers, and so on. The definition of the trophic level, TL, for any consumer species is: formula_0 where formula_1 is the fractional trophic level of the prey "j", and formula_2 represents the fraction of "j" in the diet of "i". That is, the consumer trophic level is one plus the weighted average of how much different trophic levels contribute to its food. In the case of marine ecosystems, the trophic level of most fish and other marine consumers takes a value between 2.0 and 5.0. The upper value, 5.0, is unusual, even for large fish, though it occurs in apex predators of marine mammals, such as polar bears and orcas. In addition to observational studies of animal behavior and quantification of animal stomach contents, trophic level can be quantified through stable isotope analysis of animal tissues such as muscle, skin, hair, and bone collagen. This is because there is a consistent increase in the nitrogen isotopic composition at each trophic level caused by fractionations that occur with the synthesis of biomolecules; the magnitude of this increase in nitrogen isotopic composition is approximately 3–4‰. Mean trophic level. In fisheries, the mean trophic level for the fisheries catch across an entire area or ecosystem is calculated for year y as: formula_3 where formula_4 is the annual catch of the species or group i in year y, and formula_5 is the trophic level for species i as defined above. Fish at higher trophic levels usually have a higher economic value, which can result in overfishing at the higher trophic levels. Earlier reports found precipitous declines in mean trophic level of fisheries catch, in a process known as fishing down the food web. However, more recent work finds no relation between economic value and trophic level, and finds that mean trophic levels in catches, surveys and stock assessments have not in fact declined, suggesting that fishing down the food web is not a global phenomenon.
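A minimal sketch (with invented diet fractions and catch tonnages) of the two definitions above — the fractional trophic level and the mean trophic level of a catch:

# Fractional trophic levels: TL_i = 1 + sum_j(TL_j * DC_ij).
# The diet-composition fractions below are invented for illustration.
diets = {
    "phytoplankton": {},
    "zooplankton":   {"phytoplankton": 1.0},
    "small_fish":    {"zooplankton": 0.8, "phytoplankton": 0.2},
    "squid":         {"small_fish": 0.6, "zooplankton": 0.4},
}

TL = {}
for species, diet in diets.items():  # prey appear before their consumers here
    TL[species] = 1.0 + sum(frac * TL[prey] for prey, frac in diet.items())
# TL: phytoplankton 1.0, zooplankton 2.0, small_fish 2.8, squid 3.48

# Mean trophic level of one year's catch, weighted by (invented) tonnages:
catch = {"small_fish": 1000.0, "squid": 250.0}
mean_TL = sum(catch[i] * TL[i] for i in catch) / sum(catch.values())
print(mean_TL)  # about 2.94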
However, Pauly "et al". note that trophic levels peaked at 3.4 in 1970 in the northwest and west-central Atlantic, followed by a subsequent decline to 2.9 in 1994. They report a shift away from long-lived, piscivorous, high-trophic-level bottom fishes, such as cod and haddock, to short-lived, planktivorous, low-trophic-level invertebrates (e.g., shrimp) and small, pelagic fish (e.g., herring). This shift from high-trophic-level fishes to low-trophic-level invertebrates and fishes is a response to changes in the relative abundance of the preferred catch. They consider that this is part of the global fishery collapse, which finds an echo in the overfished Mediterranean Sea. Humans have a mean trophic level of about 2.21, about the same as a pig or an anchovy. FiB index. Since biomass transfer efficiencies are only about 10%, it follows that the rate of biological production is much greater at lower trophic levels than it is at higher levels. Fisheries catch, at least to begin with, will tend to increase as the trophic level declines. At this point the fisheries will target species lower in the food web. In 2000, this led Pauly and others to construct a "Fisheries in Balance" index, usually called the FiB index. The FiB index is defined, for any year "y", by formula_6 where formula_7 is the catch at year "y", formula_8 is the mean trophic level of the catch at year "y", formula_9 is the catch, formula_10 the mean trophic level of the catch at the start of the series being analyzed, and formula_11 is the transfer efficiency of biomass or energy between trophic levels. The FiB index is stable (zero) over periods of time when changes in trophic levels are matched by appropriate changes in the catch in the opposite direction. The index increases if catches increase for any reason, e.g. higher fish biomass or geographic expansion, and decreases when increases in catch do not compensate for a decline in the mean trophic level. Such decreases explain the "backward-bending" plots of trophic level versus catch originally observed by Pauly and others in 1998. Tritrophic and other interactions. One aspect of trophic levels is called tritrophic interaction. Ecologists often restrict their research to two trophic levels as a way of simplifying the analysis; however, this can be misleading if tritrophic interactions (such as plant–herbivore–predator) are not easily understood by simply adding pairwise interactions (plant–herbivore plus herbivore–predator, for example). Significant interactions can occur between the first trophic level (plant) and the third trophic level (a predator) in determining herbivore population growth, for example. Simple genetic changes may yield morphological variants in plants that then differ in their resistance to herbivores because of the effects of the plant architecture on the enemies of the herbivore. Plants can also develop defenses against herbivores, such as chemical defenses. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " TL_i=1 + \\sum_j (TL_j \\cdot DC_{ij})\\! " }, { "math_id": 1, "text": "TL_j" }, { "math_id": 2, "text": "DC_{ij}" }, { "math_id": 3, "text": " TL_y = \\frac{\\sum_i (TL_i \\cdot Y_{iy})}{\\sum_i Y_{iy}} " }, { "math_id": 4, "text": "Y_{iy}" }, { "math_id": 5, "text": "\\ TL_i\\ " }, { "math_id": 6, "text": " FiB_y=\\log\\frac{Y_y/(TE)^{TL_y}}{Y_0/(TE)^{TL_0}} " }, { "math_id": 7, "text": "Y_y" }, { "math_id": 8, "text": "TL_y" }, { "math_id": 9, "text": "Y_0" }, { "math_id": 10, "text": "TL_0" }, { "math_id": 11, "text": "TE" } ]
https://en.wikipedia.org/wiki?curid=11724761
11726298
Linear extension
Total order compatible with a given partial order In order theory, a branch of mathematics, a linear extension of a partial order is a total order (or linear order) that is compatible with the partial order. As a classic example, the lexicographic order of totally ordered sets is a linear extension of their product order. Definitions. Linear extension of a partial order. A partial order is a reflexive, transitive and antisymmetric relation. Given any partial orders formula_0 and formula_1 on a set formula_2 formula_1 is a linear extension of formula_0 exactly when (1) formula_1 is a total order, and (2) for every formula_3 if formula_4 then formula_5 It is that second property that leads mathematicians to describe formula_1 as extending formula_6 Alternatively, a linear extension may be viewed as an order-preserving bijection from a partially ordered set formula_7 to a chain formula_8 on the same ground set. Linear extension of a preorder. A preorder is a reflexive and transitive relation. The difference between a preorder and a partial order is that a preorder allows two different items to be considered "equivalent", that is, both formula_9 and formula_10 hold, while a partial order allows this only when formula_11. A relation formula_12 is called a linear extension of a preorder formula_13 if: (1) formula_12 is a total preorder; (2) for every pair of elements, if formula_17 then formula_14; and (3) if formula_15 (that is, formula_17 holds but formula_18 does not), then formula_16. The difference between these definitions is only in condition 3. When the extension is a partial order, condition 3 need not be stated explicitly, since it follows from condition 2. "Proof": suppose that formula_17 and not formula_18. By condition 2, formula_14. By reflexivity, "not formula_18" implies that formula_19. Since formula_12 is a partial order, formula_14 and formula_19 imply "not formula_20". Therefore, formula_16. However, for general preorders, condition 3 is needed to rule out trivial extensions. Without this condition, the preorder by which all elements are equivalent (formula_18 and formula_17 hold for all pairs "x","y") would be an extension of every preorder. Order-extension principle. The statement that every partial order can be extended to a total order is known as the order-extension principle. A proof using the axiom of choice was first published by Edward Marczewski (Szpilrajn) in 1930. Marczewski writes that the theorem had previously been proven by Stefan Banach, Kazimierz Kuratowski, and Alfred Tarski, again using the axiom of choice, but that the proofs had not been published. There is an analogous statement for preorders: every preorder can be extended to a total preorder. This statement was proved by Hansson. In modern axiomatic set theory the order-extension principle is itself taken as an axiom, of comparable ontological status to the axiom of choice. The order-extension principle is implied by the Boolean prime ideal theorem or the equivalent compactness theorem, but the reverse implication doesn't hold. Applying the order-extension principle to a partial order in which every two elements are incomparable shows that (under this principle) every set can be linearly ordered. This assertion that every set can be linearly ordered is known as the ordering principle, OP, and is a weakening of the well-ordering theorem. However, there are models of set theory in which the ordering principle holds while the order-extension principle does not. Related results. The order extension principle is constructively provable for finite sets using topological sorting algorithms, where the partial order is represented by a directed acyclic graph with the set's elements as its vertices. Several algorithms can find an extension in linear time.
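For illustration, here is a sketch of one such linear-time approach (Kahn's algorithm); the input format — a list of ordered pairs (a, b) meaning a is below b — is an assumption of this example:

from collections import deque

def linear_extension(elements, relations):
    # Kahn's algorithm: returns one linear extension of a finite partial
    # order given as pairs (a, b) meaning a < b (a directed acyclic graph).
    succs = {x: set() for x in elements}
    indeg = {x: 0 for x in elements}
    for a, b in relations:
        if b not in succs[a]:
            succs[a].add(b)
            indeg[b] += 1
    ready = deque(x for x in elements if indeg[x] == 0)
    out = []
    while ready:
        x = ready.popleft()
        out.append(x)
        for y in succs[x]:
            indeg[y] -= 1
            if indeg[y] == 0:
                ready.append(y)
    return out  # a linear extension whenever the input has no cycles

print(linear_extension("abcd", [("a", "b"), ("a", "c"), ("b", "d")]))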
Despite the ease of finding a single linear extension, the problem of counting all linear extensions of a finite partial order is #P-complete; however, it may be estimated by a fully polynomial-time randomized approximation scheme. Among all partial orders with a fixed number of elements and a fixed number of comparable pairs, the partial orders that have the largest number of linear extensions are semiorders. The order dimension of a partial order is the minimum cardinality of a set of linear extensions whose intersection is the given partial order; equivalently, it is the minimum number of linear extensions needed to ensure that each critical pair of the partial order is reversed in at least one of the extensions. Antimatroids may be viewed as generalizing partial orders; in this view, the structures corresponding to the linear extensions of a partial order are the basic words of the antimatroid. This area also includes one of order theory's most famous open problems, the 1/3–2/3 conjecture, which states that in any finite partially ordered set formula_7 that is not totally ordered there exists a pair formula_21 of elements of formula_7 for which the linear extensions of formula_7 in which formula_22 number between 1/3 and 2/3 of the total number of linear extensions of formula_23 An equivalent way of stating the conjecture is that, if one chooses a linear extension of formula_7 uniformly at random, there is a pair formula_21 which has probability between 1/3 and 2/3 of being ordered as formula_24 However, for certain infinite partially ordered sets, with a canonical probability defined on their linear extensions as a limit of the probabilities for finite partial orders that cover the infinite partial order, the 1/3–2/3 conjecture does not hold. Algebraic combinatorics. Counting the number of linear extensions of a finite poset is a common problem in algebraic combinatorics. This number is given by the leading coefficient of the order polynomial multiplied by formula_25 Young tableaux can be considered as linear extensions of a finite order ideal in the infinite poset formula_26 and they are counted by the hook length formula. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\,\\leq\\," }, { "math_id": 1, "text": "\\,\\leq^*\\," }, { "math_id": 2, "text": "X," }, { "math_id": 3, "text": "x, y \\in X," }, { "math_id": 4, "text": "x \\leq y," }, { "math_id": 5, "text": "x \\leq^* y." }, { "math_id": 6, "text": "\\,\\leq." }, { "math_id": 7, "text": "P" }, { "math_id": 8, "text": "C" }, { "math_id": 9, "text": "x\\precsim y" }, { "math_id": 10, "text": "y\\precsim x" }, { "math_id": 11, "text": "x=y" }, { "math_id": 12, "text": "\\precsim^*" }, { "math_id": 13, "text": "\\precsim" }, { "math_id": 14, "text": "x\\precsim^* y" }, { "math_id": 15, "text": "x\\prec y" }, { "math_id": 16, "text": "x\\prec^* y" }, { "math_id": 17, "text": "x \\precsim y" }, { "math_id": 18, "text": "y \\precsim x" }, { "math_id": 19, "text": "y\\neq x" }, { "math_id": 20, "text": "y\\precsim^* x" }, { "math_id": 21, "text": "(x, y)" }, { "math_id": 22, "text": "x < y" }, { "math_id": 23, "text": "P." }, { "math_id": 24, "text": "x < y." }, { "math_id": 25, "text": "|P|!." }, { "math_id": 26, "text": "\\N \\times \\N," } ]
https://en.wikipedia.org/wiki?curid=11726298
11728075
Inclusion order
Partial order that arises as the subset-inclusion relation on some collection of objects In the mathematical field of order theory, an inclusion order is the partial order that arises as the subset-inclusion relation on some collection of objects. Simply put, every poset "P" = ("X",≤) is (isomorphic to) an inclusion order (just as every group is isomorphic to a permutation group – see Cayley's theorem). To see this, associate to each element "x" of "X" the set formula_0 then the transitivity of ≤ ensures that for all "a" and "b" in "X", we have formula_1 There can be ground sets formula_2 of cardinality less than formula_3 such that "P" is isomorphic to an inclusion order of subsets of "S". The size of the smallest possible "S" is called the 2-dimension of "P". Several important classes of poset arise as inclusion orders for some natural collections, like the Boolean lattice "Q""n", which is the collection of all 2^"n" subsets of an "n"-element set, the interval-containment orders, which are precisely the orders of order dimension at most two, and the dimension-"n" orders, which are the containment orders on collections of "n"-boxes anchored at the origin. Other containment orders that are interesting in their own right include the circle orders, which arise from disks in the plane, and the angle orders.
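The embedding described above is short to verify computationally. The following Python sketch (an illustration of ours, with invented names) maps each element to its down-set and checks that subset inclusion reproduces the original order.

```python
def down_sets(elements, leq):
    """Associate to each x the set {y : y <= x}.  Transitivity makes
    a <= b imply down[a] is a subset of down[b]; reflexivity (a is in
    its own down-set) gives the converse."""
    return {x: frozenset(y for y in elements if leq(y, x)) for x in elements}

# Divisibility order on {1, 2, 3, 6}.
elems = [1, 2, 3, 6]
leq = lambda a, b: b % a == 0
down = down_sets(elems, leq)

# a <= b in the poset exactly when down[a] is a subset of down[b].
assert all((down[a] <= down[b]) == leq(a, b) for a in elems for b in elems)
print(sorted(down[6]))  # [1, 2, 3, 6]: the down-set of the top element
```

Note that frozenset's `<=` operator is the subset test, so the assertion is exactly the equivalence stated above.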
[ { "math_id": 0, "text": " X_{\\leq(x)} = \\{ y \\in X \\mid y \\leq x\\} ; " }, { "math_id": 1, "text": " X_{\\leq(a)} \\subseteq X_{\\leq(b)} \\text{ precisely when } a \\leq b . " }, { "math_id": 2, "text": "S" }, { "math_id": 3, "text": "|X|" } ]
https://en.wikipedia.org/wiki?curid=11728075
11728202
Bill Foster (politician)
American politician (born 1955) George William Foster (born October 7, 1955) is an American businessman and physicist serving as the U.S. representative for Illinois's 11th congressional district since 2013. He was the U.S. representative for Illinois's 14th congressional district from 2008 to 2011. He is a member of the Democratic Party. Early life and education. Foster was born in 1955 in Madison, Wisconsin. As a teenager, he attended James Madison Memorial High School. He received his bachelor's degree in physics from the University of Wisconsin–Madison in 1976 and his Ph.D. in physics from Harvard University in 1983. The title of his doctoral dissertation is "An experimental limit on proton decay: formula_0." When Foster was 19, he started a company with his younger brother, Fred. The company, ETC, has become the leading manufacturer of theatrical lighting. Physics career. After completing his Ph.D., Foster moved to the Fox Valley with his family to pursue a career in high-energy (particle) physics at Fermilab, a Department of Energy National Laboratory. During his 22 years at Fermilab, he participated in several projects, including the design of equipment and data analysis software for the CDF Detector, which were used in the discovery of the top quark, and the management of the design and construction of a 3 km Anti-Proton Recycler Ring for the Main Injector. In 1998, Foster was elected a fellow of the American Physical Society. He was a member of the team that received the 1989 Bruno Rossi Prize for cosmic ray physics for the discovery of the neutrino burst from the supernova SN 1987A. He also received the Institute of Electrical and Electronics Engineers' Particle Accelerator Technology Prize and was awarded an Energy Conservation award from the United States Department of Energy for his application of permanent magnets for Fermilab's accelerators. He and Stephen D. Holmes received the Robert R. Wilson Prize for Achievement in the Physics of Particle Accelerators in 2022 for "leadership in developing the modern accelerator complex at Fermilab, enabling the success of the Tevatron program that supports rich programs in neutrino and precision physics." U.S. House of Representatives. Elections. On November 26, 2007, former House Republican Speaker J. Dennis Hastert resigned as the Representative from Illinois's 14th congressional district. Foster announced his candidacy to fill the vacancy on May 30, 2007. In the March 2008 special election, Foster defeated Republican nominee and Hastert-endorsed candidate Jim Oberweis, 53%–47%. In November 2008, Oberweis ran against Foster again. Foster won reelection to a full term, 58%–42%. In 2010, Foster was challenged by Republican nominee State Senator Randy Hultgren and Green Party nominee Daniel Kairis. Despite being endorsed by the "Chicago Tribune", the "Chicago Sun-Times" and "The Daily Herald", Foster lost to Hultgren, 51%–45%. In May 2011, Foster sold his home in Geneva, moved to Naperville and announced plans to run for Congress in the 11th district, which encompasses Aurora, Joliet, and Lisle in addition to Naperville. It also includes roughly a quarter of his former district. The district had previously been the 13th, represented by seven-term Republican Judy Biggert. Although Biggert's home in Hinsdale had been shifted to the Chicago-based 5th district, Biggert opted to seek election in the 11th, which contained half of her old territory. On November 6, 2012, Foster won the election for the 11th district with 58% of the vote.
In 2014, Foster ran again and was unopposed in the Democratic primary. In the general election, he defeated the Republican nominee, State Representative Darlene Senger, with 53.5% of the vote to her 46.5%. In 2016, Foster ran again and was unopposed in the Democratic primary. In the general election, he defeated the Republican nominee, Tonia Khouri, with 60.4% of the vote to her 39.6%. In 2018, Foster again was unopposed in the Democratic primary. In the general election, he defeated the Republican nominee, Nick Stella, with 63.8% of the vote to Stella's 36.2%. In 2020, Foster faced a primary challenge from Rachel Ventura and won the nomination with 58.7% of the vote. In the general election, he defeated the Republican nominee, Rick Laib, with 63.3% of the vote. In 2022, Foster won the June 28 Democratic primary. In the general election, he defeated Catalina Lauf with 56.45% of the vote. Tenure. Although it was initially thought that Foster would not be sworn in until April 2008 due to the need to count absentee ballots before his first election was certified, he took the oath of office on March 11, 2008. Foster joined Vern Ehlers and Rush Holt Jr. as the only research physicists ever elected to Congress. On his first day in office, he cast the deciding vote to keep from tabling an ethics bill that would create an independent outside panel to investigate ethics complaints against House members. According to OpenSecrets, Foster received $637,050 from labor-related political action committees during his runs for Congress. $180,000 of this money came from PACs linked to public sector unions. $110,000 of these donations came from PACs linked to industrial labor unions. According to the Federal Election Commission, Nancy Pelosi gave $4,000 to Foster's 2012 campaign committee. PACs under Pelosi's control donated $10,000 to his 2012 campaign. Committee assignments. For the 118th Congress: Political positions. Foster voted with President Joe Biden's stated position 100% of the time in the 117th Congress, according to a "FiveThirtyEight" analysis. Taxes. Foster supported allowing the Bush tax cuts to expire. During a debate with his opponent in the 2012 election, Foster said, "The tax cuts were promised to generate job growth, but did not. If you follow the money, when you give a dollar to a very wealthy person, they won't typically put it back into the local economy." He said the tax benefits ended up in overseas accounts and were spent on luxury purchases. Foster has opposed efforts to repeal the estate tax. On August 31, 2005, U.S. Newswire reported that Foster said, "The proponents of estate tax repeal are fond of calling it the 'death tax'. It's not a death tax, it's a Rich Kids' tax." In 2009, just before the estate tax was scheduled for a one-year repeal, Foster voted to permanently extend the then-current estate tax rate of 45%. Card check. According to the official Thomas website, Foster co-sponsored the Employee Free Choice Act of 2009, which would enable unionization of small businesses of fewer than 50 employees. On February 25, 2012, the "Daily Herald" reported, "Foster pointed to his support for the Employee Free Choice Act while serving as the congressman in the 14th District as proof of his union support." Stimulus spending. Foster voted for the American Recovery and Reinvestment Act of 2009. Health care reform. Foster voted for the Patient Protection and Affordable Care Act (Obamacare).
On June 29, 2012, the "Chicago Tribune" reported that Foster said of his vote for Obamacare, "I'm proud of my vote, and I would be proud to do it again." Dodd-Frank. He also voted for the Dodd-Frank Wall Street Reform and Consumer Protection Act, with all ten of the amendments he proposed being added to the final bill. Environment. He voted against the American Clean Energy and Security Act, which would have created a cap-and-trade system. Second Amendment. Asked if the Second Amendment should be up for reinterpretation, Foster said, "It always has been up for reinterpretation. The technology changes, and the weapons thought to be too dangerous to be in private hands change. A Civil War cannon is frankly much less dangerous than weapons we are allowed to carry on the streets in many of the states and cities in our country today. This is something where technology changes and public attitude changes and both are important in each of the generations." Israel. Foster voted to provide Israel with support following the 2023 Hamas attack on Israel.
[ { "math_id": 0, "text": "p \\rightarrow \\mathrm{positron} + \\pi^0" } ]
https://en.wikipedia.org/wiki?curid=11728202
1172846
European Union energy label
Energy consumption labelling scheme EU Directive 92/75/EC (1992) established an energy consumption labelling scheme. The directive was implemented by several other directives; thus most white goods, light bulb packaging and cars must have an EU Energy Label clearly displayed when offered for sale or rent. The energy efficiency of the appliance is rated in terms of a set of energy efficiency classes from A to G on the label, A being the most energy efficient, G the least efficient. The labels also give other useful information to the customer as they choose between various models. The information should also be given in catalogues and included by internet retailers on their websites. In an attempt to keep up with advances in energy efficiency, A+, A++, and A+++ grades were later introduced for various products; since 2010, a new type of label exists that makes use of pictograms rather than words, to allow manufacturers to use a single label for products sold in different countries. Directive 92/75/EC was replaced by Directive 2010/30/EU, which was in turn replaced by Regulation 2017/1369/EU from 1 August 2017. Updated labelling requirements entered into force in 2021; the exact date depends on the relevant delegated regulation (e.g. dishwasher labels changed on 1 March 2021). The new rules reintroduced a simpler classification, using only the letters from A to G. The rescaling also leads to better differentiation among products that, under the old label classification, all appeared in the same top categories. It means, for example, that a fridge that previously had the A+++ label could become a C category, even though the fridge is just as energy efficient as before. The main principle is that the A category will be empty at first, and the B and C categories scarcely populated, to pave the way for new, more energy-efficient products to be invented and developed. Major appliances. Labelling. The energy labels are separated into at least four categories: Refrigerating appliances. For refrigerating appliances, such as refrigerators, freezers, wine-storage appliances, and combined appliances, the labelling is specified in terms of an energy efficiency index EEI, which is an indication of the annual power consumption relative to a reference consumption that is based on the storage volume and the type of appliance (refrigerator or freezer). The label also contains: Pre-2021. For cold appliances (and this product category alone), models more economical than those of category A were previously assigned the categories A+, A++, and A+++. According to the 2010 regulations, the boundary between the A+ and A classes was 44 up to 1 July 2014, and 42 after that date. Washing machines and tumble dryers. Up to 2010, the energy efficiency scale for washing machines was calculated based on a cotton cycle at 60 °C (140 °F) with a maximum declared load. This load is typically 6 kg. The energy efficiency index is in kW·h per kilogram of washing, assuming a cold-water supply at 15 °C. The energy label also contains information on: The washing performance is measured according to European harmonised standard EN 60456 and is based on a 60 °C cycle on fabric samples with stains of oil, blood, chocolate, sebum, and red wine, using a standardised detergent and compared against a reference washing machine. The amount of stain removal is then translated into a washing performance index. The spin-drying efficiency class is based on the remaining moisture content (RMC), which is the mass of water divided by the dry mass of cotton fabrics.
It is based on a weighted average of full-load and partial-load cycles. A new energy label, introduced in 2010, is based on the energy efficiency index (EEI), and has energy classes in the range A+++ to D. The EEI is a measure of the annual electricity consumption, and includes energy consumed during power-off and standby modes, and the energy consumed in 220 washing cycles. For the washing cycles, a weighted mix is used, consisting of 42% full-load cycles at 60 °C, 29% partial-load cycles at 60 °C, and 29% partial-load cycles at 40 °C. The washing performance is not mentioned anymore, since all washing machines must reach class A anyway. For a 6-kg machine, an EEI of 100 is equivalent to 334 kWh per year, or 1.52 kWh per cycle. For tumble dryers the energy efficiency scale is calculated using the cotton drying cycle with a maximum declared load. The energy efficiency index is in kW·h per kilogram of load. Different scales apply for condenser and vented dryers. For condenser dryers, a weighted condensation efficiency class is calculated using the average condensation efficiency for the standard cotton cycle at both full and partial load. The label also contains: For combined washer dryers the energy efficiency scale is calculated using the cotton drying cycle with a maximum declared load. The energy efficiency index is in kW·h per kilogram of load. Different scales apply for condenser and vented dryers. The label also contains: Dishwashers. The energy efficiency of a dishwasher is calculated according to the number of place settings. For the most common size of appliance, the 12-place-setting machine, the following classes applied up to 2010. After 2010, a new system is used, based on an energy efficiency index (EEI), which reflects the annual power usage, including standby power consumption and 280 cleaning cycles, relative to the standard power usage for that type of dishwasher. For a 12-place-setting dishwasher, an EEI of 100 corresponds to 462 kWh per year. The label also contains: Ovens. For ovens, the label also contains: Air conditioners. For air conditioners, the directive applies only to units under 12 kW. Every label contains the following information: Labels for air conditioners with heating capability also contain: Light bulbs. From 1 September 2021. Every label of light sources, including light bulbs (halogen, compact fluorescent, etc.) or LED modules/lamps, contains the following information: The energy efficiency class is determined by the total mains efficacy formula_0, calculated as: formula_1 where formula_2 is the declared useful luminous flux (in lm), formula_3 is the declared on-mode power consumption (in watts), and formula_4 is a factor between 0.926 and 1.176, depending on whether the light source is directional and whether it is powered from the mains. Until 31 August 2021. Every label of light bulbs and tubes (including incandescent light bulbs, fluorescent lamps, LED lamps) contains the following information: According to the light bulb's electrical consumption relative to a standard (GLS or incandescent) bulb, the light bulb is in one of the following classes: Class A is defined in a different way; hence, the variable percentage. Since 2012, A+ and A++ classes have been added, and different classes have been introduced for directional and non-directional lamps.
Directional lamps are defined as "having at least 80% light output within a solid angle of π sr (corresponding to a cone with angle of 120°)". These lamp classes correspond roughly to the following lamp types: Since September 2009, household light bulbs must be class A, with the exception of clear (transparent) lamps. For the latter category, lamps must be class C or better, with a transition period up to September 2012, and class B after September 2016. Calculation. Incandescent and fluorescent lamps with and without an integrated ballast can be divided into energy efficiency classes. The division of lamps into such classes was made in EU Directive 98/11/EC on 27 January 1998, and includes lamps that are not marketed for use in the home. Light sources with an output of more than 6,500 lm and those that are not operated on line voltage are excluded. The energy efficiency class is determined as follows (Φ is the luminous flux in lm and "P" is the power consumption of the lamp in W): Lamps are classified into class A if: formula_5 Fluorescent lamps without integrated ballast are classified into class A if: formula_6 Classification into the energy efficiency classes B–G is based on the energy efficiency index, the percentage of the lamp's power consumption relative to the reference power formula_7 of a standard light bulb with the same luminous flux. Television. In 2010, an energy label for televisions was introduced. The energy class is based on the Energy Efficiency Index (EEI), which is the power consumption relative to a reference power consumption. The reference power consumption of a normal television with screen area "A" is formula_8 where formula_9 = 20 W for a television set with one tuner/receiver and no hard disc. Since the switch to digital terrestrial transmissions, all new televisions sold in Europe have both analogue and digital tuners, so the reference power was increased to 24 watts. As set out in the directive, the formula is formula_10 for television sets with two or more tuners/receivers. If hard discs are added, the formula is formula_11 for television sets with hard disc(s) and two or more tuners/receivers. For example, a television with a diagonal length of 82 cm has a screen area of "A" = 28.7 dm² and a reference power consumption of 144 W. The energy classes are assigned from the EEI as tabulated in the regulation. The annual on-mode energy consumption "E" in kWh is calculated as "E" = 1460 [h/a] × "P" [W] / 1000, or, simplified, "E" = 1.46 × "P". In televisions with automatic brightness control, the on-mode power consumption is reduced by 5% if the following conditions are fulfilled when the television is placed on the market: the luminance of the television in the home-mode or the on-mode condition, as set by the supplier, is automatically reduced between an ambient light intensity of at least 20 lux and 0 lux; and the automatic brightness control is activated in the home-mode condition or the on-mode condition of the television as set by the supplier. Cars. For vehicles possessing internal combustion engines, carbon dioxide emissions in grams per kilometre travelled are considered (instead of electrical efficiency). Other information that is indexed for the energy label is: Tyres. European tyre labels came into force in November 2012. The tyre labelling shows three tyre performance attributes: rolling resistance, wet grip and external rolling noise. The tyre label applies to certain categories of tyres, with exceptions listed in the regulation. Society and culture. Impacts on purchasing decisions.
An online trial that displayed the estimated financial energy cost of refrigerators alongside EU energy-efficiency class (EEEC) labels found that such labels involve a trade-off between financial considerations and the extra effort or time required to select a product from the many available options – which are often unlabelled and carry no EEEC requirement for being bought, used or sold within the EU. Moreover, in this one trial the labelling was ineffective in shifting purchases towards more sustainable options.
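To make the arithmetic of the older lamp and television labels concrete, here is a short Python sketch. It is our illustrative reconstruction of the formulas quoted above (the Directive 98/11/EC class-A test and the 2010 television reference power), not official calculation software.

```python
from math import sqrt

def lamp_is_class_a(power_w, flux_lm, bare_fluorescent=False):
    """Class-A test of Directive 98/11/EC: P in watts, flux in lumens."""
    if bare_fluorescent:  # fluorescent lamp without integrated ballast
        return power_w <= 0.150 * sqrt(flux_lm) + 0.0097 * flux_lm
    return power_w <= 0.240 * sqrt(flux_lm) + 0.0103 * flux_lm

def tv_reference_power(area_dm2, tuners=1, hard_disc=False):
    """2010 television label: 20 W base (24 W for two or more tuners,
    28 W with hard disc) plus 4.3224 W per dm^2 of screen area."""
    basic = 28.0 if hard_disc else 24.0 if tuners >= 2 else 20.0
    return basic + 4.3224 * area_dm2

print(lamp_is_class_a(20, 1200))        # a 20 W, 1200 lm lamp: True
print(round(tv_reference_power(28.7)))  # the 82 cm example above: 144 W
print(1.46 * 100)                       # annual kWh for a 100 W set: 146.0
```

For the lamp example, the class-A bound at 1200 lm is 0.240·√1200 + 0.0103·1200 ≈ 20.7 W, so a 20 W lamp qualifies.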
[ { "math_id": 0, "text": "\\eta_{TM}" }, { "math_id": 1, "text": "\\eta_{TM}=\\Big( \\frac{\\phi_{use}}{P_{on}}\\Big) \\times F_{TM} \\text{ (lm/W)}" }, { "math_id": 2, "text": "\\phi_{use}" }, { "math_id": 3, "text": "P_{on}" }, { "math_id": 4, "text": "F_{TM}" }, { "math_id": 5, "text": "P \\leq 0.240 \\cdot \\sqrt{\\Phi} + 0.0103 \\cdot \\Phi." }, { "math_id": 6, "text": "P \\leq 0.150 \\cdot \\sqrt{\\Phi} + 0.0097 \\cdot \\Phi." }, { "math_id": 7, "text": "P_\\mathrm{R} = \\begin{cases}\n0.88\\cdot\\sqrt{\\Phi}+0.049\\cdot\\Phi&(\\Phi>34\\mbox{ lm})\\\\\n0.2\\cdot\\Phi&(\\Phi\\leq34\\mbox{ lm})\n\\end{cases}" }, { "math_id": 8, "text": "P_{\\mathrm{ref}} = P_{\\mathrm{basic}} + 4.3224\\, \\mathrm{[W/dm^2]}\\cdot A." }, { "math_id": 9, "text": "P_{\\mathrm{basic}}" }, { "math_id": 10, "text": "P_{\\mathrm{ref}} = 24\\, \\mathrm{[W]} + 4.3224\\, \\mathrm{[W/dm^2]}\\cdot A" }, { "math_id": 11, "text": "P_{\\mathrm{ref}} = 28\\, \\mathrm{[W]} + 4.3224\\, \\mathrm{[W/dm^2]}\\cdot A" } ]
https://en.wikipedia.org/wiki?curid=1172846
11730924
Merit order
Ranking of available sources of energy The merit order is a way of ranking available sources of energy, especially electrical generation, based on ascending order of price (which may reflect the order of their short-run marginal costs of production) and sometimes pollution, together with the amount of energy that will be generated. Under centralized management, generating plants are ranked so that those with the lowest marginal costs are the first to be brought online to meet demand, and the plants with the highest marginal costs are the last to be brought online. Dispatching generation in this way, known as economic dispatch, minimizes the cost of producing electricity. Sometimes generating units must be started out of merit order, due to transmission congestion, system reliability or other reasons. In environmental dispatch, additional considerations concerning reduction of pollution further complicate the power dispatch problem. The basic constraints of the economic dispatch problem remain in place, but the model is optimized to minimize pollutant emission in addition to minimizing fuel costs and total power loss. The effect of renewable energy on merit order. High demand for electricity during peak periods pushes up the bidding price for electricity, and the often relatively inexpensive baseload power supply mix is supplemented by 'peaking power plants', which charge a premium for their electricity. Increasing the supply of renewable energy tends to lower the average price per unit of electricity because wind energy and solar energy have very low marginal costs: they do not have to pay for fuel, and the sole contributor to their marginal cost is operations and maintenance. With costs often reduced by feed-in-tariff revenue, their electricity is, as a result, less costly on the spot market than that from coal or natural gas, and transmission companies buy from them first. Solar and wind electricity therefore substantially reduce the amount of highly priced peak electricity that transmission companies need to buy, reducing the overall cost. A study by the Fraunhofer Institute ISI found that this "merit order effect" had allowed solar power to reduce the price of electricity on the German energy exchange by 10% on average, and by as much as 40% in the early afternoon, in 2007; as more solar electricity is fed into the grid, peak prices may come down even further. By 2006, the "merit order effect" meant that the savings in electricity costs to German consumers more than offset the support payments paid for renewable electricity generation. A 2013 study estimates the merit order effect of both wind and photovoltaic electricity generation in Germany between the years 2008 and 2012. For each additional GWh of renewables fed into the grid, the price of electricity in the day-ahead market is reduced by 0.11–0.13¢/kWh. The total merit order effect of wind and photovoltaics ranges from 0.5¢/kWh in 2010 to more than 1.1¢/kWh in 2012. The zero marginal cost of wind and solar energy does not, however, translate into zero marginal cost of peak-load electricity in a competitive open electricity market system, as wind and solar supply alone often cannot be dispatched to meet peak demand without batteries. The purpose of the merit order was to enable the lowest-net-cost electricity to be dispatched first, thus minimising overall electricity system costs to consumers. Intermittent wind and solar are sometimes able to supply this economic function.
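A merit-order stack is simple to express in code. The following Python sketch is a stylized illustration (plant data and function names are invented for the example, and start-up costs and network constraints are ignored): plants are dispatched in ascending order of marginal cost, and the last unit needed sets the clearing price.

```python
def merit_order_dispatch(plants, demand_mw):
    """Dispatch plants in ascending marginal-cost order until demand is
    met; the marginal cost of the last dispatched unit sets the price.
    `plants` is a list of (name, cost_per_mwh, capacity_mw) tuples."""
    dispatch, price, remaining = {}, 0.0, demand_mw
    for name, cost, capacity in sorted(plants, key=lambda p: p[1]):
        if remaining <= 0:
            break
        output = min(capacity, remaining)
        dispatch[name] = output
        remaining -= output
        price = cost  # marginal cost of the last unit brought online
    if remaining > 0:
        raise ValueError("demand exceeds total installed capacity")
    return dispatch, price

plants = [("wind", 0.0, 300.0), ("coal", 35.0, 500.0), ("gas peaker", 90.0, 200.0)]
print(merit_order_dispatch(plants, 600.0))
# ({'wind': 300.0, 'coal': 300.0}, 35.0)
```

With 300 MW of zero-marginal-cost wind on the system, 600 MW of demand clears at the coal plant's marginal cost of 35 per MWh instead of the peaker's 90, which is the merit order effect in miniature.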
If peak wind (or solar) supply and peak demand both coincide in time and quantity, the price reduction is larger. On the other hand, solar energy tends to be most abundant at noon, whereas peak demand is in the late afternoon in warm climates, leading to the so-called duck curve. A 2008 study by the Fraunhofer Institute ISI in Karlsruhe, Germany, found that wind power saves German consumers €5 billion a year. It is estimated to have lowered prices in European countries with high wind generation by between 3 and 23 €/MWh. On the other hand, renewable energy in Germany has increased the price of electricity: consumers there now pay 52.8 €/MWh more solely for renewable energy (see German Renewable Energy Sources Act), and the average price of electricity in Germany has risen to 26¢/kWh. Increasing electrical grid costs for new transmission, market trading and storage associated with wind and solar are not included in the marginal cost of power sources; instead, grid costs are combined with source costs at the consumer end. Economic dispatch. Economic dispatch is the short-term determination of the optimal output of a number of electricity generation facilities, to meet the system load, at the lowest possible cost, subject to transmission and operational constraints. The economic dispatch problem is solved by specialized computer software which must satisfy the operational and system constraints of the available resources and corresponding transmission capabilities. In the US Energy Policy Act of 2005, the term is defined as "the operation of generation facilities to produce energy at the lowest cost to reliably serve consumers, recognizing any operational limits of generation and transmission facilities". The main idea is that, in order to satisfy the load at a minimum total cost, the set of generators with the lowest marginal costs must be used first, with the marginal cost of the final generator needed to meet load setting the system marginal cost. This is the cost of delivering one additional MWh of energy onto the system. Due to transmission constraints, this cost can vary at different locations within the power grid; these different cost levels are identified as "locational marginal prices" (LMPs). The historic methodology for economic dispatch was developed to manage fossil fuel burning power plants, relying on calculations involving the input/output characteristics of power stations. Basic mathematical formulation. The following is based on Biggar and Hesamzadeh (2014) and Kirschen (2010). The economic dispatch problem can be thought of as maximising the economic welfare "W" of a power network whilst meeting system constraints. For a network with "n" buses (nodes), suppose that "S""k" is the rate of generation, and "D""k" is the rate of consumption at bus "k". Suppose, further, that "C""k"("S""k") is the cost function of producing power (i.e., the rate at which the generator incurs costs when producing at rate "S""k"), and "V""k"("D""k") is the rate at which the load receives value or benefits (expressed in currency units) when consuming at rate "D""k".
The total welfare is then formula_0 The economic dispatch task is to find the combination of rates of production and consumption ("S""k", "D""k") which maximise this expression "W" subject to a number of constraints: formula_1 The first constraint, which is necessary to interpret the constraints that follow, is that the net injection at each bus is equal to the total production at that bus less the total consumption: formula_2 The power balance constraint requires that the sum of the net injections at all buses must be equal to the power losses in the branches of the network: formula_3 The power losses "L" depend on the flows in the branches and thus on the net injections, as shown in the above equation. However, they cannot depend on the injections at all the buses, as this would give an over-determined system. Thus one bus is chosen as the slack bus and is omitted from the variables of the function "L". The choice of slack bus is entirely arbitrary; here bus "n" is chosen. The second constraint involves capacity constraints on the flow on network lines. For a system with "m" lines this constraint is modeled as: formula_4 where "F""l" is the flow on branch "l", and "F""l""max" is the maximum value that this flow is allowed to take. Note that the net injection at the slack bus is not included in this equation for the same reasons as above. These equations can now be combined to build the Lagrangian of the optimization problem: formula_5 where π and μ are the Lagrangian multipliers of the constraints. The conditions for optimality are then: formula_6 formula_7 formula_8 formula_9 where the last condition is needed to handle the inequality constraint on line capacity. Solving these equations is computationally difficult as they are nonlinear and implicitly involve the solution of the power flow equations. The analysis can be simplified using a linearised model called a DC power flow. There is a special case which is found in much of the literature. This is the case in which demand is assumed to be perfectly inelastic (i.e., unresponsive to price). This is equivalent to assuming that formula_10 for some very large value of formula_11 and inelastic demand formula_12. Under this assumption, the total economic welfare is maximised by choosing formula_13. The economic dispatch task then reduces to: formula_14 subject to the constraint that formula_15 and the other constraints set out above. Environmental dispatch. In environmental dispatch, additional considerations concerning reduction of pollution further complicate the power dispatch problem. The basic constraints of the economic dispatch problem remain in place, but the model is optimized to minimize pollutant emission in addition to minimizing fuel costs and total power loss. Due to the added complexity, a number of algorithms have been employed to optimize this environmental/economic dispatch problem. Notably, a modified bees algorithm implementing chaotic modeling principles was successfully applied not only "in silico", but also on a physical model system of generators. Other methods used to address the economic emission dispatch problem include particle swarm optimization (PSO) and neural networks. Another notable algorithm combination is used in a real-time emissions tool called Locational Emissions Estimation Methodology (LEEM) that links electric power consumption and the resulting pollutant emissions.
The LEEM estimates changes in emissions associated with incremental changes in power demand, derived from locational marginal price (LMP) information from the independent system operators (ISOs) and emissions data from the US Environmental Protection Agency (EPA). LEEM was developed at Wayne State University as part of a project aimed at optimizing water transmission systems in Detroit, MI, starting in 2010, and has since found wider application as a load-profile management tool that can help reduce generation costs and emissions.
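For the inelastic-demand special case above, with smooth quadratic cost curves and the network constraints dropped, optimality requires every generator away from its limits to run where its marginal cost equals the system marginal price. The Python sketch below (our illustration, not code from the cited texts) finds that price by bisection, a simple form of the classical lambda iteration.

```python
def economic_dispatch(gens, demand):
    """Lossless single-bus dispatch with costs C(S) = a*S + b*S**2/2,
    so marginal cost is a + b*S.  `gens` holds (a, b, s_min, s_max).
    Assumes demand does not exceed total capacity."""
    def outputs(lam):
        # Each unit runs where marginal cost equals lam, clipped to limits.
        return [min(max((lam - a) / b, lo), hi) for a, b, lo, hi in gens]

    low = min(a for a, b, lo, hi in gens)
    high = max(a + b * hi for a, b, lo, hi in gens)
    for _ in range(100):          # bisection on the system price lambda
        lam = (low + high) / 2
        if sum(outputs(lam)) < demand:
            low = lam
        else:
            high = lam
    return outputs(lam), lam

gens = [(20.0, 0.05, 0.0, 400.0),   # cheap baseload unit
        (40.0, 0.10, 0.0, 200.0)]   # more expensive peaking unit
dispatch, price = economic_dispatch(gens, 500.0)
print([round(s, 1) for s in dispatch], round(price, 2))
# [400.0, 100.0] 50.0
```

At the optimum the baseload unit sits at its 400 MW limit and the peaker supplies the remaining 100 MW, so the system marginal price is the peaker's marginal cost, 40 + 0.10 × 100 = 50.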
[ { "math_id": 0, "text": "W= \\sum_{k=1}^n V_k(D_k)- \\sum_{k=1}^n C_k(S_k)" }, { "math_id": 1, "text": " \\max_{S_k,D_k} W" }, { "math_id": 2, "text": "\\forall k, \\; I_k=S_k-D_k " }, { "math_id": 3, "text": " \\sum_{k=1}^n I_k = L(I_1,I_2,\\dots,I_{n-1})" }, { "math_id": 4, "text": " F_l(I_1,I_2,\\dots,I_{n-1}) \\leq F_{l}^{max} \\qquad l=1,\\dots,m" }, { "math_id": 5, "text": " \\mathcal{L}= \\sum_{k=1}^n C_k(I_k) + \\pi \\left [ L(I_1,I_2,\\dots,I_{n-1})-\\sum_{k=1}^n I_k \\right ] + \\sum_{l=1}^m \\mu_l \\left [ F_{l}^{max}-F_l(I_1,I_2,\\dots,I_{n-1}) \\right ]" }, { "math_id": 6, "text": " {\\partial \\mathcal{L}\\over\\partial I_k} = 0 \\qquad k=1,\\dots,n " }, { "math_id": 7, "text": " {\\partial \\mathcal{L}\\over\\partial \\pi} = 0 " }, { "math_id": 8, "text": " {\\partial \\mathcal{L}\\over\\partial \\mu_l} = 0 \\qquad l=1,\\dots,m " }, { "math_id": 9, "text": " \\mu_l \\cdot \\left [ F_{l}^{max}-F_l(I_1,I_2,\\dots,I_{n-1}) \\right ] = 0 \\quad \\mu_l \\geq 0 \\quad k=1,\\dots,n " }, { "math_id": 10, "text": "V_k(D_k)= M \\min(D_k,\\bar{D}_k)" }, { "math_id": 11, "text": "M" }, { "math_id": 12, "text": "\\bar{D}_k" }, { "math_id": 13, "text": "D_k=\\bar{D}_k" }, { "math_id": 14, "text": "\\min_{S_k} \\sum_{k=1}^n C_k(S_k)" }, { "math_id": 15, "text": "\\forall k, \\; I_k=S_k-\\bar{D}_k" } ]
https://en.wikipedia.org/wiki?curid=11730924
11731171
Császár polyhedron
Toroidal polyhedron with 14 triangle faces In geometry, the Császár polyhedron is a nonconvex toroidal polyhedron with 14 triangular faces. This polyhedron has no diagonals; every pair of vertices is connected by an edge. The seven vertices and 21 edges of the Császár polyhedron form an embedding of the complete graph "K"7 onto a topological torus. Of the 35 possible triangles from vertices of the polyhedron, only 14 are faces. Complete graph. The tetrahedron and the Császár polyhedron are the only two known polyhedra (having a manifold boundary) without any diagonals: every two vertices of the polyhedron are connected by an edge, so there is no line segment between two vertices that does not lie on the polyhedron boundary. That is, the vertices and edges of the Császár polyhedron form a complete graph. The combinatorial description of this polyhedron was given earlier by Möbius. Three additional different polyhedra of this type can be found in a later paper. If the boundary of a polyhedron with "v" vertices forms a surface with "h" holes, in such a way that every pair of vertices is connected by an edge, it follows by some manipulation of the Euler characteristic that formula_0 This equation is satisfied for the tetrahedron with "h" = 0 and "v" = 4, and for the Császár polyhedron with "h" = 1 and "v" = 7. The next possible solution, "h" = 6 and "v" = 12, would correspond to a polyhedron with 44 faces and 66 edges, but it is not realizable as a polyhedron. It is not known whether such a polyhedron exists with a higher genus. More generally, this equation can be satisfied only when "v" is congruent to 0, 3, 4, or 7 modulo 12. History and related polyhedra. The Császár polyhedron is named after Hungarian topologist Ákos Császár, who discovered it in 1949. The dual to the Császár polyhedron, the Szilassi polyhedron, was discovered later, in 1977, by Lajos Szilassi; it has 14 vertices, 21 edges, and seven hexagonal faces, each sharing an edge with every other face. Like the Császár polyhedron, the Szilassi polyhedron has the topology of a torus. There are other known polyhedra, such as the Schönhardt polyhedron, for which there are no interior diagonals (that is, all diagonals are outside the polyhedron), as well as non-manifold surfaces with no diagonals.
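The congruence condition is easy to check by machine. This short Python sketch (written for illustration, not from the references) lists the vertex counts for which "h" = ("v" − 3)("v" − 4)/12 is an integer, together with the implied genus.

```python
def complete_graph_candidates(v_max):
    """Vertex counts v <= v_max for which (v-3)(v-4)/12 is an integer,
    a necessary condition for a polyhedron without diagonals."""
    result = []
    for v in range(4, v_max + 1):
        holes, remainder = divmod((v - 3) * (v - 4), 12)
        if remainder == 0:
            result.append((v, holes))
    return result

print(complete_graph_candidates(16))
# [(4, 0), (7, 1), (12, 6), (15, 11), (16, 13)]
# v = 4 is the tetrahedron and v = 7 the Csaszar polyhedron; every
# listed v is congruent to 0, 3, 4 or 7 modulo 12, as stated above.
```

The pair (12, 6) is the unrealizable case discussed above, and no candidate exists between "v" = 7 and "v" = 12.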
[ { "math_id": 0, "text": "h = \\frac{(v-3)(v-4)}{12}." } ]
https://en.wikipedia.org/wiki?curid=11731171
11733308
Friedel's law
Friedel's law, named after Georges Friedel, is a property of Fourier transforms of real functions. Given a real function formula_0, its Fourier transform formula_1 has the following property: formula_2 where formula_3 is the complex conjugate of formula_4. Centrosymmetric points formula_5 are called Friedel's pairs. The squared amplitude (formula_6) is centrosymmetric: formula_7 The phase formula_8 of formula_4 is antisymmetric: formula_9 Friedel's law is used in X-ray diffraction, crystallography and scattering from a real potential within the Born approximation. Note that a twin operation (French: "opération de maclage") is equivalent to an inversion centre, and the intensities from the individuals are equivalent under Friedel's law.
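The relations are straightforward to verify numerically with a discrete Fourier transform. The following NumPy snippet is an illustrative check on random real-valued data, not crystallographic software.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=64)        # a real-valued function sampled on a grid
F = np.fft.fft(f)              # its discrete Fourier transform

k = 5                          # any frequency index; -k wraps modulo 64
mate = F[(-k) % 64]            # the Friedel pair of F[k]

assert np.allclose(F[k], np.conj(mate))             # F(k) = F*(-k)
assert np.isclose(abs(F[k])**2, abs(mate)**2)       # |F|^2 centrosymmetric
assert np.isclose(np.angle(F[k]), -np.angle(mate))  # phase antisymmetric
print("Friedel pair relations hold")
```

Making f complex (as anomalous scattering effectively does to the scattering density) breaks the assertions, which is the sense in which Friedel's law rests on the function being real.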
[ { "math_id": 0, "text": "f(x)" }, { "math_id": 1, "text": "F(k)=\\int^{+\\infty}_{-\\infty}f(x)e^{i k \\cdot x }dx" }, { "math_id": 2, "text": "F(k)=F^*(-k) \\," }, { "math_id": 3, "text": "F^*" }, { "math_id": 4, "text": "F" }, { "math_id": 5, "text": "(k,-k)" }, { "math_id": 6, "text": "|F|^2" }, { "math_id": 7, "text": "|F(k)|^2=|F(-k)|^2 \\," }, { "math_id": 8, "text": "\\phi" }, { "math_id": 9, "text": "\\phi(k) = -\\phi(-k) \\," } ]
https://en.wikipedia.org/wiki?curid=11733308
11733967
Törnqvist index
In economics, the Törnqvist index is a price or quantity index. In practice, Törnqvist index values are calculated for consecutive periods, then these are strung together, or "chained". Thus, the core calculation does not refer to a single base year. Computation. The price index for some period is usually normalized to be 1 or 100, and that period is called the "base period". A Törnqvist or Törnqvist-Theil price index is the weighted geometric mean of the price relatives, using arithmetic averages of the value shares in the two periods as weights. The data used are prices and quantities in two time periods, (t-1) and (t), for each of "n" goods, which are indexed by "i". If we denote the price of item "i" at time t-1 by formula_0, and, analogously, define formula_1 to be the quantity purchased of item "i" at time t, then the Törnqvist price index formula_2 at time t can be calculated as follows: formula_3 The denominators in the exponent are the sums of total expenditure in each of the two periods. This can be expressed more compactly in vector notation. Let formula_4 denote the vector of all prices at time t-1 and analogously define vectors formula_5, formula_6, and formula_7. Then the above expression can be rewritten: formula_8 In this second expression, notice that the overall exponent is the average share of expenditure on good "i" across the two periods. The Törnqvist index weighs the experiences in the two periods equally, so it is said to be a "symmetric" index. Usually, that share doesn't change much; e.g. food expenditures across a million households might be 20% of income in one period and 20.1% in the next period. In practice, Törnqvist indexes are often computed using an equation that results from taking logs of both sides, as in the expression below, which computes the same formula_2 as those above: formula_9 A Törnqvist quantity index can be calculated analogously, using prices for weights. Quantity indexes are used in computing aggregate indexes for physical "capital", summarizing equipment and structures of different types into one time series. Swapping p's for q's and q's for p's gives an equation for a quantity index: formula_10 If one needs matched quantity and price indexes, they can be calculated directly from these equations, but it is more common to compute a price index by dividing total expenditure in each period by the quantity index, so the resulting indexes multiply out to total expenditure. This approach is called the "indirect" way of calculating a Törnqvist index, and it generates numbers that are not exactly the same as a direct calculation. There is research on which method to use, based partly on whether price changes or quantity changes are more volatile. For multifactor productivity calculations, the indirect method is used. Törnqvist indexes are close to the figures given by the Fisher index. The Fisher index is sometimes preferred in practice because it handles zero quantities without special exceptions, whereas in the equations above a quantity of zero would make the Törnqvist index calculation break down. Theory. A Törnqvist index is a discrete approximation to a continuous Divisia index. A Divisia index is a theoretical construct, a continuous-time weighted sum of the growth rates of the various components, where the weights are the components' shares in total value. For a Törnqvist index, the growth rates are defined to be the difference in natural logarithms of successive observations of the components (i.e.
their log-change) and the weights are equal to the mean of the factor shares of the components in the corresponding pair of periods (usually years). Divisia-type indexes have advantages over constant-base-year weighted indexes because, as relative prices of inputs change, they incorporate changes both in quantities purchased and in relative prices. For example, a Törnqvist index summarizing labor input may weigh the growth rate of the hours of each group of workers by the share of labor compensation they receive. The Törnqvist index is a superlative index, meaning it can approximate any smooth production or cost function. "Smooth" here means that small changes in relative prices for a good will be associated with small changes in the quantity of it used. The Törnqvist index corresponds exactly to the translog production function, meaning that, given a change in prices and an optimal response in quantities, the level of the index will change exactly as much as the change in production or utility would. To express that thought, Diewert (1978) uses this phrasing, which other economists now recognize: the Törnqvist index procedure "is exact for" the translog production or utility function. For this reason, the term translog index is sometimes used for a Törnqvist index. The Törnqvist index is approximately "consistent in aggregation", meaning that almost exactly the same index values result from (a) combining many prices and quantities together, or (b) combining subgroups of them into indexes and then combining those indexes. For some purposes (like large annual aggregates), this is treated as consistent enough, and for others (like monthly price changes) it is not. History and use. The Törnqvist index theory is attributed to Leo Törnqvist (1936), perhaps working with others at the Bank of Finland. Törnqvist indexes are used in a variety of official price and productivity statistics. The time periods can be years, as in multifactor productivity statistics, or months, as in the U.S.'s Chained CPI.
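A direct implementation of the computation section is only a few lines. The following Python sketch (function name and sample data invented for the illustration) computes one chain link of a Törnqvist price index as the share-weighted geometric mean of the price relatives.

```python
import numpy as np

def tornqvist_price_index(p0, q0, p1, q1):
    """Direct Tornqvist price index P_t / P_{t-1} for one chain link.
    Weights are the mean expenditure shares of the two periods."""
    p0, q0, p1, q1 = map(np.asarray, (p0, q0, p1, q1))
    share0 = p0 * q0 / (p0 * q0).sum()   # value shares in period t-1
    share1 = p1 * q1 / (p1 * q1).sum()   # value shares in period t
    weights = (share0 + share1) / 2
    return float(np.exp((weights * np.log(p1 / p0)).sum()))

# Two goods: the first rises 10% in price, the second falls 5%.
p0, q0 = [1.00, 2.00], [10.0, 5.0]
p1, q1 = [1.10, 1.90], [9.0, 6.0]
print(round(tornqvist_price_index(p0, q0, p1, q1), 4))  # 1.0196
```

Chaining consecutive links multiplies the one-period indexes together, matching the description at the start of the article; note also that a zero in the data (a good absent from one period) breaks the logarithm, the fragility relative to the Fisher index mentioned above.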
[ { "math_id": 0, "text": "p_{i,t-1}" }, { "math_id": 1, "text": "q_{i,t}" }, { "math_id": 2, "text": "P_t" }, { "math_id": 3, "text": "\\frac{P_t}{P_{t-1}} = \\prod_{i=1}^{n}\\left(\\frac{p_{i,t}}{p_{i,t-1}}\\right)^{\\frac{1}{2} \\left[\\frac{p_{i,t-1}q_{i,t-1}}{\\sum_{j=1}^{n}\\left(p_{j,t-1}q_{j,t-1}\\right)}+ \\frac{p_{i,t}q_{i,t}}{\\sum_{j=1}^{n}\\left(p_{j,t}q_{j,t}\\right)}\\right]}" }, { "math_id": 4, "text": "p_{t-1}" }, { "math_id": 5, "text": "q_{t-1}" }, { "math_id": 6, "text": "p_t" }, { "math_id": 7, "text": "q_t" }, { "math_id": 8, "text": "\\frac{P_t}{P_{t-1}} = \\prod_{i=1}^{n}\\left(\\frac{p_{it}}{p_{i,t-1}}\\right)^{\\frac{1}{2} \\left[\\frac{p_{i,t-1}q_{i,t-1}}{p_{t-1} \\cdot q_{t-1}} + \\frac{p_{i,t}q_{i,t}}{p_{t} \\cdot q_{t}}\\right]}" }, { "math_id": 9, "text": "ln \\frac{P_t}{P_{t-1}} = \\frac{1}{2} \\sum_{i=1}^{n} \\left (\\frac {p_{i,t-1}q_{i,t-1}}{p_{t-1}q_{t-1}} + \\frac {p_{i,t}q_{i,t}}{p_tq_t} \\right) ln\\left (\\frac{p_{i,t}}{p_{i,t-1}} \\right)" }, { "math_id": 10, "text": "\\frac{Q_t}{Q_{t-1}} = \\prod_{i=1}^{n}\\left(\\frac{q_{i,t}}{q_{i,t-1}}\\right)^{\\frac{1}{2} \\left[\\frac{p_{i,t-1}q_{i,t-1}}{\\sum_{j=1}^{n}\\left(p_{j,t-1}q_{j,t-1}\\right)}+ \\frac{p_{i,t}q_{i,t}}{\\sum_{j=1}^{n}\\left(p_{j,t}q_{j,t}\\right)}\\right]}" } ]
https://en.wikipedia.org/wiki?curid=11733967