76866914
Sikidy
Malagasy algebraic divination by seeds Sikidy is a form of algebraic geomancy practiced by Malagasy peoples in Madagascar. It involves algorithmic operations performed on random data generated from tree seeds, which are ritually arranged in a tableau called a and divinely interpreted after being mathematically operated on. Columns of seeds, designated "slaves" or "princes" belonging to respective "lands" for each, interact symbolically to express ('fate') in the interpretation of the diviner. The diviner also prescribes solutions to problems and ways to avoid fated misfortune, often involving a sacrifice. The centuries-old practice derives from Islamic influence brought to the island by medieval Arab traders. The is consulted for a range of divinatory questions pertaining to fate and the future, including identifying sources of and rectifying misfortune, reading the fate of newborns, and planning annual migrations. The mathematics of include the concepts of Boolean algebra, symbolic logic and parity. History. The practice is several centuries old, and is influenced by Arab geomantic traditions of Arab Muslim traders on the island. Stephen Ellis and Solofo Randrianja describe as "probably one of the oldest components of Malagasy culture", writing that it most likely the product of an indigenous divinatory art later influenced by Islamic practice. Umar H. D. Danfulani writes that the integration of Arabic divination into indigenous divination is "clearly demonstrated" in Madagascar, where the Arabic astrological system was adapted to the indigenous agricultural system and meshed with Malagasy lunar months by "adapting indigenous months, , to the astrological months, ". Danfulani also describes the concepts in of "houses" (lands) and "kings in their houses" as retained from medieval Arabic astrology. Most writers link the practice to the "sea-going trade involving the southwest coast of India, the Persian Gulf, and the east coast of Africa in the 9th or 10th century C.E." Though the etymology of is unknown, it has been posited that the word derives from the Arabic "sichr" ('incantation' or 'charm'). was of central importance to pre-Christian Malagasy religion, with one practitioner quoted in 1892 as calling "the Bible of our ancestors". A missionary report from 1616 describes one form of using tamarind seeds, and another using fingered markings in the sand. The early colonial French governor of Madagascar Étienne de Flacourt documented in the mid-17th century: <templatestyles src="Template:Blockquote/styles.css" />Matatane country in southeastern Madagascar [...] where the Antemoro [...] live was a center of astrological study as early as the fourteenth century [...]. This area was also the site of early Arab settlements, although strict Islamic observances were lost centuries ago [...]. Historical evidence shows that Antemoro diviners, bearers of the astrological system, infiltrated nearly all the ancient kingdoms of Madagascar beginning in the sixteenth century. [...] Today, although many persons claim to be [diviners], only the Antemoro diviners are considered true professionals. The area is still a famous place of learning where specialists go for training and then return to their home communities with a certain body of knowledge. Now we can better understand the degree of similarity of divination forms found throughout Madagascar. 
For centuries Matitanana has remained a training center for diviners who have migrated widely, usually attaining important positions in their home communities and with various royal families. The "infiltration" of Malagasy kingdoms by Antemoro diviners, and Matitanana's role as a place for astrological and divinatory learning, help to explain the relatively uniform practicing of across Madagascar. Origin myths. Mythic tradition relating to the origin of "links [the practice] both to the return by walking on water of Arab ancestors who had intermarried with Malagasy but then left, and to the names of the days of the week" and holds that the art was supernaturally communicated to the ancestors, with Zanahary (the supreme deity of Malagasy religion) giving it to Ranakandriana, who then gave it to a line of diviners (Ranakandriana to Ramanitralanana to Rabibi-andrano to Andriambavi-maitso (who was a woman) to Andriam-bavi-nosy), the last of whom terminated the monopoly by giving it to the people, declaring: "Behold, I give you the , of which you may inquire what offerings you should present in order to obtain blessings; and what expiation you should make so as to avert evils, when any are ill or under apprehension of some future calamity". A mythic anecdote of Ranakandriana says that two men observed him one day playing in the sand. In fact he was practicing a form of worked in sand called . The two men seized him, and Ranakandriana promised that he would teach them something if they released him. They agreed, and Ranakandriana taught them in depth how to work the . The two men then went to their chief and told him that they could tell him "the past and the future—what was good and what was bad—what increased and what diminished." The chief asked them to tell him how he could obtain plenty of cattle. The two men worked their and told the chief to kill all of his bulls, and that "great numbers would come to him" on the following Friday. The chieftain, doubting, asked what would happen if their prediction didn't come true, and the two men promised they would pay with their lives. The chief agreed and killed his bulls. On Thursday, thinking he'd been duped, he prematurely killed the first man of the two who'd told him about the divinatory art. On Friday, however, "vast herds" came amidst heavy rain, actually filling an immense plain in their crowd. The chieftain lamented the 's wrongful execution and ordered for him a pompous funeral. The chieftain took the second man as his close adviser and friend, and trusted the forever afterwards. The British missionary William Ellis recorded in 1839 two idiomatic expressions used in Madagascar that come from this story: "Tsy mahandry andro Zoma" (lit. 'He cannot wait 'til Friday') is said of someone extremely impatient, and heavy rainshowers falling in rapid succession are called "sese omby" (lit. 'a crowding together of cattle'). Rites and practitioners. The divination is performed by a practitioner called an , (lit. 'sacred one'), , or (derived from the Arabic "anbia", meaning 'prophet') who guides the client through the process and interprets the results in the context of the client's inquiries and desires. As part of an 's formal initiation into the art, which includes a long period of apprenticeship, the initiate must gather 124 and 200 ("Entada sp.") or (tamarind) tree seeds for his subsequent ritual use in . Raymond Decary writes that, at least among the Sakalava, a man must be 40 years old before learning and practicing , or he risks death. 
Before beginning to study, a student practitioner must make incisions at the tips of his index finger, his middle finger, and his tongue, and put within the incisions a paste containing red pepper and crushed wasp. This paste impregnates the fingers that will move the seeds of the and the tongue that will speak their revelations with the power to decipher the . Once this is done, he leaves at dawn to search for a ("Entada chrysostachys") tree. Upon finding it, he throws his spear at its branches, shaking the tree and causing its large seed pods to fall. During this act, some say: "When you were on the steep peak and in the dense forest, on you the crabs climbed, from you the crocodiles made their bed, with their paws the birds trod on you. Whether you are suspended in the trees or buried, you are never dried up nor rotten." In 1970, Decary reported that the salary paid by an apprentice to his master is "not very high": up to five francs, plus a red rooster's feather. Some are considered specialists, dealing only with areas of inquiry and resolution within their expertise. In the process of divination, the relates interactively to the client, asking new questions and discussing the interpretation of the seeds. Alfred Grandidier estimated in the late 19th century that roughly one in three Malagasy people had a firm grasp on the art; by 1970 Raymond Decary wrote that the number of was now more limited, and the common knowledge of how to operate and read the was now more basic, with masters of becoming more rare. and. also provide guidance on how to avoid the misfortune divined in the subject's fate. Solutions include offerings, sacrifices, charms (called ), stored remedies, or observed (taboos). The resolution often comes in the form of the ritual disposal of a symbolic object of misfortune, called the : for example, if the predicts the death of two men, then two locusts should be killed and thrown away as the . William Ellis compares this practice to the ancient Jewish scapegoat. Other objects can be trivial, such as "a little grass", some earth, or the water with which the patient rinses his mouth. If the is ashes, they are blown from the hand to be carried off by the wind; if it is cut money, it is thrown to the bottom of deep waters; if a sheep, it is "carried away to a distance on the shoulders of a man, who runs with all his might, mumbling as he goes, as if in the greatest rage against the , for the evils it is bearing away." If it is a pumpkin, it is carried away a short distance and then thrown on the ground with fury and indignation. The disposal of a may be as simple as a man standing at his doorway, throwing the object a few feet away, and saying the word "". Ellis reports the following for various sources and manifestations of evil: A divine offering, called a , is also prescribed by the . The may consist of a combination of beads, silver chains, ornaments, meats, herbs, and the singing of a child. Other objects include "a young bullock which just begins to bellow and to tear up the earth with his horns", fowl, rice mixed with milk and honey, a plantain tree flush with fruit, "slime from frogs floating on the water", and a groundnut called . amulets and bracelets may continue to be worn after the cause of their prescription, effectively becoming . Recovery without adherence to divined prescription and is believed "almost impossible". 
William Ellis recorded in 1838 that, though the application of indigenous remedies was most common, some patients had lately been instructed as part of the resolution to ask the local foreign missionaries for medicine. Occasions and questions for. Problems and questions for divined resolution via include the selection of a day on which to do something (including taking a trip, planting, a wedding, and the exhumation of ancestral corpses), whether a newborn child's destiny is compatible with its parents and thus whether it ought to be cared for by another family, the finding of a spouse, the finding of lost objects, the identification of a thief, and the explanation for a misfortune, including illness or sterility. Raymond Decary writes that the is consulted "in all circumstances", but especially: The kind and color of sheep to be sacrificed in a wedding procession is also divined by . Among the forest-dwelling Mikea people, is used "to direct the timing of residential movements to the forest ()". William Ellis describes two ritual occasions for relating to infants: the declaring of the child's destiny, and the "scrambling" ceremony. As one of the "first acts" following a child's birth, the child's father or close relative consults the local , who works the in order to read the child's destiny. When a child's destiny is declared to be favorable, "the child is nurtured with that tenderness and affection which nature inspires, and the warmest gratulations are tendered by the friends of the parents." The "scrambling" ceremony, which only occurs with firstborn infants, takes place two or three months after the child's birth on a day divined by the to be lucky or good. The child's friends and family gather, and the child's mother is decorated with silver chains on her head. If the infant is a boy, the father carries him, along with some ripe bananas, on his back. In a rice pan, a mixture is cooked, consisting of the fat from a zebu ox's hump, rice, milk, honey, and a grass called . One lock, called the ('evil lock') is cut from the left side of the child's head and thrown away, "in order to avert calamity". A second lock, called the ('the fortunate lock'), is cut from the right side, and added to the mixture in the rice pan. The mélange is mixed well and held up in its pan by the youngest girl of the family, at which point the gathered (especially the women) make a rush for its contents. It is believed that those who obtain a portion of the mixture are bound to become mothers. The scramble also takes place with bananas, lemons, and sugarcane. The rice pan is then considered sacred, and cannot be removed from the house for three days, "otherwise the virtue of those observances is supposed to be lost". Incantation. To "awaken" the seeds in his bag as well as his own verbal powers, the incants to the gods or earth spirits in attempt to constrain the gods/spirits to tell the truth, with emphasis on "the trickiness of the communicating entities, who misle[a]d if they [can]", and orates the practice's origin myth. As he incants, the turns the seeds on a mat eastward with his right hand. One Merina incantation quoted by Norwegian missionary Lars Dahle reads: <templatestyles src="Template:Blockquote/styles.css" />Awake, O God, to awaken the sun! Awake, O sun, to awaken the cock! Awake, O cock, to awaken mankind! 
Awake, O mankind, to awaken the sikidy, not to tell lies, not to deceive, not to play tricks, not to talk nonsense, not to agree to everything indiscriminately; but to search into the secret; to look into what is beyond the hills and on the other side of the forest, to see what no human eye can see. Wake up, for thou art from the long-haired Mohammedans from the high mountains, from [Anakandriananahitra, the almost mythical founder of the art in Madagascar, whose name is followed by those authorities who passed the art on to the people and their present diviners, thereby establishing an historical line of legitimacy] ... Awake! for we have not got thee for nothing, for thou art dear and expensive. We have hired. thee. in exchange for a fat cow With a large hump, and for money on which there was no dust [i.e. good value]. Awake! for thou art the trust of the sovereign [the ruling house of pre-colonial Madagascar used court diviners literally dozens of times a day to decide the advisability of even the most everyday actions, from matters of state to the timing of matters of personal hygiene] and the judgement of the people. If thou art a sikidy that can tell, a sikidy that can see, and does not [only] speak about the noise of the people, the hen killed by its owner, the cattle killed in the market, the dust clinging to the feet [i.e. uninteresting commonplaces], awake here on the mat! But if thou art a sikidy that does not see, a sikidy that agrees to everything indiscriminately, and makes [false statements, as if] the dead [were] living, and the living dead, then do not arise here on the mat.When practicing the , Sakalava diviners work with a fragment of hyaline quartz in front of their seeds, which is set out before the seeds are produced from their sack. Arranging the seeds. After his incantation, the takes a fistful of awakened seeds from his bag and randomly divides the seeds into four piles. Seeds are removed two at a time from each pile until there is either one seed or two seeds remaining in each. The four remaining "piles" (now either single seeds or pairs) become the first entries in the first column of a (tableau). The process is repeated three more times, with each new column of seeds being placed on the to the left of the previous. At the end of this, the array consists of four randomly-generated columns of four values (each being either one seed or two) each. The generated data represented in this array is called the (lit. 'mother-'). There are 65,536 possible arrays. From the data, four additional "columns" are read as the rows across the 's columns, and eight additional columns are generated algorithmically and placed in a specific order below the four original columns. Algorithmically-generated columns. Columns 9–16 of the are generated using the XOR logical operation (formula_0), which determines a value based on whether two other values are the same or different. In , the XOR operation is used to compare values in sequence across two existing columns and generate corresponding values for a third column: two seeds if the corresponding values are identical across the pair, and one seed if the values are different. The rules for generating a column from the XOR operation are (with "o" representing one seed, and "oo" representing two): formula_1 The first 12 columns are generated algorithmically from pairs of adjacent columns in the randomly-generated (the four-by-four grid of seeds representing eight datasets across its four columns and four rows). 
The last four columns (12–16) of the are derived from the algorithmically-generated columns, with column 16 operating on the first and fifteenth column as a pair. For example, the first value of column 9 is determined by comparing the first values of columns 7 and 8. If they are the same (both one seed or both two seeds), the first value of column 9 will be two seeds. If they are different, the first value of column 9 will be one seed. This operation iterates for each pair of corresponding values in columns 7 and 8, creating a complete set of values for column 9. Column 10 is then generated by applying the XOR operation between the values in columns 5 and 6. Similarly, column 11 is generated from columns 3 and 4, and column 12 from columns 1 and 2. Columns 13-16 are generated in the same manner, performing the XOR operation on ascending pairs of the algorithmically-generated columns, starting with columns 9 and 10 (to generate column 13) and ending with columns 15 and 1 (to generate column 16). Checks. The performs three algorithmic and logical checks to verify the 's validity according to its generative logic: one examining the whole , one examining the results of combining some particular columns, and one parity check examining only one column. First, the checks that at least two columns in the are identical. Next, it is ensured that the pairs of columns 13 and 16, 14 and 1, and 11 and 2 (called "the three inseparables") all yield the same result when combined via the XOR operation. Finally, it is checked that there is an even number of seeds in the 15th column—the only column for which parity is logically certain. Each of these three checks are mathematically proven valid in a 1997 paper by American ethnomathematician Marcia Ascher. Verification through the use of Microsoft Excel was achieved and published by Gomez et al. in 2015. Divination. Once the has checked the , his analysis and divination can begin. Certain questions and answers rely on additional columns beyond the prepared sixteen. Some of these columns are read spatially in patterns across the existing 's data, and some are generated with additional XOR operations referring to pairs of columns within the secondary series. These new columns can involve "about 100 additional algorithms". Each column making up the has a distinct divine referent: There are sixteen possible configurations of seeds in each column of four values. These formations are known to the diviner and identified with names, which vary regionally. Some names relate to names of months. For many , the formations are associated with directions. The eight formations with an even number of seeds are designated as "princes", while the eight with an odd number of seeds are "slaves". Each slave and prince has its place in a square whose sides are associated with the four cardinal directions. The square is divided into a northwestern "Land of Slaves" and a southeastern "Land of Princes" by a diagonal line extending from its northeastern corner to its southeastern corner. Despite their names, each "Land" contains both slaves and princes, including one migrating prince and one migrating slave that move directionally with the sun, such that the migrators belong to different lands depending on the time of day at which the is performed. The migrators are in the east from sunrise to 10 AM, in the north from 10 AM to 3 PM, and in the west from 3 PM to sunset. is never performed at night, and thus the migrators are never in the south. 
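The column-generation and validation procedure described above is small enough to sketch in code. The following Python sketch is illustrative only: the encoding of seed counts as the integers 1 and 2, the zero-based list indexing, and all function names are conventions of the sketch rather than Malagasy terminology; the column pairings and the three checks simply follow the description given in this article.

import random

def xor_column(a, b):
    # Combine two columns entry by entry: two seeds (2) if the entries match, one seed (1) if they differ.
    return [2 if x == y else 1 for x, y in zip(a, b)]

def generate_tableau():
    # Columns 1-4: four randomly generated columns of four values (the mother-sikidy).
    mother = [[random.choice([1, 2]) for _ in range(4)] for _ in range(4)]
    # Columns 5-8: the four rows of the mother-sikidy, read across as columns.
    rows = [[mother[c][r] for c in range(4)] for r in range(4)]
    cols = mother + rows
    cols.append(xor_column(cols[6], cols[7]))    # column 9  = column 7 XOR column 8
    cols.append(xor_column(cols[4], cols[5]))    # column 10 = column 5 XOR column 6
    cols.append(xor_column(cols[2], cols[3]))    # column 11 = column 3 XOR column 4
    cols.append(xor_column(cols[0], cols[1]))    # column 12 = column 1 XOR column 2
    cols.append(xor_column(cols[8], cols[9]))    # column 13 = column 9 XOR column 10
    cols.append(xor_column(cols[10], cols[11]))  # column 14 = column 11 XOR column 12
    cols.append(xor_column(cols[12], cols[13]))  # column 15 = column 13 XOR column 14
    cols.append(xor_column(cols[14], cols[0]))   # column 16 = column 15 XOR column 1
    return cols

def passes_checks(cols):
    # 1. At least two of the sixteen columns must be identical.
    has_duplicate = len({tuple(c) for c in cols}) < len(cols)
    # 2. "The three inseparables": columns 13 & 16, 14 & 1, and 11 & 2 must give the same XOR result.
    inseparables = (xor_column(cols[12], cols[15]) == xor_column(cols[13], cols[0])
                    == xor_column(cols[10], cols[1]))
    # 3. The fifteenth column must contain an even number of seeds.
    even_parity = sum(cols[14]) % 2 == 0
    return has_duplicate and inseparables and even_parity

print(passes_checks(generate_tableau()))  # True for a correctly derived tableau

Repeating the sketch, or enumerating all 65,536 possible mother grids, should never produce a failing tableau if the derivation above is followed, consistent with Ascher's proofs and the spreadsheet verification cited above.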
The power to see into the past or future is greater in in which all four directions are represented, and most powerful in with four directions represented but with one direction having only one representative. These are called ('-unique'). Beyond being powerful arrangements for divination, represent a particular abstract interest to , who seek to understand them and the data which generate them as an unsolved intellectual challenge. Knowing many leads to personal prestige for the , with discovered examples being posted on doors and spread among diviners by word of mouth. Divination of the refer to hierarchies of power relating to position and class of figures. "Princes are more powerful than slaves; figures from the Land of Princes are more powerful than those from the Land of Slaves; slaves from the same land are never harmful to one another; and battles between two princes from the Land of Princes are always serious but never end in death." In divinations relating to illness, the client and creator columns being the same indicates that there will definitely be recovery; if the client and ancestors columns are the same, the illness is due to some discontent on the part of the ancestors; and if the client and house columns are the same, the illness is the same as one that has previously ended in recovery. The relationship between the client and spirit columns is directly referent to illness. If the client is a slave of the east and the spirit is a prince of the south, the client is dominated by the illness, and thus the illness is divined to be serious—but not fatal, because both the east and the south are in the Land of Princes. If the client is a prince of the north (in the Land of Slaves), and the spirit a prince of the south (in the Land of Princes), there would be a difficult battle with a significant chance of the client dying. If the ninth and fifteenth columns are the same, a bead must be offered as a , called (lit. 'overcoming the calamity'). If the first and fourth are the same, then a piece of a tree that grows in the villages (not in the fields) must be offered. If the values of the tenth and fifteenth columns added together and subtracted by two equal the values of the first, a stone (called , lit. 'stone-not-lost') is thrown, retrieved, and carefully preserved by a friend or relation, and so not lost. The most exceptionally hopeless and severe outcome in a is each value in the first four columns (and thus in the entire tableau) being two seeds. This is called the "red ". A study computer-simulating the algorithmic generation and objective initial interpretation (according to Sakalava tradition) of the 65,536 possible arrangements of found that, assuming a male client and an inquiry about an illness' cause, the divined cause of illness would be sorcery 21.1% of the time, witchcraft 16.5% of the time, for 9.6%, the village chief for 2.6%, the contamination of food with dirt (which may involve carelessness or evil intentions) for .8%, ancestors for .7%, and undetermined for 48.7%. Figures. The following are the most common names and meanings for the sixteen geomantic figures of the . Names that also refer to lunar months are marked with a '☾'. Related traditions. Other Malagasy methods of divination include astrology, cartomancy, ornithomancy, extispicy, and necromantic dream-interpretation. African sixteen-figure divinatory traditions. 
Aside from Arabic geomancy, a number of African divination methods using sixteen basic figures have been studied, including Yoruba "Ifá" cowrie-shell divination, also known by its Fon name "Fa" and the Ewe and Igbo name "Afa". African diasporic populations in Latin America have retained the practice, with the tradition being called "Ifa" among Afro-Cubans, Afro-Brazilians, and Afro-Haitians. Umar H. D. Danfulani records a breadth of sixteen-figure divinatory traditions across Africa: References. <templatestyles src="Reflist/styles.css" /> Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\oplus" }, { "math_id": 1, "text": " \\begin{align}\n o \\oplus o &= oo \\\\ \n oo \\oplus oo &= oo \\\\\n o \\oplus oo &= o \\\\\n oo \\oplus o &= o\n\\end{align} " } ]
https://en.wikipedia.org/wiki?curid=76866914
76868055
Landau derivative
In gas dynamics, the Landau derivative or fundamental derivative of gas dynamics, named after Lev Landau who introduced it in 1942, refers to a dimensionless physical quantity characterizing the curvature of the isentrope drawn on the specific volume versus pressure plane. Specifically, the Landau derivative is a nondimensionalized second derivative of specific volume with respect to pressure, taken at constant entropy. The derivative is commonly denoted using the symbol formula_0 or formula_1 and is defined by formula_2 where c is the speed of sound, υ is the specific volume, p is the pressure, and the subscript s indicates that the derivative is taken at constant entropy; in the alternate forms below, ρ denotes the density, T the temperature and c_p the specific heat at constant pressure. Alternate representations of formula_0 include formula_3 For most common gases, formula_4, whereas abnormal substances such as the BZT fluids exhibit formula_5. In an isentropic process, the sound speed increases with pressure when formula_6; this is the case for ideal gases. Specifically for polytropic gases (ideal gas with constant specific heats), the Landau derivative is a constant and given by formula_7 where formula_8 is the specific heat ratio. Some non-ideal gases fall in the range formula_9, for which the sound speed decreases with pressure during an isentropic transformation.
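The polytropic result can be checked numerically against the defining formula. The short Python sketch below is illustrative only: it assumes an ideal gas with γ = 1.4, an arbitrary reference state on the isentrope p·υ^γ = const., and a central finite difference for the second derivative.

gamma = 1.4                 # illustrative value (diatomic-like ideal gas)
K = 1.0e5                   # isentrope constant p * v**gamma, with v = 1 at p = 1e5

def v_of_p(p):
    # Specific volume along the isentrope p * v**gamma = K.
    return (K / p) ** (1.0 / gamma)

def fundamental_derivative(p, dp=1.0):
    v = v_of_p(p)
    c2 = gamma * p * v      # ideal-gas sound speed squared, c^2 = gamma * p * v
    d2v_dp2 = (v_of_p(p + dp) - 2.0 * v_of_p(p) + v_of_p(p - dp)) / dp**2
    return c2**2 / (2.0 * v**3) * d2v_dp2

print(fundamental_derivative(1.0e5))   # approximately 1.2
print(0.5 * (gamma + 1.0))             # exactly (gamma + 1)/2 = 1.2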
[ { "math_id": 0, "text": "\\Gamma" }, { "math_id": 1, "text": "\\alpha" }, { "math_id": 2, "text": "\\Gamma = \\frac{c^4}{2\\upsilon^3}\\left(\\frac{\\partial^2\\upsilon}{\\partial p^2}\\right)_s" }, { "math_id": 3, "text": "\\Gamma = \\frac{\\upsilon^3}{2c^2} \\left(\\frac{\\partial^2 p}{\\partial \\upsilon^2}\\right)_s = \\frac{1}{c} \\left(\\frac{\\partial \\rho c}{\\partial \\rho}\\right)_s= 1 + \\frac{c}{\\upsilon} \\left(\\frac{\\partial c}{\\partial p}\\right)_s = 1 + \\frac{c}{\\upsilon} \\left(\\frac{\\partial c}{\\partial p}\\right)_T + \\frac{cT}{\\upsilon c_p}\\left(\\frac{\\partial\\upsilon}{\\partial T}\\right)_p \\left(\\frac{\\partial c}{\\partial T}\\right)_p." }, { "math_id": 4, "text": "\\Gamma>0" }, { "math_id": 5, "text": "\\Gamma<0" }, { "math_id": 6, "text": "\\Gamma>1" }, { "math_id": 7, "text": "\\Gamma = \\frac{1}{2}(\\gamma+1)," }, { "math_id": 8, "text": "\\gamma>1" }, { "math_id": 9, "text": "0<\\Gamma<1" } ]
https://en.wikipedia.org/wiki?curid=76868055
768749
Johann Jakob Balmer
Swiss mathematician (1825–1898) Johann Jakob Balmer (1 May 1825 – 12 March 1898) was a Swiss mathematician best known for his work in physics on the Balmer series of the hydrogen atom. Biography. Balmer was born in Lausen, Switzerland, the son of a chief justice also named Johann Jakob Balmer. His mother was Elizabeth Rolle Balmer, and he was the oldest son. During his schooling he excelled in mathematics, and so decided to focus on that field when he attended university. He studied at the University of Karlsruhe and the University of Berlin, then completed his PhD at the University of Basel in 1849 with a dissertation on the cycloid. Johann then spent his entire life in Basel, where he taught at a school for girls. He also lectured at the University of Basel. In 1868 he married Christine Pauline Rinck at the age of 43. The couple had six children. Despite being a mathematician, Balmer is best remembered for his work on spectral series. His major contribution (made at the age of sixty, in 1885) was an empirical formula for the visible spectral lines of the hydrogen atom, the study of which he took up at the suggestion of Eduard Hagenbach, also of Basel. Using Ångström's measurements of the hydrogen lines, he arrived at a formula for computing the wavelength as follows: formula_0 for "m" = 2 and "n" = 3, 4, 5, 6, and so forth; "h" = 3.6456 · 10−7 m = 364.56 nm. In his 1885 notice, he referred to "h" as the "fundamental number of hydrogen." Today, "h" is known as the "Balmer constant." Balmer used his formula to predict the wavelength for "n" = 7: formula_1 Hagenbach informed Balmer that Ångström had observed a line with wavelength 397 nm. This portion of the hydrogen emission spectrum, from transitions in electron energy levels with "n" ≥ 3 to "n" = 2, became known as the Balmer series. The Balmer lines refer to the emission lines that occur within the visible region of the hydrogen emission spectrum at 410.29 nm, 434.17 nm, 486.27 nm, and 656.47 nm. These lines are caused by electrons in an excited state emitting a photon and returning to the first excited state of the hydrogen atom ("n" = 2). Two of Balmer's colleagues, Hermann Wilhelm Vogel and William Huggins, were able to confirm the existence of other lines of the Balmer series in the spectrum of hydrogen in white stars. Balmer's formula was later found to be a special case of the Rydberg formula, devised by Johannes Rydberg in 1888: formula_2 with formula_3 being the Rydberg constant for hydrogen, formula_4 for Balmer's formula, and formula_5. A full explanation of why these formulas worked, however, had to wait until after Balmer's death with the presentation of the Bohr model of the atom by Niels Bohr in 1913. Johann Balmer died in Basel, aged 72.
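Balmer's formula is straightforward to check numerically. The short Python sketch below (its variable names are the sketch's own; here "h" is Balmer's constant, not Planck's constant) reproduces the wavelengths discussed above.

h = 364.56   # nm, Balmer's "fundamental number of hydrogen"
m = 2        # the Balmer series corresponds to transitions down to the second level

for n in range(3, 8):
    wavelength = h * n**2 / (n**2 - m**2)
    print(f"n = {n}: {wavelength:.2f} nm")

# n = 3..6 give roughly 656.2, 486.1, 434.0 and 410.1 nm, the four visible Balmer lines,
# and n = 7 gives about 397.0 nm, the line Angstrom had already observed.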
[ { "math_id": 0, "text": "\\lambda\\ = h \\, \\frac{ n^2 }{ n^2 - m^2 }" }, { "math_id": 1, "text": "\\lambda\\ = (364.56 \\, {\\rm nm}) \\cdot \\, \\frac{ 7^2 }{ \\, 7^2 - 2^2 \\, } \\simeq 397.0 \\, {\\rm nm}" }, { "math_id": 2, "text": "\\frac{1}{\\lambda}\\ = \\frac{4}{h} \\left( \\frac{1}{m^2} - \\frac{1}{n^2} \\right)= R_H \\left( \\frac{1}{m^2} - \\frac{1}{n^2} \\right)" }, { "math_id": 3, "text": "R_H" }, { "math_id": 4, "text": "m=2" }, { "math_id": 5, "text": "n>m" } ]
https://en.wikipedia.org/wiki?curid=768749
768770
George Kingsley Zipf
Pioneering American linguist George Kingsley Zipf (January 7, 1902 – September 25, 1950) was an American linguist and philologist who studied statistical occurrences in different languages. Zipf earned his bachelor's, master's, and doctoral degrees from Harvard University, although he also studied at the University of Bonn and the University of Berlin. He was chairman of the German department and university lecturer (meaning he could teach any subject he chose) at Harvard University. He worked with Chinese and demographics, and much of his effort can explain properties of the Internet, distribution of income within nations, and many other collections of data. Zipf's law. He is the eponym of Zipf's law, which states that while only a few words are used very often, many or most are used rarely, following formula_0 where "Pn" is the frequency of the word ranked "n"th and the exponent "a" is almost 1. This means that the second item occurs approximately 1/2 as often as the first, and the third item 1/3 as often as the first, and so on. Zipf's discovery of this law in 1935 was one of the first academic studies of word frequency. Although he originally intended it as a model for linguistics, Zipf later generalized his law to other disciplines. In particular, he observed that the rank vs. frequency distribution of individual incomes in a unified nation approximates this law, and in his 1941 book, "National Unity and Disunity", he theorized that breaks in this "normal curve of income distribution" portend social pressure for change or revolution.
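As a quick numerical illustration of the rank–frequency relation above (taking the exponent "a" to be exactly 1, which is an assumption of this sketch):

a = 1.0
ranks = range(1, 6)
weights = [1.0 / n**a for n in ranks]      # P_n proportional to 1/n^a

for n, w in zip(ranks, weights):
    print(f"rank {n}: {w / weights[0]:.2f} times the frequency of the top-ranked word")

# The second-ranked word appears about 1/2 as often as the first, the third about 1/3 as often,
# and so on, matching the statement of the law above.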
[ { "math_id": 0, "text": "P_n \\sim 1/n^a" } ]
https://en.wikipedia.org/wiki?curid=768770
76879425
Brandes' algorithm
Algorithm for finding important nodes in a graph In network theory, Brandes' algorithm is an algorithm for calculating the betweenness centrality of vertices in a graph. The algorithm was first published in 2001 by Ulrik Brandes. Betweenness centrality, along with other measures of centrality, is an important measure in many real-world networks, such as social networks and computer networks. Definitions. There are several metrics for the centrality of a node, one such metric being the "betweenness centrality". For a node formula_2 in a connected graph, the betweenness centrality is defined as:formula_3where formula_4 is the total number of shortest paths from node formula_5 to node formula_6, and formula_7 is the number of these paths which pass through formula_2. For an unweighted graph, the length of a path is considered to be the number of edges it contains. By convention, formula_8 whenever formula_9, since the only path is the empty path. Also, formula_10 if formula_2 is either formula_5 or formula_6, since shortest paths do not pass "through" their endpoints. The quantityformula_11is known as the "pair dependency" of formula_12 on formula_2, and represents the proportion of the shortest formula_5–formula_6 paths which travel via formula_2. The betweenness centrality is simply the sum of the pair dependencies over all pairs. As well as the pair dependency, it is also useful to define the ("single) dependency" on formula_2 , with respect to a particular vertex formula_5:formula_13,with which, we can obtain the concise formulationformula_14. Algorithm. Brandes' algorithm calculates the betweenness centrality of all nodes in a graph. For every vertex formula_5, there are two stages. Single-source shortest path. The number of shortest paths formula_15 between formula_5 and every vertex formula_2 is calculated using breadth-first search. The breadth-first search starts at formula_5, and the shortest distance formula_16 of each vertex from formula_5 is recorded, dividing the graph into discrete layers. Additionally, each vertex formula_2 keeps track of the set of vertices which in the preceding layer which point to it, formula_17. Described in set-builder notation, it can be written as:formula_18.This lends itself to a simple iterative formula for formula_15:formula_19,which essentially states that, if formula_2 is at depth formula_16, then any shortest path at depth formula_20 extended by a single edge to formula_2 becomes a shortest path to formula_2. Backpropagation. Brandes proved the following recursive formula for vertex dependencies:formula_21,where the sum is taken over all vertices formula_2 that are one edge further away from formula_5 than formula_22. This lemma eliminates the need to explicitly sum all of the pair dependencies. Using this formula, the single dependency of formula_5 on a vertex formula_22 at depth formula_23 is determined by the layer at depth formula_24. Furthermore, the order of summation is irrelevant, which allows for a bottom up approach starting at the deepest layer. It turns out that the dependencies of formula_5 on all other vertices formula_22 can be computed in formula_25 time. During the breadth-first search, the order in which vertices are visited is logged in a stack data structure. The backpropagation step then repeatedly pops off vertices, which are naturally sorted by their distance from formula_5, descending. 
For each popped node formula_2, we iterate over its predecessors formula_26: the contribution of formula_2 towards formula_27 is added, that is, formula_28. Crucially, every layer propagates its dependencies completely before moving to the layer with lower depth, due to the nature of breadth-first search. Once the propagation reaches back to formula_5, every vertex formula_2 now contains formula_29. These can simply be added to formula_30, since formula_14. After formula_31 iterations of "single-source shortest path" and "backpropagation", each formula_30 contains the betweenness centrality for formula_2. Pseudocode. The following pseudocode illustrates Brandes' algorithm on an unweighted directed graph.

algorithm Brandes("Graph") is
    for each "u" in "Graph.Vertices" do
        CB["u"] ← 0

    for each "s" in "Graph.Vertices" do
        for each "v" in "Graph.Vertices" do
            δ["v"] ← 0                   // Single dependency of s on v
            prev["v"] ← empty list       // Immediate predecessors of v during BFS
            σ["v"] ← 0                   // Number of shortest paths from s to v (s implied)
            dist["v"] ← null             // No paths are known initially,
        σ["s"] ← 1                       // except the start vertex
        dist["s"] ← 0

        "Q" ← queue containing only "s"  // Breadth-first search
        "S" ← empty stack                // Record the order in which vertices are visited

        // "Single-source shortest paths"
        while "Q" is not empty do
            "u" ← "Q".dequeue()
            "S".push("u")
            for each "v" in "Graph.Neighbours"["u"] do
                if dist["v"] = null then
                    dist["v"] ← dist["u"] + 1
                    "Q".enqueue("v")
                if dist["v"] = dist["u"] + 1 then
                    σ["v"] ← σ["v"] + σ["u"]
                    prev["v"].append("u")

        // "Backpropagation of dependencies"
        while "S" is not empty do
            "v" ← "S".pop()
            for each "u" in prev["v"] do
                δ["u"] ← δ["u"] + σ["u"] / σ["v"] * (1 + δ["v"])
            if "v" ≠ "s" then
                CB["v"] ← CB["v"] + δ["v"]   // Halved for undirected graphs

    return CB

Running time. The running time of the algorithm is expressed in terms of the number of vertices formula_31 and the number of edges formula_32. For each vertex formula_5, we run breadth-first search, which takes formula_1 time. Since the graph is connected, the formula_32 component subsumes the formula_31 term, as the number of edges is at least formula_33. In the backpropagation stage, every vertex is popped off the stack, and its predecessors are iterated over. However, since each predecessor entry corresponds to an edge in the graph, this stage is also bounded by formula_25. The overall running time of the algorithm is therefore formula_0, an improvement on the formula_34 time bounds achieved by prior algorithms. In addition, Brandes' algorithm improves on the space complexity of naive algorithms, which typically require formula_35 space. Brandes' algorithm only stores at most formula_32 predecessors, along with data for each vertex, making its extra space complexity formula_1. Variants. The algorithm can be generalised to weighted graphs by using Dijkstra's algorithm instead of breadth-first search. When operating on undirected graphs, the betweenness centrality may be divided by 2, to avoid double counting each path's reversed counterpart. Variants also exist to calculate different measures of centrality, including "betweenness" with paths of length at most formula_36, "edge betweenness", "load betweenness", and "stress betweenness".
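For concreteness, the pseudocode above can be transcribed almost line for line into Python. The sketch below is illustrative rather than a reference implementation: the adjacency-list representation (a dict mapping each vertex to a list of its out-neighbours in a directed, unweighted graph) and all identifiers are assumptions of the sketch.

from collections import deque

def brandes(graph):
    # Betweenness centrality of every vertex, following the pseudocode above.
    cb = {v: 0.0 for v in graph}

    for s in graph:
        # Single-source shortest paths (breadth-first search from s).
        sigma = {v: 0 for v in graph}    # number of shortest s-v paths
        dist = {v: None for v in graph}
        prev = {v: [] for v in graph}    # predecessors on shortest paths from s
        delta = {v: 0.0 for v in graph}  # dependency of s on v
        sigma[s], dist[s] = 1, 0
        queue, stack = deque([s]), []

        while queue:
            u = queue.popleft()
            stack.append(u)
            for v in graph[u]:
                if dist[v] is None:
                    dist[v] = dist[u] + 1
                    queue.append(v)
                if dist[v] == dist[u] + 1:
                    sigma[v] += sigma[u]
                    prev[v].append(u)

        # Backpropagation of dependencies, deepest vertices first.
        while stack:
            v = stack.pop()
            for u in prev[v]:
                delta[u] += sigma[u] / sigma[v] * (1 + delta[v])
            if v != s:
                cb[v] += delta[v]        # halve the final scores for undirected graphs

    return cb

# Example on a small directed graph given as an adjacency list (assumed input format).
g = {"a": ["b"], "b": ["c", "d"], "c": ["e"], "d": ["e"], "e": []}
print(brandes(g))   # {'a': 0.0, 'b': 3.0, 'c': 1.0, 'd': 1.0, 'e': 0.0}

For undirected graphs the same sketch applies if every edge is stored in both directions and the final scores are halved, as noted in the pseudocode.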
[ { "math_id": 0, "text": "O(|V||E|)" }, { "math_id": 1, "text": "O(|V|+|E|)" }, { "math_id": 2, "text": "v" }, { "math_id": 3, "text": "C_B(v)= \\sum_{s \\in V} \\sum_{t \\in V} \\frac{\\sigma_{st}(v)}{\\sigma_{st}}" }, { "math_id": 4, "text": "\\sigma_{st}" }, { "math_id": 5, "text": "s" }, { "math_id": 6, "text": "t" }, { "math_id": 7, "text": "\\sigma_{st}(v)" }, { "math_id": 8, "text": "\\sigma_{st} = 1" }, { "math_id": 9, "text": "s=t" }, { "math_id": 10, "text": "\\sigma_{st}(v) = 0" }, { "math_id": 11, "text": "\\delta_{st}(v) = \\frac{\\sigma_{st}(v)}{\\sigma_{st}}" }, { "math_id": 12, "text": "st" }, { "math_id": 13, "text": "\\delta_s(v) = \\sum_{t \\in V} \\delta_{st}(v)" }, { "math_id": 14, "text": "C_B(v) = \\sum_{s \\in V} \\delta_s(v)" }, { "math_id": 15, "text": "\\sigma_{sv}" }, { "math_id": 16, "text": "d(v)" }, { "math_id": 17, "text": "p(v)" }, { "math_id": 18, "text": "p(v) = \\{u \\in V \\mid (u, v) \\in E \\and d(u) + 1 = d(v)\\}" }, { "math_id": 19, "text": "\\sigma_{sv} = \\sum_{u \\in p(v)} \\sigma_{su}" }, { "math_id": 20, "text": "d(v)-1" }, { "math_id": 21, "text": "\\delta_s(u) = \\sum_{v \\mid u \\in p(v)} \\frac{\\sigma_{su}}{\\sigma_{sv}} \\cdot (1 + \\delta_s(v))" }, { "math_id": 22, "text": "u" }, { "math_id": 23, "text": "d(u)" }, { "math_id": 24, "text": "d(u)+1" }, { "math_id": 25, "text": "O(|E|)" }, { "math_id": 26, "text": "u \\in p(v)" }, { "math_id": 27, "text": "\\delta_s(u)" }, { "math_id": 28, "text": "\\frac{\\sigma_{su}}{\\sigma_{sv}} \\cdot (1 + \\delta_s(v))" }, { "math_id": 29, "text": "\\delta_s(v)" }, { "math_id": 30, "text": "C_B(v)" }, { "math_id": 31, "text": "|V|" }, { "math_id": 32, "text": "|E|" }, { "math_id": 33, "text": "|V|-1" }, { "math_id": 34, "text": "O(|V|^3)" }, { "math_id": 35, "text": "O(|V|^2)" }, { "math_id": 36, "text": "k" } ]
https://en.wikipedia.org/wiki?curid=76879425
768839
Copolymer
Polymer derived from more than one species of monomer In polymer chemistry, a copolymer is a polymer derived from more than one species of monomer. The polymerization of monomers into copolymers is called copolymerization. Copolymers obtained from the copolymerization of two monomer species are sometimes called "bipolymers". Those obtained from three and four monomers are called terpolymers and "quaterpolymers", respectively. Copolymers can be characterized by a variety of techniques such as NMR spectroscopy and size-exclusion chromatography to determine the molecular size, weight, properties, and composition of the material. Commercial copolymers include acrylonitrile butadiene styrene (ABS), styrene/butadiene co-polymer (SBR), nitrile rubber, styrene-acrylonitrile, styrene-isoprene-styrene (SIS) and ethylene-vinyl acetate, all of which are formed by chain-growth polymerization. Another production mechanism is step-growth polymerization, which is used to produce the nylon-12/6/66 copolymer of nylon 12, nylon 6 and nylon 66, as well as the copolyester family. Copolymers can be used to develop commercial goods or drug delivery vehicles. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; IUPAC definition copolymer: A polymer derived from more than one species of monomer. (See Gold Book entry for note.) Since a copolymer consists of at least two types of constituent units (also structural units), copolymers can be classified based on how these units are arranged along the chain. "Linear copolymers" consist of a single main chain and include alternating copolymers, statistical copolymers, and block copolymers. "Branched copolymers" consist of a single main chain with one or more polymeric side chains, and can be grafted, star shaped, or have other architectures. Reactivity ratios. The "reactivity ratio" of a growing copolymer chain terminating in a given monomer is the ratio of the reaction rate constant for addition of the same monomer and the rate constant for addition of the other monomer. That is, formula_0 and formula_1, where for example formula_2 is the rate constant for propagation of a polymer chain ending in monomer 1 (or A) by addition of monomer 2 (or B). The composition and structural type of the copolymer depend on these reactivity ratios r1 and r2 according to the Mayo–Lewis equation, also called the copolymerization equation or copolymer equation, for the relative instantaneous rates of incorporation of the two monomers. formula_3 Linear copolymers. Block copolymers. Block copolymers comprise two or more homopolymer subunits linked by covalent bonds. The union of the homopolymer subunits may require an intermediate non-repeating subunit, known as a junction block. Diblock copolymers have two distinct blocks; triblock copolymers have three. Technically, a block is a portion of a macromolecule, comprising many units, that has at least one feature which is not present in the adjacent portions. A possible sequence of repeat units A and B in a triblock copolymer might be ~A-A-A-A-A-A-A-B-B-B-B-B-B-B-A-A-A-A-A~. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; IUPAC definition block copolymer: A copolymer that is a block polymer. In the constituent macromolecules of a block copolymer, adjacent blocks are constitutionally different, i.e. adjacent blocks comprise constitutional unit derived from different species of monomer or from the same species of monomer but with a different composition or sequence distribution of constitutional units. 
Block copolymers are made up of blocks of different polymerized monomers. For example, polystyrene-b-poly(methyl methacrylate) or PS-b-PMMA (where b = block) is usually made by first polymerizing styrene, and then subsequently polymerizing methyl methacrylate (MMA) from the reactive end of the polystyrene chains. This polymer is a "diblock copolymer" because it contains two different chemical blocks. Triblocks, tetrablocks, multiblocks, etc. can also be made. Diblock copolymers are made using living polymerization techniques, such as atom transfer free radical polymerization (ATRP), reversible addition fragmentation chain transfer (RAFT), ring-opening metathesis polymerization (ROMP), and living cationic or living anionic polymerizations. An emerging technique is chain shuttling polymerization. The synthesis of block copolymers requires that both reactivity ratios are much larger than unity (r1 » 1, r2 » 1) under the reaction conditions, so that the terminal monomer unit of a growing chain tends to add a similar unit most of the time. The "blockiness" of a copolymer is a measure of the adjacency of comonomers vs their statistical distribution. Many or even most synthetic polymers are in fact copolymers, containing about 1-20% of a minority monomer. In such cases, blockiness is undesirable. A "block index" has been proposed as a quantitative measure of blockiness or deviation from random monomer composition. Alternating copolymers. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; IUPAC definition alternating copolymer: A copolymer consisting of macromolecule comprising two species of monomeric unit in alternating sequence. (See Gold Book entry for note.) An alternating copolymer has regular alternating A and B units, and is often described by the formula: -A-B-A-B-A-B-A-B-A-B-, or -(-A-B-)n-. The molar ratio of each monomer in the polymer is normally close to one, which happens when the reactivity ratios r1 and r2 are close to zero, as can be seen from the Mayo–Lewis equation. For example, in the free-radical copolymerization of styrene maleic anhydride copolymer, r1 = 0.097 and r2 = 0.001, so that most chains ending in styrene add a maleic anhydride unit, and almost all chains ending in maleic anhydride add a styrene unit. This leads to a predominantly alternating structure. A step-growth copolymer -(-A-A-B-B-)n- formed by the condensation of two bifunctional monomers A–A and B–B is in principle a perfectly alternating copolymer of these two monomers, but is usually considered as a homopolymer of the dimeric repeat unit A-A-B-B. An example is nylon 66 with repeat unit -OC-( CH2)4-CO-NH-(CH2)6-NH-, formed from a dicarboxylic acid monomer and a diamine monomer. Periodic copolymers. Periodic copolymers have units arranged in a repeating sequence. For two monomers A and B, for example, they might form the repeated pattern (A-B-A-B-B-A-A-A-A-B-B-B)n. Statistical copolymers. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; IUPAC definition statistical copolymer: A copolymer consisting of macromolecule in which the sequential distribution of the monomeric unit obeys known statistical laws. (See Gold Book entry for note.) In statistical copolymers the sequence of monomer residues follows a statistical rule. If the probability of finding a given type monomer residue at a particular point in the chain is equal to the mole fraction of that monomer residue in the chain, then the polymer may be referred to as a truly random copolymer (structure 3). 
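As a rough numerical illustration of the copolymer equation from the Reactivity ratios section, the sketch below evaluates the instantaneous incorporation ratio for the styrene–maleic anhydride reactivity ratios quoted above; the function name and the equimolar feed are assumptions of the sketch, not data from the article.

def mayo_lewis_ratio(m1, m2, r1, r2):
    # Instantaneous incorporation ratio d[M1]/d[M2] from the Mayo-Lewis (copolymer) equation.
    return (m1 * (r1 * m1 + m2)) / (m2 * (m1 + r2 * m2))

# Styrene (monomer 1) and maleic anhydride (monomer 2), assumed equimolar feed.
ratio = mayo_lewis_ratio(m1=0.5, m2=0.5, r1=0.097, r2=0.001)
print(round(ratio, 3))               # about 1.096: the two monomers are incorporated almost 1:1
print(round(ratio / (1 + ratio), 3)) # about 0.523: instantaneous mole fraction of styrene units

An incorporation ratio this close to one for such small reactivity ratios reflects the near-alternating behaviour described above.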
Statistical copolymers are dictated by the reaction kinetics of the two chemically distinct monomer reactants, and are commonly referred to interchangeably as "random" in the polymer literature. As with other types of copolymers, random copolymers can have interesting and commercially desirable properties that blend those of the individual homopolymers. Examples of commercially relevant random copolymers include rubbers made from styrene-butadiene copolymers and resins from styrene-acrylic or methacrylic acid derivatives. Copolymerization is particularly useful in tuning the glass transition temperature, which is important in the operating conditions of polymers; it is assumed that each monomer occupies the same amount of free volume whether it is in a copolymer or homopolymer, so the glass transition temperature (Tg) falls between the values for each homopolymer and is dictated by the mole or mass fraction of each component. A number of parameters are relevant in the composition of the polymer product; namely, one must consider the reactivity ratio of each component. Reactivity ratios describe whether the monomer reacts preferentially with a segment of the same type or of the other type. For example, a reactivity ratio that is less than one for component 1 indicates that this component reacts with the other type of monomer more readily. Given this information, which is available for a multitude of monomer combinations in the "Wiley Database of Polymer Properties", the Mayo-Lewis equation can be used to predict the composition of the polymer product for all initial mole fractions of monomer. This equation is derived using the Markov model, which only considers the last segment added as affecting the kinetics of the next addition; the Penultimate Model considers the second-to-last segment as well, but is more complicated than is required for most systems. When both reactivity ratios are less than one, there is an azeotropic point in the Mayo-Lewis plot. At this point, the mole fraction of monomer equals the composition of the component in the polymer. There are several ways to synthesize random copolymers. The most common synthesis method is free radical polymerization; this is especially useful when the desired properties rely on the composition of the copolymer rather than the molecular weight, since free radical polymerization produces relatively disperse polymer chains. Free radical polymerization is less expensive than other methods, and produces high-molecular weight polymer quickly. Several methods offer better control over dispersity. Anionic polymerization can be used to create random copolymers, but with several caveats: if carbanions of the two components do not have the same stability, only one of the species will add to the other. Additionally, anionic polymerization is expensive and requires very clean reaction conditions, and is therefore difficult to implement on a large scale. Less disperse random copolymers are also synthesized by ″living″ controlled radical polymerization methods, such as atom-transfer radical-polymerization (ATRP), nitroxide mediated radical polymerization (NMP), or reversible addition−fragmentation chain-transfer polymerization (RAFT). These methods are favored over anionic polymerization because they can be performed in conditions similar to free radical polymerization. The reactions require longer experimentation periods than free radical polymerization, but still achieve reasonable reaction rates. Stereoblock copolymers. 
In stereoblock copolymers the blocks or units differ only in the tacticity of the monomers. Gradient copolymers. In gradient copolymers the monomer composition changes gradually along the chain. Branched copolymers. There are a variety of architectures possible for nonlinear copolymers. Beyond grafted and star polymers discussed below, other common types of branched copolymers include brush copolymers and comb copolymers. Graft copolymers. Graft copolymers are a special type of branched copolymer wherein the side chains are structurally distinct from the main chain. Typically, the main chain is formed from one type of monomer (A) and branches are formed from another monomer (B), or the side-chains have constitutional or configurational features that differ from those in the main chain. The individual chains of a graft copolymer may be homopolymers or copolymers. Note that different copolymer sequencing is sufficient to define a structural difference, thus an A-B diblock copolymer with A-B alternating copolymer side chains is properly called a graft copolymer. For example, polystyrene chains may be grafted onto polybutadiene, a synthetic rubber which retains one reactive C=C double bond per repeat unit. The polybutadiene is dissolved in styrene, which is then subjected to free-radical polymerization. The growing chains can add across the double bonds of rubber molecules forming polystyrene branches. The graft copolymer is formed in a mixture with ungrafted polystyrene chains and rubber molecules. As with block copolymers, the quasi-composite product has properties of both "components." In the example cited, the rubbery chains absorb energy when the substance is hit, so it is much less brittle than ordinary polystyrene. The product is called high-impact polystyrene, or HIPS. Star copolymers. Star copolymers have several polymer chains connected to a central core. Microphase separation. Block copolymers can "microphase separate" to form periodic nanostructures, such as styrene-butadiene-styrene block copolymer. The polymer is known as Kraton and is used for shoe soles and adhesives. Owing to the microfine structure, transmission electron microscope or TEM was used to examine the structure. The butadiene matrix was stained with osmium tetroxide to provide contrast in the image. The material was made by living polymerization so that the blocks are almost monodisperse to create a regular microstructure. The molecular weight of the polystyrene blocks in the main picture is 102,000; the inset picture has a molecular weight of 91,000, producing slightly smaller domains. Microphase separation is a situation similar to that of oil and water. Oil and water are immiscible (i.e., they can phase separate). Due to the incompatibility between the blocks, block copolymers undergo a similar phase separation. Since the blocks are covalently bonded to each other, they cannot demix macroscopically like water and oil. In "microphase separation," the blocks form nanometer-sized structures. Depending on the relative lengths of each block, several morphologies can be obtained. In diblock copolymers, sufficiently different block lengths lead to nanometer-sized spheres of one block in a matrix of the second (e.g., PMMA in polystyrene). Using less different block lengths, a "hexagonally packed cylinder" geometry can be obtained. Blocks of similar length form layers (often called lamellae in the technical literature). Between the cylindrical and lamellar phase is the gyroid phase. 
The nanoscale structures created from block copolymers can potentially be used to create devices for computer memory, nanoscale-templating, and nanoscale separations. Block copolymers are sometimes used as a replacement for phospholipids in model lipid bilayers and liposomes for their superior stability and tunability. Polymer scientists use thermodynamics to describe how the different blocks interact. The product of the degree of polymerization, "n", and the Flory-Huggins interaction parameter, formula_4, gives an indication of how incompatible the two blocks are and whether they will microphase separate. For example, a diblock copolymer of symmetric composition will microphase separate if the product formula_5 is greater than 10.5. If formula_5 is less than 10.5, the blocks will mix and microphase separation is not observed. The incompatibility between the blocks also affects the solution behavior of these copolymers and their adsorption behavior on various surfaces. Block copolymers are able to self-assemble in selective solvents to form micelles among other structures. In thin films, block copolymers are of great interest as masks in the lithographic patterning of semiconductor materials for applications in high density data storage. A key challenge is to minimise the feature size and much research is in progress on this. Characterization. Characterization techniques for copolymers are similar to those for other polymeric materials. These techniques can be used to determine the average molecular weight, molecular size, chemical composition, molecular homogeneity, and physiochemical properties of the material. However, given that copolymers are made of base polymer components with heterogeneous properties, this may require multiple characterization techniques to accurately characterize these copolymers. Spectroscopic techniques, such as nuclear magnetic resonance spectroscopy, infrared spectroscopy, and UV spectroscopy, are often used to identify the molecular structure and chemical composition of copolymers. In particular, NMR can indicate the tacticity and configuration of polymeric chains while IR can identify functional groups attached to the copolymer. Scattering techniques, such as static light scattering, dynamic light scattering, and small-angle neutron scattering, can determine the molecular size and weight of the synthesized copolymer. Static light scattering and dynamic light scattering use light to determine the average molecular weight and behavior of the copolymer in solution whereas small-angle neutron scattering uses neutrons to determine the molecular weight and chain length. Additionally, x-ray scattering techniques, such as small-angle X-ray scattering (SAXS) can help determine the nanometer morphology and characteristic feature size of a microphase-separated block-copolymer or suspended micelles. Differential scanning calorimetry is a thermoanalytical technique used to determine the thermal events of the copolymer as a function of temperature. It can indicate when the copolymer is undergoing a phase transition, such as crystallization or melting, by measuring the heat flow required to maintain the material and a reference at a constantly increasing temperature. Thermogravimetric analysis is another thermoanalytical technique used to access the thermal stability of the copolymer as a function of temperature. This provides information on any changes to the physicochemical properties, such as phase transitions, thermal decompositions, and redox reactions. 
Size-exclusion chromatography can separate copolymers with different molecular weights based on their hydrodynamic volume. From there, the molecular weight can be determined from its relationship to hydrodynamic volume. Larger copolymers tend to elute first as they do not interact with the column as much. The collected material is commonly detected by light scattering methods, a refractometer, or a viscometer to determine the concentration of the eluted copolymer. Applications. Block copolymers. A common application of block copolymers is the development of thermoplastic elastomers (TPEs). Early commercial TPEs were developed from polyurethanes (TPUs), consisting of alternating soft segments and hard segments, and are used in automotive bumpers and snowmobile treads. Styrenic TPEs entered the market later, and are used in footwear, bitumen modification, thermoplastic blending, adhesives, and cable insulation and gaskets. Modifying the linkages between the blocks resulted in newer TPEs based on polyesters (TPES) and polyamides (TPAs), used in hose tubing, sporting goods, and automotive components. Amphiphilic block copolymers have the ability to form micelles and nanoparticles. Due to this property, amphiphilic block copolymers have garnered much attention in research on vehicles for drug delivery. Similarly, amphiphilic block copolymers can be used for the removal of organic contaminants from water, either through micelle formation or film preparation. Alternating copolymers. The styrene-maleic acid (SMA) alternating copolymer displays amphiphilicity depending on pH, allowing it to change conformations in different environments. Some conformations that SMA can take are random coils, compact globules, micelles, and nanodiscs. SMA has been used as a dispersing agent for dyes and inks, as a drug delivery vehicle, and for membrane solubilization. Copolymer engineering. Copolymerization is used to modify the properties of manufactured plastics to meet specific needs, for example to reduce crystallinity, modify the glass transition temperature, control wetting properties or improve solubility. It is a way of improving mechanical properties, in a technique known as rubber toughening. Elastomeric phases within a rigid matrix act as crack arrestors, and so increase the energy absorption when the material is impacted, for example. Acrylonitrile butadiene styrene is a common example. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "r_1 = \\frac{k_{11}}{k_{12}}" }, { "math_id": 1, "text": "r_2 = \\frac{k_{22}}{k_{21}}" }, { "math_id": 2, "text": "k_{12}" }, { "math_id": 3, "text": "\\frac{\\mathrm{d} \\left[ \\mathrm{M}_1 \\right]}{\\mathrm{d} \\left[ \\mathrm{M}_2 \\right]} = \\frac{\\left[ \\mathrm{M}_1 \\right] \\left( r_1 \\left[ \\mathrm{M}_1 \\right] + \\left[ \\mathrm{M}_2 \\right] \\right)}{\\left[ \\mathrm{M}_2 \\right] \\left( \\left[ \\mathrm{M}_1 \\right] + r_2 \\left[ \\mathrm{M}_2 \\right] \\right)}" }, { "math_id": 4, "text": "\\chi" }, { "math_id": 5, "text": "\\chi N" } ]
https://en.wikipedia.org/wiki?curid=768839
7689061
Terminal and nonterminal symbols
Categories of symbols in formal grammars In formal languages, terminal and nonterminal symbols are the lexical elements used in specifying the production rules constituting a formal grammar. "Terminal symbols" are the elementary symbols of the language defined as part of a formal grammar. "Nonterminal symbols" (or "syntactic variables") are replaced by groups of terminal symbols according to the production rules. The terminals and nonterminals of a particular grammar are in two completely separate sets. Terminal symbols. Terminal symbols are symbols that may appear in the outputs of the production rules of a formal grammar and which cannot be changed using the rules of the grammar. Applying the rules recursively to a source string of symbols will usually terminate in a final output string consisting only of terminal symbols. Consider a grammar defined by two rules. In this grammar, the symbol codice_0 is a terminal symbol and codice_1 is both a non-terminal symbol and the start symbol. The production rules for creating strings are as follows: Here codice_0 is a terminal symbol because no rule exists which would change it into something else. On the other hand, codice_1 has two rules that can change it, thus it is nonterminal. A formal language defined or "generated" by a particular grammar is the set of strings that can be produced by the grammar "and that consist only of terminal symbols". Diagram 1 illustrates a string that can be produced with this grammar. Nonterminal symbols. Nonterminal symbols are those symbols that can be replaced. They may also be called simply "syntactic variables". A formal grammar includes a "start symbol", a designated member of the set of nonterminals from which all the strings in the language may be derived by successive applications of the production rules. In fact, the language defined by a grammar is precisely the set of "terminal" strings that can be so derived. Context-free grammars are those grammars in which the left-hand side of each production rule consists of only a single nonterminal symbol. This restriction is non-trivial; not all languages can be generated by context-free grammars. Those that can are called context-free languages. These are exactly the languages that can be recognized by a non-deterministic push down automaton. Context-free languages are the theoretical basis for the syntax of most programming languages. Production rules. A grammar is defined by production rules (or just 'productions') that specify which symbols may replace which other symbols; these rules may be used to generate strings, or to parse them. Each such rule has a "head", or left-hand side, which consists of the string that may be replaced, and a "body", or right-hand side, which consists of a string that may replace it. Rules are often written in the form "head" → "body"; e.g., the rule "a" → "b" specifies that "a" can be replaced by "b". In the classic formalization of generative grammars first proposed by Noam Chomsky in the 1950s, a grammar "G" consists of the following components: formula_0 where formula_1 is the Kleene star operator and ∪ denotes set union, so formula_2 represents zero or more symbols, and N means one "nonterminal" symbol. That is, each production rule maps from one string of symbols to another, where the first string contains at least one nonterminal symbol. In the case that the body consists solely of the empty string, it may be denoted with a special notation (often Λ, e or ε) in order to avoid confusion. 
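The following Python sketch illustrates the distinction between terminal and nonterminal symbols and the application of production rules. The toy grammar used here (terminals 'a' and 'b', a single nonterminal and start symbol 'S', with rules S → aSb and S → ab) is invented for illustration and is not the codice_0/codice_1 example referred to above.

```python
# Minimal sketch: a toy context-free grammar with terminals {'a', 'b'},
# a single nonterminal 'S' (also the start symbol), and the rules
# S -> aSb and S -> ab.  This grammar is invented for illustration only.
import random

NONTERMINALS = {"S"}
TERMINALS = {"a", "b"}
RULES = {"S": ["aSb", "ab"]}  # head -> list of possible bodies
START = "S"

def derive(max_steps=10, seed=0):
    """Rewrite the leftmost nonterminal until only terminal symbols remain."""
    rng = random.Random(seed)
    string = START
    for _ in range(max_steps):
        nonterminal_positions = [i for i, s in enumerate(string) if s in NONTERMINALS]
        if not nonterminal_positions:
            return string  # a sentence of the language: terminal symbols only
        i = nonterminal_positions[0]
        body = rng.choice(RULES[string[i]])
        string = string[:i] + body + string[i + 1:]
    # Force termination by rewriting any remaining 'S' with the body 'ab'.
    return string.replace("S", "ab")

if __name__ == "__main__":
    for seed in range(3):
        print(derive(seed=seed))  # strings of the form a^n b^n, e.g. 'ab', 'aabb'
```

Every string returned by the sketch consists only of terminal symbols, which is what makes it a member of the language generated by the grammar.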
A grammar is formally defined as the ordered quadruple formula_4. Such a formal grammar is often called a rewriting system or a phrase structure grammar in the literature. Example. Backus–Naur form is a notation for expressing certain grammars. For instance, the following production rules in Backus–Naur form are used to represent an integer (which may be signed): <digit> ::= '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' In this example, the symbols (-,0,1,2,3,4,5,6,7,8,9) are terminal symbols and codice_9 and codice_10 are nonterminal symbols. Another example is: S → cAd A → a | ab In this example, the symbols a, b, c, d are terminal symbols and S, A are nonterminal symbols. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(\\Sigma \\cup N)^{*} N (\\Sigma \\cup N)^{*} \\rightarrow (\\Sigma \\cup N)^{*} " }, { "math_id": 1, "text": "{}^{*}" }, { "math_id": 2, "text": "(\\Sigma \\cup N)^{*}" }, { "math_id": 3, "text": "S \\in N" }, { "math_id": 4, "text": "\\langle N, \\Sigma, P, S\\rangle" } ]
https://en.wikipedia.org/wiki?curid=7689061
769021
Bioavailability
Pharmacological measurement In pharmacology, bioavailability is a subcategory of absorption and is the fraction (%) of an administered drug that reaches the systemic circulation. By definition, when a medication is administered intravenously, its bioavailability is 100%. However, when a medication is administered via routes other than intravenous, its bioavailability is lower due to intestinal epithelium absorption and first-pass metabolism. Thereby, mathematically, bioavailability equals the ratio of comparing the area under the plasma drug concentration curve versus time (AUC) for the extravascular formulation to the AUC for the intravascular formulation. AUC is used because AUC is proportional to the dose that has entered the systemic circulation. Bioavailability of a drug is an average value; to take population variability into account, deviation range is shown as ±. To ensure that the drug taker who has poor absorption is dosed appropriately, the bottom value of the deviation range is employed to represent real bioavailability and to calculate the drug dose needed for the drug taker to achieve systemic concentrations similar to the intravenous formulation. To dose without knowing the drug taker's absorption rate, the bottom value of the deviation range is used in order to ensure the intended efficacy, unless the drug is associated with a narrow therapeutic window. For dietary supplements, herbs and other nutrients in which the route of administration is nearly always oral, bioavailability generally designates simply the quantity or fraction of the ingested dose that is absorbed. Definitions. In pharmacology. Bioavailability is a term used to describe the percentage of an administered dose of a xenobiotic that reaches the systemic circulation. It is denoted by the letter "f" (or, if expressed in percent, by "F"). In nutritional science. In nutritional science, which covers the intake of nutrients and non-drug dietary ingredients, the concept of bioavailability lacks the well-defined standards associated with the pharmaceutical industry. The pharmacological definition cannot apply to these substances because utilization and absorption is a function of the nutritional status and physiological state of the subject, resulting in even greater differences from individual to individual (inter-individual variation). Therefore, bioavailability for dietary supplements can be defined as the proportion of the administered substance capable of being absorbed and available for use or storage. In both pharmacology and nutrition sciences, bioavailability is measured by calculating the area under curve (AUC) of the drug concentration time profile. In environmental sciences or science. Bioavailability is the measure by which various substances in the environment may enter into living organisms. It is commonly a limiting factor in the production of crops (due to solubility limitation or absorption of plant nutrients to soil colloids) and in the removal of toxic substances from the food chain by microorganisms (due to sorption to or partitioning of otherwise degradable substances into inaccessible phases in the environment). A noteworthy example for agriculture is plant phosphorus deficiency induced by precipitation with iron and aluminum phosphates at low soil pH and precipitation with calcium phosphates at high soil pH. Toxic materials in soil, such as lead from paint may be rendered unavailable to animals ingesting contaminated soil by supplying phosphorus fertilizers in excess. 
Organic pollutants such as solvents or pesticides may be rendered unavailable to microorganisms and thus persist in the environment when they are adsorbed to soil minerals or partition into hydrophobic organic matter. Absolute bioavailability. Absolute bioavailability compares the bioavailability of the active drug in systemic circulation following non-intravenous administration (i.e., after oral, buccal, ocular, nasal, rectal, transdermal, subcutaneous, or sublingual administration), with the bioavailability of the same drug following intravenous administration. It is the fraction of exposure to a drug (AUC) through non-intravenous administration compared with the corresponding intravenous administration of the same drug. The comparison must be dose normalized (e.g., account for different doses or varying weights of the subjects); consequently, the amount absorbed is corrected by dividing by the corresponding dose administered. In pharmacology, in order to determine absolute bioavailability of a drug, a pharmacokinetic study must be done to obtain a "plasma drug concentration vs time" plot for the drug after both intravenous (iv) and extravascular (non-intravenous, i.e., oral) administration. The absolute bioavailability is the dose-corrected area under curve ("AUC") non-intravenous divided by "AUC" intravenous. The formula for calculating the absolute bioavailability, "F", of a drug administered orally (po) is given below (where "D" is dose administered). formula_0 Therefore, a drug given by the intravenous route will have an absolute bioavailability of 100% ("f" = 1), whereas drugs given by other routes usually have an absolute bioavailability of "less" than one. When two different dosage forms containing the same active ingredient are compared in this way, the measure is called comparative bioavailability. Although knowing the true extent of systemic absorption (referred to as absolute bioavailability) is clearly useful, in practice it is not determined as frequently as one may think. The reason for this is that its assessment requires an "intravenous reference"; that is, a route of administration that guarantees all of the administered drug reaches systemic circulation. Such studies come at considerable cost, not least of which is the necessity to conduct preclinical toxicity tests to ensure adequate safety, as well as potential problems due to solubility limitations. These limitations may be overcome, however, by administering a very low dose (typically a few micrograms) of an isotopically labelled drug concomitantly with a therapeutic non-isotopically labelled oral dose (the isotopically labelled intravenous dose is sufficiently low so as not to perturb the systemic drug concentrations achieved from the non-labelled oral dose). The intravenous and oral concentrations can then be deconvoluted by virtue of their different isotopic constitution, and can thus be used to determine the oral and intravenous pharmacokinetics from the same dose administration. This technique eliminates pharmacokinetic issues with non-equivalent clearance as well as enabling the intravenous dose to be administered with a minimum of toxicology and formulation. The technique was first applied using stable isotopes such as 13C and mass spectrometry to distinguish the isotopes by mass difference. More recently, 14C labelled drugs are administered intravenously and accelerator mass spectrometry (AMS) is used to measure the isotopically labelled drug along with mass spectrometry for the unlabelled drug. 
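As a worked illustration of the dose-corrected AUC ratio given above, the following Python sketch computes absolute bioavailability; the function name and the numbers in the example are hypothetical and do not describe any real drug.

```python
# Minimal sketch of the dose-corrected AUC ratio for absolute bioavailability,
# F_abs = 100 * (AUC_po * D_iv) / (AUC_iv * D_po).
# The numerical values in the example are invented for illustration only.

def absolute_bioavailability(auc_po, dose_po, auc_iv, dose_iv):
    """Absolute bioavailability (%) from dose-corrected AUC values."""
    return 100.0 * (auc_po * dose_iv) / (auc_iv * dose_po)

if __name__ == "__main__":
    # Hypothetical study: 100 mg oral dose, 10 mg intravenous dose.
    f_abs = absolute_bioavailability(auc_po=40.0, dose_po=100.0,
                                     auc_iv=8.0, dose_iv=10.0)
    print(f"F_abs = {f_abs:.0f}%")  # 100 * (40 * 10) / (8 * 100) = 50%
```

The same dose-normalized ratio, with another formulation in place of the intravenous reference, gives the relative bioavailability discussed in the next section.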
There is no regulatory requirement to define the intravenous pharmacokinetics or absolute bioavailability however regulatory authorities do sometimes ask for absolute bioavailability information of the extravascular route in cases in which the bioavailability is apparently low or variable and there is a proven relationship between the pharmacodynamics and the pharmacokinetics at therapeutic doses. In all such cases, to conduct an absolute bioavailability study requires that the drug be given intravenously. Intravenous administration of a developmental drug can provide valuable information on the fundamental pharmacokinetic parameters of volume of distribution ("V") and clearance ("CL"). Relative bioavailability and bioequivalence. In pharmacology, relative bioavailability measures the bioavailability (estimated as the "AUC") of a formulation (A) of a certain drug when compared with another formulation (B) of the same drug, usually an established standard, or through administration via a different route. When the standard consists of intravenously administered drug, this is known as absolute bioavailability (see above). formula_1 Relative bioavailability is one of the measures used to assess bioequivalence ("BE") between two drug products. For FDA approval, a generic manufacturer must demonstrate that the 90% confidence interval for the ratio of the mean responses (usually of "AUC" and the maximum concentration, "C"max) of its product to that of the "brand name drug"[OB] is within the limits of 80% to 125%. Where "AUC" refers to the concentration of the drug in the blood over time "t" = 0 to "t" = ∞, "C"max refers to the maximum concentration of the drug in the blood. When "T"max is given, it refers to the time it takes for a drug to reach "C"max. While the mechanisms by which a formulation affects bioavailability and bioequivalence have been extensively studied in drugs, formulation factors that influence bioavailability and bioequivalence in nutritional supplements are largely unknown. As a result, in nutritional sciences, relative bioavailability or bioequivalence is the most common measure of bioavailability, comparing the bioavailability of one formulation of the same dietary ingredient to another. Factors influencing bioavailability. The absolute bioavailability of a drug, when administered by an extravascular route, is usually less than one (i.e., "F"&lt; 100%). Various physiological factors reduce the availability of drugs prior to their entry into the systemic circulation. Whether a drug is taken with or without food will also affect absorption, other drugs taken concurrently may alter absorption and first-pass metabolism, intestinal motility alters the dissolution of the drug and may affect the degree of chemical degradation of the drug by intestinal microflora. Disease states affecting liver metabolism or gastrointestinal function will also have an effect. Other factors may include, but are not limited to: Each of these factors may vary from patient to patient (inter-individual variation), and indeed in the same patient over time (intra-individual variation). In clinical trials, inter-individual variation is a critical measurement used to assess the bioavailability differences from patient to patient in order to ensure predictable dosing. Notes. ^ TH: One of the few exceptions where a drug shows "F" of over 100% is theophylline. 
If administered as an oral solution "F" is 111%, since the drug is completely absorbed and first-pass metabolism in the lung after intravenous administration is bypassed. ^ OB: Reference listed drug products (i.e., innovator's) as well as generic drug products that have been approved based on an Abbreviated New Drug Application are given in FDA's "Orange Book". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "F_\\mathrm{abs} = 100 \\cdot \\frac{AUC_\\mathrm{po} \\cdot D_\\mathrm{iv}}{AUC_\\mathrm{iv} \\cdot D_\\mathrm{po}}" }, { "math_id": 1, "text": "F_\\mathrm{rel} = 100 \\cdot \\frac{AUC_\\mathrm{A} \\cdot D_\\mathrm{B}}{AUC_\\mathrm{B} \\cdot D_\\mathrm{A}}" } ]
https://en.wikipedia.org/wiki?curid=769021
769022
Intermediate logic
Propositional logic extending intuitionistic logic In mathematical logic, a superintuitionistic logic is a propositional logic extending intuitionistic logic. Classical logic is the strongest consistent superintuitionistic logic; thus, consistent superintuitionistic logics are called intermediate logics (the logics are intermediate between intuitionistic logic and classical logic). Definition. A superintuitionistic logic is a set "L" of propositional formulas in a countable set of variables "p""i" satisfying the following properties: 1. all axioms of intuitionistic logic belong to "L"; 2. if "F" and "G" are formulas such that "F" and "F" → "G" both belong to "L", then "G" also belongs to "L" (closure under modus ponens); 3. if "F"("p"1, "p"2, ..., "p""n") is a formula of "L", and "G"1, "G"2, ..., "G""n" are any formulas, then "F"("G"1, "G"2, ..., "G""n") belongs to "L" (closure under substitution). Such a logic is intermediate if furthermore 4. "L" is not the set of all formulas. Properties and examples. There exists a continuum of different intermediate logics and just as many such logics exhibit the disjunction property (DP). Superintuitionistic or intermediate logics form a complete lattice with intuitionistic logic as the bottom and the inconsistent logic (in the case of superintuitionistic logics) or classical logic (in the case of intermediate logics) as the top. Classical logic is the only coatom in the lattice of superintuitionistic logics; the lattice of intermediate logics also has a unique coatom, namely SmL. The tools for studying intermediate logics are similar to those used for intuitionistic logic, such as Kripke semantics. For example, Gödel–Dummett logic has a simple semantic characterization in terms of total orders. Specific intermediate logics may be given by semantical description. Others are often given by adding one or more axioms to Examples include: = IPC + ¬¬"p" → "p" (Double-negation elimination, DNE) = IPC + (¬"p" → "p") → "p" (Consequentia mirabilis) = IPC + "p" ∨ ¬"p" (Principle of excluded middle, PEM) Generalized variants of the above (but actually equivalent principles over intuitionistic logic) are, respectively, = IPC + (¬"p" → ¬"q") → ("q" → "p") (inverse contraposition principle) = IPC + (("p" → "q") → "p") → "p" (Pierce's principle PP, compare to Consequentia mirabilis) = IPC + ("q" → "p") → ((¬"q" → "p") → "p") (another schema generalizing Consequentia mirabilis) = IPC + "p" ∨ ("p" → "q") (following from PEM via principle of explosion) = IPC + (¬"q" → "p") → ((("p" → "q") → "p") → "p") (a conditional PP) = IPC + ("p" → "q") ∨ ("q" → "p") (Dirk Gently’s principle, DGP, or linearity) = IPC + ("p" → ("q" ∨ "r")) → (("p" → "q") ∨ ("p" → "r")) (a form of independence of premise IP) = IPC + (("p" ∧ "q") → "r") → (("p" → "r") ∨ ("q" → "r")) (Generalized 4th De Morgan's law) = IPC + "p" ∨ ("p" → ("q" ∨ ¬"q")) = IPC + ¬¬"p" ∨ ¬"p" (weak PEM, a.k.a. WPEM) = IPC + ("p" → "q") ∨ (¬"p" → ¬"q") (a weak DGP) = IPC + ("p" → ("q" ∨ ¬"r")) → (("p" → "q") ∨ ("p" → ¬"r")) (a variant, with negation, of a form of IP) = IPC + ¬("p" ∧ "q") → (¬"q" ∨ ¬"p") (4th De Morgan's law) = IPC + ((¬¬"p" → "p") → ("p" ∨ ¬"p")) → (¬¬"p" ∨ ¬"p") (a conditional WPEM) = IPC + (¬"p" → ("q" ∨ "r")) → ((¬"p" → "q") ∨ (¬"p" → "r")) (the other variant, with negation, of a form of IP) This list is, for the most part, not any sort of ordering. For example, LC is known not to prove all theorems of SmL, but it does not directly compare in strength to BD"2". Likewise, e.g., KP does not compare to SL. 
The list of equalities for each logic is by no means exhaustive either. For example, as with WPEM and De Morgan's law, several forms of DGP using conjunction may be expressed. Even (¬¬"p" ∨ ¬"p") ∨ (¬¬"p" → "p"), a further weakening of WPEM, is not a theorem of IPC. It may also be worth noting that, taking all of intuitionistic logic for granted, the equalities notably rely on explosion. For example, over mere minimal logic, the principle PEM is already equivalent to Consequentia mirabilis, but there it does not imply the stronger DNE, nor PP, and it is not comparable to DGP. Going on: IPC + "pn" ∨ ("pn" → ("p""n"−1 ∨ ("p""n"−1 → ... → ("p"2 ∨ ("p"2 → ("p"1 ∨ ¬"p"1)))...))) LC + BD"n"−1 = LC + BC"n"−1 formula_0 formula_1 formula_2 formula_3 Furthermore: The propositional logics SL and KP do have the disjunction property DP. Kleene realizability logic and the strong Medvedev's logic do have it as well. There is no unique maximal logic with DP on the lattice. Note that if a consistent theory validates WPEM but still has independent statements when assuming PEM, then it cannot have DP. Semantics. Given a Heyting algebra "H", the set of propositional formulas that are valid in "H" is an intermediate logic. Conversely, given an intermediate logic it is possible to construct its Lindenbaum–Tarski algebra, which is then a Heyting algebra. An intuitionistic Kripke frame "F" is a partially ordered set, and a Kripke model "M" is a Kripke frame with a valuation such that formula_5 is an upper subset of "F". The set of propositional formulas that are valid in "F" is an intermediate logic. Given an intermediate logic "L" it is possible to construct a Kripke model "M" such that the logic of "M" is "L" (this construction is called the "canonical model"). A Kripke frame with this property may not exist, but a general frame always does. Relation to modal logics. Let "A" be a propositional formula. The "Gödel–Tarski translation" of "A" is defined recursively as follows: formula_6 formula_7 formula_8 formula_9 formula_10 If "M" is a modal logic extending S4 then ρ"M" = {"A" | "T"("A") ∈ "M"} is a superintuitionistic logic, and "M" is called a "modal companion" of ρ"M". In particular: For every intermediate logic "L" there are many modal logics "M" such that "L" = ρ"M".
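The Gödel–Tarski translation just described is a simple structural recursion, which the following Python sketch illustrates; the tuple-based formula representation is an arbitrary choice made for this example.

```python
# Minimal sketch of the Gödel–Tarski translation T described above.
# Formulas are nested tuples: ("var", name), ("not", A), ("and", A, B),
# ("or", A, B), ("imp", A, B); "box" stands for the modal necessity operator.
# The tuple representation is an arbitrary choice made for illustration.

def goedel_tarski(formula):
    kind = formula[0]
    if kind == "var":                         # T(p) = box p
        return ("box", formula)
    if kind == "not":                         # T(not A) = box not T(A)
        return ("box", ("not", goedel_tarski(formula[1])))
    if kind == "and":                         # T(A and B) = T(A) and T(B)
        return ("and", goedel_tarski(formula[1]), goedel_tarski(formula[2]))
    if kind == "or":                          # T(A or B) = T(A) or T(B)
        return ("or", goedel_tarski(formula[1]), goedel_tarski(formula[2]))
    if kind == "imp":                         # T(A -> B) = box (T(A) -> T(B))
        return ("box", ("imp", goedel_tarski(formula[1]), goedel_tarski(formula[2])))
    raise ValueError(f"unknown connective: {kind}")

if __name__ == "__main__":
    # Translate p -> (q or not p).
    f = ("imp", ("var", "p"), ("or", ("var", "q"), ("not", ("var", "p"))))
    print(goedel_tarski(f))
```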
[ { "math_id": 0, "text": "\\textstyle\\mathbf{IPC}+\\bigvee_{i=0}^n\\bigl(\\bigwedge_{j<i}p_j\\to p_i\\bigr)" }, { "math_id": 1, "text": "\\textstyle\\mathbf{IPC}+\\bigvee_{i=0}^n\\bigl(\\bigwedge_{j<i}p_j\\to\\neg\\neg p_i\\bigr)" }, { "math_id": 2, "text": "\\textstyle\\mathbf{IPC}+\\bigvee_{i=0}^n\\bigl(\\bigwedge_{j\\ne i}p_j\\to p_i\\bigr)" }, { "math_id": 3, "text": "\\textstyle\\mathbf{IPC}+\\bigwedge_{i=0}^n\\bigl(\\bigl(p_i\\to\\bigvee_{j\\ne i}p_j\\bigr)\\to\\bigvee_{j\\ne i}p_j\\bigr)\\to\\bigvee_{i=0}^np_i" }, { "math_id": 4, "text": "\\langle\\mathcal P(X)\\setminus\\{X\\},\\subseteq\\rangle" }, { "math_id": 5, "text": "\\{x\\mid M,x\\Vdash p\\}" }, { "math_id": 6, "text": " T(p_n) = \\Box p_n " }, { "math_id": 7, "text": " T(\\neg A) = \\Box \\neg T(A) " }, { "math_id": 8, "text": " T(A \\land B) = T(A) \\land T(B) " }, { "math_id": 9, "text": " T(A \\vee B) = T(A) \\vee T(B) " }, { "math_id": 10, "text": " T(A \\to B) = \\Box (T(A) \\to T(B)) " } ]
https://en.wikipedia.org/wiki?curid=769022
76906020
Discharge regime
Long-term annual pattern of a river's discharge Discharge regime, flow regime, or hydrological regime (commonly termed river regime, but that term is also used for other measurements) is the long-term pattern of annual changes to a river's discharge at a particular point. Hence, it shows how the discharge of a river at that point is expected to change over the year. The main factor affecting the regime is climate, along with relief, bedrock, soil and vegetation, as well as human activity. Like general trends can be grouped together into certain named groups, either by what causes them and the part of the year they happen (most classifications) or by the climate in which they most commonly appear (Beckinsale classification). There are many different classifications; however, most of them are localized to a specific area and cannot be used to classify all the rivers of the world. When interpreting such records of discharge, it is important to factor in the timescale over which the average monthly values were calculated. It is particularly difficult to establish a typical annual river regime for rivers with high interannual variability in monthly discharge and/or significant changes in the catchment's characteristics (e.g. tectonic influences or the introduction of water management practices). Overview. Maurice Pardé was the first to classify river regimes more thoroughly. His classification was based on what the primary reasons for such pattern are, and how many of them there are. According to this, he termed three basic types: Pardé split the simple regimes further into temperature-dependent (glacial, mountains snow melt, plains snow melt; latter two often called "nival") and rainfall-dependent or pluvial (equatorial, intertropical, temperate oceanic, mediterranean) categories. Beckinsale later more clearly defined the distinct simple regimes based on climate present in the catchment area and thus splitting the world into "hydrological regions". His main inspiration was the Köppen climate classification, and he also devised strings of letters to define them. However, the system was criticised as it based the regimes on climate instead of purely on discharge pattern and also lacked some patterns. Another attempt to provide classification of world regimes was made in 1988 by Heines et al., which was based purely on the discharge pattern and classified all patterns into one of 15 categories; however, the determination is sometimes contradictory and quite complex, and the distinction does not differentiate between simple, mixed or complex regimes as it determines the regime solely on the main peak, which is contradictory to commonly used system in the Alpine region. Hence, rivers with nivo-pluvial regimes are commonly split into two different categories, while most pluvio-nival regimes are all grouped into a single category along with complex regimes – the uniform regime, despite showing quite pronounced and regular yearly pattern. Moreover, it does not differentiate between temperature-dependant and rainfall-dependant regimes. Nonetheless, it added one new regime that was not present in Beckinsale's classification, the moderate mid-autumn regime with a peak in November (Northern Hemisphere) or May (Southern Hemisphere). This system too, is very rarely used. In later years, most of the research was only done in the region around the Alps, so that area is much more thoroughly researched than others, and most names for subclasses of regimes are for those found there. 
These were mostly further differentiated from Pardé's distinction. The most common names given, although they might be defined differently in different publications, are: The Pardé's differentiation of single regimes from mixed regimes is sometimes rather considered to be based on the number of peaks rather than the number of factors as it is more objective. Most of nival and even glacial regimes have some influence of rainfall and regimes considered pluvial have some influence of snowfall in regions with continental climate; see the coefficient of nivosity. The distinction between both classifications can be seen with the nivo-glacial regime, which is sometimes considered as a mixed regime, but is often considered as a simple regime in more detailed studies. However, many groupings of multiple pluvial or nival peaks are still considered a simple regime in some sources. Measurement of river regimes. River regimes, similarly to the climate, are compounded by averaging the discharge data for several years; ideally that should be 30 years or more, as with the climate. However, the data is much scarcer, and sometimes data for as low as eight years are used. If the flow is regular and shows very similar year-to-year pattern, that could be enough, but for rivers with irregular patterns or for those that are most of the time dry, that period has to be much longer for accurate results. This is especially the problem with wadis as they often have both traits. The discharge pattern is specific not only to a river, but also a point along a river as it can change with new tributaries and an increase in the catchment area. This data is then averaged for each month separately. Sometimes, the average maximum and minimum for each month is also added. But unlike climate, rivers can drastically range in discharge, from small creeks with mean discharges less than 0.1 cubic meters per second to the Amazon River, which has average monthly discharge of more than 200,000 cubic meters per second at its peak in May. For regimes, the exact discharge of a river in one month is not as important as is the relation to other monthly discharges measured at the same point along a particular river. And although discharge is still often used for showing seasonal variation, two other forms are more commonly used, the percentage of yearly flow and the Pardé coefficient. Percentage of yearly flow represents how much of the total yearly discharge the month contributes and is calculated by the following formula: formula_0, where formula_1 is the mean discharge of a particular month and formula_2is the mean yearly discharge. Discharge of an average month is formula_3 and the total of all months should add to 100% (or rather, roughly, due to rounding). Even more common is the Pardé coefficient, discharge coefficient or simply the coefficient, which is more intuitive as an average month would have a value of 1. Anything above that means there is bigger discharge than average and anything lower means that there is lower discharge than the average. It is calculated by the following equation: formula_4, where formula_1 is the mean discharge of a particular month and formula_2 is the mean yearly discharge. Pardé coefficients for all months should add to 12 and are without a unit. The data is often presented is a special diagram, called a hydrograph, or, more specifically, an annual hydrograph as it shows monthly discharge variation in a year, but no rainfall pattern. 
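The following Python sketch shows how the monthly percentage of yearly flow and the Pardé coefficients defined above might be computed from twelve mean monthly discharges; the discharge values in the example are invented to mimic a snow-melt (nival) peak and do not describe any real river.

```python
# Minimal sketch: monthly percentage of yearly flow and Pardé coefficients
# from twelve mean monthly discharges (m^3/s).  The example discharges are
# invented for illustration and do not describe any real river.

def monthly_percentages(q_monthly):
    """percentage_i = Q_i / (12 * Q_mean) * 100, summing to roughly 100%."""
    q_mean = sum(q_monthly) / len(q_monthly)
    return [q / (12 * q_mean) * 100 for q in q_monthly]

def parde_coefficients(q_monthly):
    """PK_i = Q_i / Q_mean, dimensionless and summing to 12."""
    q_mean = sum(q_monthly) / len(q_monthly)
    return [q / q_mean for q in q_monthly]

if __name__ == "__main__":
    # Hypothetical nival-type river: low winter flow, snow-melt peak in May/June.
    q = [10, 11, 18, 45, 80, 70, 40, 25, 18, 15, 12, 10]
    months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
              "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
    for m, pct, pk in zip(months, monthly_percentages(q), parde_coefficients(q)):
        print(f"{m}: {pct:5.2f}% of yearly flow, Pardé coefficient {pk:4.2f}")
```

Plotting either series against the months gives the annual hydrograph mentioned above, from which the maxima and minima that define the regime can be read off.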
The units used in a hydrograph can be either discharge, monthly percentage or Pardé coefficients. The shape of the graph is the same in any case, only the scale needs to be adjusted. From the hydrograph, maxima and minima are easy to spot and the regime can be determined more easily. Hence, they are a vital part for river regimes, just as climographs are for climate. Yearly coefficient. Similarly to Pardé's coefficient, there are also other coefficients that can be used to analyze the regime of a river. One possibility is to look how many times the discharge during the peak is larger than the discharge during the minimum, rather than the mean as with Pardé's coefficient. It is sometimes called the yearly coefficient and is defined as: formula_5, where formula_6 is the mean discharge of the month with the highest discharge and formula_7 is the mean discharge of the month with the lowest discharge. If formula_7 is 0, then the coefficient is undefined. Annual variability. Annual variability shows how much the peaks on average deviate from the perfectly uniform regime. It is calculated as the standard deviation of the mean discharge of months from the mean yearly discharge. That value is then divided by the mean yearly discharge and multiplied by 100%, i.e.: formula_8 The most uniform regimes have a value below 10%, while it can reach more than 150% for rivers with the most drastic peaks. Grimm coefficients. Grimm coefficients, used in Austria, are not defined for a single month, but for 'doppelmonats', i.e., for two consecutive months. The mean flow of both months – January and February, February and March, March and April, and so on – is added, still conserving 12 different values throughout the year. This is done since for nival regimes, this better correlates to different types of peak (nival, nivo-glacial, glacial etc.). They are defined as follows: formula_9 (Initial definition) formula_10 (Adapted definition so values are closer to Pardé's; version used on Wikipedia) formula_11, where formula_12. Coefficient of nivosity. Pardé and Beckinsale determined whether the peak is pluvio-nival, nivo-pluvial, nival or glacial based on the fact what percentage of the discharge during the warm season is contributed by the melt-water, and not by the time of the peak as it is common today. However, it has been calculated for few rivers. The values are the following: Factors affecting river regimes. There are multiple factors that determine when a river will have a greater discharge and when a smaller one. The most obvious factor is precipitation, since most rivers get their water supply in that way. However, temperature also plays a significant role, as well as the characteristics of its catchment area, such as altitude, vegetation, bedrock, soil and lake storage. An important factor is also the human factor as humans may either fully control the water supply by building dams and barriers, or partially by diverting water for irrigation, industrial and personal use. The factor that differentiates classification of river regimes from climate the most is that rivers can change their regime along its path due to a change of conditions and new tributaries. Climate. The primary factor affecting river regimes is the climate of its catchment area, both by the amount of rainfall and by the temperature fluctuations throughout the year. This has led Beckinsale to classify regimes based primarily on the climate. Although there is correlation, climate is still not fully reflected in a river regime. 
Moreover, a catchment area can span through more than one climate and lead to more complex interactions between the climate and the regime. A discharge pattern can closely resemble the rainfall pattern since rainfall in a river's catchment area contributes to its water flow, rise of the underground water and filling of lakes. There is some delay between the peak rainfall and peak discharge, which is also dependent on the type of soil and bedrock, since the water from rain must reach the gauging station for the discharge to be recorded. The time is naturally longer for bigger catchment areas. If the water from precipitation is frozen, such as snow or hail, it has to melt first, leading to longer delays and shallower peaks. The delay becomes heavily influenced by the temperature since temperatures below zero cause the snow to stay frozen until it becomes warmer in the spring, when temperatures rise and melt the snow, leading to a peak, which might be again a bit delayed. The time of the peak is determined by when the midday temperature sufficiently soars above 0, which is usually considered to be when the average temperature reaches above -3. In the mildest continental climates, bordering the oceanic climate, the peak is usually in March on the Northern Hemisphere or September on the Southern Hemisphere, but can be as late as August/February on the highest mountains and ice caps, where the flow also heavily varies throughout the day. Melting of glaciers alone can also supply large amounts of water even in areas where there is little to no precipitation, as in ice cap climate and cold dry and semi-dry climates. On the other side, high temperatures and sunny weather lead to a significant increase in evapotranspiration, either directly from river, or from moist soil and plants, leading to the fact that less precipitation reaches the river and that plants consume more water, respectively. For terrain in darker colors, the rate of evaporation is higher than for a terrain in lighter colors due to lower albedo. Relief. Relief often determines how sharp and how wide the nival peaks are, leading Pardé to already classify mountain nival and plain nival regimes separately. If the relief is rather flat, the snow will melt everywhere in a short period of time due to similar conditions, leading to a sharp peak about three months wide. However, if the terrain is hilly or mountainous, snow located in lowlands will melt first, with the temperature gradually decreasing with altitude (about 6 °C per 1000 m). Hence the peak is wider, and especially the decrease after the peak can extend all the way to late summer when the temperatures are highest. Due to this phenomenon, the precipitation in lowland areas might be rainfall, but snow in higher areas, leading to a peak quickly after the rainfall and another when the temperatures start to melt the snow. Another important aspect is altitude. At exceptionally high altitudes, atmosphere is thinner so the solar insolation is much greater, which is why Beckinsale differentiates between mountain nival and glacial from similar regimes found at higher latitudes. Additionally, steeper slopes lead to faster surface runoff, leading to more prominent peaks, while flat terrain allows for lakes to spread, which regulate the discharge of the river downstream. Larger catchment areas also lead to shallower peaks. Vegetation. Vegetation in general decreases surface runoff and consequently discharge of a river, and leads to greater infiltration. 
Forests dominated by trees that shed their leaves during winter have an annual pattern of the extent of water interception, which shapes the pattern in its own way. The impact of vegetation is noticeable in all areas but the driest and coldest, where vegetation is scarce. Vegetation growing in the river beds can drastically hinder the flow of water, especially in the summer, leading to smaller discharges. Soil and bedrock. The most important aspect of the ground in this regard is the permeability and water-holding capacity of the rocks and soils in the discharge basin. In general, the more the ground is permeable, the less pronounced the maxima and minima are since the rocks accumulate water during the wet season and release it during the dry season; lag time is also longer since there is less surface runoff. If the wet season is really pronounced, the rocks become saturated and fail to infiltrate excess water, so all rainfall is quickly released into the stream. On the other side, however, if the rocks are too permeable, as in the karst terrain, rivers might have a notable discharge only when the rocks are saturated or the groundwater level rises and would otherwise be dry with all the water accumulating in subterranean rivers or disappearing in ponors. Examples of rocks with high water-holding capacity include limestone, sandstone and basalt, while materials used in urban areas (such as asphalt and concrete) have very low permeability leading to flash floods. Human activity. Human factors can also greatly change discharge of a river. On one side, water can be extracted either directly from a river or indirectly from groundwater for the purposes of drinking and irrigation, among others, lowering the discharge. For the latter, the consumption usually spikes during the dry season or during crop growth (i.e., summer and spring). On the other side, waste waters are released into streams, increasing the discharge; however, they are more or less constant all year round so they do not impact the regime as much. Another important factor is the construction of dams, where a lot of water accumulates in a lake, making the minima and maxima less pronounced. In addition, the discharge of water is often in large part regulated in regard to other human needs, such as electricity production, meaning that the discharge of a river downstream of a dam can be completely different than upstream. Here, an example is given for the Aswan dam. As can be observed, the yearly coefficient is lower at the dam than upstream, showing the effect of the dam. Simple regimes. Simple regimes are hence only those that have exactly one peak; this does not hold for cases where both peaks are nival or both are pluvial, which are often grouped together into simple regimes. They are grouped into five categories: pluvial, tropical pluvial, nival, nivo-glacial and glacial. Pluvial regime. Pluvial regimes occur mainly in oceanic and mediterranean climates, such as the UK, New Zealand, southeastern USA, South Africa and the Mediterranean regions. Generally, peaks occur in colder season, from November to May on the Northern Hemisphere (although April and May occur in a small area near Texas) and from June to September on the Southern Hemisphere. Pardé had two different types for this category – the temperate pluvial and the Mediterranean regimes. The peak is due to rainfall in the colder period and the minimum is in summer due to higher evapotranspiration and usually less rainfall. 
The temperate pluvial regime (Beckinsale symbol CFa/b) usually has a milder minimum and the discharge is quite high also during the summer. Meanwhile, the Mediterranean regime (Beckinsale symbol CS) has a more pronounced minimum due to a lack of rainfall in the region, and rivers have a noticeably smaller discharge during summer, or even dry up completely. Beckinsale distinguished another pluvial regime, with a peak in April or May, which he denoted CFaT as it occurs almost solely around Texas, Louisiana and Arkansas. Tropical pluvial regime. The name for the regime is misleading; the regime commonly occurs anywhere the main rainfall is during summer. This includes the intertropical region, but also includes parts influenced by monsoon, extending north even to Russia and south to central Argentina. It is characterized by a strong peak during the warm period, with a maximum from May to December on the Northern Hemisphere and from January to June on the Southern Hemisphere. The regime therefore allows for a lot of variation, both in terms of when the peak occurs and how low the minimum is. Pardé additionally differentiated this category into two subtypes and Beckinsale split it into four. The most common such regime is Beckinsale's regime AM (for monsoon, as in Köppen classification), which is characterized by a period of low discharge for up to four months. It occurs in western Africa, the Amazon basin, and southeastern Asia. In more arid areas, the period of low water increases to six, seven months and up to nine, which Beckinsale classified as AW. The peak is hence narrower and greater. In dry climate, ephemeral streams that have irregular year-to-year patterns exist. Most of the time, it is dry and it only has discharge during flash floods. Beckinsale classifies it as BW, but only briefly mentions. Due to irregularity, the peak might be spread out or show multiple peaks, and could resemble other regimes. The previous three regimes are all called intertropical by Pardé but the next is also differentiated by him as it has two maxima instead of one. He termed the name equatorial regime, while Beckinsale used the symbol AF. It occurs in Africa around Cameroon and Gabon, and in Asia in Indonesia and Malaysia, where one peak is in October/November/December and another in April/May/June, sort of being symmetrical for both hemispheres. Interestingly, the same pattern is not observed in South America. Nival regime. Nival regime is characterized by a maximum which is contributed by the snow-melt as the temperatures increase above the melting point. Hence, the peaks occur in spring or summer. They occur in regions with continental and polar climate, which is on the Southern Hemisphere mostly limited to the Andes, Antarctica and minor outlying islands. Pardé split the regimes into two groups: the mountain nival and the plain nival regimes, which Beckinsale also expanded. Plain regimes have maxima that are more pronounced and narrow, usually up to three months, and the minimum is milder and mostly not much lower from other months apart from the peak. The minimum, if the regime is not transitioning to a pluvio-nival regime, is usually quickly after the maximum, while for mountain regimes, it is often right before. Such regimes are exceptionally rare on the Southern Hemisphere. Nival regimes are commonly intermittent in subarctic climate where the river freezes during winter. Plain nival regime. Beckinsale differentiates six plain nival to nivo-pluvial regimes, mainly based on when the peak occurs. 
If the peak occurs in March or April, Beckinsale called this a DFa/b regime, which correlates to Mader's transitional pluvial regime. There, it is defined more precisely that the peak is in March or April, with the second highest discharge in the other of those months, not February or May. This translates to a peak in September or October on the southern hemisphere. This regime occurs in most European plains and parts of St. Lawrence River basin. If the nivo-pluvial peak occurs later, in April or May (October or November on the Southern Hemisphere), followed by the discharge of the other month, the regime is transitional nival or DFb/c. This regime is rarer and occurs mostly in parts of Russia and Canada, but also at some plains at higher altitudes. In parts of Russia and Canada and on elevated plains, the peak can be even later, in May or June (November or December on the Southern Hemisphere). Beckinsale denoted this regime with DFc. Beckinsale also added another category, Dwd, for rivers that completely diminish during the winter due to cold conditions with a sharp maximum in the summer. Such rivers occur in Siberia and northern Canada. The peak can be from May to July on Northern Hemisphere or from November to January on Southern Hemisphere. Apart from that, he also added another category for regimes with pluvio-nival or nivo-pluvial maxima where the pluvial maximum corresponds to a Texan or early tropical pluvial regime, not the usual temperate pluvial. This regime occurs in parts of PRC and around Kansas. If this peak happens later, Beckinsale classified it as DWb/c. The peak can occur as late as September on the Northern Hemisphere or March on the Southern Hemisphere. Mountain nival regime. Pardé and Beckinsale both assigned only one category to the mountain nival regime (symbol HN), but Mader distinguishes several of them. If the peak occurs in April or May on Northern Hemisphere and October or November on Southern Hemisphere with the discharge of the other of those two months following, it is called transitional nival, common for lower hilly areas. If the peak is in May or June on the Northern Hemisphere, or November or December on Southern Hemisphere, followed by the other of those two, the regime is called mild nival. The regime which Mader just calls 'nival' is when the highest discharge is in June/December, followed by July/January, and then May/November. Nivo-glacial regime. The nivo-glacial regime occurs in areas where seasonal snow meets the permanent ice sheets of glaciers on top of mountains or at higher latitudes. Therefore, both melting of snow and ice from glaciers contribute to produce a maximum in early or mid summer. In turn, it could still be distinguished between plain and mountain regimes, but that distinction is rarely made despite being quite obvious. It is also characterized by great diurnal changes, and a sharp maximum. Pardé and Beckinsale did not distinguish this regime from glacial and nival regimes. Mader defines it as having a peak in June or July, followed by the other of the two, and then the August's discharge, which translates to a peak in December or January, followed by the other two and then February for Southern Hemisphere. Such regimes occur in the Alps, the Himalayas, Coast Mountains and southern Andes. Plain nivo-glacial regimes occur on Greenland, northern Canada and Svalbard. Glacial regime. 
The glacial regime is the most extreme variety of temperature-dependent regimes and occurs in areas where more than 20% of its catchment area is covered by glaciers. This is usually at altitudes over , but it can also happen in polar climates which was not explicitly mentioned by Pardé, who grouped both categories together. Rivers with this regime also experience great diurnal variations. The discharge is heavily dominated by the melting of glaciers, leading to a strong maximum in late summer and a really intense minimum during the rest of the year, unless it has major lake storage, such as the Rhône after the Lake Geneva or the Baker River, which is shown below. Mader defines it to have the highest discharge in July or August, followed by the other month. In really extreme cases (mostly on Antarctica), there could also be a plain glacial regime. Mixed regimes. Mixed or double regimes are regimes where one peak is due to a temperature-dependent factor (snow or ice melt) and one is due to rainfall. There are many possible combinations, but only some have been studied in more detail. They can also be split into two categories – plain (versions of Beckinsale's plain nival regimes with another peak) and mountain. They can be in general thought of as combinations of two simple regimes but the cold-season pluvial peak is usually in autumn, not in late winter as is common for temperate pluvial regime. Mixed regimes are usually split into two other categories: the nivo-pluvial and pluvio-nival regimes, the first having a nival peak in late spring (April to June on Northern Hemisphere, October to December on Southern Hemisphere) and the biggest minimum in the winter while the latter usually has a nival peak in early spring (March or April on Northern Hemisphere, September or October on Southern Hemisphere) and the biggest minimum in the summer. Plain mixed regime. Beckinsale did not really classify the regimes by the number of factors contributing to the discharge, so such regimes are grouped with simple regimes in his classification as they appear in close proximity to those regimes. For all of his six examples, mixed regimes can be found, although for DFa and DWd, that is quite rare. In the majority of cases, they are nivo-pluvial with the main minimum in winter, except for DFa/b. Mountain mixed regime. Mountain mixed regimes are thoroughly researched and quite common in the Alps, and rivers with such regimes rise in most mountain chains. Beckinsale does not distinguish them from plain regimes, however, they are classified rather different from his classification in newer sources. Mader classifies mixed regimes with the nival peaks corresponding to mild nival or Mader's nival as 'winter nival' and 'autumn nival', depending on the pluvial peak. The winter peak is usually small. In monsoonal areas, the peak can be in summer as well. Mader denoted only those regimes with nival peaks corresponding to transitional nival as 'nivo-pluvial'. Hrvatin in his distinction also differentiated between 'high mountain Alpine nivo-pluvial regime' and 'medium mountain Alpine nivo-pluvial regime', the first showing significant difference between the minima and the other not, although some regimes in his classification also have mild nival peaks. In Japan, the pluvial peak is in the summer. In Mader's classification, any regime with a transitional pluvial peak is pluvio-nival. Hrvatin also defines it further with a major overlap to Mader's classification. 
If minima are rather mild, then it is classified as 'Alpine pluvio-nival regime', if minima are more pronounced but the peaks are mild, then it is classified as 'Dinaric-Alpine pluvio-nival regime' and if the peaks are also pronounced, then it is 'Dinaric pluvio-nival regime'. His 'Pannonian pluvio-nival regime' corresponds to a plain mixed regime. Japan has mixed regimes with tropical pluvial peak. Complex regimes. Complex regimes is the catch-all category for all rivers where the discharge is influenced by many different factors that occur at different times of the year. For rivers that flow through many different climates and have many tributaries from different climates, their regime can become unrepresentative of any area the river's catchment area is in. Many of the world's longest rivers have such regimes, such as the Nile, the Congo, the St. Lawrence River and the Rhone. A special form of such regimes is the uniform regime, where all peaks and minima are extremely mild.
[ { "math_id": 0, "text": "percentage_i = {Q_{i}\\over 12 \\times {Q_{mean}}} \\times 100%" }, { "math_id": 1, "text": "Q_{i}" }, { "math_id": 2, "text": "Q_{mean}" }, { "math_id": 3, "text": "100/12 \\approx 8.33%" }, { "math_id": 4, "text": "PK_i = {Q_{i}\\over{Q_{mean}}}" }, { "math_id": 5, "text": "K_{year} = {Q_{i_{max}}\\over{Q_{i_{min}}}}, {Q_{i_{min}}} \\neq 0" }, { "math_id": 6, "text": "Q_{i_{max}}" }, { "math_id": 7, "text": "Q_{i_{min}}" }, { "math_id": 8, "text": "C_v^* = {\\sqrt {{\\sum_{i=1}^{12}(Q_i-Q_{mean})^2} \\over 12} \\over Q_{mean}} \\times 100%" }, { "math_id": 9, "text": "SK_{doppelmonat} = {Q_{doppelmonat}\\over{Q_{mean}}}" }, { "math_id": 10, "text": "SK_{doppelmonat} = {Q_{doppelmonat}\\over{2 \\times Q_{mean}}}" }, { "math_id": 11, "text": "SK_{year} = {Q_{{doppelmonat}_{max}}\\over{Q_{{doppelmonat}_{min}}}}, {Q_{{doppelmonat}_{min}}} \\neq 0" }, { "math_id": 12, "text": "Q_{doppelmonat} = Q_i + Q_{i+1}" } ]
https://en.wikipedia.org/wiki?curid=76906020
76910156
Didicosm
Short story by Greg Egan "Didicosm" is a science-fiction short story by Australian writer Greg Egan, first published in "Analog" in July/August 2023. Plot. As a child, Charlotte is shown the night sky by her father, who wants her to realize the truth about the endless worlds and possibilities in the universe. In one of his books, he reads about the idea of the universe repeating, but with changes occurring, and he later uses this thought to rationalize his own suicide. After her mother dies as well, Charlotte is brought to her grandmother and later wants to find the correct topology of the universe, which turns out to be a didicosm (Hantzsche-Wendt manifold). Her own student later comes up with a theoretical explanation involving quantum gravity, concluding that this shape is indeed canonical because it is the only platycosm with a finite first homology group. Charlotte returns to her partner thinking that she lives in the best possible universe. Background. While the 3-torus (formula_0), also one of the ten platycosms, can be depicted as a space-filling repetition of the exact same cube with the same orientation (hence a cube with respective opposite sides identified with the same alignment), the didicosm can be depicted as a chessboard-like filling featuring cubes flipped and turned upside down. Both illustrations are featured in the short story. In 1984, Alexei Starobinsky and Yakov Zeldovich at the Landau Institute in Moscow proposed a cosmological model where the shape of the universe is a 3-torus. The first homology of the didicosm is formula_1. (For the 3-torus it is formula_2.) The derivation is explained by Greg Egan on his website, which also lists four academic papers taken as the scientific basis of the short story: "Describing the platycosms" by John Conway and Jean-Paul Rossetti, "The Hantzsche-Wendt Manifold in Cosmic Topology" by Ralf Aurich and Sven Lustig, "On the coverings of the Hantzsche-Wendt Manifold" by Grigory Chelnokov and Alexander Mednykh, as well as "How Surfaces Intersect in Space" by J. Scott Carter. Reception. Reviews. Sam Tomaino, writing for "SFRevu", thinks that the short story "gets a little technical but [has] an interesting idea". Mike Bickerdike, writing for "Tangent Online", states that "Didicosm" is "somewhat unusual as an SF short story, because while it is technically a story, it is more a speculation on whether Hantzsche–Wendt manifolds apply in cosmological topology." He claims that "there is a story here, but it is rather weak, and serves only as a vehicle" for the main idea, which is an "impenetrable subject for those [...] who lack a higher degree in theoretical physics or the relevant mathematics." Awards. The short story was a finalist for the Analog Analytical Laboratory (AnLab) Award for best novelette in 2023. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "T^3=\\mathbb R^3/\\mathbb Z^3" }, { "math_id": 1, "text": "\\mathbb Z_4^2" }, { "math_id": 2, "text": "\\mathbb Z^3" } ]
https://en.wikipedia.org/wiki?curid=76910156
76910378
Binet-Simon Intelligence Test
Historical intelligence test The Binet-Simon Intelligence Test was the first working intelligence test. The development of the test started in 1905 with Alfred Binet and Théodore Simon in Paris, France. Binet and Simon published articles about the test multiple times in Binet's scientific journal "L'Année Psychologique", twice in 1905, once in 1908, and once in 1911 (this time, Binet was the sole author). Revisions and publications of the Binet-Simon Intelligence Test by Binet and Simon stopped in 1911, the year of Alfred Binet's death. The outcomes of the test were related to academic performance. The Binet-Simon was popular because people felt that it was able to measure higher and more complex mental functions in situations that closely resembled real life. This was in contrast to previous attempts at tests of intelligence, which were designed to measure specific and separate "faculties" of the mind. Binet and Simon's intelligence test was well received among contemporary psychologists because it fit the generally accepted view that intelligence includes many different mental functions (e.g. language proficiency, imagination, memory, sensory discrimination). Precursors. The precursors to the Binet-Simon Intelligence Test were craniological and anthropometric research, especially the anthropometric research by Francis Galton and James McKeen Cattell. Galton and Cattell discontinued their research when they realised that their measurements of human bodies did not correlate to academic performance. As a result, French psychology excluded methods that measured intelligence based on correlations between physical measurements of the body and academic success. Development. Before working on the test, Alfred Binet had raised two daughters, on whom he also conducted studies of intelligence between 1900 and 1902. He had already written a considerable number of scientific articles on individual differences, intelligence, magnetism, hypnosis, and many other psychological topics. Théodore Simon had studied medicine and was beginning his PhD at the Perray-Vaucluse psychiatric hospital when he contacted Binet in 1898, at the age of 25. Simon's supervisor, Dr Blin, tasked Simon with finding a better assessment to measure children's intelligence than the available medical methods. Binet and Simon started working together, first looking at the relationship between skull measurements and intelligence, and later abandoning this anthropometric approach in favour of psychological testing. The development of Binet and Simon's intelligence test started in Paris in 1905. The test was intended to identify mental abnormality in French primary school children. These children, referred to as feebleminded or mentally retarded, supposedly caused trouble in French primary schools because they were unable to follow standard education and were disturbing the rest of their classmates. The law on compulsory primary education for children aged six to thirteen was passed in 1882, and in 1904, primary school teachers started complaining about children of abnormal intelligence in the press and at meetings. These complaints were picked up by French national politicians, resulting in the establishment of the Bourgeois Commission in 1904 by the French Minister of Public Instruction. This Commission aimed to study the measures to be taken so that these abnormal children could be identified.
The Bourgeois Commission was staffed by specialists in the study of children with mental abnormality (psychiatrists, psychologists), members of the public education system and representatives from the interior ministry. Binet joined the Commission because of his presidency of La Société libre pour l'étude psychologique de l'enfant (Free Society for the Psychological Study of the Child). The Société was a scientific collaboration mainly between scientists and educators. Binet volunteered as the Secretary of the Commission. Binet constructed the first intelligence test to limit the influence of psychiatrists. Psychiatrists such as Bourneville (who was also on the Bourgeois Commission) argued for taking abnormal children out of schools and placing them in medical asylums to receive special education from medical practitioners. Identifying and treating the abnormal had, until that point, been a psychiatric domain, but Binet wanted to keep these children in schools and looked for a way for psychologists to become the authority. Binet was supported in this attempt by the Société, other collaborators and friends. Despite Binet's position as Secretary of the Bourgeois Commission, he was unable to prevent the Commission from recommending that only medical and educational experts should decide on the intellectual level of children and whether they should go to a special school. However, the recommendation never turned into legislation, nor did Bourneville's plans for creating special education classes in asylums. The Société lobbied against both these plans, and Binet was encouraged to come up with a better alternative to measure the difference between normal and abnormal children. Versions. There have been three versions of the Binet-Simon Intelligence Test. The first, from 1905, was designed to detect abnormal children. The second version, in 1908, added the notion of mental age, making it possible to calculate how many years a child was intellectually behind. The last test, from 1911, retained the notion of mental age and was a revised version of the 1908 version. The tests from 1908 and 1911 were later used by American psychologists, such as Henry H. Goddard and Lewis Terman. 1905 version. The 1905 version aimed to distinguish children with normal and 'abnormal' intelligence. Binet and Simon grouped children into: 'idiocy', 'imbecility', 'debility' and 'normality'. Each category had its own set of tasks, organised from lowest to highest difficulty. Typically, the administration of the full test only took fifteen minutes. Binet and Simon assumed that an 'idiot' had basic skills. Six subtests on the 1905 test first measured these basic skills. The second part of the Binet-Simon Intelligence test aimed to differentiate between 'idiocy' and 'imbecility'. If a child could not pass all the subtests in this section, the test was discontinued, and the child was labelled an 'idiot'. This part was made up of five subtests. The third part of the 1905 test was intended to differentiate between imbecility and debility. If a child could not pass all the tests from this part, they were labelled an imbecile. This part included 15 subtests. The fourth and final part of the test distinguished between debility and normality. This part had four subtests. The 1905 test was mainly based on Binet's work from the previous 15 years and was constructed within a few weeks. This bundle of tests was the first metric scale of intelligence ("échelle métrique de l'intelligence"). 1908 version. Historian Annette Mülberger argued that the 1908 version was the first successful version of the test.
The published text could easily be read as a manual for an intelligence test. The test had become a scale, and the subtests were arranged from easiest to most difficult. The test also showed in detail the four to eight tasks that children should be able to perform at 11 different ages, ranging from 3 to 13. The test was constructed by giving the subtests to children of a specific (chronological) age group. If 75% of these children passed, the subtest would be assigned to that age group. The test measured what Binet termed mental age, the age level at which a child could perform. If a child, for example, could perform all the tasks meant for a 10-year-old, but not those meant for an 11-year-old, they would have a mental age of 10. The mental age was established independently from the chronological age, meaning that a child could have the mental age of a 10-year-old and the chronological age of a 12-year-old. It was also possible for a child to have a higher mental age than their chronological age. If the mental age of a child was two years behind their chronological age, the child was classified as abnormal. Binet and Simon saw a lag of two or more years as a warning sign of low intelligence, which required special attention, first by providing remedial education. The 1908 version of the Binet-Simon test was seen as a scientific and objective method capable of delivering factual statements about the complex mental phenomenon of human intellectual capabilities. 1911 version. In 1911, Binet revised the 1908 version without Simon. Simon did not contribute to the 1911 version because he had moved to northern France to work at the Saint-Yon asylum and on his book "L'Aliéné, l'Asile, l'Infirmier" [The Alienated, the Asylum, the Nurse]. In the 1911 version, no new tests were added. The number of subtests was evened out, with five tasks per age group. Binet created new categories for 15-year-olds and adults by moving the most difficult subtests to these new categories. This 1911 publication was made up mainly of clarifications and reactions to comments from teachers and researchers, together with new data collected from using the test in a small number of schools. Binet died in 1911, and Simon did not work on any new test versions. Translated version for the United States 1911. There were two differences between this practical guide from 1911 for the US and the original French test from 1908, apart from it being a translation from French to English. First, the translated version included a category for idiocy (questions 1-6), which measured a mental age of 1-2, plus the addition of tests 17a and 50a. Secondly, this version focused on distinguishing between different levels of mental ability. Arranged from lowest to highest, these were: 'idiot', 'imbecile', 'moron' and 'normal'. The test was to be administered in the following way. Before starting the test, the person conducting the test, the experimenter, would note down the biodata of the subject. These biodata were: name, birth year, place of birth, nationality, sex, health, physical defects, school grade, school standing (years pedagogically retarded or accelerated), the date of the examination and the name of the experimenter who administered the test. After the test was finished, the experimenter indicated the subject's mental condition during the test.
The general results were first reported as the number of 'passed tests of mental age', then the chronological (actual) age, and then the number of years' difference between the two. Lastly, the experimenter had to indicate the degree of mentality. The labels an examiner could choose were 'supernormal', 'normal', 'subnormal', 'backward' or 'feeble-minded'. These labels could also be linked to the labels 'low, middle or high idiot', 'low, middle or high imbecile', and 'low, middle or high moron'. Validity. When the test was published in France, its validity was accepted because it could distinguish between normal and intellectually slow children and because the scores on the test increased with a child's age. Moreover, Binet and Simon presented evidence that the order of tasks was linear and that a child's score on the test would correlate to their academic performance. In the United States, Henry Goddard (1866-1957) was enthralled by the Binet-Simon Test's efficiency. He is quoted as saying: 'No one can use the tests on any fair number of children without becoming convinced that whatever defects or faults they may have, and no one can claim that they are perfect, the tests do come amazingly near [to] what we feel to be the truth in regard to the mental status of any child tested.' Who used it? The test's original purpose was to distinguish between normal and abnormal children in French primary schools. The test was administered by French public school teachers. Critiques. Yerkes's Critique. Robert Yerkes (1876-1956) criticized the Binet-Simon test for assuming that everyone taking it is a native speaker. He also argued that an intelligence test should have a universal performance scale, not an age-graded scale. He further stated that social and biological differences should be considered when deciding the norms used to evaluate performance. Mülberger's Critique. Mülberger (2020) has pointed out that an intelligence test is a theory-laden tool that 'does something' with those who interact with it. The Binet-Simon Intelligence Test instrumentalized intelligence for psychologists for the first time. Consequently, ontological assumptions and conceptual understandings were black-boxed. As a result, the concept of intelligence became what the test was measuring. The test has built-in sets of norms and values that assume the kind of mental work a normal citizen should be able to perform. Mülberger (2020) recognizes the methodological diligence with which psychologists assessed and assured the test's validity, reliability and replicability, but argues that despite these efforts the test was dangerous and had the ability to bring about scepticism and fear. Gould's Critique. In Stephen Jay Gould's book 'The Mismeasure of Man', Gould argues that even though the Binet-Simon test was used for a morally good goal (to identify children who needed extra help), the way the test was subsequently used in the United States by psychologists such as Spearman, Terman, Goddard, Burt and Brigham was not ethical. Gould argues that in the United States, the intelligence test was used to help discriminate against foreigners and members of the working class and to help those of higher social classes. Influence. The Binet-Simon intelligence test was the model for future intelligence tests. Many later intelligence tests also combined different mental tests to arrive at a single score of intelligence. Specific items from the Binet-Simon test were also re-used for other intelligence tests.
Theodore Simon was the biggest supporter of the test after Binet's passing in 1911, advocating for its international use. Shortly after, other famous pedagogues and psychologists, such as Édouard Claparède and Ovide Decroly, joined him in his advocacy of the test. Through their large and far-reaching network, information about the Binet-Simon test's reliability and efficiency spread rapidly during the period when schooling became public and graded. A group of physicians from Barcelona hired by the City Hall were among the first to use a translation of the 1908 version. They tested 420 boys and girls to identify physically, mentally, and socially lagging schoolchildren. Parents and children hoped for low scores on the test because it would mean the children would be chosen to go to state-sponsored summer camps in the countryside. Henry H. Goddard became aware of the Binet-Simon test while travelling through Europe, and became the greatest promoter of the Binet-Simon Intelligence test in the United States. In 1916, Goddard instructed his laboratory field worker, Elizabeth S. Kite, to translate Binet and Simon's complete work on the intelligence test into English. As the head of research at Vineland Training School for Feeble-minded Girls and Boys, Goddard led a movement that would result in the widespread use of the Binet-Simon test in American institutions. William Stern's Intelligence Quotient (1912). In 1912, William Stern (1871-1938) standardized the test scores of the Binet-Simon Intelligence Test. He achieved this by dividing the mental age by the chronological age. The number this calculation produced was the widely known intelligence quotient or IQ. For many people, both testers and tested, this number became a person's precise built-in intelligence. Théodore Simon criticized this revision of the Binet-Simon test, arguing that it betrayed the test's objective. formula_0 Robert Yerkes &amp; James Bridges (1915). Robert Yerkes and James Bridges revised the year scale of the Binet-Simon test to a point scale, becoming the Yerkes-Bridges Point Scale Examination. Yerkes and Bridges achieved this by grouping items of similar content. For example, the Binet-Simon had multiple memory span digit tests spread over different age groups; Yerkes and Bridges took all those memory span digit tests and created a new category for them, arranged from easiest to most difficult. They created many other groups using this technique. This adaptation would become the model for the later Wechsler Scales. Lewis Terman's Stanford-Binet Intelligence Scales (1916). The Stanford-Binet Intelligence Scales was a revised version of the Binet-Simon Intelligence test by Lewis Terman. He started his revision in 1910 and published it in 1916. Terman used the 1908 version of the Binet-Simon test for his revision. The most important addition is the replacement of mental age with the intelligence quotient (IQ). The Stanford-Binet Intelligence test also gained items. The first version of the Stanford-Binet had 90 items, and a later revised version had 129.
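For illustration, a minimal Python sketch of Stern's quotient as defined by formula_0 is given below. The scaling by 100, which produces the familiar IQ numbers, was a later convention popularised with Terman's Stanford-Binet revision; the function name is purely illustrative.

```python
# Stern's 1912 "mental quotient": mental age divided by chronological age.
def stern_quotient(mental_age: float, chronological_age: float) -> float:
    return mental_age / chronological_age

# A child performing at the level of a 10-year-old at the chronological age of 12:
q = stern_quotient(10, 12)
print(round(q, 3))        # 0.833 -> mental age two years behind chronological age
print(round(q * 100))     # 83 on the later, 100-based IQ scale
```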
[ { "math_id": 0, "text": "IQ = \\frac{\\text{mental age}}{\\text{chronological age}} " } ]
https://en.wikipedia.org/wiki?curid=76910378
76913611
Milnor conjecture (Ricci curvature)
In 1968 John Milnor conjectured that the fundamental group of a complete Riemannian manifold is finitely generated if its Ricci curvature stays nonnegative. In an oversimplified interpretation, such a manifold has a finite number of "holes". A version for almost-flat manifolds follows from work of Gromov. In two dimensions, formula_0 has a finitely generated fundamental group as a consequence of the fact that if formula_1 for a noncompact formula_0, then it is flat or diffeomorphic to formula_2, by work of Cohn-Vossen from 1935. In three dimensions the conjecture holds because a noncompact formula_3 with formula_1 is either diffeomorphic to formula_4 or has a universal cover that splits isometrically. The diffeomorphism part is due to Schoen-Yau (1982), while the other part is by Liu (2013). Another proof of the full statement has been given by Pan (2020). In 2023, Bruè, Naber and Semola disproved the conjecture for six or more dimensions in two preprints, constructing counterexamples that they described as "smooth fractal snowflakes". The status of the conjecture for four or five dimensions remains open. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "M^2" }, { "math_id": 1, "text": "\\operatorname{Ric}>0" }, { "math_id": 2, "text": "\\mathbb{R}^2" }, { "math_id": 3, "text": "M^3" }, { "math_id": 4, "text": "\\mathbb{R}^3" } ]
https://en.wikipedia.org/wiki?curid=76913611
769148
Ludwig Prandtl
German physicist (1875–1953) Ludwig Prandtl (4 February 1875 – 15 August 1953) was a German fluid dynamicist, physicist and aerospace scientist. He was a pioneer in the development of rigorous systematic mathematical analyses which he used to underpin the science of aerodynamics, and which have come to form the basis of the applied science of aeronautical engineering. In the 1920s, he developed the mathematical basis for the fundamental principles of subsonic aerodynamics in particular, and in general up to and including transonic velocities. His studies identified the boundary layer and produced the thin-airfoil and lifting-line theories. The Prandtl number was named after him. Early years. Prandtl was born in Freising, near Munich, on 4 February 1875. His mother suffered from a lengthy illness and, as a result, Ludwig spent more time with his father, a professor of engineering. His father also encouraged him to observe nature and think about his observations. Prandtl entered the Technische Hochschule Munich in 1894 and graduated with a Ph.D. under the guidance of Professor August Foeppl in six years. His thesis was "On Tilting Phenomena, an Example of Unstable Elastic Equilibrium" (1900). After university, Prandtl went to work at the Maschinenfabrik Augsburg-Nürnberg to improve a suction device for shavings removal in the manufacturing process. While working there, he discovered that the suction tube did not work because the lines of flow separated from the walls of the tube, so the expected pressure rise in the sharply-divergent tube never occurred. This phenomenon had been previously noted by Daniel Bernoulli in a similar hydraulic case. Prandtl recalled that this discovery led to the reasoning behind his boundary-layer approach to resistance in slightly-viscous fluids. Later years. In 1901 Prandtl became a professor of fluid mechanics at the technical school in Hannover, later the Technical University Hannover and then the University of Hannover. It was here that he developed many of his most important theories. On August 8, 1904, he delivered a groundbreaking paper, "Über Flüssigkeitsbewegung bei sehr kleiner Reibung" ("On the Motion of Fluids with Very Little Friction"), at the Third International Mathematics Congress in Heidelberg. In this paper, he described the boundary layer and its importance for drag and streamlining. The paper also described flow separation as a result of the boundary layer, clearly explaining the concept of stall for the first time. Several of his students made attempts at closed-form solutions, but failed, and in the end the approximation contained in his original paper remains in widespread use. The effect of the paper was so great that Prandtl would succeed Hans Lorenz as director of the Institute for Technical Physics at the University of Göttingen later in the year. In 1907, during his time at Göttingen, Prandtl was tasked with establishing a new facility for model studies of motorized airships, called the Motorluftschiffmodell-Versuchsanstalt (MVA) and renamed the Aerodynamische Versuchsanstalt (AVA) in 1919. The facility was focused on wind tunnel measurements of airship models with the goal of finding shapes with minimal air resistance. During WWI, it was used as a large research establishment with many tasks including lift and drag on airfoils, aerodynamics of bombs, and cavitation on submarine propeller blades. In 1925, the university spun off his research arm to create the Kaiser Wilhelm Institute for Flow Research (now the Max Planck Institute for Dynamics and Self-Organization).
Due to the complexity of Prandtl's boundary layer ideas in his 1904 paper, the spread of the concept was initially slow. Many people failed to adopt the idea due to a lack of understanding. There was a halt on new boundary layer discoveries until 1908, when two of his students at Göttingen, Blasius and Boltze, released their dissertations on the boundary layer. Blasius' dissertation explained what happens in the boundary layer when a flat plate is placed parallel to a uniform stream. Boltze's research was similar to Blasius' but applied Prandtl's theory to spherical shapes instead of flat objects. Prandtl expanded upon the ideas in his students' dissertations to include a thermal boundary layer associated with heat transfer. There would be three more papers from Göttingen researchers regarding the boundary layer released by 1914. For reasons similar to those affecting Prandtl's 1904 paper, these first seven papers on the boundary layer would be slow to spread outside of Göttingen. Partially due to World War I, there would be a lack of papers published regarding the boundary layer until another of Prandtl's students, Theodore von Kármán, published a paper in 1921 on the momentum integral equation across the boundary layer. Following earlier leads by Frederick Lanchester from 1902–1907, Prandtl worked with Albert Betz and Max Munk on the problem of a useful mathematical tool for examining lift from "real world" wings. The results were published in 1918–1919, known as the Lanchester–Prandtl wing theory. He also made specific additions to study cambered airfoils, like those on World War I aircraft, and published a simplified thin-airfoil theory for these designs. This work led to the realization that on any wing of finite length, wing-tip effects became very important to the overall performance and characterization of the wing. Considerable work was included on the nature of induced drag and wingtip vortices, which had previously been ignored. Prandtl showed that an elliptical spanwise lift distribution is the most efficient, giving the minimum induced drag for a given span. These tools enabled aircraft designers to make meaningful theoretical studies of their aircraft before they were built. Prandtl later extended his theory to describe a bell-like lift distribution, reducing the loads near the tip of the wings by washing out the wing tips until negative downwash was obtained, which gave the minimum induced drag for any given wing structural weight. However, this new lift distribution drew less interest than the elliptical distribution and was initially ignored in most practical aircraft designs. This concept has been rediscovered by other researchers and has become increasingly important (see also the Prandtl-D experimental aircraft). Prandtl and his student Theodor Meyer developed the first theories of supersonic shock waves and flow in 1908. The Prandtl–Meyer expansion fans allowed for the construction of supersonic wind tunnels. He had little time to work on the problem further until the 1920s, when he worked with Adolf Busemann and created a method for designing a supersonic nozzle in 1929. Today, all supersonic wind tunnels and rocket nozzles are designed using the same method. A full development of supersonics would have to wait for the work of Theodore von Kármán, a student of Prandtl at Göttingen. Prandtl developed the concept of "circulation", which proved to be particularly important for the hydrodynamics of ship propellers.
He did most of the experimental work at his lab in Göttingen from 1910 to 1918 with his assistant Albert Betz and student Max Munk. Most of his discoveries related to circulation would be kept secret from the western world until after World War I. Prior to World War I, the Society of German Natural Scientists and Physicians (GDNÄ) provided the only forum for applied mathematicians, physicists, and engineers in German-speaking countries to meet and exchange ideas. In 1920, they met in Bad Nauheim and came to the conclusion that, given their experience during the war, there was a need for a new umbrella organization for the applied sciences. In the same year, physicists primarily from industrial laboratories formed a new society, the German Society for Technical Physics (DGTP). In September 1921, the two societies held a meeting with the German Mathematical Society (DMV) in Jena. In its first volume, ZAMM (Journal of Applied Mathematics and Mechanics) stated that at this meeting, "for the first time, applied mathematics and mechanics was coming to its own to a larger extent". This journal advertised the common goals of Prandtl, Theodore von Kármán, Richard von Mises, and Hans Reissner. In addition to the foundation of ZAMM, the GAMM (International Association of Applied Mathematics and Mechanics) was also formed through the joint efforts of Prandtl and his peers. After these initial meetings of GAMM, it became clear that there was now a new international community of mathematicians, "scientific engineers", and physicists. Other work examined the problem of compressibility at high subsonic speeds, known as the Prandtl–Glauert correction. This became very useful during World War II as aircraft began approaching supersonic speeds for the first time. He also worked on meteorology, plasticity and structural mechanics. He also made significant contributions to the field of tribology. After investigating instabilities from 1921 to 1929, Prandtl moved on to exploring developed turbulence. This was also being investigated by Kármán, resulting in a race to formulate a solution for the velocity profile in developed turbulence. Regarding the professional rivalry that started between the two, Kármán commented: “I came to realize that ever since I had come to Aachen my old professor and I were in a kind of world competition. The competition was gentlemanly, of course. But it was first-class rivalry nonetheless, a kind of Olympic games, between Prandtl and me, and beyond that between Göttingen and Aachen. The ‘playing field’ was the Congress of Applied Mechanics. Our ‘ball’ was the search for a universal law of turbulence.” Around 1930, the race ended in a draw as both men concluded that the inverse square root of the skin friction coefficient was related to the logarithm of the product of the Reynolds number and the skin friction coefficient, as seen below, where k and C are constants. formula_0 Prandtl and von Kármán's work on the boundary layer was influential and adopted by aerodynamic and hydrodynamic experts around the world after WWI. In May 1932, the International Conference on Hydromechanical Problems of Ship Propulsion was held in Hamburg. Günther Kempf showcased a number of experiments at the conference which confirmed many of the theoretical discoveries of von Kármán and Prandtl. Prandtl and the Third Reich. After Hitler's rise to power and the establishment of the Third Reich, Prandtl continued his role as director of the Kaiser Wilhelm Institute for Flow Research.
During this period, the Nazi air ministry, led by Hermann Göring, often used Prandtl's international reputation as a scientist to promote Germany's scientific agenda. Prandtl appears to have happily served as an ambassador for the Nazi regime, writing in 1937 to a NACA representative: "I believe that Fascism in Italy and National Socialism in Germany represent very good beginnings of new thinking and economics." Prandtl's support for the regime is apparent in his letters to G. I. Taylor and his wife in 1938 and 1939. Referring to Nazi Germany's treatment of Jews, Prandtl wrote "The struggle, which Germany unfortunately had to fight against the Jews, was necessary for its self-preservation." Prandtl also claimed that "If there will be war, the guilt to have caused it by political measures is this time unequivocally on the side of England." As a member of the German Physical Society (DPG), Prandtl assisted Carl Ramsauer in drafting the DPG Petition in 1941. The DPG Petition was published in 1942 and argued that physics in Germany was falling behind that of the United States due to the rejection of so-called "Jewish physics" (relativity and quantum theory) by German physicists. After the publication of the DPG Petition, belief in the superiority of "German physics" declined, allowing German students to study these new fields in school. Death and afterwards. Prandtl worked at Göttingen until he died on 15 August 1953. His work in fluid dynamics is still used today in many areas of aerodynamics and chemical engineering. He is often referred to as the father of modern aerodynamics. The crater Prandtl on the far side of the Moon is named in his honor. The Ludwig-Prandtl-Ring is awarded by the Deutsche Gesellschaft für Luft- und Raumfahrt in his honor for outstanding contributions in the field of aerospace engineering. In 1992, Prandtl was inducted into the International Air &amp; Space Hall of Fame at the San Diego Air &amp; Space Museum. Notable students. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
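The universal turbulent skin-friction law mentioned above (formula_0) is implicit in the friction coefficient, so in practice it is solved numerically for a given Reynolds number. The sketch below does this in Python with SciPy; the calibration k = 0.242, C = 0 corresponds to one classical flat-plate fit of this form (the Kármán–Schoenherr formula) and is an assumption of the example, not a value taken from this article.

```python
# Solve k / sqrt(C_f) = log10(Re * C_f) + C for C_f at a given Reynolds number.
import math
from scipy.optimize import brentq

def skin_friction(Re, k=0.242, C=0.0):
    f = lambda cf: k / math.sqrt(cf) - math.log10(Re * cf) - C
    return brentq(f, 1e-4, 1e-1)   # bracket spans typical turbulent values

for Re in (1e6, 1e7, 1e8):
    print(f"Re = {Re:.0e}  ->  C_f = {skin_friction(Re):.5f}")
# The friction coefficient decreases slowly (logarithmically) with Reynolds number.
```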
[ { "math_id": 0, "text": "\\frac{k}{\\sqrt{c_{f}}} = \\log_{10}{(Re * C_{f})} + C " } ]
https://en.wikipedia.org/wiki?curid=769148
769163
Astroid
Curve generated by rolling a circle inside another circle with 4x or (4/3)x the radius In mathematics, an astroid is a particular type of roulette curve: a hypocycloid with four cusps. Specifically, it is the locus of a point on a circle as it rolls inside a fixed circle with four times the radius. By double generation, it is also the locus of a point on a circle as it rolls inside a fixed circle with 4/3 times the radius. It can also be defined as the envelope of a line segment of fixed length that moves while keeping an end point on each of the axes. It is therefore the envelope of the moving bar in the Trammel of Archimedes. Its modern name comes from the Greek word for "star". It was proposed, originally in the form of "Astrois", by Joseph Johann von Littrow in 1838. The curve had a variety of names, including tetracuspid (still used), cubocycloid, and paracycle. It is nearly identical in form to the evolute of an ellipse. Equations. If the radius of the fixed circle is "a" then the equation is given by formula_0 This implies that an astroid is also a superellipse. Parametric equations are formula_1 The pedal equation with respect to the origin is formula_2 the Whewell equation is formula_3 and the Cesàro equation is formula_4 The polar equation is formula_5 The astroid is a real locus of a plane algebraic curve of genus zero. It has the equation formula_6 The astroid is, therefore, a real algebraic curve of degree six. Derivation of the polynomial equation. The polynomial equation may be derived from Leibniz's equation by elementary algebra: formula_0 Cube both sides: formula_7 Cube both sides again: formula_8 But since: formula_9 It follows that formula_10 Therefore: formula_11 or formula_12 Properties. The area enclosed by the astroid is formula_13, and the length of the curve is formula_14. The volume of the solid of revolution of the enclosed area about the x-axis is formula_15, and the area of the corresponding surface of revolution is formula_16. The astroid has four cusp singularities in the real plane, the points on the star. It has two more complex cusp singularities at infinity, and four complex double points, for a total of ten singularities. The dual curve to the astroid is the cruciform curve with equation formula_17 The evolute of an astroid is an astroid twice as large. The astroid has only one tangent line in each oriented direction, making it an example of a hedgehog. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
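The closed-form results quoted above can be verified directly from the parametrization formula_1; the following Python/SymPy sketch checks the degree-six polynomial equation, the enclosed area and the perimeter. The by-hand simplification of the speed to 3a sin(t) cos(t) on the first quadrant is the only step supplied outside the article's formulas.

```python
import sympy as sp

a, t = sp.symbols('a t', positive=True)
x = a * sp.cos(t)**3
y = a * sp.sin(t)**3

# The parametrization satisfies the degree-six polynomial equation:
sextic = sp.expand((x**2 + y**2 - a**2)**3 + 27 * a**2 * x**2 * y**2)
print(sp.expand(sextic.subs(sp.cos(t)**2, 1 - sp.sin(t)**2)))   # 0

# Enclosed area via Green's theorem, (1/2) * integral of (x dy - y dx):
area = sp.Rational(1, 2) * sp.integrate(x * sp.diff(y, t) - y * sp.diff(x, t),
                                        (t, 0, 2 * sp.pi))
print(sp.simplify(area))                                         # 3*pi*a**2/8

# Perimeter: |r'(t)| = 3a*|sin(t)*cos(t)|, i.e. 3a*sin(t)*cos(t) on the first quadrant.
quarter = sp.integrate(3 * a * sp.sin(t) * sp.cos(t), (t, 0, sp.pi / 2))
print(4 * quarter)                                               # 6*a
```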
[ { "math_id": 0, "text": "x^{2/3} + y^{2/3} = a^{2/3}. " }, { "math_id": 1, "text": "\n\\begin{align}\n x = a\\cos^3 t &= \\frac{a}{4} \\left( 3\\cos \\left(t\\right) + \\cos \\left(3t\\right)\\right), \\\\[2ex]\n y = a\\sin^3 t &= \\frac{a}{4} \\left( 3\\sin \\left(t\\right) - \\sin \\left(3t\\right) \\right).\n\\end{align}\n" }, { "math_id": 2, "text": "r^2 = a^2 - 3p^2," }, { "math_id": 3, "text": "s = {3a \\over 4} \\cos 2\\varphi," }, { "math_id": 4, "text": "R^2 + 4s^2 = \\frac{9a^2}{4}." }, { "math_id": 5, "text": "r = \\frac{a}{\\left(\\cos^{2/3}\\theta + \\sin^{2/3}\\theta\\right)^{3/2}}." }, { "math_id": 6, "text": "\\left(x^2 + y^2 - a^2\\right)^3 + 27 a^2 x^2 y^2 = 0. " }, { "math_id": 7, "text": "\\begin{align}\nx^{6/3} + 3x^{4/3}y^{2/3} + 3x^{2/3}y^{4/3} + y^{6/3} &= a^{6/3} \\\\[1.5ex]\nx^2 + 3x^{2/3}y^{2/3} \\left(x^{2/3} + y^{2/3}\\right) + y^2 &= a^2 \\\\[1ex]\nx^2 + y^2 - a^2 &= -3x^{2/3}y^{2/3} \\left(x^{2/3} + y^{2/3}\\right)\n \\end{align}" }, { "math_id": 8, "text": "\\left(x^2 + y^2 - a^2\\right)^3 = -27 x^2 y^2 \\left(x^{2/3} + y^{2/3}\\right)^3" }, { "math_id": 9, "text": "x^{2/3} + y^{2/3} = a^{2/3} \\," }, { "math_id": 10, "text": "\\left(x^{2/3} + y^{2/3}\\right)^3 = a^2." }, { "math_id": 11, "text": "\\left(x^2 + y^2 - a^2\\right)^3 = -27 x^2 y^2 a^2" }, { "math_id": 12, "text": "\\left(x^2 + y^2 - a^2\\right)^3 + 27 x^2 y^2 a^2 = 0. " }, { "math_id": 13, "text": "\\frac{3}{8} \\pi a^2" }, { "math_id": 14, "text": "6a" }, { "math_id": 15, "text": "\\frac{32}{105}\\pi a^3" }, { "math_id": 16, "text": "\\frac{12}{5}\\pi a^2" }, { "math_id": 17, "text": " x^2 y^2 = x^2 + y^2." } ]
https://en.wikipedia.org/wiki?curid=769163
76916805
Anafunctor
Mathematical notion An anafunctor is a notion introduced by Michael Makkai for ordinary categories as a generalization of functors. In category theory, some statements require the axiom of choice, but the axiom of choice can sometimes be avoided when using an anafunctor. For example, the statement "every fully faithful and essentially surjective functor is an equivalence of categories" is equivalent to the axiom of choice, but we can usually obtain the same statement without the axiom of choice by using anafunctors instead of functors. Definition. Span formulation of anafunctors. Let X and A be categories. An anafunctor F with domain (source) X and codomain (target) A, written formula_1, is given by a category formula_0 of specifications together with two functors formula_3 and formula_4, forming a span of ordinary functors formula_5, where formula_2 is required to be surjective on objects and fully faithful. Set-theoretic definition. An anafunctor formula_6 can equivalently be described by a class formula_0 of specifications for formula_7, together with maps formula_8 and formula_9, satisfying the following conditions: for formula_10, the object formula_11 is thought of as the value of formula_7 at formula_12 specified by s; every object formula_13 admits at least one specification, the class of its specifications being written formula_14, that is formula_15, with the value at a specification written formula_16 for formula_17; for objects formula_18, specifications formula_19 and formula_20, and every morphism formula_21 of formula_22, a morphism formula_23 of formula_24 is assigned; this assignment satisfies formula_25 for every specification x and, for objects formula_26 with specifications formula_27 and formula_28 (and any specification y of Y) and composable morphisms f : X → Y and g : Y → Z, the composition rule formula_29. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
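A toy, purely finite illustration of the set-theoretic data above is sketched below in Python; the names and the tiny example categories are invented for illustration and are not from any library. The source category has a single object with only its identity arrow, the target category has two isomorphic objects, and the anafunctor records both possible values without choosing between them.

```python
# Specifications, with sigma/tau picking out the source and target objects.
specs = ['s1', 's2']                        # |F|
sigma = {'s1': '*', 's2': '*'}              # sigma : |F| -> Ob(X)
tau   = {'s1': 'a1', 's2': 'a2'}            # tau   : |F| -> Ob(A)

# F_{x,y}(f) for the only arrow f = id_* of X: one morphism of A per pair of specs.
F_arrow = {
    ('s1', 's1', 'id_*'): 'id_a1',
    ('s2', 's2', 'id_*'): 'id_a2',
    ('s1', 's2', 'id_*'): 'phi: a1 -> a2',      # the coherence isomorphism
    ('s2', 's1', 'id_*'): 'phi^-1: a2 -> a1',
}
# Identities go to identities on matching specifications, and the two coherence
# morphisms compose to identities, so the compatibility conditions hold.
print(F_arrow[('s1', 's2', 'id_*')])
```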
[ { "math_id": 0, "text": "|F|" }, { "math_id": 1, "text": "F:X \\xrightarrow{a} A" }, { "math_id": 2, "text": "F_0" }, { "math_id": 3, "text": "F_0:|F| \\rightarrow X" }, { "math_id": 4, "text": "F_1:|F| \\rightarrow A" }, { "math_id": 5, "text": "X \\leftarrow |F| \\rightarrow A" }, { "math_id": 6, "text": "F: X \\xrightarrow{a} A" }, { "math_id": 7, "text": "F" }, { "math_id": 8, "text": "\\sigma : |F| \\to \\mathrm{Ob} (X)" }, { "math_id": 9, "text": "\\tau : |F| \\to \\mathrm{Ob} (A)" }, { "math_id": 10, "text": "s \\in |F|" }, { "math_id": 11, "text": "\\tau (s)" }, { "math_id": 12, "text": "\\sigma (s)" }, { "math_id": 13, "text": "X \\in \\mathrm{Ob} (X)" }, { "math_id": 14, "text": "|F| \\; X" }, { "math_id": 15, "text": "\\{s \\in |F| : \\sigma (s) = X\\}" }, { "math_id": 16, "text": "F_{s} (X)" }, { "math_id": 17, "text": "s \\in |F| \\; X" }, { "math_id": 18, "text": "X, \\; Y \\in \\mathrm{Ob} (X)" }, { "math_id": 19, "text": "x \\in |F| \\; X" }, { "math_id": 20, "text": "y \\in |F| \\; Y" }, { "math_id": 21, "text": "f : X \\to Y" }, { "math_id": 22, "text": "\\mathrm{Arr (X)}" }, { "math_id": 23, "text": "F_{x,y} (f) : F_{x} (X) \\to F_{y} (Y)" }, { "math_id": 24, "text": "A" }, { "math_id": 25, "text": "F_{x,x} (\\mathrm{id}_x) = \\mathrm{id}_{F_{x}X}" }, { "math_id": 26, "text": "X, Y, Z \\in \\mathrm{Ob} (X)" }, { "math_id": 27, "text": "x \\in |F| \\; X" }, { "math_id": 28, "text": "z \\in |F| \\; Z," }, { "math_id": 29, "text": "F_{x,z} (gf) = F_{y,z} (g) \\circ F_{x,y} (f)\n" } ]
https://en.wikipedia.org/wiki?curid=76916805
769176
Deltoid curve
Roulette curve made from circles with radii that differ by factors of 3 or 1.5 In geometry, a deltoid curve, also known as a tricuspoid curve or Steiner curve, is a hypocycloid of three cusps. In other words, it is the roulette created by a point on the circumference of a circle as it rolls without slipping along the inside of a circle with three or one-and-a-half times its radius. It is named after the capital Greek letter delta (Δ) which it resembles. More broadly, a "deltoid" can refer to any closed figure with three vertices connected by curves that are concave to the exterior, making the interior points a non-convex set. Equations. A hypocycloid can be represented (up to rotation and translation) by the following parametric equations formula_0 formula_1 where "a" is the radius of the rolling circle, "b" is the radius of the circle within which the aforementioned circle is rolling and "t" ranges from zero to 6π. (In the illustration above, "b = 3a", which traces the deltoid.) In complex coordinates this becomes formula_2. The variable "t" can be eliminated from these equations to give the Cartesian equation formula_3 so the deltoid is a plane algebraic curve of degree four. In polar coordinates this becomes formula_4 The curve has three singularities, cusps corresponding to formula_5. The parameterization above implies that the curve is rational, which implies it has genus zero. A line segment can slide with each end on the deltoid and remain tangent to the deltoid. The point of tangency travels around the deltoid twice while each end travels around it once. The dual curve of the deltoid is formula_6 which has a double point at the origin; this can be made visible for plotting by an imaginary rotation y ↦ iy, giving the curve formula_7 with a double point at the origin of the real plane. Area and perimeter. The area of the deltoid is formula_8 where again "a" is the radius of the rolling circle; thus the area of the deltoid is twice that of the rolling circle. The perimeter (total arc length) of the deltoid is 16"a". History. Ordinary cycloids were studied by Galileo Galilei and Marin Mersenne as early as 1599, but cycloidal curves such as this were first conceived by Ole Rømer in 1674 while studying the best form for gear teeth. Leonhard Euler claimed the first consideration of the actual deltoid in 1745 in connection with an optical problem. Applications. Deltoids arise in several fields of mathematics. For instance, the envelope of the Simson lines of a triangle is a deltoid (known as the Steiner deltoid), and the deltoid was an early proposed solution to the Kakeya needle problem of rotating a unit line segment inside a region of small area. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
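The area and perimeter values above are easy to confirm numerically from the parametric equations with b = 3a (for which the curve closes after t = 2π). The Python sketch below uses Green's theorem for the area and a simple trapezoidal rule for the arc length, with a set to 1.

```python
import numpy as np

a = 1.0
t = np.linspace(0.0, 2.0 * np.pi, 200001)
x = 2*a*np.cos(t) + a*np.cos(2*t)
y = 2*a*np.sin(t) - a*np.sin(2*t)
dx = -2*a*np.sin(t) - 2*a*np.sin(2*t)        # dx/dt
dy = 2*a*np.cos(t) - 2*a*np.cos(2*t)         # dy/dt

def trapezoid(f, s):
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s)))

area = 0.5 * trapezoid(x*dy - y*dx, t)       # Green's theorem
perimeter = trapezoid(np.hypot(dx, dy), t)
print(area, 2 * np.pi * a**2)                # ~6.28319 vs 6.28319 (= 2*pi*a^2)
print(perimeter, 16 * a)                     # ~16.0 vs 16
```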
[ { "math_id": 0, "text": "x=(b-a)\\cos(t)+a\\cos\\left(\\frac{b-a}at\\right) \\," }, { "math_id": 1, "text": "y=(b-a)\\sin(t)-a\\sin\\left(\\frac{b-a}at\\right) \\, ," }, { "math_id": 2, "text": "z=2ae^{it}+ae^{-2it}" }, { "math_id": 3, "text": "(x^2+y^2)^2+18a^2(x^2+y^2)-27a^4 = 8a(x^3-3xy^2),\\," }, { "math_id": 4, "text": "r^4+18a^2r^2-27a^4=8ar^3\\cos 3\\theta\\,." }, { "math_id": 5, "text": "t=0,\\, \\pm\\tfrac{2\\pi}{3}" }, { "math_id": 6, "text": "x^3-x^2-(3x+1)y^2=0,\\," }, { "math_id": 7, "text": "x^3-x^2+(3x+1)y^2=0\\," }, { "math_id": 8, "text": "2\\pi a^2" } ]
https://en.wikipedia.org/wiki?curid=769176
7692013
E. C. Stoner (physicist)
British theoretical physicist (1899–1968) Edmund Clifton Stoner FRS (2 October 1899 – 27 December 1968) was a British theoretical physicist. He is principally known for his work on the origin and nature of itinerant ferromagnetism (the type of ferromagnetic behaviour associated with pure transition metals like cobalt, nickel, and iron), including the collective electron theory of ferromagnetism and the Stoner criterion for ferromagnetism. Stoner also made significant contributions to the understanding of electron configurations in the periodic table. Biography. Stoner was born in Esher, Surrey, the son of cricketer Arthur Hallett Stoner. He won a scholarship to Bolton School (1911–1918) and then attended the University of Cambridge in 1918, graduating in 1921. After graduation, he worked at the Cavendish Laboratory on the absorption of X-rays by matter and electron energy levels; his 1924 paper on this subject prefigured the Pauli exclusion principle. Stoner was appointed a Lecturer in the Department of Physics at the University of Leeds in 1932, becoming Professor of Theoretical Physics in 1939. Starting in 1938, he developed the collective electron theory of ferromagnetism. From 1951 to 1963, he held the Cavendish Chair of Physics. He retired in 1963. He did some early work in astrophysics and independently computed the (Chandrasekhar) limit for the mass of a white dwarf a year before Subrahmanyan Chandrasekhar did so in 1931. Stoner's calculation was based on earlier work by Wilhelm Anderson on the Fermi gas and on earlier observations of Ralph H. Fowler on white dwarfs. Stoner also derived a pressure–density equation of state for these stars in 1932. These equations had also previously been published by the Soviet physicist Yakov Frenkel in 1928. However, Frenkel's work was ignored by the astronomical community. Stoner had been diagnosed with diabetes in 1919. He controlled it with diet until 1927, when insulin treatment became available. Stoner model of ferromagnetism. Electron bands can spontaneously split into up and down spins. This happens if the relative gain in exchange interaction (the interaction of electrons via the Pauli exclusion principle) is larger than the loss in kinetic energy. formula_0 formula_1 where formula_2 is the electron band energy before exchange effects are included, and formula_3 and formula_4 are the energies of the spin up and down electron bands respectively. The Stoner parameter, which is a measure of the strength of the exchange correlation, is denoted formula_5, and the number of electrons is formula_6. Finally, formula_7 is the wavenumber, as the electron bands are defined in wavenumber space. If more electrons favour one of the states, this will create ferromagnetism. The electrons obey Fermi–Dirac statistics, so when the above formulas are summed over all formula_8-space, the "Stoner criterion" for ferromagnetism can be established.
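A minimal numerical sketch of the band-splitting formulas above is given below in Python; the dispersion, the value of the Stoner parameter and the occupation numbers are arbitrary illustrative choices rather than data for any real metal, and the density of states used in the final criterion check is likewise an assumed number.

```python
import numpy as np

def split_bands(eps0, I, n_up, n_down):
    """Apply the exchange shifts of the Stoner model to a paramagnetic band eps0."""
    n = n_up + n_down
    shift = I * (n_up - n_down) / n
    return eps0 - shift, eps0 + shift        # spin-up lowered, spin-down raised

k = np.linspace(0.0, 1.0, 5)
eps0 = k**2                                  # toy dispersion, arbitrary units
eps_up, eps_down = split_bands(eps0, I=1.0, n_up=0.6, n_down=0.4)
print(eps_down[0] - eps_up[0])               # exchange splitting 2*I*(n_up-n_down)/n = 0.4

# Stoner criterion (qualitative): ferromagnetism is favoured when I * D(E_F) > 1,
# with D(E_F) the density of states per spin at the Fermi level (assumed value here).
I, D_EF = 1.0, 1.2
print(I * D_EF > 1.0)                        # True -> spontaneous spin splitting
```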
[ { "math_id": 0, "text": "\n\\epsilon_{\\uparrow} (k) = \\epsilon_0 (k) - I \\frac{n_{\\uparrow}-n_{\\downarrow}}{n}\n" }, { "math_id": 1, "text": "\n\\epsilon_{\\downarrow} (k) = \\epsilon_0 (k) + I \\frac{n_\\uparrow-n_{\\downarrow}}{n}\n" }, { "math_id": 2, "text": "\\epsilon_0 (k)" }, { "math_id": 3, "text": "\\epsilon_{\\uparrow}" }, { "math_id": 4, "text": "\\epsilon_{\\downarrow}" }, { "math_id": 5, "text": "I" }, { "math_id": 6, "text": " n = n_{\\uparrow} + n_{\\downarrow} " }, { "math_id": 7, "text": " k " }, { "math_id": 8, "text": "k" } ]
https://en.wikipedia.org/wiki?curid=7692013
76920619
Ground-based interferometric gravitational-wave search
Method of detecting gravitational waves Ground-based interferometric gravitational-wave search refers to the use of extremely large interferometers built on the ground to passively detect (or "observe") gravitational wave events from throughout the cosmos. Most recorded gravitational wave observations have been made using this technique; the first detection, revealing the merger of two black holes, was made in 2015 by the LIGO sites. As of 2024, the major detectors are the two LIGO sites in the United States, Virgo in Italy and KAGRA in Japan, which are all part of the second generation of operational detectors. Developing projects include LIGO-India as part of the second generation, and the Einstein Telescope and Cosmic Explorer forming a third generation. Space-borne interferometers such as LISA are also planned, with a similar concept but targeting different kinds of sources and using very different technologies. History. While gravitational waves were first formulated as part of general relativity by Einstein in 1916, there were no real attempts to detect them until the 1960s, when Joseph Weber created the first of the so-called "Weber bars". While these proved unable to reach the required sensitivity for detecting gravitational waves, many research groups focused on this topic were created at that time. While much effort was dedicated to improving the resonant bar design, the idea of using a large interferometer for gravitational wave detection was formulated in the 1970s and began to gain traction in the 1980s, leading to the foundation of LIGO in 1984 and Virgo in 1989. Most of the current large interferometers started construction in the 1990s and finished in the early 2000s (1999 for LIGO, 2003 for Virgo, 2002 for GEO 600). After a few years of observation and improvements to reach their target sensitivity, it became clear that a detection was unlikely and that further upgrades were required, leading to large projects now labelled as the "second generation of detectors" (Advanced LIGO and Advanced Virgo), with important sensitivity gains. This period also marked the beginning of joint observing periods between the different detectors, which are crucial to confirm the validity of a signal, and sparked collaborations between the different teams. The second generation upgrades were made during the early 2010s, lasting from 2010 to 2014 for LIGO and 2011 to 2017 for Virgo. In parallel, the KAGRA project was launched in Japan in 2010. In 2015, soon after restarting observations, the two LIGO detectors achieved the first direct observation of gravitational waves. This marked the beginning of the still ongoing series of gravitational wave observation periods, labelled O1 through O5; Virgo joined the observations in 2017, near the end of the O2 period, leading quickly to the first three-detector observation and, a few days later, to the GW170817 event, which is the only one to date to have been observed both with gravitational waves and electromagnetic radiation. KAGRA was completed in 2020 but has so far only observed for brief periods of time due to its lower sensitivity. The O4 observing run is currently ongoing, and expected to last until June 2025. More than 90 confirmed detections have been published; the collaborations now also produce live alerts when signals are detected, with more than 100 significant alerts already emitted during O4. Principle. In general relativity, a gravitational wave is a space-time perturbation which propagates at the speed of light.
It thus slightly curves space-time, which locally changes the light path. Mathematically speaking, if formula_0 is the amplitude (assumed to be small) of the incoming gravitational wave and formula_1 the length of the optical cavity in which the light circulates, the change formula_2 of the optical path due to the gravitational wave is given by the formula: formula_3 with formula_4 being a geometrical factor which depends on the relative orientation between the cavity and the direction of propagation of the incoming gravitational wave. In other terms, the change in length is proportional both to the length of the cavity and to the amplitude of the gravitational wave. Interferometer. In a typical configuration, the detector is a Michelson interferometer whose mirrors are suspended. A laser is divided into two beams by a beam splitter tilted by 45 degrees. The two beams propagate in the two perpendicular arms of the interferometer, are reflected by mirrors located at the end of the arms, and recombine on the beam splitter, generating an interference pattern which is detected by a photodiode. An incoming gravitational wave changes the optical path of the laser beams in the arms, which then changes the interference pattern recorded by the photodiode. This means the various mirrors of the interferometer must be "frozen" in position: when they move, the optical cavity length changes and so does the interference signal read at the instrument output port. The mirror positions relative to a reference and their alignment are monitored accurately in real time, with a precision better than a tenth of a nanometre for the lengths and of a few nanoradians for the angles. The more sensitive the detector, the narrower its optimal working point. Reaching that working point from an initial configuration in which the various mirrors are moving freely is a control system challenge; a complex series of steps is required to coordinate all the steerable parts of the interferometer. Once the working point is achieved, corrections are continuously applied to keep it in the optimal configuration. The signal induced by a potential gravitational wave is thus "embedded" in the light intensity variations detected at the interferometer output. Yet several external causes, globally denoted as noise, change the interference pattern perpetually and significantly. Should nothing be done to remove or mitigate them, the expected physical signals would be buried in noise and would then remain undetectable. The design of detectors like Virgo and LIGO thus requires a detailed inventory of all noise sources which could impact the measurement, and a strong, continuing effort to reduce them as much as possible. Using an interferometer rather than a single optical cavity allows one to significantly enhance the detector's sensitivity to gravitational waves. Indeed, in this configuration based on an interference measurement, the contributions from some experimental noises are strongly reduced: instead of being proportional to the length of the single cavity, they depend in that case on the length difference between the arms (so equal arm lengths cancel the noise). In addition, the interferometer configuration benefits from the differential effect induced by a gravitational wave in the plane transverse to its direction of propagation: when the length of an optical path formula_5 changes by a quantity formula_6, the perpendicular optical path of the same length changes by formula_7 (same magnitude but opposite sign).
And the interference at the output port of a Michelson interferometer depends on the difference of length between the two arms: the measured effect is hence amplified by a factor of 2 compared to a simple cavity. The optimal working point of an interferometric detector of gravitational waves is slightly detuned from the "dark fringe", a configuration in which the two laser beams recombined on the beam splitter interfere in a destructive way: almost no light is detected at the output port. Detectors. LIGO. LIGO is composed of two different detectors, one in Hanford, Washington and one in Livingston, Louisiana (they are thus separated by around 3000 km); the two detectors have a very similar design, with 4 km long arms, although there are minor differences between the two. They were part of the first generation of detectors, and were completed in 2002; in 2010, they were shut down for an important set of upgrades, termed "Advanced LIGO", making the improved detectors part of the second generation. These upgrades were finished in early 2015, following which the two detectors made the first detection of gravitational waves. Virgo. Virgo is a single detector located near Pisa, Italy, with 3 km long arms. It was part of the first generation of detectors, following its completion in 2003; it was shut down in 2011 to prepare for the "Advanced Virgo" second-generation upgrades. The upgrades were completed in 2017, allowing it to join the "O2" run, quickly making the first three-detector detection jointly with LIGO. KAGRA. KAGRA (formerly known as LCGT) is a single interferometer with 3 km long arms, based in the Kamioka Observatory in Japan, which is part of the second generation of detectors. It was first made operational in 2020, although it has not been able to make a detection yet. Although the base design is similar to that of LIGO and Virgo, it is built underground and integrates cryogenic mirrors, which is why it has often been referred to as a "2.5 generation detector". Other detectors. GEO600 was initially designed as a British-German effort to build an interferometer with 3 km long arms; it was later downscaled to 600 m due to funding reasons. It was completed in 2002 and is located near Hanover, Germany. Although it has limited capacities (especially in the lower frequency range), making a detection unlikely, it plays a key role in the gravitational wave network as a testbed for many new technologies. TAMA 300 (and its predecessor, the prototype TAMA 20) was a Japanese detector with 300 m arms, built at the Mitaka campus of the National Astronomical Observatory of Japan. It was partly designed as a stepping stone for larger detectors (including KAGRA), and operated between 1999 and 2004. It has now been repurposed as a testbed for new technologies. The CLIO detector, with 100 m arms and located in the Kamioka mine, is another test detector, specifically designed to test the cryogenic technology used in KAGRA. LIGO-Australia is a defunct project which was envisioned to be built on the model of the LIGO detector in Australia, but was finally not funded by the Australian government; the project was later relocated to become LIGO-India. The Fermilab Holometer, with its 39 m long arms, probes a very different frequency range from other interferometers, aiming for the MHz range. Future detectors. LIGO-India. LIGO-India is an ongoing project for a single interferometer to be based in Aundha, India, following a design very similar to LIGO (with support from the LIGO collaboration).
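To give a sense of scale for the measurement described in this section, the sketch below evaluates the arm-length change formula_3 for a typical strain amplitude and for Virgo- and LIGO-like arm lengths; the geometrical factor is set to 1 (optimal orientation), which is an assumption of the example.

```python
# delta_L = C * h * L for an optimally oriented source (C = 1 assumed).
h = 1e-21                       # typical strain amplitude of a detectable signal
for L in (3000.0, 4000.0):      # Virgo- and LIGO-like arm lengths, in metres
    delta_L = 1.0 * h * L
    print(f"L = {L:.0f} m  ->  delta_L = {delta_L:.1e} m")
# Roughly 3e-18 m and 4e-18 m: a small fraction of a proton's radius, which is
# why mirror control and noise reduction dominate the detector design.
```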
It received approval from the Indian government in 2023, and is planned to be completed around 2030. Cosmic Explorer. Cosmic Explorer is a project for a third-generation detector, featuring two interferometers with respectively 40 km and 20 km long arms located in two different places in the United States. It relies on a design similar to LIGO, leveraging the experience from the two LIGO detectors, scaled to the much longer arm length. It is currently going through the process of approval by the NSF. If approved, it should be completed by the end of the 2030s. Einstein Telescope. Einstein Telescope is a European project for a third-generation detector; it is currently planned to use a design with three 10 km arms arranged in an equilateral triangle (effectively acting as 3 interferometers), which would be built underground; it would also use cryogenic mirrors. It is currently planned to be completed around 2035, with construction starting in 2026. Science case. Ground-based detectors are designed to study gravitational waves from astrophysical sources. By design, they can only detect waves with a frequency ranging from a few Hz to a few thousand Hz. The main known gravitational-wave emitting systems within this range are: black hole and/or neutron star binary mergers, rotating neutron stars, bursts and supernova explosions, and even the gravitational wave background generated in the instants following the Big Bang. Moreover, gravitational radiation may also lead to the discovery of theoretically predicted exotic objects, or of entirely unexpected ones. Transient sources. Coalescences of black holes and neutron stars. When two massive and compact objects such as black holes and neutron stars orbit each other in a binary system, they emit gravitational radiation and, therefore, lose energy. Hence, they begin to get closer to each other, increasing the frequency and the amplitude of the gravitational waves; this first phase of the coalescence phenomenon, called the "inspiral", can last for millions of years. This culminates in the merger of the two objects, eventually forming a single compact object (generally a black hole). The part of the waveform corresponding to the merger has the largest amplitude and highest frequency, and can only be modeled by performing numerical relativity simulations of these systems. In the case of black holes, a signal is still emitted for a few seconds after the merger, while the new black hole "settles in"; this signal is known as the "ringdown". Current detectors are only sensitive to the late stages of the coalescence of black hole and neutron star binaries: only the last seconds of the whole process can currently be observed (including the end of the inspiral phase, the merger itself and part of the ringdown). The typical shape of the detectable signal is known as the "chirp", as it resembles the sound emitted by some birds, with a rapid increase in amplitude and frequency. All the gravitational-wave signals detected so far originate from black hole or neutron star mergers. Bursts. Any signal lasting from a few milliseconds to a few seconds is considered a gravitational wave burst. Supernova explosions, the gravitational collapse of massive stars at the end of their lives, emit gravitational radiation that may be seen by current interferometers. A multi-messenger detection (electromagnetic and gravitational radiation, and neutrinos) would help to better understand the supernova process and the formation of black holes.
Other possible burst candidates include perturbations in neutron stars, "memory" effects arising from the non-linearity of general relativity, or cosmic strings. Some phenomena may also generate "long" bursts (longer than 1 second), like instabilities in a black hole accretion disk, or in newly formed black holes and neutron stars when some of the matter ejected during the supernova falls back towards the compact object. Continuous sources. The main expected sources of continuous gravitational waves are neutron stars, very compact objects resulting from the collapse of massive stars. In particular, pulsars are special cases of neutron stars that emit light pulses periodically: they can spin up to hundreds of times per second (the fastest spinning pulsar currently known is PSR J1748−2446ad, which spins 716 times per second). Any small deviation from axial symmetry (a tiny "mountain" on the surface) will generate long duration periodic gravitational waves. A number of potential mechanisms have been identified which could generate some "mountains" due to thermal, mechanical, or magnetic effects; accretion may also induce a break in axial symmetry. Another possible source of continuous waves in the current detection range could be more exotic objects, such as dark matter candidates. Axions rotating around a black hole or binary systems consisting of a primordial low-mass black hole and another compact object have in particular been suggested as potential sources. Some possible types of dark matter may also be detected by the interferometers directly, by interacting with optical elements of the device. Stochastic background. Several physical phenomena may be the source of a gravitational wave stochastic background, an additional source of noise of astrophysical and/or cosmological origin. It represents a (usually) continuous source of gravitational waves, but unlike other continuous wave sources (like rotating neutron stars), it comes from large regions of the sky instead of a single location. The cosmic microwave background (CMB) is the earliest signal of the Universe that can be observed in the electromagnetic spectrum. However, cosmological models predict the emission of gravitational waves generated instants after the Big Bang. Because gravitational waves interact very weakly with matter, detecting such a background would give more insight into the cosmological evolution of our Universe. In particular, it could provide evidence for inflation, from gravitational waves emitted either by the process of inflation itself (according to some theories) or at the end of inflation; first-order phase transitions may also produce gravitational waves. Primordial black holes, which may form during the early universe, are also a potential source of a stochastic background for that period. Moreover, current detectors may be able to detect an astrophysical background resulting from the superposition of all faint and distant sources emitting gravitational waves at all times, which would help to study the evolution of astrophysical sources and star formation. The most likely sources to contribute to the astrophysical background are binary neutron stars, binary black holes, or neutron star-black hole binaries. Other possible sources include supernovae and pulsars. It is expected that this type of background will be the first kind to be detected by the current ground interferometers.
Finally, cosmic strings may represent a source of gravitational wave background, whose detection could provide proof that cosmic strings actually exist. Exotic sources. Non-conventional, alternative models of compact objects have been proposed by physicists. Some examples of these models can be described within general relativity (quark and strange stars, boson and Proca stars, Kerr black holes with scalar and Proca hair), while others arise from some approaches to quantum gravity (cosmic strings, fuzzballs, gravastars) or come from alternative theories of gravity (scalarised neutron stars or black holes, wormholes). Theoretically predicted exotic compact objects could now be detected and would help to elucidate the true nature of gravity or discover new forms of matter. Furthermore, completely unexpected phenomena may be observed, unveiling new physics. Fundamental properties of gravity. Gravitational wave polarization. Gravitational waves are expected to have two "tensor" polarizations, nicknamed "plus" and "cross" due to their effects on a ring of particles (displayed in the figure below). A single gravitational wave is usually a superposition of these two polarizations, depending on the orientation of the source. In addition, some theories of gravity allow for additional polarizations to exist: the two "vector" polarizations (x and y), and the two "scalar" polarizations ("breathing" and "longitudinal"). Detecting these additional polarizations could provide evidence for physics beyond general relativity. The polarizations can only be distinguished using several detectors; they could only be properly probed after Virgo was introduced, as the two LIGO detectors are almost co-aligned. They can be measured from compact binary coalescences, but also from the stochastic background and continuous waves. With the combination of the current detectors, it is possible to determine the presence or absence of the additional polarizations, but not their nature; a total of 5 independent detectors would be required to fully separate all the polarizations (except for the longitudinal and breathing polarizations, which cannot be distinguished from each other by current detector designs). Lensed gravitational waves. General relativity predicts that a gravitational wave should be subject to gravitational lensing, just as light waves are; that is, the trajectory of a gravitational wave will be curved by the presence of a massive object (typically a galaxy or a galaxy cluster) near its path. This can result in an increase in the amplitude of the wave, or even multiple observations of the event at different times, as we currently observe for the light of supernovae. Such events are predicted to be common enough to be detected by the current detectors in the near future. Microlensing effects are also predicted. Detecting a lensed event would allow for a very precise localization, as well as further tests of the speed of gravity and of the polarization. Cosmological measurements. Gravitational waves also provide a new way to measure some cosmological parameters, and in particular the Hubble constant formula_8, which represents the rate of the expansion of the universe and whose value is currently disputed due to conflicting measurements from different methods. The main benefit of this method is that the source luminosity distance measured from the gravitational wave signal does not rely on other measurements or assumptions, as is usually the case.
There are two main possibilities for measuring formula_8 with gravitational waves in current detectors: using events with an electromagnetic counterpart, for which the identified host galaxy provides an independent redshift measurement, and using "dark sirens" without a counterpart, which are statistically associated with potential host galaxies from a galaxy catalog. Testing general relativity. The measurement of gravitational wave signals offers a unique perspective for testing results from general relativity, as they are produced in environments where the gravitational field is very strong (e.g., near black holes). Such tests may uncover physics beyond general relativity, or possible issues in the models. These tests include: Data analysis. The detection of gravitational waves within the output of the detectors (typically known as the "strain") is a complex process. Currently, most of the data processing is done within the LIGO-Virgo-KAGRA (LVK) collaboration; teams outside of the collaboration also produce results on the data once it is released publicly. The data from the current detectors is initially only available to LVK members; segments of data around detected events are released at the time of publication of the related paper, and the full data is released after a proprietary period, currently lasting 18 months. During the third observing run (O3), this resulted in two separate data releases (O3a and O3b), corresponding to the first six months and last six months of the run respectively. The data is then available for anyone on the Gravitational Wave Open Science Center (GWOSC) platform. Transient searches. Event detection pipelines. The various software packages used for the analysis of gravitational wave signals are usually referred to as "search pipelines", as they often encompass many steps of the data processing. During the O3 run, five different pipelines were used to identify event candidates within the data and collect a list of observations of short-lived ("transient") gravitational wave signals in a catalog publication. Four of them (GstLAL, PyCBC, MBTA, and SPIIR) were dedicated to the detection of compact binary coalescences (CBC, the only type of event detected so far), while the fifth one (cWB) was designed to detect any transient signal. All five pipelines have been used during the run ("online") as part of the low-latency alert system, and after the run ("offline") to reassess the significance of the candidates and spot events which may have been missed (except for SPIIR, which was only run online). The oLIB pipeline, also looking for generic "burst" signals, has also been used to generate alerts, but not for the catalogs. In addition, two other pipelines have been used specifically for burst searches after the run, as they are too computationally expensive to be run online: BayesWave, a pipeline using Bayesian techniques which was used to further investigate events found by cWB, and STAMPS-AS, which is designed to look specifically for long-duration bursts (more than 1 second). The four CBC pipelines all rely on the concept of matched filtering, a technique used to search for a known signal within noisy data in an optimal way. This technique requires some knowledge of what the signal looks like, and is thus dependent on the model used to simulate it. Although reasonable models exist, the complexity of the equations governing the dynamics of a compact merger makes the generation of accurate waveforms challenging; the development of new waveforms is still an active field of research. In addition, the sources cover a wide range of possible parameters (masses and spins of the two objects, location in the sky) which will yield different waveforms, instead of having one specific signal.
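The matched-filtering idea described above can be illustrated with a minimal sketch (a toy example, not the actual LVK pipeline code): assuming white Gaussian noise, the filter reduces to a sliding correlation of the data with a unit-normalised template, while real searches weight the data by the detectors' measured noise spectra in the frequency domain. The chirp-like template, the injection offset, and all amplitudes below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "chirp" template: amplitude and frequency increase with time.
t = np.linspace(0, 1, 4096)
template = (t ** 2) * np.sin(2 * np.pi * (30 + 120 * t) * t)
template /= np.linalg.norm(template)        # unit-normalised template

# Simulated strain: white Gaussian noise plus a signal at a known offset.
data = rng.normal(size=16384)
offset = 6000
data[offset:offset + template.size] += 8.0 * template   # loud signal, for clarity

# Matched filter for white noise: sliding correlation of data with template.
stat = np.correlate(data, template, mode="valid")

print("recovered offset:", int(np.argmax(np.abs(stat))), "expected:", offset)
print("peak statistic:", float(np.abs(stat).max()))
```

Because the true source parameters are unknown and change the waveform, a real search repeats this operation for every waveform in a bank of templates, as described next.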
This prompts the researchers to generate "template banks" containing a large number of different waveforms corresponding to different parameters; a compromise has to be made between how densely the bank covers the parameter space (maximizing the number of detections) and the limited computational resources available to carry out the search with all the templates. How to generate such template banks efficiently is also an active field of research. During the search, the matched filtering is performed on every waveform within the (pre-calculated) template bank. Although the four searches use the same technique, they all have different optimizations and specificities in how they handle the data. In particular, they use different techniques for estimating the significance of an event, for discriminating between real events and glitches, and for combining the data from the different detectors; they also use different template banks. The cWB (coherent wave burst) pipeline uses a different approach: it works by grouping the data from the different detectors and carrying out a joint analysis to look for coherent signals appearing in several detectors at once. Although its sensitivity for binary mergers is lower than that of the dedicated CBC pipelines, its strength lies in being able to detect signals from any kind of source, as it does not require any assumption on the shape of the signal (which is why it is often referred to as an "unmodeled" search). Low-latency. The low-latency system is designed to produce alerts for astronomers when gravitational events are detected, with the hope that an electromagnetic counterpart can be observed. This is achieved by centralizing the event candidates from the different analysis pipelines in the gravitational-wave candidate event database (GraceDB), from which the data is processed. If an event is deemed significant enough, a rapid sky localization is produced and preliminary alerts are sent autonomously within the span of a few minutes; after a more precise evaluation of the source parameters, as well as human vetting, a new alert or a retraction notice is sent within a day. The alerts are sent through the GCN, which also centralizes alerts from gamma-ray and neutrino telescopes, as well as SciMMA. A total of 78 alerts were sent during the O3 run, of which 23 were later retracted. Parameter estimation. After an event has been detected by one of the event detection pipelines, a deeper analysis is performed to get a more precise estimation of the parameters of the source and the measurement uncertainty. During the O3 run, this was carried out using several different pipelines, including Bilby and RIFT. These pipelines employ Bayesian methods to quantify the uncertainty, including MCMC and nested sampling. Search for counterparts. While many astronomers try to follow up on the low-latency alerts from gravitational wave detectors, the reverse also exists: electromagnetic events expected to have an associated gravitational wave emission are subjected to a deeper search. One of the prime targets for these are gamma-ray bursts; these are thought to be associated with supernovae ("long" bursts, lasting more than 2 seconds) and with compact binary coalescences involving neutron stars ("short" bursts). The merger of two neutron stars in particular has been confirmed to be associated with both a gamma-ray burst and gravitational waves with the GW170817 event.
Searches targeted toward gamma-ray burst observations have been performed on data from the past runs using the pyGRB pipeline for CBC, using methods similar to the regular searches, but centered around the time of the bursts and targeting only the sky area found by gamma-ray observatories. An unmodelled search was also carried out using the X-pipeline package, in a similar fashion to the regular unmodelled searches. In addition to these searches, several pipelines are looking for coincidences between alerts from gravitational waves and alerts from other detectors. In particular, the RAVEN pipeline is part of the low-latency infrastructure and analyzes the coincidence with gamma-ray burst events and other sources. The LLAMA pipeline is also dedicated to identifying such coincidences with neutrino events, predominantly from IceCube. Continuous wave searches. Searches dedicated to periodic gravitational waves—such as the ones generated by rapidly rotating neutron stars—are generally referred to as continuous wave searches. These can be divided into three categories: all-sky searches, which look for unknown signals from any direction, directed searches, which aim for objects with known positions but unknown frequency, and targeted searches, which hunt for signals from sources where both the position and the frequency are known. The directed and targeted searches are motivated by the fact that all-sky searches are extremely computationally expensive, and thus require trade-offs that limit their sensitivity. The principal challenge in continuous wave searches is that the signal is much weaker than the transients detected so far, meaning that one must observe over a long time period to accumulate enough data to detect it, as the signal-to-noise ratio scales with the square root of the observing time (intuitively, the signal will add up over the observing duration while the noise will not). The issue is that over such long periods of time, the frequency from the source will evolve, and the motion of the Earth around the Sun will affect the frequency via the Doppler effect. This greatly increases the computational cost of the search, even more so when the frequency is unknown. Although there are mitigation strategies, such as semi-coherent searches, where the analysis is performed separately on segments of the data rather than the full data, these result in a loss of sensitivity. Other approaches include cross-correlation, inspired by stochastic wave searches, which takes advantage of having multiple detectors to look for a correlated signal in a pair of detectors. Stochastic wave searches. The stochastic gravitational wave background is another target for data analysis teams. By definition, it can be seen as a source of noise in the detectors; the main challenge is to separate it from the other sources of noise, and measure its power spectral density. The easiest method for solving this issue is to look for correlations within a network of several detectors; the idea being that the noise related to the gravitational wave background will be correlated across all detectors, while the instrumental noise will (in principle) not be. Another possible approach would be to look for excess power not accounted for by other noise sources; however, this proves impractical for current interferometers as the noise is not known well enough compared to the expected power of the stochastic background.
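A minimal sketch of the cross-correlation idea (a toy model with two idealised detectors seeing the same weak background on top of independent white instrumental noise; the amplitudes and sample counts are arbitrary illustrative choices, and real analyses are considerably more involved, as noted below):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000

# Common stochastic "background" seen by both detectors, buried in
# much louder, independent instrumental noise.
background = 0.1 * rng.normal(size=n)        # background power = 0.01
det1 = background + rng.normal(size=n)
det2 = background + rng.normal(size=n)

# Cross-correlation estimator of the background power: the independent
# instrumental noise terms average towards zero as the data length grows.
cross = np.mean(det1 * det2)

# The auto-correlation of a single detector is dominated by its own noise.
auto = np.mean(det1 * det1)

print(f"cross-correlation estimate: {cross:.4f}  (true background power 0.01)")
print(f"single-detector power:      {auto:.4f}")
```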
Only searches based on cross-correlation between detectors are currently in use by the LVK collaboration, although other types of searches are also being developed. This kind of search must also account for factors such as the detectors' antenna patterns, the motion of the Earth, and the distance between the detectors. Assumptions also have to be made about some properties of the background; it is common to assume that it is Gaussian and isotropic, but searches for anisotropic, non-Gaussian, and more exotic backgrounds also exist. Gravitational wave properties searches. A number of software packages have been developed to investigate the physics surrounding gravitational waves. These analyses are generally performed offline (after the run), and often rely on the results from the other searches (currently mostly CBC searches). Several analyses are performed to look for events observed multiple times due to lensing, first by trying to match all the known events together, and then by performing a joint analysis for the most promising pairs of events; these analyses have been performed using the LALInference and HANABI software. Additional searches for events which may have been missed by the regular CBC searches are also performed, by reusing the existing CBC pipelines. Software designed for estimating the Hubble constant has also been developed. The gwcosmo pipeline performs a Bayesian analysis to determine a distribution of the possible values of the constant, both using "dark sirens" (CBC events without electromagnetic counterpart), which can be correlated with a galaxy catalog, and events with an electromagnetic counterpart for which a direct estimation can be made based on the distance measured with gravitational waves and the identified host galaxy. This requires assuming a specific population of black holes, which may be a significant source of bias; recent analyses have been trying to circumvent this issue by fitting both the population and the Hubble constant simultaneously. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "h" }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "\\delta L" }, { "math_id": 3, "text": " \\frac{\\delta L}{L} = C \\times h " }, { "math_id": 4, "text": "C \\le 1" }, { "math_id": 5, "text": " L " }, { "math_id": 6, "text": " \\delta L " }, { "math_id": 7, "text": " -\\delta L " }, { "math_id": 8, "text": "H_0" } ]
https://en.wikipedia.org/wiki?curid=76920619
76926111
Expectile
In the mathematical theory of probability, the expectiles of a probability distribution are related to the expected value of the distribution in a way analogous to that in which the quantiles of the distribution are related to the median. For formula_0, the corresponding expectile of the probability distribution with cumulative distribution function formula_1 is characterized by any of the following equivalent conditions: formula_2 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
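The first of the equivalent conditions above can be used directly to compute expectiles numerically. The following sketch is an illustration only (the standard normal distribution and the integration cut-offs are arbitrary choices): it finds the root of the difference between the two weighted partial moments with scipy.

```python
import numpy as np
from scipy import integrate, optimize, stats

def expectile(tau, dist=stats.norm(), lo=-30.0, hi=30.0):
    """tau-expectile of a continuous distribution, from the defining identity
    (1 - tau) * E[(t - X)^+] = tau * E[(X - t)^+]."""
    def condition(t):
        left = integrate.quad(lambda x: (t - x) * dist.pdf(x), lo, t)[0]
        right = integrate.quad(lambda x: (x - t) * dist.pdf(x), t, hi)[0]
        return (1 - tau) * left - tau * right
    return optimize.brentq(condition, lo, hi)

print(expectile(0.5))   # ~0.0: the 0.5-expectile is the mean
print(expectile(0.9))   # positive, analogous to an upper quantile
```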
[ { "math_id": 0, "text": " \\tau \\in (0,1) " }, { "math_id": 1, "text": "F" }, { "math_id": 2, "text": "\n\\begin{align}\n& (1-\\tau)\\int^t_{-\\infty}(t-x) \\, dF(x) = \\tau\\int^\\infty_t(x-t) \\, dF(x) \\\\[5pt]\n& \\int^t_{-\\infty}|t-x| \\, dF(x) = \\tau\\int^\\infty_{-\\infty}|x-t| \\, dF(x) \\\\[5pt]\n& t-\\operatorname E[X]=\\frac{2\\tau-1}{1-\\tau} \\int^\\infty_t(x-t) \\, dF(x)\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=76926111
7693581
Peirce quincuncial projection
Conformal map projection The Peirce quincuncial projection is the conformal map projection from the sphere to an unfolded square dihedron, developed by Charles Sanders Peirce in 1879. Each octant projects onto an isosceles right triangle, and these are arranged into a square. The name "quincuncial" refers to this arrangement: the north pole at the center and quarters of the south pole in the corners form a quincunx pattern like the pips on the "five" face of a traditional die. The projection has the distinctive property that it forms a seamless square tiling of the plane, conformal except at four singular points along the equator. Typically the projection is square and oriented such that the north pole lies at the center, but an oblique aspect in a rectangle was proposed by Émile Guyou in 1887, and a transverse aspect was proposed by Oscar S. Adams in 1925. The projection has seen use in digital photography for portraying spherical panoramas. History. The maturation of complex analysis led to general techniques for conformal mapping, where points of a flat surface are handled as numbers on the complex plane. While working at the United States Coast and Geodetic Survey, the American philosopher Charles Sanders Peirce published his projection in 1879, having been inspired by H. A. Schwarz's 1869 conformal transformation of a circle onto a polygon of "n" sides (known as the Schwarz–Christoffel mapping). In the normal aspect, Peirce's projection presents the Northern Hemisphere in a square; the Southern Hemisphere is split into four isosceles triangles symmetrically surrounding the first one, akin to star-like projections. In effect, the whole map is a square, inspiring Peirce to call his projection "quincuncial", after the arrangement of five items in a quincunx. After Peirce presented his projection, two other cartographers developed similar projections of the hemisphere (or the whole sphere, after a suitable rearrangement) on a square: Guyou in 1887 and Adams in 1925. The three projections are transversal versions of each other (see related projections below). Formal description. The Peirce quincuncial projection is "formed by transforming the stereographic projection with a pole at infinity, by means of an elliptic function". The Peirce quincuncial is really a projection of the hemisphere, but its tessellation properties (see below) permit its use for the entire sphere. The projection maps the interior of a circle onto the interior of a square by means of the Schwarz–Christoffel mapping, as follows: formula_0 where "w" denotes a point in the square and "r" the corresponding point in the disk obtained from the stereographic projection. An elliptic integral of the first kind can be used to solve for "w". The comma notation used for sd("u", "k") means that "k" is the "modulus" for the elliptic function ratio, as opposed to the "parameter" [which would be written sd("u"|"m")] or the "amplitude" [which would be written sd("u"\"α")]. The mapping has a scale factor of 1/2 at the center, like the generating stereographic projection. Note that formula_1 is the lemniscatic sine function (see Lemniscate elliptic functions). Properties. According to Peirce, his projection has the following properties (Peirce, 1879): Tiled Peirce quincuncial maps. The projection tessellates the plane; i.e., repeated copies can completely cover (tile) an arbitrary area, each copy's features exactly matching those of its neighbors. (See the example to the right).
Furthermore, the four triangles of the second hemisphere of the Peirce quincuncial projection can be rearranged as another square that is placed next to the square that corresponds to the first hemisphere, resulting in a rectangle with aspect ratio of 2:1; this arrangement is equivalent to the transverse aspect of the Guyou hemisphere-in-a-square projection. Known uses. Like many other projections based upon complex numbers, the Peirce quincuncial has rarely been used for geographic purposes. One of the few recorded cases is in 1946, when it was used by the U.S. Coast and Geodetic Survey for a world map of air routes. It has been used recently to present spherical panoramas for practical as well as aesthetic purposes, where it can present the entire sphere with most areas being recognizable. Related projections. In transverse aspect, one hemisphere becomes the Adams hemisphere-in-a-square projection (the pole is placed at the corner of the square). Its four singularities are at the North Pole, the South Pole, on the equator at 25°W, and on the equator at 155°E, in the Arctic, Atlantic, and Pacific oceans, and in Antarctica. That great circle divides the traditional Western and Eastern hemispheres. In an oblique aspect (at 45 degrees), one hemisphere becomes the Guyou hemisphere-in-a-square projection (the pole is placed in the middle of the edge of the square). Its four singularities are at 45 degrees north and south latitude on the great circle composed of the 20°W and 160°E meridians, in the Atlantic and Pacific oceans. That great circle divides the traditional western and eastern hemispheres. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
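The defining relation formula_0 from the formal description above can be inverted numerically. The sketch below is an illustration only, restricted to real values along one axis of the square; it uses scipy's Jacobi elliptic functions with parameter m = k² = 1/2 and recovers "w" from "r" by root finding.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import ellipj, ellipk

M = 0.5                  # parameter m = k**2 for modulus k = 1/sqrt(2)
K = ellipk(M)            # complete elliptic integral: quarter period

def sd(u):
    sn, cn, dn, _ = ellipj(u, M)
    return sn / dn

def w_from_r(r):
    """Solve sd(sqrt(2)*w, 1/sqrt(2)) = sqrt(2)*r for real 0 < r < 1."""
    return brentq(lambda w: sd(np.sqrt(2) * w) - np.sqrt(2) * r,
                  0.0, K / np.sqrt(2))

for r in (0.2, 0.5, 0.8):
    print(r, w_from_r(r))
```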
[ { "math_id": 0, "text": " \\operatorname{sd} \\left(\\sqrt 2 w , \\frac{1}{\\sqrt 2}\\right) = \\sqrt 2 \\, r" }, { "math_id": 1, "text": " \\operatorname{sd} \\left(\\sqrt 2 w , \\frac{1}{\\sqrt 2}\\right) = \\sqrt 2 \\operatorname{sl}\\left(w\\right)" } ]
https://en.wikipedia.org/wiki?curid=7693581
76944
Perpendicular
Relationship between two lines that meet at a right angle (90 degrees) In geometry, two geometric objects are perpendicular if their intersection forms right angles (angles that are 90 degrees or π/2 radians wide) at the point of intersection, called a "foot". The condition of perpendicularity may be represented graphically using the "perpendicular symbol", ⟂. Perpendicular intersections can happen between two lines (or two line segments), between a line and a plane, and between two planes. Perpendicularity is one particular instance of the more general mathematical concept of "orthogonality"; perpendicularity is the orthogonality of classical geometric objects. Thus, in advanced mathematics, the word "perpendicular" is sometimes used to describe much more complicated geometric orthogonality conditions, such as that between a surface and its "normal vector". A line is said to be perpendicular to another line if the two lines intersect at a right angle. Explicitly, a first line is perpendicular to a second line if (1) the two lines meet; and (2) at the point of intersection the straight angle on one side of the first line is cut by the second line into two congruent angles. Perpendicularity can be shown to be symmetric, meaning if a first line is perpendicular to a second line, then the second line is also perpendicular to the first. For this reason, we may speak of two lines as being perpendicular (to each other) without specifying an order. A familiar example of perpendicularity can be seen in a compass: the cardinal points North, East, South, and West (NESW) are arranged so that the line N-S is perpendicular to the line W-E, and the angles N-E, E-S, S-W and W-N are all 90° to one another. Perpendicularity easily extends to segments and rays. For example, a line segment formula_0 is perpendicular to a line segment formula_1 if, when each is extended in both directions to form an infinite line, these two resulting lines are perpendicular in the sense above. In symbols, formula_2 means line segment AB is perpendicular to line segment CD. A line is said to be perpendicular to a plane if it is perpendicular to every line in the plane that it intersects. This definition depends on the definition of perpendicularity between lines. Two planes in space are said to be perpendicular if the dihedral angle at which they meet is a right angle. Foot of a perpendicular. The word foot is frequently used in connection with perpendiculars. This usage is exemplified in the top diagram, above, and its caption. The diagram can be in any orientation. The foot is not necessarily at the bottom. More precisely, let A be a point and m a line. If B is the point of intersection of m and the unique line through A that is perpendicular to m, then B is called the "foot" of this perpendicular through A. Construction of the perpendicular. To make the perpendicular to the line AB through the point P using compass-and-straightedge construction, proceed as follows (see figure left): To prove that PQ is perpendicular to AB, use the SSS congruence theorem for QPA' and QPB' to conclude that angles OPA' and OPB' are equal. Then use the SAS congruence theorem for triangles OPA' and OPB' to conclude that angles POA and POB are equal. To make the perpendicular to the line g at or through the point P using Thales's theorem, see the animation at right. The Pythagorean theorem can be used as the basis of methods of constructing right angles. For example, by counting links, three pieces of chain can be made with lengths in the ratio 3:4:5.
These can be laid out to form a triangle, which will have a right angle opposite its longest side. This method is useful for laying out gardens and fields, where the dimensions are large, and great accuracy is not needed. The chains can be used repeatedly whenever required. In relationship to parallel lines. If two lines ("a" and "b") are both perpendicular to a third line ("c"), all of the angles formed along the third line are right angles. Therefore, in Euclidean geometry, any two lines that are both perpendicular to a third line are parallel to each other, because of the parallel postulate. Conversely, if one line is perpendicular to a second line, it is also perpendicular to any line parallel to that second line. In the figure at the right, all of the orange-shaded angles are congruent to each other and all of the green-shaded angles are congruent to each other, because vertical angles are congruent and alternate interior angles formed by a transversal cutting parallel lines are congruent. Therefore, if lines "a" and "b" are parallel, any of the following conclusions leads to all of the others: Graph of functions. In the two-dimensional plane, right angles can be formed by two intersecting lines if the product of their slopes equals −1. Thus for two linear functions formula_3 and formula_4, the graphs of the functions will be perpendicular if formula_5 The dot product of vectors can also be used to obtain the same result: First, shift coordinates so that the origin is situated where the lines cross. Then define two displacements along each line, formula_6, for formula_7 Now, use the fact that the inner product vanishes for perpendicular vectors: formula_8 formula_9 formula_10 formula_11 (unless formula_12 or formula_13 vanishes.) Both proofs are valid for horizontal and vertical lines to the extent that we can let one slope be formula_14, and take the limit that formula_15 If one slope goes to zero, the other goes to infinity. In circles and other conics. Circles. Each diameter of a circle is perpendicular to the tangent line to that circle at the point where the diameter intersects the circle. A line segment through a circle's center bisecting a chord is perpendicular to the chord. If the intersection of any two perpendicular chords divides one chord into lengths "a" and "b" and divides the other chord into lengths "c" and "d", then "a"² + "b"² + "c"² + "d"² equals the square of the diameter. The sum of the squared lengths of any two perpendicular chords intersecting at a given point is the same as that of any other two perpendicular chords intersecting at the same point, and is given by 8"r"² – 4"p"² (where "r" is the circle's radius and "p" is the distance from the center point to the point of intersection). Thales' theorem states that two lines both through the same point on a circle but going through opposite endpoints of a diameter are perpendicular. This is equivalent to saying that any diameter of a circle subtends a right angle at any point on the circle, except the two endpoints of the diameter. Ellipses. The major and minor axes of an ellipse are perpendicular to each other and to the tangent lines to the ellipse at the points where the axes intersect the ellipse. The major axis of an ellipse is perpendicular to the directrix and to each latus rectum. Parabolas. In a parabola, the axis of symmetry is perpendicular to each of the latus rectum, the directrix, and the tangent line at the point where the axis intersects the parabola.
From a point on the tangent line to a parabola's vertex, the other tangent line to the parabola is perpendicular to the line from that point through the parabola's focus. The orthoptic property of a parabola is that if two tangents to the parabola are perpendicular to each other, then they intersect on the directrix. Conversely, two tangents which intersect on the directrix are perpendicular. This implies that, seen from any point on its directrix, any parabola subtends a right angle. Hyperbolas. The transverse axis of a hyperbola is perpendicular to the conjugate axis and to each directrix. The product of the perpendicular distances from a point P on a hyperbola or on its conjugate hyperbola to the asymptotes is a constant independent of the location of P. A rectangular hyperbola has asymptotes that are perpendicular to each other. It has an eccentricity equal to formula_16 In polygons. Triangles. The legs of a right triangle are perpendicular to each other. The altitudes of a triangle are perpendicular to their respective bases. The perpendicular bisectors of the sides also play a prominent role in triangle geometry. The Euler line of an isosceles triangle is perpendicular to the triangle's base. The Droz-Farny line theorem concerns a property of two perpendicular lines intersecting at a triangle's orthocenter. Harcourt's theorem concerns the relationship of line segments through a vertex and perpendicular to any line tangent to the triangle's incircle. Quadrilaterals. In a square or other rectangle, all pairs of adjacent sides are perpendicular. A right trapezoid is a trapezoid that has two pairs of adjacent sides that are perpendicular. Each of the four maltitudes of a quadrilateral is a perpendicular to a side through the midpoint of the opposite side. An orthodiagonal quadrilateral is a quadrilateral whose diagonals are perpendicular. These include the square, the rhombus, and the kite. By Brahmagupta's theorem, in an orthodiagonal quadrilateral that is also cyclic, a line through the midpoint of one side and through the intersection point of the diagonals is perpendicular to the opposite side. By van Aubel's theorem, if squares are constructed externally on the sides of a quadrilateral, the line segments connecting the centers of opposite squares are perpendicular and equal in length. Lines in three dimensions. Up to three lines in three-dimensional space can be pairwise perpendicular, as exemplified by the "x, y", and "z" axes of a three-dimensional Cartesian coordinate system. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
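The slope and dot-product criteria from the section on graphs of functions above can be checked with a short sketch; the two lines used here are an arbitrary numerical example, not taken from the article.

```python
import numpy as np

# Two lines y = m*x + b are perpendicular exactly when m1 * m2 == -1.
m1, b1 = 2.0, 1.0
m2, b2 = -0.5, 3.0
print("slope product:", m1 * m2)            # -1.0 -> perpendicular

# Equivalent test with direction vectors and the dot product.
v1 = np.array([1.0, m1])                    # direction of the first line
v2 = np.array([1.0, m2])                    # direction of the second line
print("dot product:", float(np.dot(v1, v2)))  # 0.0 -> perpendicular
```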
[ { "math_id": 0, "text": "\\overline{AB}" }, { "math_id": 1, "text": "\\overline{CD}" }, { "math_id": 2, "text": "\\overline{AB} \\perp \\overline{CD}" }, { "math_id": 3, "text": "y_1(x) = m_1 x + b_1" }, { "math_id": 4, "text": "y_2(x) = m_2 x + b_2" }, { "math_id": 5, "text": "m_1 m_2 = -1." }, { "math_id": 6, "text": "\\vec r_j" }, { "math_id": 7, "text": "(j=1,2)." }, { "math_id": 8, "text": "\\vec r_1=x_1\\hat x + y_1\\hat y =x_1\\hat x + m_1x_1\\hat y" }, { "math_id": 9, "text": "\\vec r_2=x_2\\hat x + y_2\\hat y = x_2\\hat x + m_2x_2\\hat y" }, { "math_id": 10, "text": "\\vec r_1 \\cdot \\vec r_2 = \\left(1+m_1m_2\\right)x_1x_2 =0" }, { "math_id": 11, "text": "\\therefore m_1m_2=-1" }, { "math_id": 12, "text": "x_1" }, { "math_id": 13, "text": "x_2" }, { "math_id": 14, "text": "\\varepsilon" }, { "math_id": 15, "text": "\\varepsilon\\rightarrow 0." }, { "math_id": 16, "text": "\\sqrt{2}." } ]
https://en.wikipedia.org/wiki?curid=76944
7694775
S4 Index
Parameter used to measure ionospheric disturbances The formula_0 Index is a standard index used to measure ionospheric disturbances. It is defined as the ratio of the standard deviation of signal intensity to the average signal intensity. Real Time data. This parameter is displayed in real time by many institutions: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
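A minimal sketch of the definition above (the simulated intensity samples are an arbitrary illustration, not real ionospheric data): the index is the standard deviation of the received signal intensity divided by its mean.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated received signal intensity samples (illustrative data only).
intensity = rng.gamma(shape=20.0, scale=1.0, size=10_000)

# S4: ratio of the standard deviation of the intensity to its average.
s4 = np.std(intensity) / np.mean(intensity)
print(f"S4 = {s4:.3f}")   # ~0.22 for this weakly fluctuating simulated signal
```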
[ { "math_id": 0, "text": "S_4" } ]
https://en.wikipedia.org/wiki?curid=7694775
769503
Scale parameter
Statistical measure In probability theory and statistics, a scale parameter is a special kind of numerical parameter of a parametric family of probability distributions. The larger the scale parameter, the more spread out the distribution. Definition. If a family of probability distributions is such that there is a parameter "s" (and other parameters "θ") for which the cumulative distribution function satisfies formula_0 then "s" is called a scale parameter, since its value determines the "scale" or statistical dispersion of the probability distribution. If "s" is large, then the distribution will be more spread out; if "s" is small then it will be more concentrated. If the probability density exists for all values of the complete parameter set, then the density (as a function of the scale parameter only) satisfies formula_1 where "f" is the density of a standardized version of the density, i.e. formula_2. An estimator of a scale parameter is called an estimator of scale. Families with Location Parameters. In the case where a parametrized family has a location parameter, a slightly different definition is often used as follows. If we denote the location parameter by formula_3, and the scale parameter by formula_4, then we require that formula_5 where formula_6 is the cumulative distribution function for the parametrized family. This modification is necessary in order for the standard deviation of a non-central Gaussian to be a scale parameter, since otherwise the mean would change when we rescale formula_7. However, this alternative definition is not consistently used. Simple manipulations. We can write formula_8 in terms of formula_9, as follows: formula_10 Because "f" is a probability density function, it integrates to unity: formula_11 By the substitution rule of integral calculus, we then have formula_12 So formula_8 is also properly normalized. Rate parameter. Some families of distributions use a rate parameter (or "inverse scale parameter"), which is simply the reciprocal of the "scale parameter". So for example the exponential distribution with scale parameter β and probability density formula_13 could equivalently be written with rate parameter λ as formula_14 Estimation. A statistic can be used to estimate a scale parameter so long as it: Various measures of statistical dispersion satisfy these. In order to make the statistic a consistent estimator for the scale parameter, one must in general multiply the statistic by a constant scale factor. This scale factor is defined as the theoretical value obtained by dividing the required scale parameter by the asymptotic value of the statistic. Note that the scale factor depends on the distribution in question. For instance, in order to use the median absolute deviation (MAD) to estimate the standard deviation of the normal distribution, one must multiply it by the factor formula_21 where Φ⁻¹ is the quantile function (inverse of the cumulative distribution function) for the standard normal distribution. (See MAD for details.) That is, the MAD is not a consistent estimator for the standard deviation of a normal distribution, but 1.4826... MAD is a consistent estimator. Similarly, the average absolute deviation needs to be multiplied by approximately 1.2533 to be a consistent estimator for standard deviation. Different factors would be required to estimate the standard deviation if the population did not follow a normal distribution. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
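The consistency factor for the MAD discussed above can be illustrated with a short sketch (simulated normal data with an arbitrarily chosen mean and standard deviation): multiplying the sample MAD by 1/Φ⁻¹(3/4) ≈ 1.4826 recovers the scale (standard deviation) of a normal sample.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100_000)   # true sigma = 2

mad = np.median(np.abs(x - np.median(x)))          # median absolute deviation
factor = 1.0 / norm.ppf(0.75)                      # ~1.4826 for the normal family
print(f"raw MAD:        {mad:.3f}")                # ~1.349 = 2 * 0.6745
print(f"scaled MAD:     {factor * mad:.3f}")       # ~2.0, consistent for sigma
print(f"sample std dev: {x.std(ddof=1):.3f}")
```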
[ { "math_id": 0, "text": "F(x;s,\\theta) = F(x/s;1,\\theta), \\!" }, { "math_id": 1, "text": "f_s(x) = f(x/s)/s, \\!" }, { "math_id": 2, "text": "f(x) \\equiv f_{s=1}(x)" }, { "math_id": 3, "text": "m" }, { "math_id": 4, "text": "s" }, { "math_id": 5, "text": "F(x;s,m,\\theta)=F((x-m)/s;1,0,\\theta)" }, { "math_id": 6, "text": "F(x,s,m,\\theta)" }, { "math_id": 7, "text": "x" }, { "math_id": 8, "text": "f_s" }, { "math_id": 9, "text": "g(x) = x/s" }, { "math_id": 10, "text": "f_s(x) = f\\left(\\frac{x}{s}\\right) \\cdot \\frac{1}{s} = f(g(x))g'(x)." }, { "math_id": 11, "text": "\n 1 = \\int_{-\\infty}^{\\infty} f(x)\\,dx\n = \\int_{g(-\\infty)}^{g(\\infty)} f(x)\\,dx.\n " }, { "math_id": 12, "text": "\n 1 = \\int_{-\\infty}^{\\infty} f(g(x)) g'(x)\\,dx\n = \\int_{-\\infty}^{\\infty} f_s(x)\\,dx.\n " }, { "math_id": 13, "text": "f(x;\\beta ) = \\frac{1}{\\beta} e^{-x/\\beta} ,\\; x \\ge 0 " }, { "math_id": 14, "text": "f(x;\\lambda) = \\lambda e^{-\\lambda x} ,\\; x \\ge 0. " }, { "math_id": 15, "text": "(a+b)/2" }, { "math_id": 16, "text": "|b-a|" }, { "math_id": 17, "text": "\\mu" }, { "math_id": 18, "text": "\\sigma" }, { "math_id": 19, "text": "\\sigma^2" }, { "math_id": 20, "text": "\\theta" }, { "math_id": 21, "text": "1/\\Phi^{-1}(3/4) \\approx 1.4826," } ]
https://en.wikipedia.org/wiki?curid=769503
76975056
Caputo fractional derivative
Generalization in fractional calculus In mathematics, the Caputo fractional derivative, also called the Caputo-type fractional derivative, is a generalization of derivatives for non-integer orders named after Michele Caputo. Caputo first defined this form of fractional derivative in 1967. Motivation. The Caputo fractional derivative is motivated by the Riemann–Liouville fractional integral. Let formula_0 be continuous on formula_1; then the Riemann–Liouville fractional integral formula_2 states that formula_3 where formula_4 is the Gamma function. Let us define formula_5, say that formula_6 holds and that formula_7 applies. If formula_8 then we could say formula_9. So if formula_10 is also formula_11, then formula_12 This is known as the Caputo-type fractional derivative, often written as formula_13. Definition. The first definition of the Caputo-type fractional derivative was given by Caputo as: formula_14 where formula_11 and formula_15. A popular equivalent definition is: formula_16 where formula_17 and formula_18 is the ceiling function. This can be derived by substituting formula_19 so that formula_20 would apply and formula_21 follows. Another popular equivalent definition is given by: formula_22 where formula_23. The problem with these definitions is that they only allow arguments in formula_24. This can be fixed by replacing the lower integral limit with formula_25: formula_26. The new domain is formula_27. Properties and theorems. Basic properties and theorems. A few basic properties are: Non-commutation. The index law does not always fulfill the property of commutation: formula_28 where formula_29. Fractional Leibniz rule. The Leibniz rule for the Caputo fractional derivative is given by: formula_30 where formula_31 is the binomial coefficient. Relation to other fractional differential operators. The Caputo-type fractional derivative is closely related to the Riemann–Liouville fractional integral via its definition: formula_32 Furthermore, the following relation applies: formula_33 where formula_34 is the Riemann–Liouville fractional derivative. Laplace transform. The Laplace transform of the Caputo-type fractional derivative is given by: formula_35 where formula_36. Caputo fractional derivative of some functions. The Caputo fractional derivative of a constant formula_37 is given by: formula_38 The Caputo fractional derivative of a power function formula_39 is given by: formula_40 The Caputo fractional derivative of an exponential function formula_41 is given by: formula_42 where formula_43 is the formula_44-function and formula_45 is the lower incomplete gamma function. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
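The definition can be checked numerically against the closed-form result for a power function quoted above. The sketch below is an illustration only, restricted to lower limit 0 and order 0 < α < 1 (so that ⌈α⌉ = 1), with x, b and α chosen arbitrarily; it evaluates the defining integral with scipy and compares it to Γ(b+1)/Γ(b−α+1)·x^(b−α).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def caputo_power(x, b, alpha):
    """Caputo derivative of f(t) = t**b at x, for 0 < alpha < 1 and lower limit 0."""
    df = lambda t: b * t ** (b - 1)   # first derivative of f, since ceil(alpha) = 1
    # Integrand df(t) * (x - t)**(-alpha); the end-point singularity at t = x is
    # handled by quad's algebraic weight w(t) = (t - 0)**0 * (x - t)**(-alpha).
    integral, _ = quad(df, 0.0, x, weight="alg", wvar=(0.0, -alpha))
    return integral / gamma(1.0 - alpha)

x, b, alpha = 2.0, 3.0, 0.5
numeric = caputo_power(x, b, alpha)
closed = gamma(b + 1) / gamma(b - alpha + 1) * x ** (b - alpha)
print(numeric, closed)   # the two values agree (~10.21 for these parameters)
```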
[ { "math_id": 0, "text": "f" }, { "math_id": 1, "text": "\\left( 0,\\, \\infty \\right)" }, { "math_id": 2, "text": "{^{\\text{RL}}\\operatorname{I}}" }, { "math_id": 3, "text": "{_{0}^{\\text{RL}}\\operatorname{I}_{x}^{\\alpha}}\\left[ f\\left( x \\right) \\right] = \\frac{1}{\\Gamma\\left( -\\alpha \\right)} \\cdot \\int\\limits_{0}^{x} \\frac{f\\left( t \\right)}{\\left( x - t \\right)^{1 - \\alpha}} \\, \\operatorname{d}t" }, { "math_id": 4, "text": "\\Gamma\\left( \\cdot \\right)" }, { "math_id": 5, "text": "\\operatorname{D}_{x}^{\\alpha} := \\frac{\\operatorname{d}^{\\alpha}}{\\operatorname{d}^{\\alpha}x}" }, { "math_id": 6, "text": "\\operatorname{D}_{x}^{\\alpha} \\operatorname{D}_{x}^{\\beta} = \\operatorname{D}_{x}^{\\alpha + \\beta}" }, { "math_id": 7, "text": "\\operatorname{D}_{x}^{\\alpha} = {^{\\text{RL}}\\operatorname{I}_{x}^{-\\alpha}}" }, { "math_id": 8, "text": "\\alpha = m + z \\in \\mathbb{R} \\wedge m \\in \\mathbb{N}_{0} \\wedge 0 < z < 1" }, { "math_id": 9, "text": "\\operatorname{D}_{x}^{\\alpha} = \\operatorname{D}_{x}^{m + z} = \\operatorname{D}_{x}^{z + m} = \\operatorname{D}_{x}^{z - 1 + 1 + m} = \\operatorname{D}_{x}^{z - 1}\\operatorname{D}_{x}^{1 + m} = {^{\\text{RL}}\\operatorname{I}}_{x}^{1 - z}\\operatorname{D}_{x}^{1 + m}" }, { "math_id": 10, "text": "f" }, { "math_id": 11, "text": "C^{m}\\left( 0,\\, \\infty \\right)" }, { "math_id": 12, "text": "{\\operatorname{D}_{x}^{m + z}}\\left[ f\\left( x \\right) \\right] = \\frac{1}{\\Gamma\\left( 1 - z \\right)} \\cdot \\int\\limits_{0}^{x} \\frac{f^{\\left( 1 + m \\right)}\\left( t \\right)}{\\left( x - t \\right)^{z}} \\, \\operatorname{d}t." }, { "math_id": 13, "text": "{ ^{\\text{C}}\\operatorname{D}}_{x}^{\\alpha}" }, { "math_id": 14, "text": "{^{\\text{C}}\\operatorname{D}_{x}^{m + z}}\\left[ f\\left( x \\right) \\right] = \\frac{1}{\\Gamma\\left( 1 - z \\right)} \\cdot \\int\\limits_{0}^{x} \\frac{f^{\\left( m + 1 \\right)}\\left( t \\right)}{\\left( x - t \\right)^{z}} \\, \\operatorname{d}t" }, { "math_id": 15, "text": "m \\in \\mathbb{N}_{0} \\wedge 0 < z < 1" }, { "math_id": 16, "text": "{^{\\text{C}}\\operatorname{D}_{x}^{\\alpha}}\\left[ f\\left( x \\right) \\right] = \\frac{1}{\\Gamma\\left( \\left\\lceil \\alpha \\right\\rceil - \\alpha \\right)} \\cdot \\int\\limits_{0}^{x} \\frac{f^{\\left( \\left\\lceil \\alpha \\right\\rceil \\right)}\\left( t \\right)}{\\left( x - t \\right)^{\\alpha + 1 - \\left\\lceil \\alpha \\right\\rceil}}\\, \\operatorname{d}t" }, { "math_id": 17, "text": "\\alpha \\in \\mathbb{R}_{> 0} \\setminus \\mathbb{N}" }, { "math_id": 18, "text": "\\left\\lceil \\cdot \\right\\rceil" }, { "math_id": 19, "text": "\\alpha = m + z" }, { "math_id": 20, "text": "\\left\\lceil \\alpha \\right\\rceil = m + 1" }, { "math_id": 21, "text": "\\left\\lceil \\alpha \\right\\rceil + z = \\alpha + 1 " }, { "math_id": 22, "text": "{^{\\text{C}}\\operatorname{D}_{x}^{\\alpha}}\\left[ f\\left( x \\right) \\right] = \\frac{1}{\\Gamma\\left( n - \\alpha \\right)} \\cdot \\int\\limits_{0}^{x} \\frac{f^{\\left( n \\right)}\\left( t \\right)}{\\left( x - t \\right)^{\\alpha + 1 - n}}\\, \\operatorname{d}t" }, { "math_id": 23, "text": "n - 1 < \\alpha < n \\in \\mathbb{N}. 
" }, { "math_id": 24, "text": "\\left( 0,\\, \\infty \\right)" }, { "math_id": 25, "text": "a" }, { "math_id": 26, "text": "{_{a}^{\\text{C}}\\operatorname{D}_{x}^{\\alpha}}\\left[ f\\left( x \\right) \\right] = \\frac{1}{\\Gamma\\left( \\left\\lceil \\alpha \\right\\rceil - \\alpha \\right)} \\cdot \\int\\limits_{a}^{x} \\frac{f^{\\left( \\left\\lceil \\alpha \\right\\rceil \\right)}\\left( t \\right)}{\\left( x - t \\right)^{\\alpha + 1 - \\left\\lceil \\alpha \\right\\rceil}}\\, \\operatorname{d}t" }, { "math_id": 27, "text": "\\left( a,\\, \\infty \\right)" }, { "math_id": 28, "text": "\\operatorname{_{a}^{\\text{C}}D}_{x}^{\\alpha}\\operatorname{_{a}^{\\text{C}}D}_{x}^{\\beta} = \\operatorname{_{a}^{\\text{C}}D}_{x}^{\\alpha + \\beta} \\ne \\operatorname{_{a}^{\\text{C}}D}_{x}^{\\beta}\\operatorname{_{a}^{\\text{C}}D}_{x}^{\\alpha}" }, { "math_id": 29, "text": "\\alpha \\in \\mathbb{R}_{> 0} \\setminus \\mathbb{N} \\wedge \\beta \\in \\mathbb{N}" }, { "math_id": 30, "text": "\\operatorname{_{a}^{\\text{C}}D}_{x}^{\\alpha}\\left[ g\\left( x \\right) \\cdot h\\left( x \\right) \\right] = \\sum\\limits_{k = 0}^{\\infty}\\left[ \\binom{a}{k} \\cdot g^{\\left( k \\right)}\\left( x \\right) \\cdot \\operatorname{_{a}^{\\text{RL}}D}_{x}^{\\alpha - k}\\left[ h\\left( x \\right) \\right] \\right] - \\frac{\\left( x - a \\right)^{-\\alpha}}{\\Gamma\\left( 1 - \\alpha \\right)} \\cdot g\\left( a \\right) \\cdot h\\left( a \\right)" }, { "math_id": 31, "text": "\\binom{a}{b} = \\frac{\\Gamma\\left( a + 1 \\right)}{\\Gamma\\left( b + 1 \\right) \\cdot \\Gamma\\left( a - b + 1 \\right)}" }, { "math_id": 32, "text": "{_{a}^{\\text{C}}\\operatorname{D}_{x}^{\\alpha}}\\left[ f\\left( x \\right) \\right] = {_{a}^{\\text{RL}}\\operatorname{I}_{x}^{\\left\\lceil \\alpha \\right\\rceil - \\alpha}}\\left[ \\operatorname{D}_{x}^{\\left\\lceil \\alpha \\right\\rceil}\\left[ f\\left( x \\right) \\right] \\right]" }, { "math_id": 33, "text": "{_{a}^{\\text{C}}\\operatorname{D}_{x}^{\\alpha}}\\left[ f\\left( x \\right) \\right] = {_{a}^{\\text{RL}}\\operatorname{D}_{x}^{\\alpha}}\\left[ f\\left( x \\right) \\right] - \\sum\\limits_{k = 0}^{\\left\\lceil \\alpha \\right\\rceil}\\left[ \\frac{x^{k - \\alpha}}{\\Gamma\\left( k - \\alpha + 1 \\right)} \\cdot f^{\\left( k \\right)}\\left( 0 \\right) \\right]" }, { "math_id": 34, "text": "{_{a}^{\\text{RL}}\\operatorname{D}_{x}^{\\alpha}}" }, { "math_id": 35, "text": "\\mathcal{L}_{x}\\left\\{ {_{a}^{\\text{C}}\\operatorname{D}_{x}^{\\alpha}}\\left[ f\\left( x \\right) \\right] \\right\\}\\left( s \\right) = s^{\\alpha} \\cdot F\\left( s \\right) - \\sum\\limits_{k = 0}^{\\left\\lceil \\alpha \\right\\rceil}\\left[ s^{\\alpha - k - 1} \\cdot f^{\\left( k \\right)}\\left( 0 \\right) \\right]" }, { "math_id": 36, "text": "\\mathcal{L}_{x}\\left\\{ f\\left( x \\right) \\right\\}\\left( s \\right) = F\\left( s \\right)" }, { "math_id": 37, "text": "c" }, { "math_id": 38, "text": "\\begin{align}\n{_{a}^{\\text{C}}\\operatorname{D}_{x}^{\\alpha}}\\left[ c \\right] &= \\frac{1}{\\Gamma\\left( \\left\\lceil \\alpha \\right\\rceil - \\alpha \\right)} \\cdot \\int\\limits_{a}^{x} \\frac{\\operatorname{D}_{t}^{\\left\\lceil \\alpha \\right\\rceil}\\left[ c \\right]}{\\left( x - t \\right)^{\\alpha + 1 - \\left\\lceil \\alpha \\right\\rceil}}\\, \\operatorname{d}t = \\frac{1}{\\Gamma\\left( \\left\\lceil \\alpha \\right\\rceil - \\alpha \\right)} \\cdot \\int\\limits_{a}^{x} \\frac{0}{\\left( x - t \\right)^{\\alpha + 1 - \\left\\lceil \\alpha \\right\\rceil}}\\, 
\\operatorname{d}t\\\\\n{_{a}^{\\text{C}}\\operatorname{D}_{x}^{\\alpha}}\\left[ c \\right] &= 0\n\\end{align}" }, { "math_id": 39, "text": "x^{b}" }, { "math_id": 40, "text": "\\begin{align}\n{_{a}^{\\text{C}}\\operatorname{D}_{x}^{\\alpha}}\\left[ x^{b} \\right] &= {_{a}^{\\text{RL}}\\operatorname{I}_{x}^{\\left\\lceil \\alpha \\right\\rceil - \\alpha}}\\left[ \\operatorname{D}_{x}^{\\left\\lceil \\alpha \\right\\rceil}\\left[ x^{b} \\right] \\right] = \\frac{\\Gamma\\left( b + 1 \\right)}{\\Gamma\\left( b - \\left\\lceil \\alpha \\right\\rceil + 1 \\right)} \\cdot {_{a}^{\\text{RL}}\\operatorname{I}_{x}^{\\left\\lceil \\alpha \\right\\rceil - \\alpha}}\\left[ x^{b - \\left\\lceil \\alpha \\right\\rceil} \\right]\\\\ {_{a}^{\\text{C}}\\operatorname{D}_{x}^{\\alpha}}\\left[ x^{b} \\right] &= \\begin{cases} \\frac{\\Gamma\\left( b + 1 \\right)}{\\Gamma\\left( b - \\alpha + 1 \\right)} \\left( x^{b - \\alpha} - a^{b - \\alpha} \\right),\\, &\\text{for } \\left\\lceil \\alpha \\right\\rceil - 1 < b \\wedge b \\in \\mathbb{R}\\\\ 0,\\, &\\text{for } \\left\\lceil \\alpha \\right\\rceil - 1 \\geq b \\wedge b \\in \\mathbb{N}\\\\ \\end{cases}\n\\end{align}" }, { "math_id": 41, "text": "e^{a \\cdot x}" }, { "math_id": 42, "text": "\\begin{align}\n{_{a}^{\\text{C}}\\operatorname{D}_{x}^{\\alpha}}\\left[ e^{b \\cdot x} \\right] &= {_{a}^{\\text{RL}}\\operatorname{I}_{x}^{\\left\\lceil \\alpha \\right\\rceil - \\alpha}}\\left[ \\operatorname{D}_{x}^{\\left\\lceil \\alpha \\right\\rceil}\\left[ e^{b \\cdot x} \\right] \\right] = b^{\\left\\lceil \\alpha \\right\\rceil} \\cdot {_{a}^{\\text{RL}}\\operatorname{I}_{x}^{\\left\\lceil \\alpha \\right\\rceil - \\alpha}}\\left[ e^{b \\cdot x} \\right]\\\\\n{_{a}^{\\text{C}}\\operatorname{D}_{x}^{\\alpha}}\\left[ e^{b \\cdot x} \\right] &= b^{\\alpha} \\cdot \\left( E_{x}\\left( \\left\\lceil \\alpha \\right\\rceil - \\alpha,\\, b \\right) - E_{a}\\left( \\left\\lceil \\alpha \\right\\rceil - \\alpha,\\, b \\right) \\right)\\\\\n\\end{align}" }, { "math_id": 43, "text": "E_{x}\\left( \\nu,\\, a \\right) = \\frac{a^{-\\nu} \\cdot e^{a \\cdot x} \\cdot \\gamma\\left( \\nu,\\, a \\cdot x \\right)}{\\Gamma\\left( \\nu \\right)}" }, { "math_id": 44, "text": "\\operatorname{E}_{t}" }, { "math_id": 45, "text": "\\gamma \\left( a,\\, b \\right)" } ]
https://en.wikipedia.org/wiki?curid=76975056
76985673
Alessandra Sarti
Italian mathematician Alessandra Sarti (born 1974) is an Italian mathematician specializing in algebraic geometry. She is the namesake of the Sarti surface, and has also published research on "K"3 surfaces. She works in France as a professor at the University of Poitiers and deputy director of the Institut national des sciences mathématiques et de leurs interactions (Insmi) of the French National Centre for Scientific Research in Paris. Education and career. Sarti was born in 1974, in Ferrara, Italy. After studying for a laurea at the University of Ferrara from 1993 to 1997, she moved to Germany for graduate study in mathematics. After a year at the University of Göttingen, supported by an Italian research grant, she became a research assistant at the University of Erlangen–Nuremberg. She completed her Ph.D. there in 2001, with the dissertation "Pencils of symmetric surfaces in formula_0", supervised by Wolf Barth. She took an assistant professor position at the University of Mainz in Germany, from 2003 to 2008, earning a habilitation there in 2007. After a temporary faculty position at the University of Erlangen–Nuremberg, she became a full professor at the University of Poitiers in France in 2008. At the University of Poitiers, she directed the Laboratoire de Mathématiques et Applications from 2016 to 2021. Since 2022, she has held a second affiliation as deputy director of the Institut national des sciences mathématiques et de leurs interactions of the French National Centre for Scientific Research (CNRS), in Paris. Research. Sarti is the namesake of the Sarti surfaces (also called Sarti dodecics) a family of degree-12 nodal surfaces with 600 nodes that she discovered in 1999 and published in 2001.[SS] One member of the family can be chosen so that 560 of the nodes have real rather than complex coordinates. The Sarti surface has a "K"3 surface as one of its quotients, and some of Sarti's other publications include research on the symmetries of "K"3 surfaces.[K3a][K3b] Personal life. Sarti has a twin sister, Cristina Sarti, who also did a Ph.D. in mathematics in Germany. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{P}_3" } ]
https://en.wikipedia.org/wiki?curid=76985673
76987869
4D N = 1 global supersymmetry
Theory of supersymmetry in four dimensions In supersymmetry, 4D formula_0 global supersymmetry is the theory of global supersymmetry in four dimensions with a single supercharge. It consists of an arbitrary number of chiral and vector supermultiplets whose possible interactions are strongly constrained by supersymmetry, with the theory primarily fixed by three functions: the Kähler potential, the superpotential, and the gauge kinetic matrix. Many common models of supersymmetry are special cases of this general theory, such as the Wess–Zumino model, formula_1 super Yang–Mills theory, and the Minimal Supersymmetric Standard Model. When gravity is included, the result is described by 4D formula_0 supergravity. Background. Global formula_1 supersymmetry has a spacetime symmetry algebra given by the super-Poincaré algebra with a single supercharge. In four dimensions this supercharge can be expressed either as a pair of Weyl spinors or as a single Majorana spinor. The particle content of this theory must belong to representations of the super-Poincaré algebra, known as supermultiplets. Without including gravity, there are two types of supermultiplets: a chiral supermultiplet consisting of a complex scalar field and its Majorana spinor superpartner, and a vector supermultiplet consisting of a gauge field along with its Majorana spinor superpartner. The general theory has an arbitrary number of chiral multiplets formula_2 indexed by formula_3, along with an arbitrary number of gauge multiplets formula_4 indexed by formula_5. Here formula_6 are complex scalar fields, formula_7 are gauge fields, and formula_8 and formula_9 are Majorana spinors known as chiralini and gaugini, respectively. Supersymmetry imposes stringent conditions on the way that the supermultiplets can be combined in the theory. In particular, most of the structure is fixed by three arbitrary functions of the scalar fields. The dynamics of the chiral multiplets is fixed by the holomorphic superpotential formula_10 and the Kähler potential formula_11, while the mixing between the chiral and gauge sectors is primarily fixed by the holomorphic gauge kinetic matrix formula_12. When such mixing occurs, the gauge group must also be consistent with the structure of the chiral sector. Scalar manifold geometry. The complex scalar fields in the formula_13 chiral supermultiplets can be seen as coordinates of a formula_14-dimensional manifold, known as the scalar manifold. This manifold can be parametrized using complex coordinates formula_15, where the barred index represents the complex conjugate formula_16. Supersymmetry ensures that the manifold is necessarily a complex manifold, which is a type of manifold that locally looks like formula_17 and whose transition functions are holomorphic. This is because supersymmetry transformations map formula_6 into left-handed Weyl spinors, and formula_18 into right-handed Weyl spinors, so the geometry of the scalar manifold must reflect the fermion spacetime chirality by admitting an appropriate decomposition into complex coordinates. For any complex manifold there always exists a special metric compatible with the manifold's complex structure, known as a Hermitian metric. The only non-zero components of this metric are formula_19, with a line element given by formula_20 Using this metric on the scalar manifold makes it a Hermitian manifold.
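The Hermitian metric on the scalar manifold can be made concrete with a small symbolic sketch. This is an illustration only: the sample potential below is an arbitrary choice, not taken from the article, and the construction g_{zz̄} = ∂_z∂_z̄K anticipates the Kähler-potential relation and the Kähler transformations discussed in the next paragraph.

```python
import sympy as sp

z, zbar = sp.symbols("z zbar")   # holomorphic and antiholomorphic coordinates

# Arbitrary illustrative potential for a single chiral multiplet.
K = z * zbar + sp.Rational(1, 4) * (z * zbar) ** 2

# Mixed second derivative gives the Hermitian metric component g_{z zbar}.
g = sp.simplify(sp.diff(K, z, zbar))
print(g)   # 1 + z*zbar, i.e. 1 + |z|^2 > 0 once zbar is identified with the conjugate of z

# Adding a holomorphic function and its conjugate to K leaves the metric unchanged.
f = z**3
K2 = K + f + f.subs(z, zbar)
print(sp.simplify(sp.diff(K2, z, zbar) - g))   # 0
```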
The chirality properties inherited from supersymmetry imply that any closed loop around the scalar manifold has to maintain the splitting between formula_6 and formula_18. This implies that the manifold has a formula_21 holonomy group. Such manifolds are known as Kähler manifolds and can alternatively be defined as being manifolds that admit a two-form, known as a Kähler form, defined by formula_22 such that formula_23. This also implies that the scalar manifold is a symplectic manifold. These manifolds have the useful property that their metric can be expressed in terms of a function known as a Kähler potential formula_24 through formula_25 where this function is only defined up to the addition of the real part of an arbitrary holomorphic function formula_26 Such transformations are known as Kähler transformations and, since they do not affect the geometry of the scalar manifold, any supersymmetric action must be invariant under them. Coupling the chiral and gauge sectors. The gauge group of a general supersymmetric theory is heavily restricted by the interactions of the theory. One key condition arises when chiral multiplets are charged under the gauge group, in which case the gauge transformations must leave the geometry of the scalar manifold unchanged. More specifically, they must leave both the scalar metric and the complex structure unchanged. The first condition implies that the gauge symmetry belongs to the isometry group of the scalar manifold, while the second further restricts it to the holomorphic Killing symmetries. Therefore, the gauge group must be a subgroup of this symmetry group, although additional consistency conditions can restrict the possible gauge groups further. The generators of the isometry group are known as Killing vectors, these being vectors that preserve the metric, a condition mathematically expressed by the Killing equation formula_27, where formula_28 is the Lie derivative along the corresponding vector. The isometry algebra is then the algebra of these Killing vectors formula_29 where formula_30 are the structure constants. Not all of these Killing vectors can necessarily be gauged. Rather, the Kähler structure of the scalar manifold also demands the preservation of the complex structure formula_31, which imposes that the Killing vectors must also be holomorphic functions formula_33. It is these holomorphic Killing vectors that define symmetries of Kähler manifolds, and so a gauge group can only be formed by gauging a subset of these. An implication of formula_34 is that there exists a set of real functions known as Killing prepotentials formula_35 which satisfy formula_36, where formula_37 is the interior product. The Killing prepotentials entirely fix the holomorphic Killing vectors formula_38 Conversely, if the holomorphic Killing vectors are known, then the prepotentials can be explicitly written in terms of the Kähler potential as formula_39 The holomorphic functions formula_40 describe how the Kähler potential changes under isometry transformations formula_41, allowing them to be calculated up to the addition of an imaginary constant. A key consistency condition on the prepotentials is that they must satisfy the equivariance condition formula_42 For non-abelian symmetries, this condition fixes the imaginary constants associated to the holomorphic functions formula_43, known as Fayet–Iliopoulos terms. 
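As a simple worked example (added for illustration: a single chiral field with canonical Kähler potential K = φφ̄, and with signs following the formulas above, which may differ between references), the U(1) phase isometry δφ = iqφ has the holomorphic Killing vector ξ^φ = iqφ and Killing prepotential

    \mathcal{P} = -q\,\phi\bar{\phi} + \eta , \qquad
    -i\,g^{\phi\bar{\phi}}\,\partial_{\bar{\phi}}\mathcal{P} = iq\phi = \xi^{\phi} ,

where η is a real constant that drops out of the Killing vector and is, up to normalization, the associated Fayet–Iliopoulos term.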
For abelian subalgebras of the gauge algebra, the Fayet–Iliopoulos terms remain unfixed since these have vanishing structure constants. Lagrangian. The derivatives in the Lagrangian are covariant with respect to the symmetries under which the fields transform, these being the gauge symmetries and the scalar manifold coordinate redefinition transformations. The various covariant derivatives are given by formula_44 formula_45 formula_46 where the hat indicates that the derivative is covariant with respect to gauge transformations. Here formula_47 are the holomorphic Killing vectors that have been gauged, while formula_48 are the scalar manifold Christoffel symbols and formula_49 are the gauge algebra structure constants. Additionally, second derivatives on the scalar manifold must also be covariant formula_50. Meanwhile, the left-handed and right-handed Weyl fermion projections of the Majorana spinors are denoted by formula_51. The general four-dimensional Lagrangian with global formula_1 supersymmetry is given by formula_52 formula_53 formula_54 formula_55 formula_56 formula_57 Here formula_58 are the so-called D-terms. The first line is the kinetic term for the chiral multiplets, whose structure is primarily fixed by the scalar metric, while the second line is the kinetic term for the gauge multiplets, which is instead primarily fixed by the real part of the holomorphic gauge kinetic matrix formula_59. The third line is the generalized supersymmetric theta-like term for the gauge multiplet, with this being a total derivative when the imaginary part of the gauge kinetic function is a constant, in which case it does not contribute to the equations of motion. The next line is an interaction term, while the second-to-last line contains the fermion mass terms given by formula_60 formula_61 where formula_10 is the superpotential, an arbitrary holomorphic function of the scalars. It is these terms that determine the masses of the fermions: in a particular vacuum state with the scalar fields expanded around some value formula_62, the mass matrices become fixed matrices to leading order in the scalar fields. Higher order terms give rise to interaction terms between the scalars and the fermions. Finding the mass basis generally involves diagonalizing the entire mass matrix, implying that the mass eigenstates are generally linear combinations of the chiral and gauge fermion fields. The last line includes the scalar potential formula_63 where the first term is called the F-term and the second is known as the D-term. Finally, this line also contains the four-fermion interaction terms formula_64 formula_65 formula_66 where formula_67 is the Riemann tensor of the scalar manifold. Properties. Supersymmetry transformations. Neglecting three-fermion terms, the supersymmetry transformation rules that leave the Lagrangian invariant are given by formula_68 formula_69 formula_70 formula_71 The second parts of the fermion transformations, proportional to formula_72 for the chiralino and formula_73 for the gaugino, are referred to as fermion shifts. These dictate many of the physical properties of the supersymmetry model, such as the form of the potential and the goldstino when supersymmetry is spontaneously broken. Spontaneous symmetry breaking. At the quantum level, supersymmetry is broken if the supercharges do not annihilate the vacuum formula_74. 
Since the Hamiltonian can be written in terms of these supercharges, this implies that unbroken supersymmetry corresponds to vanishing vacuum energy, while broken supersymmetry necessarily requires positive vacuum energy. In contrast to supergravity, global supersymmetry does not admit negative vacuum energies, with this being a direct consequence of the supersymmetry algebra. In the classical approximation, supersymmetry is unbroken if the scalar potential vanishes, which is equivalent to the condition that formula_75 If any of these are non-zero, then supersymmetry is classically broken. Due to the superpotential nonrenormalization theorem, which states that the superpotential does not receive corrections at any level of quantum perturbation theory, the above condition holds at all orders of quantum perturbation theory. Only non-perturbative quantum corrections can modify the condition for supersymmetry breaking. Spontaneous symmetry breaking of global supersymmetry necessarily leads to the presence of a massless Nambu–Goldstone fermion, referred to as a goldstino formula_76. This fermion is given by the linear combination of the fermion fields multiplied by their fermion shifts and contracted with appropriate metrics formula_77 with this being the eigenvector corresponding to the zero eigenvalue of the fermion mass matrix. The goldstino vanishes when the conditions for unbroken supersymmetry are met, that is, when the derivatives of the superpotential and the Killing prepotentials all vanish. Mass sum rules. One important set of quantities is the supertraces of powers of the mass matrices formula_78, usually expressed as a sum over all the eigenvalues formula_79 modified by the spin formula_32 of the state formula_80 In unbroken global formula_1 supersymmetry, formula_81 for all formula_3. The formula_82 case is referred to as the mass sum formula, which in the special case of a trivial gauge kinetic matrix formula_83 can be expressed as formula_84 showing that this vanishes in the case of a Ricci-flat scalar manifold, unless spontaneous symmetry breaking occurs through non-vanishing D-terms. For most models formula_85, even when supersymmetry is spontaneously broken. An implication of this is that the mass difference between bosons and fermions cannot be very large. The result can be generalized variously, such as for vanishing vacuum energy but a general gauge kinetic term, or even to a general formula using the superspace formalism. In the full quantum theory the masses can get additional quantum corrections, so the above results only hold at tree level. Special cases and generalizations. A theory with only chiral multiplets and no gauge multiplets is sometimes referred to as the supersymmetric sigma model, with this determined by the Kähler potential and the superpotential. From this, the Wess–Zumino model is acquired by restricting to a trivial Kähler potential corresponding to a Euclidean metric, together with a superpotential that is at most cubic formula_86 This model has the useful property of being fully renormalizable. If instead there are no chiral multiplets, then the theory with a Euclidean gauge kinetic matrix formula_87 is known as super Yang–Mills theory. In the case of a single gauge multiplet with a formula_88 gauge group, this corresponds to super Maxwell theory. Super quantum chromodynamics is meanwhile acquired using a Euclidean scalar metric, together with an arbitrary number of chiral multiplets behaving as matter and a single gauge multiplet. 
When the gauge group is an abelian group this is referred to as super quantum electrodynamics. Models with extended supersymmetry formula_89 arise as special cases of formula_1 supersymmetry models with particular choices of multiplets, potentials, and kinetic terms. This is in contrast to supergravity, where extended supergravity models are not special cases of formula_1 supergravity and necessarily include additional structures that must be added to the theory. Gauging global supersymmetry gives rise to local supersymmetry, which is equivalent to supergravity. In particular, 4D N = 1 supergravity has a matter content similar to that of global supersymmetry except with the addition of a single gravity supermultiplet, consisting of a graviton and a gravitino. The resulting action requires a number of modifications to account for the coupling to gravity, although it structurally shares many similarities with the case of global supersymmetry. The global supersymmetry model can be directly acquired from its supergravity generalization through the decoupling limit whereby the Planck mass is taken to infinity formula_90. These models are also applied in particle physics to construct supersymmetric generalizations of the Standard Model, most notably the Minimal Supersymmetric Standard Model. This is the minimal extension of the Standard Model that is consistent with phenomenology and includes supersymmetry that is broken at some high scale. Construction. There are a number of ways to construct a four-dimensional global formula_1 supersymmetric action. The most common approach is the superspace approach. In this approach, Minkowski spacetime is extended to an eight-dimensional supermanifold which additionally has four Grassmann coordinates. The chiral and vector multiplets are then packaged into fields known as superfields. The supersymmetry action is subsequently constructed by considering general invariant actions of the superfields and integrating over the Grassmann subspace to get a four-dimensional Lagrangian in Minkowski spacetime. An alternative approach to the superspace formalism is the multiplet calculus approach. Rather than working with superfields, this approach works with multiplets, which are sets of fields on which the supersymmetry algebra is realized. Invariant actions are then constructed from these. For global supersymmetry this is more complicated than the superspace approach, although a generalized approach is very useful when constructing supergravity actions. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal N = 1" }, { "math_id": 1, "text": "\\mathcal N=1" }, { "math_id": 2, "text": "(\\phi^n,\\chi^n)" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "(A^I_\\mu, \\lambda^I)" }, { "math_id": 5, "text": "I" }, { "math_id": 6, "text": "\\phi^n" }, { "math_id": 7, "text": "A^I_\\mu" }, { "math_id": 8, "text": "\\chi^n" }, { "math_id": 9, "text": "\\lambda^I" }, { "math_id": 10, "text": "W(\\phi)" }, { "math_id": 11, "text": "K(\\phi,\\bar \\phi)" }, { "math_id": 12, "text": "f_{IJ}(\\phi)" }, { "math_id": 13, "text": "n_c" }, { "math_id": 14, "text": "2n_c" }, { "math_id": 15, "text": "(\\phi^n, \\phi^{\\bar n})" }, { "math_id": 16, "text": "\\phi^{\\bar n} = (\\phi^n)^*" }, { "math_id": 17, "text": "\\mathbb C^{n_c}" }, { "math_id": 18, "text": "\\phi^{\\bar n}" }, { "math_id": 19, "text": "g_{m\\bar n}" }, { "math_id": 20, "text": "\nds^2 = g_{m\\bar n}(d\\phi^m \\otimes d\\phi^{\\bar n} + d\\phi^{\\bar n}\\otimes d\\phi^m).\n" }, { "math_id": 21, "text": "\\text{U}(N)" }, { "math_id": 22, "text": "\n\\Omega = i g_{m\\bar n} d\\phi^m \\wedge d\\phi^{\\bar n}\n" }, { "math_id": 23, "text": "d\\Omega = 0" }, { "math_id": 24, "text": "K(\\phi, \\bar \\phi)" }, { "math_id": 25, "text": "\ng_{m\\bar n} = \\partial_m \\partial_{\\bar n} K,\n" }, { "math_id": 26, "text": "\nK(\\phi, \\bar \\phi) \\rightarrow K(\\phi, \\bar \\phi) + h(\\phi) + h^*(\\bar \\phi).\n" }, { "math_id": 27, "text": "\\mathcal L_{\\xi_I}g = 0" }, { "math_id": 28, "text": "\\mathcal L_{\\xi_I}" }, { "math_id": 29, "text": "\n[\\xi_I, \\xi_J] = f_{IJ}{}^K \\xi_K,\n" }, { "math_id": 30, "text": "f_{IJ}{}^K" }, { "math_id": 31, "text": "\\mathcal L_{\\xi_I}J = 0" }, { "math_id": 32, "text": "J" }, { "math_id": 33, "text": "\\xi_I^{\\bar n}(\\bar\\phi) = (\\xi_I^n(\\phi))^*" }, { "math_id": 34, "text": "\\mathcal L_{\\xi_I} J = 0" }, { "math_id": 35, "text": "\\mathcal P_I" }, { "math_id": 36, "text": "i_{\\xi_I} J = d \\mathcal P_I" }, { "math_id": 37, "text": "i_{\\xi_I}" }, { "math_id": 38, "text": "\n\\xi^m_I = -ig^{m\\bar n}\\partial_{\\bar n}\\mathcal P_I.\n" }, { "math_id": 39, "text": "\n\\mathcal P_J = \\frac{i}{2}[\\xi^m_I \\partial_m K - \\xi_I^{\\bar n}\\partial_{\\bar n}K - (r_I-r_I^*)].\n" }, { "math_id": 40, "text": "r_I(\\phi)" }, { "math_id": 41, "text": "\\delta_I K \\equiv r_I+r_I^*" }, { "math_id": 42, "text": "\n\\xi_I^mg_{m\\bar n}\\xi_J^{\\bar n} - \\xi_J^mg_{m\\bar n}\\xi_I^{\\bar n} = if_{IJ}{}^K \\mathcal P_K.\n" }, { "math_id": 43, "text": "r_I -r_I^* = -i\\eta_I" }, { "math_id": 44, "text": "\n\\hat \\partial_\\mu \\phi^n = \\partial_\\mu \\phi^n - A^I_\\mu \\xi_I^n,\n" }, { "math_id": 45, "text": "\n\\hat{\\partial}_\\mu\\lambda^I = \\partial_\\mu \\lambda^I + A^J_\\mu f^I_{JK}\\lambda^K,\n" }, { "math_id": 46, "text": "\n\\hat{\\mathcal D}_\\mu \\chi^m_L = \\partial_\\mu\\chi^m_L + (\\hat \\partial_\\mu \\phi^n)\\Gamma^m_{nl} \\chi^l_L - A^I_\\mu (\\partial_n \\xi^m_I)\\chi^n_L,\n" }, { "math_id": 47, "text": "\\xi_I^m(\\phi)" }, { "math_id": 48, "text": "\\Gamma^m_{nl} = g^{m\\bar p}\\partial_n g_{l \\bar p}" }, { "math_id": 49, "text": "f_{JK}{}^I" }, { "math_id": 50, "text": "\\mathcal D_m \\partial_n = \\partial_m \\partial_n - \\Gamma^l_{mn}\\partial_l" }, { "math_id": 51, "text": "\\chi_{L,R} = P_{L,R}\\chi" }, { "math_id": 52, "text": "\n\\mathcal L = -g_{m\\bar n}\\bigg[\\hat \\partial_\\mu \\phi^m \\hat \\partial^\\mu \\phi^{\\bar n} +\\bar \\chi_L^{m}\\hat{{\\mathcal D}\\!\\!\\!/}\\chi^{\\bar n}_R + \\bar \\chi_R^{\\bar n}\\hat{{{\\mathcal 
D}\\!\\!\\!/}}\\chi^{m}_L\\bigg]\n" }, { "math_id": 53, "text": "\n+ \\text{Re}(f_{IJ})\\bigg[-\\frac{1}{4}F^I_{\\mu\\nu}F^{\\mu\\nu J} - \\frac{1}{2}\\bar \\lambda^I \\hat{{\\partial\\!\\!\\!/}}\\lambda^J\\bigg]\n" }, { "math_id": 54, "text": "\n+ \\frac{1}{8}(\\text{Im} f_{IJ})\\bigg[F_{\\mu\\nu}^I F_{\\rho \\sigma}^J \\epsilon^{\\mu\\nu\\rho\\sigma}-2i \\hat{\\partial}_\\mu(\\bar \\lambda^I \\gamma_5 \\gamma^\\mu \\lambda^J)\\bigg]\n" }, { "math_id": 55, "text": "\n-\\bigg[\\frac{1}{4\\sqrt 2}\\partial_m f_{IJ}F^I_{\\mu\\nu}\\bar \\chi^m_L \\gamma^{\\mu\\nu}\\lambda^J_L + h.c.\\bigg]\n" }, { "math_id": 56, "text": "\n+ \\bigg[ -\\frac{1}{2}m_{mn}\\bar \\chi^m_L \\chi^n_L - m_{n I}\\bar \\chi^n_L\\lambda_L^I -\\frac{1}{2}m_{IJ}\\bar \\lambda^I_L \\lambda^J_L +h.c.\\bigg]\n" }, { "math_id": 57, "text": "\n- V(\\phi^m, \\phi^n) + \\mathcal L_{4f}.\n" }, { "math_id": 58, "text": "D^I = (\\text{Re} f)^{-1 IJ} \\mathcal P_J" }, { "math_id": 59, "text": " f_{IJ}(\\phi)" }, { "math_id": 60, "text": "\nm_{mn} = \\mathcal D_m \\partial_n W, \\ \\ \\ \\ \\ m_{IJ} = -\\frac{1}{2}\\partial_n f_{IJ} \\partial^n \\bar W, \n" }, { "math_id": 61, "text": "\nm_{nI} = m_{In} = i\\sqrt 2 \\bigg[\\partial_n \\mathcal P_I - \\frac{1}{4}\\partial_n f_{IJ}D^J\\bigg],\n" }, { "math_id": 62, "text": "\\phi = \\phi_0 + \\phi'" }, { "math_id": 63, "text": "\nV = g^{m\\bar n}\\partial_m W \\partial_{\\bar n}\\bar W + \\frac{1}{2}\\text{Re} (f_{IJ}) D^I D^J,\n" }, { "math_id": 64, "text": "\n\\mathcal L_{4f} = \\bigg[ \\frac{1}{8}(\\mathcal D_m \\partial_n f_{IJ})\\bar \\chi^m \\chi^n \\bar \\lambda^I \\lambda^J_L + h.c.\\bigg] + \\frac{1}{4}R_{m \\bar n p \\bar q} \\bar \\chi^m \\chi^p \\bar \\chi^{\\bar n} \\chi^{\\bar q}\n" }, { "math_id": 65, "text": "\n-\\frac{1}{16}\\partial_mf_{IJ}\\bar \\lambda^I \\lambda^J_L g^{m\\bar n}\\bar \\partial_{\\bar n}\\bar f_{KL}\\bar \\lambda^K \\lambda^L_R\n" }, { "math_id": 66, "text": "\n+ \\frac{1}{16} (\\text{Re} f)^{-1 \\ IJ}(\\partial_m f_{IN} \\bar \\chi^m - \\partial_{\\bar m}\\bar f_{IN}\\bar \\chi^{\\bar m})\\lambda^N (\\partial_n f_{JM}\\bar \\chi^{n}- \\partial_{\\bar n}\\bar f_{JM}\\bar \\chi^{\\bar n})\\lambda^M,\n" }, { "math_id": 67, "text": "R_{m\\bar n p\\bar q}" }, { "math_id": 68, "text": "\n\\delta \\phi^m = \\frac{1}{\\sqrt 2}\\bar \\epsilon \\chi^m,\n" }, { "math_id": 69, "text": "\n\\delta \\chi_L^m = \\frac{1}{\\sqrt 2}\\hat{{\\partial\\!\\!\\!/}} \\phi^m \\epsilon_R -\\frac{1}{\\sqrt 2}g^{m\\bar n}(\\partial_{\\bar n}\\bar W)\\epsilon_L,\n" }, { "math_id": 70, "text": "\n\\delta A^I_\\mu = -\\frac{1}{2}\\bar \\epsilon \\gamma_\\mu \\lambda^I,\n" }, { "math_id": 71, "text": "\n\\delta \\lambda^I_L = \\frac{1}{4}\\gamma^{\\mu\\nu}F^I_{\\mu\\nu}\\epsilon_L + \\frac{i}{2}D^I \\epsilon_L.\n" }, { "math_id": 72, "text": "\\partial_{\\bar n}\\bar W" }, { "math_id": 73, "text": "D^I" }, { "math_id": 74, "text": "Q_\\alpha |0\\rangle \\neq 0" }, { "math_id": 75, "text": "\n\\partial_m W(\\phi) = 0, \\ \\ \\ \\ \\ \\ \\ \\mathcal P_I(\\phi, \\bar \\phi) = 0.\n" }, { "math_id": 76, "text": "v" }, { "math_id": 77, "text": "\nv_L = -\\frac{1}{\\sqrt 2} P_L\\bigg[\\partial_n W \\chi^n + \\frac{1}{\\sqrt 2} i \\mathcal P_I \\lambda^I\\bigg],\n" }, { "math_id": 78, "text": "\\mathcal M" }, { "math_id": 79, "text": "m_J" }, { "math_id": 80, "text": "\n\\text{str}(\\mathcal M^{n}) = \\sum_J (-1)^{2J}(2J+1)m_J^{n}.\n" }, { "math_id": 81, "text": "\\text{str}( \\mathcal M^n) = 0" }, { "math_id": 82, "text": "n=2" }, { "math_id": 83, "text": "f_{IJ}=\\delta_{IJ}" }, { 
"math_id": 84, "text": "\n\\text{str}(\\mathcal M^2) = \\sum_J (-1)^{2J}(2J+1)m_J^2 = 2R^{m\\bar n}\\partial_m W \\partial_{\\bar n}\\bar W + 2i D^I \\nabla_m \\xi_I^m,\n" }, { "math_id": 85, "text": "\\text{str}(\\mathcal M^2)=0" }, { "math_id": 86, "text": "\nW(\\phi) = \\frac{1}{2}m\\phi^2 + \\frac{1}{3}\\lambda \\phi^3.\n" }, { "math_id": 87, "text": "f_{IJ}= \\delta_{IJ}" }, { "math_id": 88, "text": "\\text{U}(1)" }, { "math_id": 89, "text": "\\mathcal N\\geq 2" }, { "math_id": 90, "text": "M_P \\rightarrow \\infty" } ]
https://en.wikipedia.org/wiki?curid=76987869
76988
Electrocardiography
Examination of the heart's electrical activity Electrocardiography is the process of producing an electrocardiogram (ECG or EKG), a recording of the heart's electrical activity through repeated cardiac cycles. It is an electrogram of the heart which is a graph of voltage versus time of the electrical activity of the heart using electrodes placed on the skin. These electrodes detect the small electrical changes that are a consequence of cardiac muscle depolarization followed by repolarization during each cardiac cycle (heartbeat). Changes in the normal ECG pattern occur in numerous cardiac abnormalities, including: Traditionally, "ECG" usually means a 12-lead ECG taken while lying down as discussed below. However, other devices can record the electrical activity of the heart such as a Holter monitor but also some models of smartwatch are capable of recording an ECG. ECG signals can be recorded in other contexts with other devices. In a conventional 12-lead ECG, ten electrodes are placed on the patient's limbs and on the surface of the chest. The overall magnitude of the heart's electrical potential is then measured from twelve different angles ("leads") and is recorded over a period of time (usually ten seconds). In this way, the overall magnitude and direction of the heart's electrical depolarization is captured at each moment throughout the cardiac cycle. There are three main components to an ECG: During each heartbeat, a healthy heart has an orderly progression of depolarization that starts with pacemaker cells in the sinoatrial node, spreads throughout the atrium, and passes through the atrioventricular node down into the bundle of His and into the Purkinje fibers, spreading down and to the left throughout the ventricles. This orderly pattern of depolarization gives rise to the characteristic ECG tracing. To the trained clinician, an ECG conveys a large amount of information about the structure of the heart and the function of its electrical conduction system. Among other things, an ECG can be used to measure the rate and rhythm of heartbeats, the size and position of the heart chambers, the presence of any damage to the heart's muscle cells or conduction system, the effects of heart drugs, and the function of implanted pacemakers. Medical uses. The overall goal of performing an ECG is to obtain information about the electrical functioning of the heart. Medical uses for this information are varied and often need to be combined with knowledge of the structure of the heart and physical examination signs to be interpreted. Some indications for performing an ECG include the following: ECGs can be recorded as short intermittent tracings or "continuous" ECG monitoring. Continuous monitoring is used for critically ill patients, patients undergoing general anesthesia, and patients who have an infrequently occurring cardiac arrhythmia that would unlikely be seen on a conventional ten-second ECG. Continuous monitoring can be conducted by using Holter monitors, internal and external defibrillators and pacemakers, and/or biotelemetry. Screening. For adults, evidence does not support the use of ECGs among those without symptoms or at low risk of cardiovascular disease as an effort for prevention. This is because an ECG may falsely indicate the existence of a problem, leading to misdiagnosis, the recommendation of invasive procedures, and overtreatment. 
However, persons employed in certain critical occupations, such as aircraft pilots, may be required to have an ECG as part of their routine health evaluations. Hypertrophic cardiomyopathy screening may also be considered in adolescents as part of a sports physical out of concern for sudden cardiac death. Electrocardiograph machines. Electrocardiograms are recorded by machines that consist of a set of electrodes connected to a central unit. Early ECG machines were constructed with analog electronics, where the signal drove a motor to print out the signal onto paper. Today, electrocardiographs use analog-to-digital converters to convert the electrical activity of the heart to a digital signal. Many ECG machines are now portable and commonly include a screen, keyboard, and printer on a small wheeled cart. Recent advancements in electrocardiography include developing even smaller devices for inclusion in fitness trackers and smart watches. These smaller devices often rely on only two electrodes to deliver a single lead I. Portable twelve-lead devices powered by batteries are also available. Recording an ECG is a safe and painless procedure. The machines are powered by mains power but they are designed with several safety features including an earthed (ground) lead. Other features include: Most modern ECG machines include automated interpretation algorithms. This analysis calculates features such as the PR interval, QT interval, corrected QT (QTc) interval, PR axis, QRS axis, rhythm and more. The results from these automated algorithms are considered "preliminary" until verified and/or modified by expert interpretation. Despite recent advances, computer misinterpretation remains a significant problem and can result in clinical mismanagement. Cardiac monitors. Besides the standard electrocardiograph machine, there are other devices capable of recording ECG signals. Portable devices have existed since the Holter monitor was produced in 1962. Traditionally, these monitors have used electrodes with patches on the skin to record the ECG, but new devices can stick to the chest as a single patch without need for wires, developed by Zio (Zio XT), TZ Medical (Trident), Philips (BioTel) and BardyDx (CAM) among many others. Implantable devices such as the artificial cardiac pacemaker and implantable cardioverter-defibrillator are capable of measuring a "far field" signal between the leads in the heart and the implanted battery/generator that resembles an ECG signal (technically, the signal recorded in the heart is called an electrogram, which is interpreted differently). Advancement of the Holter monitor became the implantable loop recorder that performs the same function but in an implantable device with batteries that last on the order of years. Additionally, there are available various Arduino kits with ECG sensor modules and smartwatch devices that are capable of recording an ECG signal as well, such as with the 4th generation Apple Watch, Samsung Galaxy Watch 4 and newer devices. Electrodes and leads. Electrodes are the actual conductive pads attached to the body surface. Any pair of electrodes can measure the electrical potential difference between the two corresponding locations of attachment. Such a pair forms "a lead". 
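As a toy numerical sketch (an added example: the electrode potentials below are made-up values, and the lead definitions anticipate the standard limb-lead formulas given later in this section), each bipolar limb lead is just the difference of two electrode potentials:

    # Hypothetical instantaneous electrode potentials in millivolts (illustrative values only).
    RA, LA, LL = 0.10, 0.45, 0.80   # right arm, left arm, left leg

    lead_I = LA - RA     # lead I compares the left arm with the right arm
    lead_II = LL - RA    # lead II compares the left leg with the right arm
    lead_III = LL - LA   # lead III compares the left leg with the left arm

    # Einthoven's law: lead II equals the sum of leads I and III, by construction.
    assert abs(lead_II - (lead_I + lead_III)) < 1e-9
    print(lead_I, lead_II, lead_III)   # approx. 0.35, 0.70, 0.35

The same arithmetic, with averaged reference potentials, underlies the augmented limb leads and precordial leads described below.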
However, "leads" can also be formed between a physical electrode and a "virtual electrode," known as Wilson's central terminal (WCT), whose potential is defined as the average potential measured by three limb electrodes that are attached to the right arm, the left arm, and the left foot, respectively. Commonly, 10 electrodes attached to the body are used to form 12 ECG leads, with each lead measuring a specific electrical potential difference (as listed in the table below). Electrodes applied to patient's body. Leads are broken down into three types: limb; augmented limb; and precordial or chest. The 12-lead ECG has a total of three "limb leads" and three "augmented limb leads" arranged like spokes of a wheel in the coronal plane (vertical), and six "precordial leads" or "chest leads" that lie on the perpendicular transverse plane (horizontal). In medical settings, the term "leads" is also sometimes used to refer to the wires or to the electrodes themselves, although this is technically incorrect. The term "leads" should be reserved for the electrocardiographic measurements or for their graphical representations. The 10 electrodes in a 12-lead ECG are listed below. Two types of electrodes in common use are a flat paper-thin sticker and a self-adhesive circular pad. The former are typically used in a single ECG recording while the latter are for continuous recordings as they stick longer. Each electrode consists of an electrically conductive electrolyte gel and a silver/silver chloride conductor. The gel typically contains potassium chloride – sometimes silver chloride as well – to permit electron conduction from the skin to the wire and to the electrocardiogram. The common virtual electrode, known as Wilson's central terminal (VW), is produced by averaging the measurements from the electrodes RA, LA, and LL to give an average potential of the body: formula_0 In a 12-lead ECG, all leads except the limb leads are assumed to be unipolar (aVR, aVL, aVF, V1, V2, V3, V4, V5, and V6). The measurement of a voltage requires two contacts and so, electrically, the unipolar leads are measured from the common lead (negative) and the unipolar lead (positive). This averaging for the common lead and the abstract unipolar lead concept makes for a more challenging understanding and is complicated by sloppy usage of "lead" and "electrode". In fact, instead of being a constant reference, VW has a value that fluctuates throughout the heart cycle. It also does not truly represent the center-of-heart potential due to the body parts the signals travel through. Because voltage is by definition a bipolar measurement between two points, describing an electrocardiographic lead as "unipolar" makes little sense electrically and should be avoided. The American Heart Association states "All leads are effectively 'bipolar,' and the term 'unipolar' in description of the augmented limb leads and the precordial leads lacks precision." Limb leads. Leads I, II and III are called the "limb leads". The electrodes that form these signals are located on the limbs – one on each arm and one on the left leg. The limb leads form the points of what is known as Einthoven's triangle. formula_1 formula_2 formula_3 Augmented limb leads. Leads aVR, aVL, and aVF are the "augmented limb leads". They are derived from the same three electrodes as leads I, II, and III, but they use Goldberger's central terminal as their negative pole. 
Goldberger's central terminal is a combination of inputs from two limb electrodes, with a different combination for each augmented lead. It is referred to immediately below as "the negative pole". formula_4 formula_5 formula_6 Together with leads I, II, and III, augmented limb leads aVR, aVL, and aVF form the basis of the hexaxial reference system, which is used to calculate the heart's electrical axis in the frontal plane. Older versions of the nodes (VR, VL, VF) use Wilson's central terminal as the negative pole, but the amplitude is too small for the thick lines of old ECG machines. The Goldberger terminals scale up (augments) the Wilson results by 50%, at the cost of sacrificing physical correctness by not having the same negative pole for all three. Precordial leads. The "precordial leads" lie in the transverse (horizontal) plane, perpendicular to the other six leads. The six precordial electrodes act as the positive poles for the six corresponding precordial leads: (V1, V2, V3, V4, V5, and V6). Wilson's central terminal is used as the negative pole. Recently, unipolar precordial leads have been used to create bipolar precordial leads that explore the right to left axis in the horizontal plane. Specialized leads. Additional electrodes may rarely be placed to generate other leads for specific diagnostic purposes. "Right-sided" precordial leads may be used to better study pathology of the right ventricle or for dextrocardia (and are denoted with an R (e.g., V5R). "Posterior leads" (V7 to V9) may be used to demonstrate the presence of a posterior myocardial infarction. The Lewis lead or S5-lead (requiring an electrode at the right sternal border in the second intercostal space) can be used to better detect atrial activity in relation to that of the ventricles. An "esophageal lead" can be inserted to a part of the esophagus where the distance to the posterior wall of the left atrium is only approximately 5–6 mm (remaining constant in people of different age and weight). An esophageal lead avails for a more accurate differentiation between certain cardiac arrhythmias, particularly atrial flutter, AV nodal reentrant tachycardia and orthodromic atrioventricular reentrant tachycardia. It can also evaluate the risk in people with Wolff-Parkinson-White syndrome, as well as terminate supraventricular tachycardia caused by re-entry. An intracardiac electrogram (ICEG) is essentially an ECG with some added "intracardiac leads" (that is, inside the heart). The standard ECG leads (external leads) are I, II, III, aVL, V1, and V6. Two to four intracardiac leads are added via cardiac catheterization. The word "electrogram" (EGM) without further specification usually means an intracardiac electrogram. Lead locations on an ECG report. A standard 12-lead ECG report (an electrocardiograph) shows a 2.5 second tracing of each of the twelve leads. The tracings are most commonly arranged in a grid of four columns and three rows. The first column is the limb leads (I, II, and III), the second column is the augmented limb leads (aVR, aVL, and aVF), and the last two columns are the precordial leads (V1 to V6). Additionally, a rhythm strip may be included as a fourth or fifth row. The timing across the page is continuous and notes tracings of the 12 leads for the same time period. In other words, if the output were traced by needles on paper, each row would switch which leads as the paper is pulled under the needle. 
For example, the top row would first trace lead I, then switch to lead aVR, then switch to V1, and then switch to V4, and so none of these four tracings of the leads are from the same time period as they are traced in sequence through time. Contiguity of leads. Each of the 12 ECG leads records the electrical activity of the heart from a different angle, and therefore align with different anatomical areas of the heart. Two leads that look at neighboring anatomical areas are said to be "contiguous". In addition, any two precordial leads next to one another are considered to be contiguous. For example, though V4 is an anterior lead and V5 is a lateral lead, they are contiguous because they are next to one another. Electrophysiology. The study of the conduction system of the heart is called cardiac electrophysiology (EP). An EP study is performed via a right-sided cardiac catheterization: a wire with an electrode at its tip is inserted into the right heart chambers from a peripheral vein, and placed in various positions in close proximity to the conduction system so that the electrical activity of that system can be recorded. Standard catheter positions for an EP study include "high right atrium" or hRA near the sinus node, a "His" across the septal wall of the tricuspid valve to measure bundle of His, a "coronary sinus" into the coronary sinus, and a "right ventricle" in the apex of the right ventricle. Interpretation. Interpretation of the ECG is fundamentally about understanding the electrical conduction system of the heart. Normal conduction starts and propagates in a predictable pattern, and deviation from this pattern can be a normal variation or be pathological. An ECG does not equate with mechanical pumping activity of the heart; for example, pulseless electrical activity produces an ECG that should pump blood but no pulses are felt (and constitutes a medical emergency and CPR should be performed). Ventricular fibrillation produces an ECG but is too dysfunctional to produce a life-sustaining cardiac output. Certain rhythms are known to have good cardiac output and some are known to have bad cardiac output. Ultimately, an echocardiogram or other anatomical imaging modality is useful in assessing the mechanical function of the heart. Like all medical tests, what constitutes "normal" is based on population studies. The heartrate range of between 60 and 100 beats per minute (bpm) is considered normal since data shows this to be the usual resting heart rate. Theory. Interpretation of the ECG is ultimately that of pattern recognition. In order to understand the patterns found, it is helpful to understand the theory of what ECGs represent. The theory is rooted in electromagnetics and boils down to the four following points: Thus, the overall direction of depolarization and repolarization produces positive or negative deflection on each lead's trace. For example, depolarizing from right to left would produce a positive deflection in lead I because the two vectors point in the same direction. In contrast, that same depolarization would produce minimal deflection in V1 and V2 because the vectors are perpendicular, and this phenomenon is called isoelectric. Normal rhythm produces four entities – a P wave, a QRS complex, a T wave, and a U wave – that each have a fairly unique pattern. Changes in the structure of the heart and its surroundings (including blood composition) change the patterns of these four entities. The U wave is not typically seen and its absence is generally ignored. 
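As a rough numerical illustration of the projection rule described above (an added sketch: the lead directions are the standard hexaxial reference system angles for the frontal plane, and a single fixed vector is a simplification of the continuously changing cardiac vector), the sign and relative size of the deflection in each limb lead can be estimated as a cosine projection:

    import math

    # Frontal-plane limb lead directions in the hexaxial reference system (degrees).
    LEAD_ANGLES = {"I": 0, "II": 60, "III": 120, "aVR": -150, "aVL": -30, "aVF": 90}

    def deflection(vector_angle_deg, magnitude, lead):
        """Signed projection of a frontal-plane depolarization vector onto a lead axis."""
        return magnitude * math.cos(math.radians(vector_angle_deg - LEAD_ANGLES[lead]))

    # A mean QRS vector pointing "down and to the left" at +60 degrees:
    for lead in LEAD_ANGLES:
        print(lead, round(deflection(60, 1.0, lead), 2))
    # Leads I, II, III and aVF come out positive, aVL is isoelectric (perpendicular),
    # and aVR is strongly negative, matching the usual appearance of a normal-axis rhythm.

The precordial leads would require the same construction in the transverse plane.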
Atrial repolarization is typically hidden in the much more prominent QRS complex and normally cannot be seen without additional, specialized electrodes. Background grid. ECGs are normally printed on a grid. The horizontal axis represents time and the vertical axis represents voltage. The standard values on this grid are shown in the adjacent image at 25mm/sec: The "large" box is represented by a heavier line weight than the small boxes. The standard printing speed in the United States is 25 mm per sec (5 big boxes per second), but in other countries it can be 50 mm per sec. Faster speeds such as 100 and 200 mm per sec are used during electrophysiology studies. Not all aspects of an ECG rely on precise recordings or having a known scaling of amplitude or time. For example, determining if the tracing is a sinus rhythm only requires feature recognition and matching, and not measurement of amplitudes or times (i.e., the scale of the grids are irrelevant). An example to the contrary, the voltage requirements of left ventricular hypertrophy require knowing the grid scale. Rate and rhythm. In a normal heart, the heart rate is the rate at which the sinoatrial node depolarizes since it is the source of depolarization of the heart. Heart rate, like other vital signs such as blood pressure and respiratory rate, change with age. In adults, a normal heart rate is between 60 and 100 bpm (normocardic), whereas it is higher in children. A heart rate below normal is called "bradycardia" (&lt;60 in adults) and above normal is called "tachycardia" (&gt;100 in adults). A complication of this is when the atria and ventricles are not in synchrony and the "heart rate" must be specified as atrial or ventricular (e.g., the ventricular rate in ventricular fibrillation is 300–600 bpm, whereas the atrial rate can be normal [60–100] or faster [100–150]). In normal resting hearts, the physiologic rhythm of the heart is normal sinus rhythm (NSR). Normal sinus rhythm produces the prototypical pattern of P wave, QRS complex, and T wave. Generally, deviation from normal sinus rhythm is considered a cardiac arrhythmia. Thus, the first question in interpreting an ECG is whether or not there is a sinus rhythm. A criterion for sinus rhythm is that P waves and QRS complexes appear 1-to-1, thus implying that the P wave causes the QRS complex. Once sinus rhythm is established, or not, the second question is the rate. For a sinus rhythm, this is either the rate of P waves or QRS complexes since they are 1-to-1. If the rate is too fast, then it is sinus tachycardia, and if it is too slow, then it is sinus bradycardia. If it is not a sinus rhythm, then determining the rhythm is necessary before proceeding with further interpretation. Some arrhythmias with characteristic findings: Determination of rate and rhythm is necessary in order to make sense of further interpretation. Axis. The heart has several axes, but the most common by far is the axis of the QRS complex (references to "the axis" imply the QRS axis). Each axis can be computationally determined to result in a number representing degrees of deviation from zero, or it can be categorized into a few types. The QRS axis is the general direction of the ventricular depolarization wavefront (or mean electrical vector) in the frontal plane. It is often sufficient to classify the axis as one of three types: normal, left deviated, or right deviated. 
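One way to make this three-way classification concrete (an added sketch using net QRS deflections in the two perpendicular leads I and aVF; the numerical cut-offs are only one common convention, and the exact limits vary between sources, as discussed below) is:

    import math

    def qrs_axis(net_lead_I, net_aVF):
        """Estimate the frontal-plane QRS axis in degrees from net deflections in leads I and aVF."""
        # Lead I lies along 0 degrees and aVF along +90 degrees in the hexaxial system.
        return math.degrees(math.atan2(net_aVF, net_lead_I))

    def classify(axis_deg):
        if -30 <= axis_deg <= 90:               # one commonly quoted "normal" range
            return "normal"
        if -90 <= axis_deg < -30:
            return "left axis deviation"
        if 90 < axis_deg <= 180:
            return "right axis deviation"
        return "extreme or indeterminate axis"  # roughly -180 to -90 degrees

    axis = qrs_axis(0.8, 0.5)                   # example net deflections in arbitrary units
    print(round(axis), classify(axis))          # approx. 32 degrees -> "normal"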
Population data shows that a normal QRS axis is from −30° to 105°, with 0° being along lead I and positive being inferior and negative being superior (best understood graphically as the hexaxial reference system). Beyond +105° is right axis deviation and beyond −30° is left axis deviation (the third quadrant of −90° to −180° is very rare and is an indeterminate axis). A shortcut for determining if the QRS axis is normal is if the QRS complex is mostly positive in lead I and lead II (or lead I and aVF if +90° is the upper limit of normal). The normal QRS axis is generally "down and to the left", following the anatomical orientation of the heart within the chest. An abnormal axis suggests a change in the physical shape and orientation of the heart or a defect in its conduction system that causes the ventricles to depolarize in an abnormal way. The extent of a normal axis can be +90° or 105° depending on the source. Amplitudes and intervals. All of the waves on an ECG tracing and the intervals between them have a predictable time duration, a range of acceptable amplitudes (voltages), and a typical morphology. Any deviation from the normal tracing is potentially pathological and therefore of clinical significance. For ease of measuring the amplitudes and intervals, an ECG is printed on graph paper at a standard scale: each 1 mm (one small box on the standard 25mm/s ECG paper) represents 40 milliseconds of time on the x-axis, and 0.1 millivolts on the y-axis. Limb leads and electrical conduction through the heart. The animation shown to the right illustrates how the path of electrical conduction gives rise to the ECG waves in the limb leads. Recall that a positive current (as created by depolarization of cardiac cells) traveling towards the positive electrode and away from the negative electrode creates a positive deflection on the ECG. Likewise, a positive current traveling away from the positive electrode and towards the negative electrode creates a negative deflection on the ECG. The red arrow represents the overall direction of travel of the depolarization. The magnitude of the red arrow is proportional to the amount of tissue being depolarized at that instant. The red arrow is simultaneously shown on the axis of each of the 3 limb leads. Both the direction and the magnitude of the red arrow's projection onto the axis of each limb lead are shown with blue arrows. Then, the direction and magnitude of the blue arrows are what theoretically determine the deflections on the ECG. For example, as a blue arrow on the axis for Lead I moves from the negative electrode, to the right, towards the positive electrode, the ECG line rises, creating an upward wave. As the blue arrow on the axis for Lead I moves to the left, a downward wave is created. The greater the magnitude of the blue arrow, the greater the deflection on the ECG for that particular limb lead. Frames 1–3 depict the depolarization being generated in and spreading through the Sinoatrial node. The SA node is too small for its depolarization to be detected on most ECGs. Frames 4–10 depict the depolarization traveling through the atria, towards the Atrioventricular node. During frame 7, the depolarization is traveling through the largest amount of tissue in the atria, which creates the highest point in the P wave. Frames 11–12 depict the depolarization traveling through the AV node. Like the SA node, the AV node is too small for the depolarization of its tissue to be detected on most ECGs. 
This creates the flat PR segment. Frame 13 depicts an interesting phenomenon in an over-simplified fashion. It depicts the depolarization as it starts to travel down the interventricular septum, through the Bundle of His and Bundle branches. After the Bundle of His, the conduction system splits into the left bundle branch and the right bundle branch. Both branches conduct action potentials at about 1 m/s. Interestingly, however, the action potential starts traveling down the left bundle branch about 5 milliseconds before it starts traveling down the right bundle branch, as depicted by frame 13. This causes the depolarization of the interventricular septum tissue to spread from left to right, as depicted by the red arrow in frame 14. In some cases, this gives rise to a negative deflection after the PR interval, creating a Q wave such as the one seen in lead I in the animation to the right. Depending on the mean electrical axis of the heart, this phenomenon can result in a Q wave in lead II as well. Following depolarization of the interventricular septum, the depolarization travels towards the apex of the heart. This is depicted by frames 15–17 and results in a positive deflection on all three limb leads, which creates the R wave. Frames 18–21 then depict the depolarization as it travels throughout both ventricles from the apex of the heart, following the action potential in the Purkinje fibers. This phenomenon creates a negative deflection in all three limb leads, forming the S wave on the ECG. Repolarization of the atria occurs at the same time as the generation of the QRS complex, but it is not detected by the ECG since the tissue mass of the ventricles is so much larger than that of the atria. Ventricular contraction occurs between ventricular depolarization and repolarization. During this time, there is no movement of charge, so no deflection is created on the ECG. This results in the flat ST segment after the S wave. Frames 24–28 in the animation depict repolarization of the ventricles. The epicardium is the first layer of the ventricles to repolarize, followed by the myocardium. The endocardium is the last layer to repolarize. The plateau phase of depolarization has been shown to last longer in endocardial cells than in epicardial cells. This causes repolarization to start from the apex of the heart and move upwards. Since repolarization is the spread of negative current as membrane potentials decrease back down to the resting membrane potential, the red arrow in the animation is pointing in the direction opposite of the repolarization. This therefore creates a positive deflection in the ECG, and creates the T wave. Ischemia and infarction. Ischemia or non-ST elevation myocardial infarctions (non-STEMIs) may manifest as ST depression or inversion of T waves. It may also affect the high frequency band of the QRS. ST elevation myocardial infarctions (STEMIs) have different characteristic ECG findings based on the amount of time elapsed since the MI first occurred. The earliest sign is "hyperacute T waves," peaked T waves due to local hyperkalemia in ischemic myocardium. This then progresses over a period of minutes to elevations of the ST segment by at least 1 mm. Over a period of hours, a pathologic Q wave may appear and the T wave will invert. Over a period of days the ST elevation will resolve. Pathologic Q waves generally will remain permanently. The coronary artery that has been occluded can be identified in an STEMI based on the location of ST elevation. 
The left anterior descending (LAD) artery supplies the anterior wall of the heart, and therefore causes ST elevations in anterior leads (V1 and V2). The LCx supplies the lateral aspect of the heart and therefore causes ST elevations in lateral leads (I, aVL and V6). The right coronary artery (RCA) usually supplies the inferior aspect of the heart, and therefore causes ST elevations in inferior leads (II, III and aVF). Artifacts. An ECG tracing is affected by patient motion. Some rhythmic motions (such as shivering or tremors) can create the illusion of cardiac arrhythmia. Artifacts are distorted signals caused by a secondary internal or external sources, such as muscle movement or interference from an electrical device. Distortion poses significant challenges to healthcare providers, who employ various techniques and strategies to safely recognize these false signals. Accurately separating the ECG artifact from the true ECG signal can have a significant impact on patient outcomes and legal liabilities. Improper lead placement (for example, reversing two of the limb leads) has been estimated to occur in 0.4% to 4% of all ECG recordings, and has resulted in improper diagnosis and treatment including unnecessary use of thrombolytic therapy. A Method for Interpretation. Whitbread, consultant nurse and paramedic, suggests ten rules of the normal ECG, deviation from which is likely to indicate pathology. These have been added to, creating the 15 rules for 12-lead (and 15- or 18-lead) interpretation. Rule 1: All waves in aVR are negative. Rule 2: The ST segment (J point) starts on the isoelectric line (except in V1 &amp; V2 where it may be elevated by not greater than 1 mm). Rule 3: The PR interval should be 0.12–0.2 seconds long. Rule 4: The QRS complex should not exceed 0.11–0.12 seconds. Rule 5: The QRS and T waves tend to have the same general direction in the limb leads. Rule 6: The R wave in the precordial (chest) leads grows from V1 to at least V4 where it may or may not decline again. Rule 7: The QRS is mainly upright in I and II. Rule 8: The P wave is upright in I II and V2 to V6. Rule 9: There is no Q wave or only a small q (&lt;0.04 seconds in width) in I, II and V2 to V6. Rule 10: The T wave is upright in I II and V2 to V6. The end of the T wave should not drop below the isoelectric baseline. Rule 11: Does the deepest S wave in V1 plus the tallest R wave in V5 or V6 equal &gt;35 mm? Rule 12: Is there an Epsilon wave? Rule 13: Is there an J wave? Rule 14: Is there a Delta wave? Rule 15: Are there any patterns representing an occlusive myocardial infarction (OMI)? Diagnosis. Numerous diagnoses and findings can be made based upon electrocardiography, and many are discussed above. Overall, the diagnoses are made based on the patterns. For example, an "irregularly irregular" QRS complex without P waves is the hallmark of atrial fibrillation; however, other findings can be present as well, such as a bundle branch block that alters the shape of the QRS complexes. ECGs can be interpreted in isolation but should be applied – like all diagnostic tests – in the context of the patient. For example, an observation of peaked T waves is not sufficient to diagnose hyperkalemia; such a diagnosis should be verified by measuring the blood potassium level. Conversely, a discovery of hyperkalemia should be followed by an ECG for manifestations such as peaked T waves, widened QRS complexes, and loss of P waves. The following is an organized list of possible ECG-based diagnoses. 
Rhythm disturbances or arrhythmias: Heart block and conduction problems: Electrolyte disturbances and intoxication: Ischemia and infarction: Structural: Other phenomena: History. Etymology. The word is derived from the Greek "electro", meaning related to electrical activity; "kardia", meaning heart; and "graph", meaning "to write". Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\nV_W = \\frac{1}{3}(RA+LA+LL)\n" }, { "math_id": 1, "text": "\nI = LA - RA\n" }, { "math_id": 2, "text": "\nII = LL - RA\n" }, { "math_id": 3, "text": "\nIII = LL - LA\n" }, { "math_id": 4, "text": "\naVR = RA - \\frac{1}{2} (LA + LL) = \\frac 32 (RA - V_W)\n" }, { "math_id": 5, "text": "\naVL = LA - \\frac{1}{2} (RA + LL) = \\frac 32 (LA - V_W)\n" }, { "math_id": 6, "text": "\naVF = LL - \\frac{1}{2} (RA + LA) = \\frac 32 (LL - V_W)\n" } ]
https://en.wikipedia.org/wiki?curid=76988
76995097
4D N = 1 supergravity
Theory of supergravity in four dimensions In supersymmetry, 4D formula_0 supergravity is the theory of supergravity in four dimensions with a single supercharge. It contains exactly one supergravity multiplet, consisting of a graviton and a gravitino, but can also have an arbitrary number of chiral and vector supermultiplets, with supersymmetry imposing stringent constraints on how these can interact. The theory is primarily determined by three functions, those being the Kähler potential, the superpotential, and the gauge kinetic matrix. Many of its properties are strongly linked to the geometry associated to the scalar fields in the chiral multiplets. After the simplest form of this supergravity was first discovered, a theory involving only the supergravity multiplet, the following years saw an effort to incorporate different matter multiplets, with the general action being derived in 1982 by Eugène Cremmer, Sergio Ferrara, Luciano Girardello, and Antonie Van Proeyen. This theory plays an important role in many Beyond the Standard Model scenarios. Notably, many four-dimensional models derived from string theory are of this type, with supersymmetry providing crucial control over the compactification procedure. The absence of low-energy supersymmetry in our universe requires that supersymmetry is broken at some scale. Supergravity provides new mechanisms for supersymmetry breaking that are absent in global supersymmetry, such as gravity mediation. Another useful feature is the presence of no-scale models, which have numerous applications in cosmology. History. Supergravity was first discovered in 1976 in the form of pure 4D formula_1 supergravity. This was a theory of only the graviton and its superpartner, the gravitino. The first extension to also couple matter fields to the theory was acquired by adding Maxwell and Yang–Mills fields. Adding chiral multiplets proved harder, but the first step was to successfully add a single massless chiral multiplet in 1977. This was then extended the next year to adding more chiral multiplets in the form of the non-linear sigma model. All these theories were constructed using the iterative Noether method, which does not lend itself towards deriving more general matter coupled actions due to being very tedious. The development of tensor calculus techniques allowed for the construction of supergravity actions more efficiently. Using this formalism, the general four-dimensional matter-coupled formula_1 supergravity action was constructed in 1982 by Eugène Cremmer, Sergio Ferrara, Luciano Girardello, and Antonie Van Proeyen. It was also derived by Jonathan Bagger shortly after using superspace techniques, with this work highlighting important geometric features of the theory. Around this time two other features of the models were identified. These are the Kähler–Hodge structure present in theory and the presence and importance of no-scale models. Overview. The particle content of a general four-dimensional formula_1 supergravity consists of a single supergravity multiplet and an arbitrary number of chiral multiplets and gauge multiplets. The supergravity multiplet formula_2 contains the spin-2 graviton describing fluctuations in the spacetime metric formula_3, along with a spin-3/2 Majorana gravitino formula_4, where the spinor index formula_5 is often left implicit. The chiral multiplets formula_6, indexed by lower-case Latin indices formula_7, each consist of a scalar formula_8 and its Majorana superpartner formula_9. 
Similarly, the gauge multiplets formula_10 consist of a Yang–Mills gauge field formula_11 and its Majorana superpartner the gaugino formula_12, with these multiplets indexed by capital Latin letters formula_13. One of the most important structures of the theory is the scalar manifold, which is the field space manifold whose coordinates are the scalars. Global supersymmetry implies that this manifold must be a special type of complex manifold known as a Kähler manifold. Local supersymmetry of supergravity further restricts its form to be that of a Kähler–Hodge manifold. The theory is primarily described by three arbitrary functions of the scalar fields, the first being the Kähler potential formula_14 which fixes the metric on the scalar manifold. The second is the superpotential, which is an arbitrary holomorphic function formula_15 that fixes a number of aspects of the action such as the scalar field F-term potential along with the fermion mass terms and Yukawa couplings. Lastly, there is the gauge kinetic matrix whose components are holomorphic functions formula_16 determining, among other aspects, the gauge kinetic term, the theta term, and the D-term potential. Additionally, the supergravity may be gauged or ungauged. In ungauged supergravity, any gauge transformations present can only act on abelian gauge fields. Meanwhile, a gauged supergravity can be acquired from an ungauged one by gauging some of its global symmetries, which can cause the scalars or fermions to also transform under gauge transformations and result in non-abelian gauge fields. Besides local supersymmetry transformations, local Lorentz transformations, and gauge transformations, the action must also be invariant under Kähler transformations formula_17, where formula_18 is an arbitrary holomorphic function of the scalar fields. Construction. Historically, the first approach to constructing supergravity theories was the iterative Noether formalism which uses a globally supersymmetric theory as a starting point. Its Lagrangian is then coupled to pure supergravity through the term formula_19 which couples the gravitino to the supercurrent of the original theory, with everything also Lorentz covariantized to make it valid in curved spacetime. This candidate theory is then varied with respect to local supersymmetry transformations yielding some nonvanishing part. The Lagrangian is then modified by adding to it new terms that cancel this variation, at the expense of introducing new nonvanishing variations. More terms are the introduced to cancel these, and the procedure is repeated until the Lagrangian is fully invariant. Since the Noether formalism proved to be very tedious and inefficient, more efficient construction techniques were developed. The first formalism that successfully constructed the general matter-coupled 4D supergravity theory was the tensor calculus formalism. Another early approach was the superspace approach which generalizes the notion of superspace to a curved superspace whose tangent space at each point behaves like the traditional flat superspace from global supersymmetry. The general invariant action can then be constructed in terms of the superfields, which can then be expanded in terms of the component fields to give the component form of the supergravity action. Another approach is the superconformal tensor calculus approach which uses conformal symmetry as a tool to construct supergravity actions that do not themselves have any conformal symmetry. 
This is done by first constructing a gauge theory using the superconformal algebra. This theory contains extra fields and symmetries, but they can be eliminated using constraints or through gauge fixing to yield Poincaré supergravity without conformal symmetry. The superconformal and superspace ideas have also been combined into a number of different supergravity conformal superspace formulations. The direct generalization of the original on-shell superspace approach is the Grimm–Wess–Zumino formalism formulated in 1979. There is also the formula_20 superspace formalism proposed by Paul Howe in 1981. Lastly, the formula_1 conformal superspace approach formulated in 2010 has the convenient property that any other formulation of conformal supergravity is either equivalent to it or can otherwise be obtained from it by a partial gauge fixing. Symmetries. Scalar manifold and Kähler transformations. Supergravity often uses Majorana spinor notation rather than Weyl spinors since four-component notation is easier to use in curved spacetime. Weyl spinors can be acquired as projections of a Majorana spinor formula_21, with the left and right handed Weyl spinors denoted by formula_22. Complex scalars in the chiral multiplets act as coordinates on a complex manifold in the sense of the nonlinear sigma model, known as the scalar manifold. In supersymmetric theories these manifolds are imprinted with additional geometric constraints arising from the supersymmetry transformations. In formula_1 supergravity this manifold may be compact or noncompact, while for formula_23 supergravities it is necessarily noncompact. Global supersymmetry already restricts the manifold to be a Kähler manifold. These are a type of complex manifold, which roughly speaking are manifolds that look locally like formula_24 and whose transition maps are holomorphic functions. Complex manifolds are also Hermitian manifolds if they admit a well-defined metric whose only nonvanishing components are the formula_25 components, where the bar over the index denotes the conjugate coordinate formula_26. More generally, a bar over scalars denotes complex conjugation, while for spinors it denotes an adjoint spinor. Kähler manifolds are Hermitian manifolds that admit a two-form called a Kähler form formula_27 that is closed formula_28. A property of these manifolds is that their metric can be written in terms of the derivatives of a scalar function formula_29, where formula_30 is known as the Kähler potential. Here formula_31 denotes a derivative with respect to formula_8. This potential corresponding to a particular metric is not unique and can be changed by the addition of the real part of a holomorphic function formula_32 in what are known as Kähler transformations formula_33 Since this does not change the scalar manifold, supersymmetric actions must be invariant under such transformations. While in global supersymmetry fields and the superpotential transform trivially under Kähler transformations, in supergravity they are charged under the Kähler transformations as formula_34 formula_35 formula_36 where formula_37 is the Majorana spinor supersymmetry transformation parameter. These transformation rules impose further restrictions on the geometry of the scalar manifold. Since the superpotential transforms by a prefactor, this implies that the scalar manifold must globally admit a consistent line bundle.
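The relation between the Kähler potential and the scalar-manifold metric, and the invariance of that metric under Kähler transformations, can be checked symbolically. The sketch below is a minimal illustration for a single chiral multiplet; the particular potential and the holomorphic shift used in it are arbitrary choices made for the example, not anything fixed by the theory or taken from the references.

```python
import sympy as sp

# One chiral multiplet: treat phi and its conjugate phibar as independent symbols.
phi, phibar = sp.symbols('phi phibar')

# Illustrative Kähler potential; any real function K(phi, phibar) would do.
K = -3*sp.log(1 - phi*phibar)

# Kähler metric g_{phi phibar} = d^2 K / (dphi dphibar).
g = sp.simplify(sp.diff(K, phi, phibar))
print(g)  # equals 3/(1 - phi*phibar)**2

# Kähler transformation K -> K + h(phi) + hbar(phibar) with h holomorphic.
h = phi**2                    # illustrative holomorphic function (real coefficients)
hbar = h.subs(phi, phibar)    # its conjugate
K_shifted = K + h + hbar

g_shifted = sp.simplify(sp.diff(K_shifted, phi, phibar))
print(sp.simplify(g - g_shifted))  # 0: the metric, and hence the scalar manifold, is unchanged
```

The same kind of check extends to several chiral multiplets by promoting the potential to a function of all the coordinates and computing the full matrix of mixed second derivatives.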
The fermions meanwhile transform by a complex phase, which implies that the scalar manifold must also admit an associated formula_20 principal bundle. The nondynamical connection corresponding to this principal bundle is given by formula_38 with this satisfying formula_39, where formula_40 is the Kähler form. Here formula_41 are holomorphic functions associated with the gauge sector, described below. This condition means that the scalar manifold in four-dimensional formula_1 supergravity must be of a type which can admit a connection whose field strength is equal to the Kähler form. Such manifolds are known as Kähler–Hodge manifolds. In terms of characteristic classes, this condition translates to the requirement that formula_42 where formula_43 is the first Chern class of the line bundle, while formula_44 is the cohomology class of the Kähler form. An implication of the presence of an associated formula_20 principal bundle on the Kähler–Hodge manifold is that its field strength formula_45 must be quantized on any topologically non-trivial two-sphere of the scalar manifold, analogous to the Dirac quantization condition for magnetic monopoles. This arises due to the cocycle condition, which is the consistency of the connection across different coordinate patches. This can have various implications for the resulting physics; for example, on an formula_46 scalar manifold it results in the quantization of Newton's constant. Global symmetries of ungauged supergravity. Global symmetries in ungauged supergravity fall roughly into three classes: subgroups of the scalar manifold isometry group, rotations among the gauge fields, and the R-symmetry group. The exact global symmetry group depends on the details of the theory, such as the particular superpotential and gauge kinetic function, which provide additional constraints on the symmetry group. The global symmetry group of a supergravity with formula_47 abelian vector multiplets and formula_48 chiral multiplets must be a subgroup of formula_49. Here formula_50 is the isometry group of the scalar manifold, formula_51 is the set of symmetries acting only on the vector fields, and formula_52 is the R-symmetry group, with this surviving as a global symmetry only in theories with a vanishing superpotential. When the gauge kinetic matrix is a function of formula_53 scalars, then the isometry group decomposes into formula_54, where the first group acts only on the scalars, leaving the vectors unchanged, while the second simultaneously transforms both the scalars and vectors. These simultaneous transformations are not conventional symmetries of the action; rather, they are duality transformations that leave the equations of motion and Bianchi identity unchanged, similar to the Montonen–Olive duality. Global symmetries acting on scalars can only be subgroups of the isometry group of the scalar manifold since the transformations must preserve the scalar metric. Infinitesimal isometry transformations are described by Killing vectors formula_55, which are vectors satisfying the Killing equation formula_56, where formula_57 is the Lie derivative along the direction of the Killing vector. They act on the scalars as formula_58 and are the generators for the isometry algebra, satisfying the structure equation formula_59 Since the scalar manifold is a complex manifold, Killing vectors corresponding to symmetries of this manifold must also preserve the complex structure formula_60, which implies that they must be holomorphic formula_61.
Therefore, the gauge group must be a subgroup of the group formed by holomorphic Killing vectors, not merely a subgroup of the isometry group. For Kähler manifolds, this condition additionally implies that there exists a set of real functions known as Killing prepotentials formula_62 which satisfy formula_63, where formula_64 is the interior product. The Killing prepotentials can be explicitly written in terms of the Kähler potential formula_65 where the holomorphic functions formula_66 are the Kähler transformations that undo the isometry transformation, defined by formula_67 The prepotential must also satisfy a consistency condition known as the equivariance condition formula_68 where formula_69 are the structure constants of the gauge algebra. An additional restriction on global symmetries of scalars is that the superpotential must be invariant up to the same Kähler transformation formula_66 that leaves the Kähler potential invariant, which imposes the condition that the only admissible superpotentials are ones satisfying formula_70 Global symmetries involving scalars present in the gauge kinetic matrix still act on the scalar fields as isometry transformations, but now these transformations change the gauge kinetic matrix. To leave the theory invariant under a scalar isometry transformation requires a compensating transformation on the vectors. These vector transformations can be expressed as transformations on the electric field strength tensors formula_71 and their dual magnetic counterparts formula_72 defined from the equation of motion formula_73 Writing the field strengths and dual field strengths in a single vector allows the most general transformations to be written as formula_74 where the generators of these transformations are given by formula_75 Demanding that the equations of motion and Bianchi identities are unchanged restricts the transformations to be a subgroup of the symplectic group formula_76. The exact generators depend on the particular gauge kinetic matrix, with the relation formula_77 fixing the coefficients that determine formula_78. Transformations involving formula_79 are non-perturbative symmetries that do not leave the action invariant since they map the electric field strength into the magnetic field strength. Rather, these are duality transformations that are only symmetries at the level of the equations of motion, related to the electromagnetic duality. Meanwhile, transformations with formula_80 are known as generalized Peccei–Quinn shifts and they only leave the action invariant up to total derivatives. Global symmetries involving only vectors formula_51 are transformations that map the field strength tensor into itself and generally belong to formula_81. Gauge symmetry. In an ungauged supergravity, gauge symmetry only consists of abelian transformations of the gauge fields formula_82, with no other fields being gauged. Meanwhile, gauged supergravity gauges some of the global symmetries of the ungauged theory. Since the global symmetries are strongly limited by the details of the particular theory, such as the scalar manifold, the scalar potential, and the gauge kinetic matrix, the available gauge groups are likewise limited.
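The isometry and prepotential machinery that enters this gauging can be made concrete in a minimal example. The following sketch checks, for a single chiral field with a flat Kähler potential and the U(1) isometry that rotates it by a phase, that the explicit prepotential formula above produces a function whose derivatives reproduce the contraction of the Killing vector with the Kähler form; the field content, the choice of isometry, and the sign conventions written in the comments are illustrative assumptions rather than anything taken from the cited literature.

```python
import sympy as sp

phi, phibar = sp.symbols('phi phibar')

# Flat Kähler potential for one chiral multiplet and its metric.
K = phi*phibar
g = sp.diff(K, phi, phibar)     # = 1

# Holomorphic Killing vector of the U(1) isometry phi -> exp(i*alpha) phi.
xi = sp.I*phi                   # xi^phi
xibar = -sp.I*phibar            # its conjugate component xi^phibar

# K is exactly invariant under this isometry, so the compensating Kähler
# transformation vanishes (r = 0):
print(sp.simplify(xi*sp.diff(K, phi) + xibar*sp.diff(K, phibar)))   # 0

# Explicit prepotential P = (i/2) [xi^m dK/dphi^m - xi^nbar dK/dphibar^nbar - (r - rbar)].
P = sp.simplify(sp.Rational(1, 2)*sp.I*(xi*sp.diff(K, phi) - xibar*sp.diff(K, phibar)))
print(P)                        # -phi*phibar, a real function

# With the Kähler form written as i g dphi ^ dphibar, the condition i_xi(form) = dP
# reads, in components: dP/dphibar = i g xi^phi and dP/dphi = -i g xi^phibar.
print(sp.simplify(sp.diff(P, phibar) - sp.I*g*xi))     # 0
print(sp.simplify(sp.diff(P, phi) + sp.I*g*xibar))     # 0
```

The equivariance condition is trivially satisfied here because a single U(1) has vanishing structure constants; with several gauged isometries it becomes a genuine constraint on the constant shifts of the prepotentials.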
Gauged supergravity is invariant under the gauge transformations with gauge parameter formula_83 given by formula_84 formula_85 formula_86 formula_87 formula_88 Here formula_89 are the generators of the gauged algebra while formula_66 are defined as the compensating Kähler transformations needed to restore the Kähler potential to its original form after performing scalar field isometry transformations, with their imaginary components fixed by the equivariance condition. Whenever a formula_20 subgroup is gauged, as occurs when R-symmetry is gauged, this does not fix formula_90, with these terms then referred to as Fayet–Iliopoulos terms. Covariant derivatives. Supergravity has a number of distinct symmetries, all of which require their own covariant derivatives. The standard Lorentz covariant derivative on curved spacetime is denoted by formula_91, with this being trivial for scalar fields, while for fermionic fields it can be written using the spin connection formula_92 as formula_93 Scalars transform nontrivially only under scalar coordinate transformations and gauge transformations, so their covariant derivative is given by formula_94 where formula_55 are the holomorphic Killing vectors corresponding to the gauged isometry subgroup of the scalar manifold. A hat above a derivative indicates that it is covariant with respect to gauge transformations. Meanwhile, the superpotential only transforms nontrivially under Kähler transformations and so has a covariant derivative given by formula_95 where formula_31 is a derivative with respect to formula_8. The various covariant derivatives associated to the fermions depend upon which symmetries the fermions are charged under. The gravitino transforms under both Lorentz and Kähler transformation, while the gaugino additionally also transforms under gauge transformations. The chiralino transforms under all these as well as transforming as a vector under scalar field redefinitions. Therefore, their covariant derivatives are given by formula_96 formula_97 formula_98 Here formula_99 is the Christoffel symbol of the scalar manifold, while formula_100 are the structure constants of the Lie algebra associated to the gauge group. Lastly, formula_101 is the formula_20 connection on the scalar manifold, with its explicit form given in terms of the Kähler potential described previously. R-symmetry. R-symmetry of formula_1 superalgebras is a global symmetry acting only on fermions, transforming them by a phase formula_102 This is identical to the way that a constant Kähler transformation acts on fermions, differing from such transformations only in that it does not additionally transform the superpotential. Since Kähler transformations are necessarily symmetries of supergravity, R-symmetry is only a symmetry of supergravity when these two coincide, which only occurs for a vanishing superpotential. Whenever R-symmetry is a global symmetry of the ungauged theory, it can be gauged to construct a gauged supergravity, which does not necessarily require gauging any chiral scalars. The simplest example of such a supergravity is Freedman's gauged supergravity which only has a single vector used to gauge R-symmetry and whose bosonic action is equivalent to an Einstein–Maxwell–de Sitter theory. 4D "N" = 1 supergravity Lagrangian. 
The Lagrangian for 4D formula_1 supergravity with an arbitrary number of chiral and vector supermultiplets can be split up as formula_103 Besides being invariant under local supersymmetry transformations, this Lagrangian is also Lorentz invariant, gauge invariant, and Kähler transformation invariant, with the covariant derivatives being covariant under these symmetries. The three main functions determining the structure of the Lagrangian are the superpotential, the Kähler potential, and the gauge kinetic matrix. Kinetic and theta terms. The first term in the Lagrangian consists of all the kinetic terms of the fields formula_104 formula_105 formula_106 The first line is the kinetic action for the supergravity multiplet, made up of the Einstein–Hilbert action and the covariantized Rarita–Schwinger action; this line is the covariant generalization of the pure supergravity action. The formalism used for describing gravity is the vielbein formalism, where formula_107 is the vielbein while formula_108 is the spin connection. Additionally, formula_109 and formula_110 is the four-dimensional Planck mass. The second line consists of the kinetic terms for the chiral multiplets, with its overall form determined by the scalar manifold metric, which itself is fully fixed by the Kähler potential formula_29. The third line has the kinetic terms for the gauge multiplets, with their behaviour fixed by the real part of the gauge kinetic matrix. The holomorphic gauge kinetic matrix formula_16 must have a positive definite real part for the kinetic terms to have the correct sign. The slash on the covariant derivatives corresponds to the Feynman slash notation formula_111, while formula_112 are the field strengths of the gauge fields formula_113. The gauge sector also introduces a theta-like term formula_114 with this being a total derivative whenever the imaginary part of the gauge kinetic matrix is a constant, in which case it does not contribute to the classical equations of motion. Mass and interaction terms. The supergravity action has a set of mass-like bilinear terms for its fermions given by formula_115 formula_116 formula_117 The D-terms formula_118 are defined as formula_119 where formula_120 are the Killing prepotentials and formula_15 is the holomorphic superpotential. The first line in the Lagrangian is the mass-like term for the gravitino, while the remaining two lines are the mass terms for the chiralini and gaugini along with bilinear mixing terms for these. These terms determine the masses of the fermions since evaluating the Lagrangian in a vacuum state with constant scalar fields reduces the Lagrangian to a set of fermion bilinears with numerical prefactors. This can be written as a matrix, with the eigenvalues of this mass matrix being the masses of the fermions in the mass basis. The mass eigenstates are in general linear combinations of the chiralini and gaugini fermions. The next term in the Lagrangian is the supergravity generalization of a similar term found in the corresponding globally supersymmetric action that describes mixing between the gauge boson, a chiralino, and the gaugino. In the supergravity Lagrangian it is given by formula_121 Supercurrent terms.
The supercurrent terms describe the coupling of the gravitino to generalizations of the chiral and gauge supercurrents from global supersymmetry as formula_122 where formula_123 formula_124 These are the supercurrents of the chiral sector and of the gauge sector, modified appropriately to be covariant under the symmetries of the supergravity action. They provide additional bilinear terms between the gravitino and the other fermions that need to be accounted for when going into the mass basis. The presence of terms coupling the gravitino to the supercurrents of the global theory is a generic feature of supergravity theories since the gravitino acts as the gauge field for local supersymmetry. This is analogous to the case of gauge theories more generally, where gauge fields couple to the current associated with the symmetry that has been gauged. For example, quantum electrodynamics consists of the Maxwell action and the Dirac action, together with a coupling between the photon and the current formula_125, with this usually being absorbed into the definition of the fermion covariant derivative. Scalar potential. The potential term in the Lagrangian describes the scalar potential formula_126 as formula_127 where the first term is known as the F-term, and is a generalization of the potential arising from the chiral multiplets in global supersymmetry, together with a new negative gravitational contribution proportional to formula_128. The second term is called the D-term and is also found in a similar form in global supersymmetry, with it arising from the gauge sector. The Kähler potential and the superpotential are not independent in supergravity since Kähler transformations allow for the shifting of terms between them. The two functions can instead be packaged into an invariant function known as the Kähler invariant function formula_129 The scalar potential can then be written in terms of this function as formula_130 Four-fermion terms. Finally, there are the four-fermion interaction terms. These are given by formula_131 formula_132 formula_133 formula_134 formula_135 formula_136 formula_137 Here formula_138 is the scalar manifold Riemann tensor, while formula_139 is the supergravity four-gravitino interaction term formula_140 that arises in the second-order action of pure formula_1 supergravity after the torsion tensor has been substituted into the first-order action. Properties. Supersymmetry transformation rules. The supersymmetry transformation rules, up to three-fermion terms that are unimportant for most applications, are given by formula_141 formula_142 formula_143 formula_144 formula_145 formula_146 where formula_147 formula_148 formula_149 are known as fermionic shifts. It is a general feature of supergravity theories that fermionic shifts fix the form of the potential. In this case they can be used to express the potential as formula_150 showing that the fermionic shifts from the matter fields give a positive-definite contribution, while the gravitino gives a negative-definite contribution. Spontaneous symmetry breaking. A vacuum state used in many applications of supergravity is that of a maximally symmetric spacetime with no fermionic condensate. The case when fermionic condensates are present can be dealt with similarly by instead considering the effective field theory below the condensation scale, where the condensate is now described by the presence of another scalar field.
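The scalar potential that controls this vacuum analysis can be assembled directly from the Kähler potential and the superpotential. The sketch below does this for a single chiral field with no gauge sector and formula_110 set to one; the canonical Kähler potential and the mass-term superpotential are purely illustrative choices, not taken from the references. It also checks that the same potential follows from the Kähler invariant function introduced above.

```python
import sympy as sp

# One chiral multiplet, no gauge sector, reduced Planck mass set to 1.
phi, phibar = sp.symbols('phi phibar')
m = sp.symbols('m', real=True)

# Illustrative inputs: canonical Kähler potential and a mass-term superpotential.
K = phi*phibar
W = sp.Rational(1, 2)*m*phi**2
Wbar = sp.Rational(1, 2)*m*phibar**2

g = sp.diff(K, phi, phibar)                        # Kähler metric (here just 1)
DW = sp.diff(W, phi) + sp.diff(K, phi)*W           # Kähler-covariant derivative of W
DWbar = sp.diff(Wbar, phibar) + sp.diff(K, phibar)*Wbar

# F-term potential  V = e^K [ g^{-1} |DW|^2 - 3 |W|^2 ].
V = sp.exp(K)*(DW*DWbar/g - 3*W*Wbar)

# The same potential from the Kähler invariant function G = K + ln(W Wbar).
G = K + sp.log(W*Wbar)
V_from_G = sp.exp(G)*(sp.diff(G, phi)*sp.diff(G, phibar)/sp.diff(G, phi, phibar) - 3)

print(sp.simplify(sp.expand(V - V_from_G)))        # 0: the two expressions agree
```

Evaluating such a potential on constant field configurations is what determines the vacua discussed next.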
There are three types of maximally symmetric spacetimes: de Sitter, Minkowski, and anti-de Sitter spacetime. These are distinguished by the sign of the cosmological constant, which in supergravity at the classical level is equivalent to the sign of the scalar potential. Supersymmetry is preserved if all supersymmetry variations of the fermionic fields vanish in the vacuum state. Since the maximally symmetric spacetime under consideration has a constant scalar field and a vanishing gauge field, the variations of the chiralini and gaugini imply that formula_151. This is equivalent to the condition that formula_152. From the form of the scalar potential it follows that one can only have a supersymmetric vacuum if formula_153. Additionally, supersymmetric Minkowski spacetime occurs if and only if the superpotential also vanishes formula_154. However, having a Minkowski or an anti-de Sitter solution does not necessarily imply that the vacuum is supersymmetric. An important feature of supersymmetric solutions in anti-de Sitter spacetime is that they satisfy the Breitenlohner–Freedman bound and are therefore stable with respect to fluctuations of the scalar fields, a feature that is present in other supergravity theories as well. Supergravity provides a useful mechanism for spontaneous symmetry breaking of supersymmetry known as gravity mediation. This setup has a hidden and an observable sector that have no renormalizable couplings between them, meaning that they fully decouple from each other in the global supersymmetry formula_155 limit. In this scenario, supersymmetry breaking occurs in the hidden sector, with this transmitted to the observable sector only through nonrenormalizable terms, resulting in soft supersymmetry breaking in the visible sector, meaning that no quadratic divergences are introduced. One of the earliest and simplest models of gravity mediation is the Polonyi model. Other notable spontaneous symmetry breaking mechanisms are anomaly mediation and gauge mediation, in which the tree-level soft terms generated from gravity mediation are themselves subdominant. Super-Higgs mechanism. The supercurrent Lagrangian terms consist in part of bilinear fermion terms mixing the gravitino with the other fermions. These terms can be expressed as formula_156 where formula_157 is the supergravity generalization of the global supersymmetry goldstino field formula_158 This field transforms under supersymmetry transformations as formula_159, where formula_160 is the positive part of the scalar potential. When supersymmetry is spontaneously broken, formula_161, one can always choose a gauge where formula_162, in which case the terms mixing the gravitino with the other fermions drop out. The only remaining fermion bilinear term involving the gravitino is the quadratic gravitino term in formula_163. When the final spacetime is Minkowski spacetime, this bilinear term corresponds to a mass for the gravitino with a value of formula_164 An implication of this procedure when calculating the mass of the remaining fermions is that the gauge fixing transformation for the goldstino leads to additional shift contributions to the mass matrix for the chiral and gauge fermions, which have to be included. Mass sum rules. The supertrace sum of the squares of the mass matrix eigenvalues gives valuable information about the mass spectra of particles in supergravity.
The general formula is most compactly written in the superspace formalism, but in the special case of a vanishing cosmological constant, a trivial gauge kinetic matrix formula_165, and formula_48 chiral multiplets, it is given by formula_166 formula_167 which is the supergravity generalization of the corresponding result in global supersymmetry. One important implication is that generically scalars have masses of the order of the gravitino mass, while fermionic masses can remain small. No-scale models. No-scale models are models with a vanishing F-term potential, achieved by picking a Kähler potential and superpotential such that formula_168 When D-terms for gauge multiplets are ignored, this makes the classical potential vanish identically, leaving flat directions for all values of the scalar fields (an explicit check of this cancellation is sketched below). Additionally, supersymmetry is formally broken, indicated by a non-vanishing but undetermined mass of the gravitino. When moving beyond the classical level, quantum corrections come in to break this degeneracy, fixing the mass of the gravitino. The tree-level flat directions are useful in phenomenological applications of supergravity in cosmology, where even after the flat directions are lifted the slope is usually relatively small, a feature useful for building inflationary potentials. No-scale models also commonly occur in string theory compactifications. Quantum effects. Quantizing supergravity introduces additional subtleties. In particular, for supergravity to be consistent as a quantum theory, new constraints arise, such as anomaly cancellation conditions and black hole charge quantization. Quantum effects can also play an important role in many scenarios where they contribute dominant effects, such as when quantum contributions lift flat directions. The nonrenormalizability of four-dimensional supergravity also implies that it should be seen as an effective field theory of some UV theory. Quantum gravity is expected to have no exact global symmetries, which forbids constant Fayet–Iliopoulos terms, as these can only arise if there are exact unbroken global formula_20 symmetries. This is seen in string theory compactifications, which can at most produce field-dependent Fayet–Iliopoulos terms associated with Stueckelberg masses for gauged formula_20 symmetries. Related theories. A globally supersymmetric 4D formula_1 theory can be acquired from its supergravity generalization through the decoupling of gravity by rescaling the gravitino formula_169 and taking the Planck mass to infinity formula_170. The pure supergravity theory is meanwhile acquired by having no chiral or gauge multiplets. Additionally, a more general version of 4D formula_1 supergravity exists that also includes Chern–Simons terms. Unlike in global supersymmetry, where all extended supersymmetry models can be constructed as special cases of the formula_1 theory, extended supergravity models are not merely special cases of the formula_1 theory. For example, in formula_171 supergravity the relevant scalar manifold must be a quaternionic Kähler manifold. But since these manifolds are not themselves Kähler manifolds, they cannot occur as special cases of the formula_1 supergravity scalar manifold. Four-dimensional formula_1 supergravity plays a significant role in Beyond the Standard Model physics, being especially relevant in string theory, where it is the resulting effective theory in many compactifications.
For example, since compactification on a 6-dimensional Calabi–Yau manifold breaks 3/4ths of the initial supersymmetry, compactification of heterotic strings on such manifolds gives an formula_1 supergravity, while the compactification of type II string theories gives an formula_171 supergravity. But if the type II theories are instead compactified on a Calabi–Yau orientifold, which breaks even more of the supersymmetry, the result is also an formula_1 supergravity. Similarly, compactification of M-theory on a formula_172 manifold also results in an formula_1 supergravity. In all these theories, the particular properties of the resulting supergravity theory such as the Kähler potential and the superpotential are fixed by the geometry of the compact manifold. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
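Returning to the no-scale models mentioned above, the cancellation they rely on can be verified directly. The sketch below uses the standard textbook example of a single chiral field with formula_110 set to one; the specific Kähler potential and the constant superpotential are illustrative choices used only to exhibit the mechanism.

```python
import sympy as sp

phi, phibar = sp.symbols('phi phibar')
W0 = sp.symbols('W0', real=True)          # constant superpotential, taken real for simplicity

# No-scale Kähler potential (reduced Planck mass set to 1).
K = -3*sp.log(phi + phibar)

g = sp.diff(K, phi, phibar)               # scalar-manifold metric
DW = sp.diff(W0, phi) + sp.diff(K, phi)*W0
DWbar = sp.diff(W0, phibar) + sp.diff(K, phibar)*W0

# F-term potential: the |DW|^2 piece cancels the -3|W|^2 piece exactly.
V = sp.simplify(sp.exp(K)*(DW*DWbar/g - 3*W0**2))
print(V)                                   # 0 for all field values: a flat direction

# Gravitino mass m_{3/2} = e^{K/2} W: nonzero but undetermined along the flat direction.
m32 = sp.simplify(sp.exp(K/2)*W0)
print(m32)                                 # proportional to W0/(phi + phibar)**(3/2)
```

Because the potential vanishes identically while the gravitino mass depends on the undetermined modulus, supersymmetry is broken at a classically undetermined scale, matching the description of no-scale models given earlier.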
[ { "math_id": 0, "text": "\\mathcal N = 1" }, { "math_id": 1, "text": "\\mathcal N=1" }, { "math_id": 2, "text": "(g_{\\mu\\nu},\\psi_\\mu)" }, { "math_id": 3, "text": "g_{\\mu\\nu}" }, { "math_id": 4, "text": "\\psi_{\\alpha \\mu}" }, { "math_id": 5, "text": "\\alpha" }, { "math_id": 6, "text": "(\\phi^n, \\chi^n)" }, { "math_id": 7, "text": "n" }, { "math_id": 8, "text": "\\phi^n" }, { "math_id": 9, "text": "\\chi^n" }, { "math_id": 10, "text": "(A_\\mu^I, \\lambda^I)" }, { "math_id": 11, "text": "A_\\mu^I" }, { "math_id": 12, "text": "\\lambda^I" }, { "math_id": 13, "text": "I" }, { "math_id": 14, "text": "K(\\phi,\\bar \\phi)" }, { "math_id": 15, "text": "W(\\phi)" }, { "math_id": 16, "text": "f_{IJ}(\\phi)" }, { "math_id": 17, "text": "K(\\phi, \\bar \\phi)\\rightarrow K(\\phi,\\bar \\phi) + f(\\phi)+\\bar f(\\bar \\phi)" }, { "math_id": 18, "text": "f(\\phi)" }, { "math_id": 19, "text": "\\mathcal L \\supset -\\psi^\\mu j_\\mu" }, { "math_id": 20, "text": "\\text{U}(1)" }, { "math_id": 21, "text": "\\chi" }, { "math_id": 22, "text": "\\chi_{L,R} = P_{L,R} \\chi" }, { "math_id": 23, "text": "\\mathcal N>1" }, { "math_id": 24, "text": "\\mathbb C^n" }, { "math_id": 25, "text": "g_{m\\bar n}" }, { "math_id": 26, "text": "\\phi^{\\bar n} \\equiv \\bar\\phi^n" }, { "math_id": 27, "text": "\n\\Omega = i g_{m\\bar n} d\\phi^m \\wedge d\\phi^{\\bar n},\n" }, { "math_id": 28, "text": "d\\Omega= 0" }, { "math_id": 29, "text": "g_{m\\bar n}= \\partial_m \\partial_{\\bar n} K" }, { "math_id": 30, "text": "K(\\phi, \\bar \\phi)" }, { "math_id": 31, "text": "\\partial_n" }, { "math_id": 32, "text": "h(\\phi)" }, { "math_id": 33, "text": "\nK(\\phi, \\bar \\phi) \\rightarrow K(\\phi, \\bar \\phi) + h(\\phi) + \\bar h(\\bar \\phi).\n" }, { "math_id": 34, "text": "\nW \\rightarrow e^{-\\frac{h}{M_P^2}}W, \n" }, { "math_id": 35, "text": "\n\\chi^m \\rightarrow e^{i \\frac{\\text{Im}(h)}{2M_P^2}\\gamma_5}\\chi^m,\n" }, { "math_id": 36, "text": "\n\\psi_\\mu, \\epsilon, \\lambda^I \\rightarrow e^{-i\\frac{\\text{Im}(h)}{2M_P^2}\\gamma_5}\\psi_\\mu, \\epsilon, \\lambda^I,\n" }, { "math_id": 37, "text": "\\epsilon" }, { "math_id": 38, "text": "\nQ_\\mu = \\frac{i}{2}\\bigg[(\\partial_{\\bar n}K)\\partial_\\mu \\phi^{\\bar n} - (\\partial_m K)\\partial_\\mu \\phi^m - A^I_\\mu (r_I-\\bar r_I)\\bigg],\n" }, { "math_id": 39, "text": "dQ = \\Omega" }, { "math_id": 40, "text": "\\Omega" }, { "math_id": 41, "text": "r_I" }, { "math_id": 42, "text": "c_1(L) = [\\mathcal K]" }, { "math_id": 43, "text": "c_1(L)" }, { "math_id": 44, "text": "[\\mathcal K]" }, { "math_id": 45, "text": "\\Omega=dQ" }, { "math_id": 46, "text": "S^2" }, { "math_id": 47, "text": "n_v" }, { "math_id": 48, "text": "n_c" }, { "math_id": 49, "text": "G_{\\text{iso}}\\times G_v \\times U(1)_R" }, { "math_id": 50, "text": "G_{\\text{iso}}" }, { "math_id": 51, "text": "G_v" }, { "math_id": 52, "text": "\\text{U}(1)_R" }, { "math_id": 53, "text": "n_{cv}\\leq n_c" }, { "math_id": 54, "text": "G_{\\text{iso}} \\rightarrow G_{\\text{iso},c}\\times G_{\\text{iso},cv}" }, { "math_id": 55, "text": "\\xi^n_I(\\phi)" }, { "math_id": 56, "text": "\\mathcal L_{\\xi_I}g = 0" }, { "math_id": 57, "text": "\\mathcal L_{\\xi_I}" }, { "math_id": 58, "text": "\\phi^n \\rightarrow \\phi^n+\\alpha^I\\xi_I^n(\\phi)" }, { "math_id": 59, "text": "\n[\\xi_I, \\xi_J] = f_{IJ}{}^K \\xi_K.\n" }, { "math_id": 60, "text": "\\mathcal L_{\\xi_I} J = 0" }, { "math_id": 61, "text": "\\xi^{\\bar m}_I = \\bar \\xi^m_I" }, { "math_id": 62, "text": "\\mathcal P_I" }, { 
"math_id": 63, "text": "i_{\\xi_I} J = d \\mathcal P_I" }, { "math_id": 64, "text": "i_{\\xi_I}" }, { "math_id": 65, "text": "\n\\mathcal P_J = \\frac{i}{2}[\\xi^m_I \\partial_m K - \\xi_I^{\\bar n}\\partial_{\\bar n}K - (r_I-\\bar r_I)],\n" }, { "math_id": 66, "text": "r_I(\\phi)" }, { "math_id": 67, "text": "\n\\xi_I^m \\partial_m K + \\xi_I^{\\bar n}\\partial_{\\bar n}K = r_I(\\phi)+\\bar r_I(\\bar \\phi).\n" }, { "math_id": 68, "text": "\n\\xi_I^mg_{m\\bar n}\\xi_J^{\\bar n} - \\xi_J^mg_{m\\bar n}\\xi_I^{\\bar n} = if_{IJ}{}^K \\mathcal P_K," }, { "math_id": 69, "text": "f_{IJ}{}^K" }, { "math_id": 70, "text": "\n\\xi_I^n \\partial_n W = \\frac{r_I}{M_P^2} W.\n" }, { "math_id": 71, "text": "F^{\\mu\\nu}_I" }, { "math_id": 72, "text": "G^{\\mu\\nu}_I" }, { "math_id": 73, "text": "\n\\star G_I^{\\mu\\nu} = 2\\frac{\\delta S}{\\delta F^I_{\\mu\\nu}}.\n" }, { "math_id": 74, "text": "\\delta_I (\\begin{smallmatrix}F\\\\G\\end{smallmatrix}) = T_I(\\begin{smallmatrix}F \\\\ G \\end{smallmatrix})" }, { "math_id": 75, "text": "\nT_I = \\begin{pmatrix} a_{I}{}^J{}_K & b_I{}^{JK} \\\\ c_{IJK} & d_{IJ}{}^K \\end{pmatrix}.\n" }, { "math_id": 76, "text": "\\text{Sp}(2n_v,\\mathbb R)" }, { "math_id": 77, "text": "\n\\xi_I^n \\partial_n f_{JK}(\\phi) = c_{IJK}+d_{IJ}{}^Mf_{MK}-f_{JM}a_{I}{}^M{}_K + b_I{}^{MN}f_{JM}f_{KN}\n" }, { "math_id": 78, "text": "T_I" }, { "math_id": 79, "text": "b_I \\neq 0" }, { "math_id": 80, "text": "c_I\\neq 0" }, { "math_id": 81, "text": "\\text{O}(n_v) \\subset \\text{Sp}(2n_v,\\mathbb R)" }, { "math_id": 82, "text": "\\delta A^I_\\mu = \\partial_\\mu \\alpha^I(x)" }, { "math_id": 83, "text": "\\alpha^I(x)" }, { "math_id": 84, "text": "\n\\delta_\\alpha \\phi^n = \\alpha^I(x) \\xi_I^n,\n" }, { "math_id": 85, "text": "\n\\delta_\\alpha \\chi^n = \\alpha^I(x)\\partial_m\\xi^n_I \\chi^m + \\frac{1}{4M_P^2}\\alpha^I(x)(r_I-\\bar r_I)\\chi^n,\n" }, { "math_id": 86, "text": "\n\\delta_\\alpha A^I_\\mu = \\partial_\\mu \\alpha^I(x) + \\alpha^J(x) f_{KJ}{}^IA^K_\\mu,\n" }, { "math_id": 87, "text": "\n\\delta_\\alpha \\lambda^I = \\alpha^J(x)f_{KJ}{}^I\\lambda^K -\\frac{1}{4M_P^2}\\alpha^J(x)\\gamma_5(r_J-\\bar r_J)\\lambda^I,\n" }, { "math_id": 88, "text": "\n\\delta_\\alpha \\psi_{L\\mu} = -\\frac{1}{4M_P^2}\\alpha^I(x)(r_I-\\bar r_I) \\psi_{L\\mu}.\n" }, { "math_id": 89, "text": "\\xi^n_I" }, { "math_id": 90, "text": "\\text{Im}(r_I)" }, { "math_id": 91, "text": "D_\\mu" }, { "math_id": 92, "text": "\\omega_\\mu^{ab}" }, { "math_id": 93, "text": "\nD_\\mu = \\partial_\\mu + \\tfrac{1}{4}\\omega_\\mu{}^{ab}\\gamma_{ab}.\n" }, { "math_id": 94, "text": "\n\\hat \\partial_\\mu \\phi^n = \\partial_\\mu \\phi^n - A^I_\\mu \\xi_I^n,\n" }, { "math_id": 95, "text": "\n\\mathcal D_nW = \\partial_nW +\\frac{1}{M_P^2}(\\partial_n K)W,\n" }, { "math_id": 96, "text": "\n\\mathcal D_\\mu \\psi_\\nu = D_\\mu \\psi_\\nu + \\frac{i}{2M_P^2}Q_\\mu \\gamma_5 \\psi_\\nu,\n" }, { "math_id": 97, "text": "\n\\hat{\\mathcal D}_\\mu\\lambda^I = D_\\mu \\lambda^I + A^J_\\mu f^I_{JK}\\lambda^K + \\frac{i}{2M_P^2}Q_\\mu \\gamma_5 \\lambda^I,\n" }, { "math_id": 98, "text": "\n\\hat{\\mathcal D}_\\mu \\chi^m_L = D_\\mu\\chi^m_L + (\\hat \\partial_\\mu \\phi^n)\\Gamma^m_{nl} \\chi^l_L - A^I_\\mu (\\partial_n \\xi^m_I)\\chi^n_L - \\frac{i}{2M_P^2}Q_\\mu \\chi^m_L.\n" }, { "math_id": 99, "text": "\\Gamma^m_{nl} = g^{m\\bar p}\\partial_n g_{l \\bar p}" }, { "math_id": 100, "text": "f_{JK}{}^I" }, { "math_id": 101, "text": "Q_\\mu" }, { "math_id": 102, "text": "\n\\chi^m \\rightarrow e^{i\\theta 
\\gamma_5}\\chi^m, \\ \\ \\ \\ \\ \\ \\psi_\\mu, \\lambda^I \\rightarrow e^{-i\\theta \\gamma_5}\\psi_\\mu, \\lambda^I. \n" }, { "math_id": 103, "text": "\n\\mathcal L = \\mathcal L_{\\text{kinetic}} + \\mathcal L_{\\text{theta}} + \\mathcal L_{\\text{mass}} + \\mathcal L_{\\text{interaction}} + \\mathcal L_{\\text{supercurrent}} + \\mathcal L_{\\text{potential}}+ \\mathcal L_{\\text{4-fermion}}.\n" }, { "math_id": 104, "text": "\ne^{-1}\\mathcal L_{\\text{kinetic}} = \\frac{M_P^2}{2}R - \\frac{M_P^2}{2}\\bar \\psi_\\mu \\gamma^{\\mu \\nu \\rho}\\mathcal D_\\nu \\psi_\\rho\n" }, { "math_id": 105, "text": "\n- g_{m\\bar n}[(\\hat \\partial_\\mu \\phi^m)(\\hat \\partial^\\mu \\phi^{\\bar n})+\\bar \\chi_L^m \\hat{{\\mathcal D}\\!\\!\\!/} \\chi_R^n + \\bar \\chi_R^{\\bar n}\\hat{{\\mathcal D}\\!\\!\\!/}\\chi_L^m]\n" }, { "math_id": 106, "text": "\n+ \\text{Re}(f_{IJ}) \\bigg[-\\frac{1}{4}F_{\\mu\\nu}^I F^{\\mu\\nu J}-\\frac{1}{2}\\bar \\lambda^I \\hat{{\\mathcal D}\\!\\!\\!/}\\lambda^J\\bigg].\n" }, { "math_id": 107, "text": "e^\\mu_a" }, { "math_id": 108, "text": "\\omega^\\mu_{ab}" }, { "math_id": 109, "text": "e = \\det e^a_\\mu = \\sqrt{-g}" }, { "math_id": 110, "text": "M_P" }, { "math_id": 111, "text": "\\partial\\!\\!\\!/ = \\gamma^\\mu \\partial_\\mu" }, { "math_id": 112, "text": "F^I_{\\mu\\nu}" }, { "math_id": 113, "text": "A^I_\\mu" }, { "math_id": 114, "text": "\ne^{-1}\\mathcal L_{\\text{theta}} = \\frac{1}{8}\\text{Im}(f_{IJ})\\bigg[F_{\\mu\\nu}^I F_{\\rho \\sigma}^J \\epsilon^{\\mu\\nu\\rho \\sigma}-2i \\hat{\\mathcal D}_\\mu(e \\bar \\lambda^I \\gamma_5 \\gamma^\\mu \\lambda^J)\\bigg],\n" }, { "math_id": 115, "text": "\ne^{-1}\\mathcal L_{\\text{mass}} = \\frac{1}{2M_P^2}e^{K/2M_P^2}W \\bar \\psi_{\\mu R}\\gamma^{\\mu\\nu}\\psi_{\\nu R}\n" }, { "math_id": 116, "text": "\n+\\frac{1}{4}e^{K/2M_P^2}(\\mathcal D_mW)g^{m\\bar n}\\partial_{\\bar n}\\bar f_{IJ}\\bar \\lambda_R^I \\lambda^J_R - \\frac{1}{2}e^{K/2M_P^2}(\\mathcal D_m\\mathcal D_nW)\\bar \\chi^{\\bar m}_L \\chi^n_L\n" }, { "math_id": 117, "text": "\n+ \\frac{i\\sqrt 2}{4}D^I \\partial_m f_{IJ}\\bar \\chi_L^m \\lambda^J - \\sqrt 2 \\xi^{\\bar n}_I g_{m\\bar n}\\bar \\lambda^I \\chi^m_L + h.c..\n" }, { "math_id": 118, "text": "D^I" }, { "math_id": 119, "text": "\nD^I = (\\text{Re} \\ f)^{-1IJ}\\mathcal P_J,\n" }, { "math_id": 120, "text": "\\mathcal P_J" }, { "math_id": 121, "text": "\ne^{-1}\\mathcal L_{\\text{interaction}} = -\\frac{1}{4 \\sqrt 2}\\partial_m f_{IJ}F_{\\mu\\nu}^I \\bar \\chi_L^m \\gamma^{\\mu\\nu}\\lambda^J_L + h.c..\n" }, { "math_id": 122, "text": "\ne^{-1}\\mathcal L_{\\text{supercurrent}} = -(J^\\mu_{\\text{chiral}}\\psi_{\\mu L} + h.c.)-J^\\mu_{\\text{gauge}}\\psi_\\mu,\n" }, { "math_id": 123, "text": "\nJ^\\mu_{\\text{chiral}} = -\\tfrac{1}{\\sqrt 2}g_{m\\bar n}\\bar \\chi_L^m \\gamma^\\mu \\gamma^\\nu \\hat \\partial_\\nu \\phi^{\\bar n} + \\tfrac{1}{\\sqrt 2}\\bar \\chi^{\\bar n}_R \\gamma^\\mu e^{K/2M_P^2}\\mathcal D_{\\bar n}\\bar W,\n" }, { "math_id": 124, "text": "\nJ^\\mu_{\\text{gauge}} = -\\tfrac{1}{4}\\bar \\lambda^J\\text{Re}(f_{IJ}) F^I_{\\nu \\rho}\\gamma^\\mu\\gamma^{\\nu \\rho} - \\tfrac{i}{2} \\bar \\lambda^J \\mathcal P_J \\gamma^\\mu \\gamma_5.\n" }, { "math_id": 125, "text": "-ej^\\mu A_\\mu" }, { "math_id": 126, "text": "e^{-1}\\mathcal L_{\\text{potential}} = - V(\\phi, \\bar \\phi)" }, { "math_id": 127, "text": "\nV(\\phi, \\bar \\phi) = e^{K/M_P^2}\\bigg[g^{m\\bar n}(\\mathcal D_m W)(\\mathcal D_{\\bar n}\\bar 
W)-\\frac{3|W|^2}{M_P^2}\\bigg]+\\frac{1}{2}\\text{Re}(f_{IJ})D^ID^J,\n" }, { "math_id": 128, "text": "|W|^2" }, { "math_id": 129, "text": "\nG = M_P^{-2}K+ \\ln (M_P^{-6}|W|^2).\n" }, { "math_id": 130, "text": "\nV = M_P^4e^{G}[\\partial_m G (\\partial^m \\partial^{\\bar n}G) \\partial_{\\bar n}G - 3].\n" }, { "math_id": 131, "text": "\ne^{-1}\\mathcal L_{\\text{4-fermion}} = \\frac{M_P^2}{2}\\mathcal L_{\\text{SG}}\n" }, { "math_id": 132, "text": "\n+ \\bigg[-\\frac{1}{4 \\sqrt 2}\\partial_m f_{IJ}\\bar \\psi_\\mu \\gamma^\\mu \\chi^m \\bar \\lambda^I \\lambda^J_L + \\frac{1}{8}(\\mathcal D_m \\partial_n f_{IJ})\\bar \\chi^m \\chi^n \\bar \\lambda^I \\lambda^J_L + h.c.\\bigg]\n" }, { "math_id": 133, "text": "\n+ \\frac{1}{16}ie^{-1} \\epsilon^{\\mu\\nu\\rho\\sigma}\\bar \\psi_\\mu \\gamma_\\nu \\psi_\\rho\\bigg(\\frac{1}{2}\\text{Re}(f_{IJ})\\bar \\lambda^I \\gamma_5 \\gamma_\\sigma \\lambda^J + g_{m \\bar n}\\bar \\chi^{\\bar n} \\gamma_\\sigma \\chi^m\\bigg)- \\frac{1}{2}g_{m \\bar n}\\bar \\psi_\\mu \\chi^{\\bar n}\\bar \\psi^\\mu \\chi^m\n" }, { "math_id": 134, "text": "\n+ \\frac{1}{4}\\bigg(R_{m \\bar n p \\bar q} - \\frac{1}{2M_P^2}g_{m \\bar n}g_{p \\bar q}\\bigg) \\bar \\chi^m \\chi^p \\bar \\chi^{\\bar n} \\chi^{\\bar q}\n" }, { "math_id": 135, "text": "\n+\\frac{3}{64 M_P^2}[\\text{Re}(f_{IJ})\\bar \\lambda^I \\gamma_\\mu \\gamma_5 \\lambda^J]^2 -\\frac{1}{16} \\partial_m f_{IJ}\\bar \\lambda^I \\lambda^J_Lg^{m\\bar n}\\bar \\partial_{\\bar n} f_{KM} \\bar \\lambda^K \\lambda_R^M\n" }, { "math_id": 136, "text": "\n+ \\frac{1}{16} (\\text{Re}(f))^{-1 \\ IJ}(\\partial_m f_{IK} \\bar \\chi^m - \\partial_{\\bar m}\\bar f_{IK}\\bar \\chi^{\\bar m})\\lambda^K (\\partial_n f_{JM}\\bar \\chi^{n}- \\partial_{\\bar n}\\bar f_{JM}\\bar \\chi^{\\bar n})\\lambda^M\n" }, { "math_id": 137, "text": "\n- \\frac{1}{4M_P^2}g_{m \\bar n}\\text{Re}(f_{IJ}) \\bar \\chi^{m}\\lambda^I \\bar \\chi^{\\bar n} \\lambda^J.\n" }, { "math_id": 138, "text": "R_{m\\bar np\\bar q}" }, { "math_id": 139, "text": "\\mathcal L_{\\text{SG}}" }, { "math_id": 140, "text": "\ne^{-1}\\mathcal L_{\\text{SG}} = -\\frac{1}{16}[(\\bar \\psi^\\rho \\gamma^\\mu \\psi^\\nu)(\\bar \\psi_\\rho \\gamma_\\mu \\psi_\\nu + 2 \\bar \\psi_\\rho \\gamma_\\nu \\psi_\\mu)-4(\\bar \\psi_\\mu \\gamma^\\sigma \\psi_\\sigma)(\\bar \\psi^\\mu \\gamma^\\sigma \\psi_\\sigma)]\n" }, { "math_id": 141, "text": "\n\\delta e^a_\\mu = \\tfrac{1}{2}\\bar \\epsilon \\gamma^a\\psi_\\mu,\n" }, { "math_id": 142, "text": "\n\\delta \\phi^m = \\tfrac{1}{\\sqrt 2}\\bar \\epsilon_L \\chi_L^m,\n" }, { "math_id": 143, "text": "\n\\delta A^I_\\mu = -\\tfrac{1}{2}\\bar \\epsilon \\gamma_\\mu \\lambda^I,\n" }, { "math_id": 144, "text": "\n\\delta \\psi_{\\mu L} = \\mathcal D_\\mu \\epsilon_L + \\gamma_\\mu S \\epsilon_R,\n" }, { "math_id": 145, "text": "\n\\delta \\chi^m_L = \\tfrac{1}{\\sqrt 2}\\hat{\\partial\\!\\!\\!/}\\phi^m \\epsilon_R + \\mathcal N^m \\epsilon_L,\n" }, { "math_id": 146, "text": "\n\\delta \\lambda_L^I = \\tfrac{1}{4}\\gamma^{\\mu\\nu}F_{\\mu\\nu}^I \\epsilon_L + N^I \\epsilon_L,\n" }, { "math_id": 147, "text": "\nS = \\tfrac{1}{2M_P^2} e^{K/2M_P^2}W,\n" }, { "math_id": 148, "text": "\n\\mathcal N^m = -\\tfrac{1}{\\sqrt 2} g^{m\\bar n} e^{K/2M_P^2} \\mathcal D_{\\bar n}\\bar W,\n" }, { "math_id": 149, "text": "\nN^I = \\tfrac{i}{2}D^I,\n" }, { "math_id": 150, "text": "\nV(\\phi) = -12 M_P^2 S\\bar S + 2 g_{m \\bar n}\\mathcal N^m \\mathcal N^{\\bar n} +2 \\text{Re}(f_{IJ})N^I \\bar N^{J},\n" }, { "math_id": 151, "text": "\\langle \\mathcal 
N^m\\rangle = \\langle N^I\\rangle = 0" }, { "math_id": 152, "text": "\\langle \\mathcal D_m W\\rangle = \\langle \\mathcal D^I\\rangle = 0" }, { "math_id": 153, "text": "V\\leq 0" }, { "math_id": 154, "text": "\\langle W\\rangle = 0" }, { "math_id": 155, "text": "M_P\\rightarrow \\infty" }, { "math_id": 156, "text": "\n\\mathcal L_{\\text{supercurrent}} \\supset -\\bar \\psi_\\mu \\gamma^\\mu v_L + h.c.\n" }, { "math_id": 157, "text": "v_L" }, { "math_id": 158, "text": "\nv_L = -\\tfrac{1}{\\sqrt 2} \\chi^m_L e^{K/2M_P^2}\\mathcal D_m W-\\tfrac{1}{2}i \\lambda_L^I\\mathcal P_I.\n" }, { "math_id": 159, "text": "\\delta v_L =\\tfrac{1}{2}V_+\\epsilon_L+\\cdots" }, { "math_id": 160, "text": "V_+" }, { "math_id": 161, "text": "V_+>0" }, { "math_id": 162, "text": "v=0" }, { "math_id": 163, "text": "\\mathcal L_{\\text{mass}}" }, { "math_id": 164, "text": "\nm_{3/2} = \\tfrac{1}{M_P^2}e^{K/2M_P^2}W.\n" }, { "math_id": 165, "text": "f_{IJ}=\\delta_{IJ}" }, { "math_id": 166, "text": "\n\\text{str}(\\mathcal M^2) = \\sum_J (-1)^{2J}(2J+1)m_J^2 \n" }, { "math_id": 167, "text": "\n= (n_c-1)\\bigg(2|m_{3/2}|^2-\\frac{1}{M_P^2}\\mathcal P^I\\mathcal P_I\\bigg) + 2e^{K/2M_P^2}R^{m\\bar n}\\mathcal D_m W \\mathcal D_{\\bar n}\\bar W + 2i D^I \\nabla_m \\xi_I^m,\n" }, { "math_id": 168, "text": "\ng^{m\\bar n} (\\mathcal D_m W)(\\mathcal D_{\\bar n}\\bar W) = \\frac{3|W|^2}{M_P^2}.\n" }, { "math_id": 169, "text": "\\psi_\\mu \\rightarrow \\psi_\\mu/M_P" }, { "math_id": 170, "text": "M_P \\rightarrow \\infty" }, { "math_id": 171, "text": "\\mathcal N=2" }, { "math_id": 172, "text": "G_2" } ]
https://en.wikipedia.org/wiki?curid=76995097
76996
Self-organizing map
Machine learning technique useful for dimensionality reduction &lt;templatestyles src="Machine learning/styles.css"/&gt; A self-organizing map (SOM) or self-organizing feature map (SOFM) is an unsupervised machine learning technique used to produce a low-dimensional (typically two-dimensional) representation of a higher-dimensional data set while preserving the topological structure of the data. For example, a data set with formula_0 variables measured in formula_1 observations could be represented as clusters of observations with similar values for the variables. These clusters then could be visualized as a two-dimensional "map" such that observations in proximal clusters have more similar values than observations in distal clusters. This can make high-dimensional data easier to visualize and analyze. An SOM is a type of artificial neural network but is trained using competitive learning rather than the error-correction learning (e.g., backpropagation with gradient descent) used by other artificial neural networks. The SOM was introduced by the Finnish professor Teuvo Kohonen in the 1980s and therefore is sometimes called a Kohonen map or Kohonen network. The Kohonen map or network is a computationally convenient abstraction building on biological models of neural systems from the 1970s and morphogenesis models dating back to Alan Turing in the 1950s. SOMs create internal representations reminiscent of the cortical homunculus, a distorted representation of the human body, based on a neurological "map" of the areas and proportions of the human brain dedicated to processing sensory functions, for different parts of the body. Overview. Self-organizing maps, like most artificial neural networks, operate in two modes: training and mapping. First, training uses an input data set (the "input space") to generate a lower-dimensional representation of the input data (the "map space"). Second, mapping classifies additional input data using the generated map. In most cases, the goal of training is to represent an input space with "p" dimensions as a map space with two dimensions. Specifically, an input space with "p" variables is said to have "p" dimensions. A map space consists of components called "nodes" or "neurons", which are arranged as a hexagonal or rectangular grid with two dimensions. The number of nodes and their arrangement are specified beforehand based on the larger goals of the analysis and exploration of the data. Each node in the map space is associated with a "weight" vector, which is the position of the node in the input space. While nodes in the map space stay fixed, training consists in moving weight vectors toward the input data (reducing a distance metric such as Euclidean distance) without spoiling the topology induced from the map space. After training, the map can be used to classify additional observations for the input space by finding the node with the closest weight vector (smallest distance metric) to the input space vector. Learning algorithm. The goal of learning in the self-organizing map is to cause different parts of the network to respond similarly to certain input patterns. This is partly motivated by how visual, auditory or other sensory information is handled in separate parts of the cerebral cortex in the human brain. The weights of the neurons are initialized either to small random values or sampled evenly from the subspace spanned by the two largest principal component eigenvectors. 
With the latter alternative, learning is much faster because the initial weights already give a good approximation of SOM weights. The network must be fed a large number of example vectors that represent, as closely as possible, the kinds of vectors expected during mapping. The examples are usually administered several times as iterations. The training utilizes competitive learning. When a training example is fed to the network, its Euclidean distance to all weight vectors is computed. The neuron whose weight vector is most similar to the input is called the best matching unit (BMU). The weights of the BMU and neurons close to it in the SOM grid are adjusted towards the input vector. The magnitude of the change decreases with time and with the grid-distance from the BMU. The update formula for a neuron v with weight vector Wv(s) is formula_2, where "s" is the step index, "t" is an index into the training sample, "u" is the index of the BMU for the input vector D("t"), and "α"("s") is a monotonically decreasing learning coefficient; "θ"("u", "v", "s") is the neighborhood function, which determines how strongly neuron "v" is updated based on its grid-distance from the BMU (neuron "u") in step "s". Depending on the implementation, t can scan the training data set systematically ("t" is 0, 1, 2..."T"-1, then repeat, "T" being the training sample's size), be randomly drawn from the data set (bootstrap sampling), or implement some other sampling method (such as jackknifing). The neighborhood function "θ"("u", "v", "s") (also called "function of lateral interaction") depends on the grid-distance between the BMU (neuron "u") and neuron "v". In the simplest form, it is 1 for all neurons close enough to the BMU and 0 for others, but the Gaussian and Mexican-hat functions are common choices, too. Regardless of the functional form, the neighborhood function shrinks with time. At the beginning when the neighborhood is broad, the self-organizing takes place on the global scale. When the neighborhood has shrunk to just a couple of neurons, the weights are converging to local estimates. In some implementations, the learning coefficient "α" and the neighborhood function "θ" decrease steadily with increasing "s"; in others (in particular those where "t" scans the training data set) they decrease in step-wise fashion, once every "T" steps. This process is repeated for each input vector for a (usually large) number of cycles λ. The network winds up associating output nodes with groups or patterns in the input data set. If these patterns can be named, the names can be attached to the associated nodes in the trained net. During mapping, there will be one single "winning" neuron: the neuron whose weight vector lies closest to the input vector. This can be simply determined by calculating the Euclidean distance between input vector and weight vector. While representing input data as vectors has been emphasized in this article, any kind of object which can be represented digitally, which has an appropriate distance measure associated with it, and in which the necessary operations for training are possible can be used to construct a self-organizing map. This includes matrices, continuous functions or even other self-organizing maps. Algorithm. The training algorithm first randomizes the node weight vectors in the map and then, for formula_3 (that is, while formula_21), repeatedly picks an input vector formula_4, finds the node whose weight vector is closest to it (the best matching unit, with index formula_5), and updates the weight vector of every node formula_6 by pulling it closer to the input vector, as formula_7 The variable names mean the following, with vectors in bold: formula_8 is the current iteration, formula_9 is the iteration limit, formula_10 is the index of the target input data vector in the input data set formula_11, formula_4 is a target input data vector, formula_6 is the index of a node in the map, formula_12 is the current weight vector of node formula_6, formula_5 is the index of the best matching unit (BMU) in the map, formula_13 is the neighborhood function, and formula_14 is the learning rate schedule. The key design choices are the shape of the SOM, the neighbourhood function, and the learning rate schedule. The idea of the neighborhood function is to make it such that the BMU is updated the most, its immediate neighbors are updated a little less, and so on.
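The update rule above translates almost line for line into code. The following sketch is a minimal NumPy implementation with a Gaussian neighborhood and exponentially decaying radius and learning rate; the grid size, schedules, and synthetic data are arbitrary illustrative choices rather than anything prescribed by the method.

```python
import numpy as np

def train_som(data, grid_shape=(10, 10), n_steps=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a rectangular SOM on `data` (n_samples x n_features); return the node weights."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    n_features = data.shape[1]

    # Node weight vectors, initialized to small random values.
    weights = rng.normal(scale=0.1, size=(rows, cols, n_features))

    # Grid coordinates of every node, used for distances on the map (not in input space).
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing='ij'), axis=-1)

    for s in range(n_steps):
        lr = lr0 * np.exp(-s / n_steps)         # monotonically decreasing learning coefficient
        sigma = sigma0 * np.exp(-s / n_steps)   # shrinking neighborhood radius

        x = data[rng.integers(len(data))]       # randomly drawn training example D(t)

        # Best matching unit: node whose weight vector is closest to x (Euclidean distance).
        dist2 = np.sum((weights - x) ** 2, axis=-1)
        bmu = np.unravel_index(np.argmin(dist2), (rows, cols))

        # Gaussian neighborhood theta(u, v, s), measured by grid distance to the BMU.
        grid_dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
        theta = np.exp(-grid_dist2 / (2.0 * sigma ** 2))

        # W_v(s+1) = W_v(s) + theta * alpha * (D(t) - W_v(s))
        weights += (theta * lr)[..., None] * (x - weights)

    return weights

# Example usage on synthetic two-dimensional data.
data = np.random.default_rng(1).random((500, 2))
weights = train_som(data)
print(weights.shape)   # (10, 10, 2): one weight vector per map node
```

Mapping a new observation afterwards just repeats the best-matching-unit search against the trained weights.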
The idea of the learning rate schedule is to make it so that the map updates are large at the start and gradually shrink until the map effectively stops updating. For example, if we want to learn a SOM using a square grid, we can index it using formula_15 where both formula_16. The neighborhood function can make it so that the BMU updates in full, the nearest neighbors update in half, their neighbors update in half again, and so on: formula_17 We can also use a simple linear learning rate schedule formula_18. Notice in particular that the update rate does "not" depend on where the point is in the Euclidean space, only on where it is in the SOM itself. For example, the points formula_19 are close on the SOM, so they will always update in similar ways, even when they are far apart in the Euclidean space. In contrast, even if the points formula_20 end up overlapping each other (such as if the SOM looks like a folded towel), they still do not update in similar ways. Initialization options. Selection of initial weights as good approximations of the final weights is a well-known problem for all iterative methods of artificial neural networks, including self-organizing maps. Kohonen originally proposed random initialization of weights. (This approach is reflected by the algorithms described above.) More recently, principal component initialization, in which initial map weights are chosen from the space of the first principal components, has become popular due to the exact reproducibility of the results. A careful comparison of random initialization with principal component initialization for a one-dimensional map, however, found that the advantages of principal component initialization are not universal. The best initialization method depends on the geometry of the specific dataset. Principal component initialization was preferable (for a one-dimensional map) when the principal curve approximating the dataset could be univalently and linearly projected onto the first principal component (quasilinear sets). For nonlinear datasets, however, random initialization performed better. Interpretation. There are two ways to interpret a SOM. Because in the training phase weights of the whole neighborhood are moved in the same direction, similar items tend to excite adjacent neurons. Therefore, SOM forms a semantic map where similar samples are mapped close together and dissimilar ones apart. This may be visualized by a U-Matrix (Euclidean distance between weight vectors of neighboring cells) of the SOM. The other way is to think of neuronal weights as pointers to the input space. They form a discrete approximation of the distribution of training samples. More neurons point to regions with high training sample concentration and fewer where the samples are scarce. SOM may be considered a nonlinear generalization of Principal components analysis (PCA). It has been shown, using both artificial and real geophysical data, that SOM has many advantages over the conventional feature extraction methods such as Empirical Orthogonal Functions (EOF) or PCA. Originally, SOM was not formulated as a solution to an optimisation problem. Nevertheless, there have been several attempts to modify the definition of SOM and to formulate an optimisation problem which gives similar results. For example, Elastic maps use the mechanical metaphor of elasticity to approximate principal manifolds: the analogy is an elastic membrane and plate. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
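As a complement to the U-Matrix interpretation described above, the following sketch computes, for each node of a trained map, the average Euclidean distance between its weight vector and those of its grid neighbours. It assumes a weight array of shape (rows, cols, n_features) such as the one produced by the training sketch earlier; the choice of 4-connected neighbours is an assumption made for simplicity, and hexagonal grids would use a different neighbour set.

```python
import numpy as np

def u_matrix(weights):
    """Average distance from each node's weight vector to those of its 4 grid neighbours."""
    rows, cols, _ = weights.shape
    umat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            dists = []
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    dists.append(np.linalg.norm(weights[i, j] - weights[ni, nj]))
            umat[i, j] = np.mean(dists)
    return umat

# High values mark boundaries between clusters of similar nodes; low values mark
# regions whose neighbouring nodes point to nearby parts of the input space.
```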
[ { "math_id": 0, "text": "p" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "W_{v}(s + 1) = W_{v}(s) + \\theta(u, v, s) \\cdot \\alpha(s) \\cdot (D(t) - W_{v}(s))" }, { "math_id": 3, "text": "s = 0, 1, 2, ..., \\lambda" }, { "math_id": 4, "text": "{D}(t)" }, { "math_id": 5, "text": "u" }, { "math_id": 6, "text": "v" }, { "math_id": 7, "text": "W_{v}(s + 1) = W_{v}(s) + \\theta(u, v, s) \\cdot \\alpha(s) \\cdot (D(t) - W_{v}(s)) " }, { "math_id": 8, "text": "s" }, { "math_id": 9, "text": "\\lambda" }, { "math_id": 10, "text": "t" }, { "math_id": 11, "text": "\\mathbf{D}" }, { "math_id": 12, "text": "\\mathbf{W}_v" }, { "math_id": 13, "text": "\\theta (u, v, s)" }, { "math_id": 14, "text": "\\alpha (s)" }, { "math_id": 15, "text": "(i, j)" }, { "math_id": 16, "text": "i, j \\in 1:N" }, { "math_id": 17, "text": "\\theta((i, j), (i', j'), s) = \\frac{1}{2^{|i-i'| + |j-j'|}} = \\begin{cases}\n1 & \\text{if }i=i', j = j' \\\\\n1/2 & \\text{if }|i-i'| + |j-j'| = 1 \\\\\n1/4 & \\text{if }|i-i'| + |j-j'| = 2 \\\\\n\\cdots & \\cdots\n\\end{cases} \n" }, { "math_id": 18, "text": "\\alpha(s) = 1-s/\\lambda" }, { "math_id": 19, "text": "(1,1), (1,2) " }, { "math_id": 20, "text": "(1,1), (1, 100)" }, { "math_id": 21, "text": "s < \\lambda" } ]
https://en.wikipedia.org/wiki?curid=76996
76999747
Kolchin's problems
Kolchin's problems are a set of unsolved problems in differential algebra, outlined by Ellis Kolchin at the International Congress of Mathematicians in 1966 (Moscow). Kolchin Catenary Conjecture. The Kolchin Catenary Conjecture is a fundamental open problem in differential algebra related to dimension theory. Statement. "Let formula_0 be a differential algebraic variety of dimension formula_1. By a "long gap chain" we mean a chain of irreducible differential subvarieties formula_2 of ordinal number length formula_3." Given an irreducible differential variety formula_0 of dimension formula_4 and an arbitrary point formula_5, does there exist a long gap chain beginning at formula_6 and ending at formula_0? The assertion that the answer is always positive is the Kolchin catenary conjecture. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\Sigma " }, { "math_id": 1, "text": " d " }, { "math_id": 2, "text": " \\Sigma_0 \\subset \\Sigma_1 \\subset \\Sigma_2 \\subset \\cdots " }, { "math_id": 3, "text": " \\omega^m \\cdot d " }, { "math_id": 4, "text": " d > 0 " }, { "math_id": 5, "text": " p \\in \\Sigma " }, { "math_id": 6, "text": " p " } ]
https://en.wikipedia.org/wiki?curid=76999747
77002473
Q tensor
Orientational order parameter In physics, the formula_0 tensor is an orientational order parameter that describes uniaxial and biaxial nematic liquid crystals and vanishes in the isotropic liquid phase. The formula_0 tensor is a second-order, traceless, symmetric tensor and is defined by formula_1 where formula_2 and formula_3 are scalar order parameters, formula_4 are the two directors of the nematic phase and formula_5 is the temperature; in uniaxial liquid crystals, formula_6. The components of the tensor are formula_7 The states with directors formula_8 and formula_9 are physically equivalent, and similarly the states with directors formula_10 and formula_11 are physically equivalent. The formula_0 tensor can always be diagonalized: formula_12 The following are the invariants of the formula_0 tensor: formula_13 the first-order invariant formula_14 is trivial here. It can be shown that formula_15 Uniaxial nematics. In uniaxial nematic liquid crystals, formula_6 and therefore the formula_0 tensor reduces to formula_16 The scalar order parameter is defined as follows. If formula_17 represents the angle between the axis of a nematic molecule and the director axis formula_8, then formula_18 where formula_19 denotes the ensemble average of the orientational angles calculated with respect to the distribution function formula_20 and formula_21 is the solid angle element. The distribution function must necessarily satisfy the condition formula_22 since the directors formula_8 and formula_9 are physically equivalent. The range for formula_23 is given by formula_24, with formula_25 representing perfect alignment of all molecules along the director and formula_26 representing completely random (isotropic) alignment of the molecules with respect to the director; the formula_27 case indicates that all molecules are aligned perpendicular to the director axis, although such nematics are rare or hard to synthesize.
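The ensemble-average definition of the scalar order parameter given above lends itself to a quick numerical illustration. In the sketch below the molecular axes are drawn from simple synthetic distributions chosen only to exhibit the limiting values of the order parameter; none of the distributions represent a physical model.

```python
import numpy as np

rng = np.random.default_rng(0)
director = np.array([0.0, 0.0, 1.0])

def order_parameter(axes, n):
    """Estimate S = < (3 cos^2 theta - 1) / 2 > for unit molecular axes `axes` and director `n`."""
    cos_theta = axes @ n
    return np.mean(1.5 * cos_theta**2 - 0.5)

# Isotropic sample: molecular axes uniformly distributed on the unit sphere.
iso = rng.normal(size=(100_000, 3))
iso /= np.linalg.norm(iso, axis=1, keepdims=True)
print(order_parameter(iso, director))        # close to 0

# Strongly aligned sample: axes clustered around the director (toy Gaussian spread).
aligned = director + 0.1 * rng.normal(size=(100_000, 3))
aligned /= np.linalg.norm(aligned, axis=1, keepdims=True)
print(order_parameter(aligned, director))    # close to 1

# Axes confined to the plane perpendicular to the director give S close to -1/2.
perp = rng.normal(size=(100_000, 3))
perp[:, 2] = 0.0
perp /= np.linalg.norm(perp, axis=1, keepdims=True)
print(order_parameter(perp, director))       # close to -0.5
```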
[ { "math_id": 0, "text": "\\mathbf Q" }, { "math_id": 1, "text": "\\mathbf{Q} = S\\left(\\mathbf n\\mathbf n - \\frac{1}{3}\\mathbf I\\right) + P\\left(\\mathbf m\\mathbf m - \\frac{1}{3}\\mathbf I\\right) " }, { "math_id": 2, "text": "S=S(T)" }, { "math_id": 3, "text": "P=P(T)" }, { "math_id": 4, "text": "(\\mathbf n,\\mathbf m)" }, { "math_id": 5, "text": "T" }, { "math_id": 6, "text": "P=0" }, { "math_id": 7, "text": "Q_{ij} = S\\left(n_in_j - \\frac{1}{3}\\delta_{ij}\\right) + P\\left(m_im_j - \\frac{1}{3}\\delta_{ij}\\right)" }, { "math_id": 8, "text": "\\mathbf n" }, { "math_id": 9, "text": "-\\mathbf n" }, { "math_id": 10, "text": "\\mathbf m" }, { "math_id": 11, "text": "-\\mathbf m" }, { "math_id": 12, "text": "\n\\mathbf Q=\n-\\frac{1}{2}\\begin{pmatrix}\nS+P & 0 &0 \\\\\n0 &S-P & 0 \\\\\n0 & 0& -2S\\\\\n\\end{pmatrix} \n" }, { "math_id": 13, "text": "\\delta = Q_{ij}Q_{ij} = \\frac{1}{2}(3S^2+P^2), \\quad \\Delta = Q_{ij}Q_{jk}Q_{ki} = \\frac{3}{4}S(S^2-P^2);" }, { "math_id": 14, "text": "Q_{ii}=0" }, { "math_id": 15, "text": "\\delta^3\\geq 6\\Delta^2." }, { "math_id": 16, "text": "\\mathbf{Q} = S\\left(\\mathbf n\\mathbf n - \\frac{1}{3}\\mathbf I\\right)." }, { "math_id": 17, "text": "\\theta_{\\mathrm{mol}}" }, { "math_id": 18, "text": "S = \\langle P_2(\\cos \\theta_{\\mathrm{mol}})\\rangle = \\frac{1}{2}\\langle 3 \\cos^2 \\theta_{\\mathrm{mol}}-1 \\rangle = \\frac{1}{2}\\int (3 \\cos^2 \\theta_{\\mathrm{mol}}-1)f(\\theta_{\\mathrm{mol}}) d\\Omega" }, { "math_id": 19, "text": "\\langle\\cdot\\rangle" }, { "math_id": 20, "text": "f(\\theta_{\\mathrm{mol}})" }, { "math_id": 21, "text": "d\\Omega = \\sin \\theta_{\\mathrm{mol}}d\\theta_{\\mathrm{mol}}d\\phi_{\\mathrm{mol}}" }, { "math_id": 22, "text": "f(\\theta_{\\mathrm{mol}}+\\pi) = f(\\theta_{\\mathrm{mol}})" }, { "math_id": 23, "text": "S" }, { "math_id": 24, "text": "-1/2\\leq S\\leq 1" }, { "math_id": 25, "text": "S=1" }, { "math_id": 26, "text": "S=0" }, { "math_id": 27, "text": "S=-1/2" } ]
https://en.wikipedia.org/wiki?curid=77002473
77002567
Landau–de Gennes theory
In physics, Landau–de Gennes theory describes the NI transition, i.e., the phase transition between the nematic liquid crystal and isotropic liquid phases. It is based on Landau's classical theory of phase transitions and was developed by Pierre-Gilles de Gennes in 1969. The phenomenological theory uses the formula_0 tensor as the order parameter in expanding the free energy density. Mathematical description. The NI transition is a first-order phase transition, albeit a very weak one. The order parameter is the formula_0 tensor, which is a symmetric, traceless, second-order tensor that vanishes in the isotropic liquid phase. We shall consider a uniaxial formula_1 tensor, defined by formula_2 where formula_3 is the scalar order parameter and formula_4 is the director. The formula_1 tensor is zero in the isotropic liquid phase, since the scalar order parameter formula_5 is zero there, but becomes non-zero in the nematic phase. Near the NI transition, the (Helmholtz or Gibbs) free energy density formula_6 is expanded about the isotropic state as formula_7 or, more compactly, formula_8 Further, we can expand formula_9, formula_10 and formula_11, with formula_12 being three positive constants. Now substituting the formula_1 tensor results in formula_13 This is minimized when formula_14 The two relevant solutions of this equation are formula_15 The NI transition temperature formula_16 is not simply equal to formula_17 (as it would be for a second-order phase transition), but is given by formula_18 where formula_19 is the scalar order parameter at the transition. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
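A short numerical check of the expressions above can be useful. In the sketch below the coefficients a, b, c and the temperature T* are arbitrary illustrative values, not numbers from the theory or its references; the code evaluates the nematic branch of the order parameter and verifies that the free energy of the nematic state equals that of the isotropic state at T_NI = T* + b^2/(27ac), where S = b/(3c).

import numpy as np

# Assumed (illustrative) Landau-de Gennes coefficients and reference temperature.
a, b, c, T_star = 0.1, 1.0, 1.0, 300.0

def free_energy(S, T):
    """F - F0 = (a/3)(T - T*) S^2 - (2b/27) S^3 + (c/9) S^4."""
    return (a / 3.0) * (T - T_star) * S**2 - (2.0 * b / 27.0) * S**3 + (c / 9.0) * S**4

def S_nematic(T):
    """Nematic root of 3a(T - T*) S - b S^2 + 2c S^3 = 0 (larger quadratic root)."""
    disc = 1.0 - 24.0 * a * c * (T - T_star) / b**2
    return (b / (4.0 * c)) * (1.0 + np.sqrt(disc))

T_NI = T_star + b**2 / (27.0 * a * c)
S_NI = b / (3.0 * c)

print(S_nematic(T_NI), S_NI)               # both equal b/(3c)
print(free_energy(S_nematic(T_NI), T_NI))  # ~ 0: nematic and isotropic states coexist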
[ { "math_id": 0, "text": "\\mathbf{Q}" }, { "math_id": 1, "text": "\\mathbf Q" }, { "math_id": 2, "text": "\\mathbf Q = S\\left(\\mathbf n\\mathbf n - \\frac{1}{3}\\mathbf I\\right)" }, { "math_id": 3, "text": "S=S(T)" }, { "math_id": 4, "text": "\\mathbf n" }, { "math_id": 5, "text": "S" }, { "math_id": 6, "text": "\\mathcal{F}" }, { "math_id": 7, "text": "\\mathcal{F} = \\mathcal{F}_0 + \\frac{1}{2}A (T) Q_{ij}Q_{ji} - \\frac{1}{3}B(T) Q_{ij}Q_{jk}Q_{ki} + \\frac{1}{4}C(T) (Q_{ij}Q_{ij})^2 " }, { "math_id": 8, "text": "\\mathcal{F} = \\mathcal{F}_0 + \\frac{1}{2}A(T)\\mathrm{tr}(\\mathbf{Q}^2) - \\frac{1}{3}B(T)\\mathrm{tr}(\\mathbf{Q}^3) + + \\frac{1}{4}C(T)[\\mathrm{tr}(\\mathbf{Q}^2)]^2." }, { "math_id": 9, "text": "A(T)=a (T-T_*)+\\cdots" }, { "math_id": 10, "text": "B(T) = b + \\cdots" }, { "math_id": 11, "text": "C(T)=c + \\cdots" }, { "math_id": 12, "text": "(a,b,c)" }, { "math_id": 13, "text": "\\mathcal{F} - \\mathcal{F}_0 = \\frac{a}{3}(T-T_*)S^2 - \\frac{2b}{27} S^3 + \\frac{c}{9}S^4." }, { "math_id": 14, "text": "3a(T-T_*) - b S^2 + 2c S^3=0." }, { "math_id": 15, "text": "\\begin{align}\\text{Isotropic:} & \\,\\,S_I = 0,\\\\\n\\text{Nematic:} & \\,\\,S_N = \\frac{b}{4c} \\left[1+\\sqrt{1-\\frac{24ac}{b^2}(T-T_*)}\\,\\right]>0.\n\\end{align}" }, { "math_id": 16, "text": "T_{NI}" }, { "math_id": 17, "text": "T_*" }, { "math_id": 18, "text": "T_{NI} = T_* + \\frac{b^2}{27ac}, \\quad S_{NI} = \\frac{b}{3c}" }, { "math_id": 19, "text": "S_{NI}" } ]
https://en.wikipedia.org/wiki?curid=77002567
77028079
Jet erosion test
Geotechnical engineering method used to quantify resistance of soil to erosion The jet erosion test (JET), or jet index test, is a method used in geotechnical engineering to quantify the resistance of a soil to erosion. The test can be applied in-situ after preparing a field site, or it can be applied in a laboratory on either an intact or a remolded soil sample. A quantitative measure of erodibility allows for the prediction of erosion, assisting with the design of structures such as vegetated channels, road embankments, dams, levees, and spillways. Procedure. The test consists of mounting a jet tube inside of an enclosed cylinder and releasing a turbulent downpour of water onto a soil specimen at a constant hydraulic head. If the shear stress applied by the jet stream exceeds the critical shear stress for erosion of the soil, the jet will erode soil particles, causing a scour hole to form. The depth of the scour hole is then measured at specified time intervals. Fitting the measured erosion rate ("Er") to the following equation allows the estimation of the erodibility of the soil ("kd") and the critical shear stress ("τc"), provided that the applied shear stress ("τ") is estimated precisely: formula_0 As of 2017, there is no universally accepted methodology to determine the erodibility of a soil. While the jet erosion test provides one estimate for the erodibility, the underlying assumptions of the test have been criticized for various reasons. Other erosion testing methods may produce values for erodibility and critical shear stress inconsistent with this method. Additionally, depending on the method used to fit the results to the above equation, the predicted values of "kd" for a given "τc" can be up to 100 times smaller or larger due to predictive uncertainty. The jet erosion index. One of the results of the test is the jet erosion index ("Ji"), which can be correlated with the soil erodibility. Typically, the jet erosion index ranges from 0 to 0.03. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
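As an illustration of how the excess-stress relation is fitted in practice, the following sketch performs a simple least-squares fit of "kd" and "τc" to a handful of (shear stress, erosion rate) pairs. The data points are invented for the example, and a plain linear fit is only one of the possible fitting strategies, which, as noted above, can lead to rather different parameter estimates.

import numpy as np

# Hypothetical (invented) observations: applied shear stress (Pa) and erosion rate (m/s).
tau = np.array([5.0, 8.0, 12.0, 18.0, 25.0])
Er  = np.array([0.2e-6, 0.9e-6, 1.8e-6, 3.2e-6, 4.9e-6])

# Er = kd * (tau - tau_c) is linear in tau: Er = kd*tau - kd*tau_c.
slope, intercept = np.polyfit(tau, Er, 1)
kd = slope                     # erodibility coefficient
tau_c = -intercept / slope     # critical shear stress for erosion

print(f"kd ~ {kd:.3e} m/s per Pa, tau_c ~ {tau_c:.2f} Pa")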
[ { "math_id": 0, "text": "E_r = k_d(\\tau - \\tau_c)" } ]
https://en.wikipedia.org/wiki?curid=77028079
7702975
Frank–Tamm formula
The Frank–Tamm formula yields the amount of Cherenkov radiation emitted at a given frequency as a charged particle moves through a medium at superluminal velocity. It is named for the Russian physicists Ilya Frank and Igor Tamm, who developed the theory of the Cherenkov effect in 1937 and were awarded the Nobel Prize in Physics in 1958 for this work. When a charged particle moves faster than the phase speed of light in a medium, electrons interacting with the particle can emit coherent photons while conserving energy and momentum. This process can be viewed as a decay. See Cherenkov radiation and nonradiation condition for an explanation of this effect. Equation. The energy formula_0 emitted per unit length travelled by the particle per unit of frequency formula_1 is: formula_2 provided that formula_3. Here formula_4 and formula_5 are the frequency-dependent permeability and index of refraction of the medium respectively, formula_6 is the electric charge of the particle, formula_7 is the speed of the particle, and formula_8 is the speed of light in vacuum. Cherenkov radiation does not have the characteristic spectral peaks typical of fluorescence or emission spectra. The relative intensity at a given frequency is approximately proportional to that frequency. That is, higher frequencies (shorter wavelengths) are more intense in Cherenkov radiation. This is why visible Cherenkov radiation is observed to be brilliant blue. In fact, most Cherenkov radiation is in the ultraviolet spectrum; the sensitivity of the human eye peaks at green and is very low in the violet portion of the spectrum. The total amount of energy radiated per unit length is: formula_9 This integral is taken over the frequencies formula_10 for which the particle's speed formula_7 is greater than the speed of light in the medium formula_11. The integral is convergent (finite) because at high frequencies the refractive index becomes less than unity, and for extremely high frequencies it approaches unity. Derivation of Frank–Tamm formula. Consider a charged particle moving relativistically along the formula_12-axis in a medium with refractive index formula_13 with a constant velocity formula_14. Start with Maxwell's equations (in Gaussian units) written in wave-equation form in the Lorenz gauge, and take the Fourier transform: formula_15 formula_16 For a charge of magnitude formula_17 (where formula_18 is the elementary charge) moving with velocity formula_7, the charge density and current density can be expressed as formula_19 and formula_20; taking the Fourier transform gives: formula_21 formula_22 Substituting this charge density and current density into the wave equations, we can solve for the Fourier-form potentials: formula_23 and formula_24 Using the definition of the electromagnetic fields in terms of the potentials, we then have the Fourier-form of the electric and magnetic fields: formula_25 and formula_26 To find the radiated energy, we consider the electric field as a function of frequency at some perpendicular distance from the particle trajectory, say at formula_27, where formula_28 is the impact parameter. It is given by the inverse Fourier transform: formula_29 First we compute the formula_12-component formula_30 of the electric field (parallel to formula_31): formula_32 For brevity we define formula_33. 
Breaking the integral apart into formula_34, the formula_35 integral can immediately be integrated by the definition of the Dirac Delta: formula_36 The integral over formula_37 has the value formula_38, giving: formula_39 The last integral over formula_40 is in the form of a modified (Macdonald) Bessel function, giving the evaluated parallel component in the form: formula_41 One can follow a similar pattern of calculation for the other fields components arriving at: formula_42 and formula_43 We can now consider the radiated energy formula_0 per particle traversed distance formula_44. It can be expressed through the electromagnetic energy flow formula_45 through the surface of an infinite cylinder of radius formula_46 around the path of the moving particle, which is given by the integral of the Poynting vector formula_47 over the cylinder surface: formula_48 The integral over formula_49 at one instant of time is equal to the integral at one point over all time. Using formula_50: formula_51 Converting this to the frequency domain: formula_52 To go into the domain of Cherenkov radiation, we now consider perpendicular distance formula_28 much greater than atomic distances in a medium, that is, formula_53. With this assumption we can expand the Bessel functions into their asymptotic form: formula_54 formula_55 and formula_56 Thus: formula_57 If formula_58 has a positive real part (usually true), the exponential will cause the expression to vanish rapidly at large distances, meaning all the energy is deposited near the path. However, this isn't true when formula_59 is purely imaginary – this instead causes the exponential to become 1 and then is independent of formula_46, meaning some of the energy escapes to infinity as radiation – this is Cherenkov radiation. formula_59 is purely imaginary if formula_60 is real and formula_61. That is, when formula_60 is real, Cherenkov radiation has the condition that formula_62. This is the statement that the speed of the particle must be larger than the phase velocity of electromagnetic fields in the medium at frequency formula_10 in order to have Cherenkov radiation. With this purely imaginary formula_59 condition, formula_63 and the integral can be simplified to: formula_64 This is the Frank–Tamm equation in Gaussian units. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
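The equation section above translates directly into a short numerical estimate. The sketch below is illustrative only: it assumes a constant, dispersion-free refractive index n = 1.33 (roughly water), a singly charged particle with β = 0.99, and SI units with μ(ω) ≈ μ0; none of these values come from the article. It evaluates the differential energy spectrum over the visible range and shows that the emitted energy per unit length and frequency grows with frequency, which is the origin of the characteristic blue colour.

import numpy as np

e = 1.602176634e-19      # elementary charge, C
mu0 = 4e-7 * np.pi       # vacuum permeability, H/m (assumed for mu(omega))
c = 2.99792458e8         # speed of light, m/s

n = 1.33                 # assumed constant refractive index (no dispersion)
beta = 0.99              # assumed particle speed v/c

# Angular frequencies spanning roughly the visible range (700 nm down to 400 nm).
wavelengths = np.linspace(700e-9, 400e-9, 4)
omega = 2.0 * np.pi * c / wavelengths

# Frank-Tamm: d2E/(dx domega) = (q^2 / 4 pi) mu(omega) omega (1 - 1/(beta^2 n^2)).
d2E = (e**2 / (4.0 * np.pi)) * mu0 * omega * (1.0 - 1.0 / (beta**2 * n**2))

for lam, val in zip(wavelengths, d2E):
    print(f"{lam*1e9:5.0f} nm : {val:.3e} J s / m")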
[ { "math_id": 0, "text": "dE" }, { "math_id": 1, "text": "d\\omega" }, { "math_id": 2, "text": "\\frac{\\partial^2E}{\\partial x \\, \\partial\\omega} = \\frac{q^2}{4 \\pi} \\mu(\\omega) \\omega \\left(1 - \\frac{c^2} {v^2 n^2(\\omega)}\\right) " }, { "math_id": 3, "text": "\\beta = \\frac{v}{c} > \\frac{1}{n(\\omega)}" }, { "math_id": 4, "text": "\\mu(\\omega)" }, { "math_id": 5, "text": "n(\\omega)" }, { "math_id": 6, "text": "q" }, { "math_id": 7, "text": "v" }, { "math_id": 8, "text": "c" }, { "math_id": 9, "text": "\\frac{dE}{dx} = \\frac{q^2}{4 \\pi} \\int_{v > \\frac{c}{n(\\omega)}} \\mu(\\omega) \\omega \\left(1 - \\frac{c^2} {v^2 n^2(\\omega)}\\right) \\, d\\omega" }, { "math_id": 10, "text": "\\omega" }, { "math_id": 11, "text": "\\frac{c}{n(\\omega)}" }, { "math_id": 12, "text": "x" }, { "math_id": 13, "text": "n(\\omega) = \\sqrt{\\varepsilon(\\omega)}" }, { "math_id": 14, "text": "\\vec v = (v,0,0) " }, { "math_id": 15, "text": "\\left ( k^2 - \\frac{\\omega^2}{c^2} \\varepsilon(\\omega) \\right) \\Phi(\\vec k,\\omega) = \\frac{ 4 \\pi}{\\varepsilon(\\omega)} \\rho(\\vec k, \\omega)" }, { "math_id": 16, "text": "\\left ( k^2 - \\frac{\\omega^2}{c^2} \\varepsilon(\\omega) \\right) \\vec A(\\vec k,\\omega) = \\frac{ 4 \\pi}{c} \\vec J(\\vec k, \\omega)" }, { "math_id": 17, "text": "ze" }, { "math_id": 18, "text": "e" }, { "math_id": 19, "text": "\\rho(\\vec x, t) = q \\delta(\\vec x - \\vec v t)" }, { "math_id": 20, "text": "\\vec J(\\vec x,t) = \\vec v \\rho(\\vec x,t)" }, { "math_id": 21, "text": "\\rho(\\vec k, \\omega) = \\frac{ q}{2 \\pi} \\delta(\\omega - \\vec k \\cdot \\vec v)" }, { "math_id": 22, "text": "\\vec J(\\vec k, \\omega) = \\vec v \\rho (\\vec k ,\\omega) " }, { "math_id": 23, "text": "\\Phi(\\vec k, \\omega) = \\frac{2 q}{\\varepsilon(\\omega)} \\frac{ \\delta(\\omega - \\vec k \\cdot \\vec v)}{k^2 - \\frac{\\omega^2}{c^2} \\varepsilon(\\omega)}" }, { "math_id": 24, "text": "\\vec A(\\vec k,\\omega) = \\varepsilon(\\omega) \\frac{\\vec v}{c} \\Phi(\\vec k,\\omega)" }, { "math_id": 25, "text": "\\vec E(\\vec k,\\omega) = i \\left( \\frac{\\omega \\varepsilon(\\omega)}{c} \\frac{\\vec v}{c} - \\vec k \\right) \\Phi(\\vec k,\\omega)" }, { "math_id": 26, "text": "\\vec B(\\vec k,\\omega) = i \\varepsilon(\\omega) \\vec k \\times \\frac{\\vec v}{c} \\Phi(\\vec k,\\omega)" }, { "math_id": 27, "text": "(0,b,0)" }, { "math_id": 28, "text": "b" }, { "math_id": 29, "text": "\\vec E(\\omega) = \\frac{1}{ ( 2 \\pi)^{3/2}} \\int d^3k \\, \\vec E(\\vec k,\\omega) e^{i bk_2}" }, { "math_id": 30, "text": "E_1" }, { "math_id": 31, "text": "\\vec v" }, { "math_id": 32, "text": "E_1(\\omega) = \\frac{2 i q}{\\varepsilon(\\omega) ( 2\\pi)^{3/2}} \\int d^3k \\, e^{i bk_2} \\left( \\frac{ \\omega \\varepsilon(\\omega) v}{c^2} - k_1 \\right ) \\frac{\\delta(\\omega - v k_1)}{k^2 - \\frac{\\omega^2}{c^2} \\varepsilon(\\omega)}" }, { "math_id": 33, "text": "\\lambda^2 = \\frac{\\omega^2}{v^2} - \\frac{\\omega^2}{c^2} \\varepsilon(\\omega) = \\frac{\\omega^2}{v^2} \\left ( 1 - \\beta^2 \\varepsilon(\\omega) \\right )" }, { "math_id": 34, "text": "k_1, k_2, k_3" }, { "math_id": 35, "text": "k_1" }, { "math_id": 36, "text": "E_1(\\omega) = - \\frac{2 i q \\omega}{v^2 ( 2\\pi)^{3/2}} \\left( \\frac{1}{\\varepsilon(\\omega)} - \\beta^2 \\right) \\int_{-\\infty}^\\infty dk_2 \\, e^{i bk_2} \\int_{-\\infty}^\\infty \\frac{dk_3}{k_2^2 + k_3^2 + \\lambda^2}" }, { "math_id": 37, "text": "k_3" }, { "math_id": 38, "text": "\\frac{\\pi}{ \\left(\\lambda^2 + k^2_2 \\right)^{1/2}}" }, { "math_id": 
39, "text": "E_1(\\omega) = - \\frac{ i q \\omega}{v^2 \\sqrt{2\\pi}} \\left( \\frac{1}{\\varepsilon(\\omega)} - \\beta^2 \\right) \\int_{-\\infty}^\\infty dk_2 \\frac{e^{i bk_2}}{(\\lambda^2 + k_2^2)^{1/2}}" }, { "math_id": 40, "text": "k_2" }, { "math_id": 41, "text": "E_1(\\omega) = - \\frac{i q \\omega}{v^2} \\left( \\frac{2}{\\pi} \\right)^{1/2} \\left( \\frac{1}{\\varepsilon(\\omega)} - \\beta^2 \\right) K_0(\\lambda b)" }, { "math_id": 42, "text": "E_2(\\omega) = \\frac{q}{v} \\left( \\frac{2}{\\pi} \\right)^{1/2} \\frac{\\lambda}{\\varepsilon(\\omega)} K_1(\\lambda b), \\quad E_3 = 0 \\quad " }, { "math_id": 43, "text": "\\quad B_1 = B_2 = 0, \\quad B_3(\\omega) = \\varepsilon(\\omega) \\beta E_2(\\omega)" }, { "math_id": 44, "text": "dx_{\\text{particle}} " }, { "math_id": 45, "text": "P_a" }, { "math_id": 46, "text": "a" }, { "math_id": 47, "text": " \\mathbf S = c / (4 \\pi) [ \\mathbf E \\times \\mathbf H] " }, { "math_id": 48, "text": "\\left( \\frac{dE}{dx_{\\text{particle}}} \\right)_{\\text{rad}} = \\frac{1}{v} P_a = - \\frac{c}{4 \\pi v} \\int_{-\\infty}^{\\infty} 2 \\pi a B_3 E_1 \\, dx" }, { "math_id": 49, "text": "dx" }, { "math_id": 50, "text": "dx = v \\, dt" }, { "math_id": 51, "text": "\\left( \\frac{dE}{dx_{\\text{particle}}} \\right)_{\\text{rad}} = - \\frac{c a }{2} \\int_{-\\infty}^\\infty B_3(t) E_1(t) \\, dt" }, { "math_id": 52, "text": "\\left( \\frac{dE}{dx_{\\text{particle}}} \\right)_{\\text{rad}} = -c a \\operatorname{Re} \\left( \\int_0^\\infty B_3^*(\\omega) E_1(\\omega) \\, d\\omega \\right)" }, { "math_id": 53, "text": "| \\lambda b | \\gg 1" }, { "math_id": 54, "text": "E_1(\\omega) \\rightarrow \\frac{i q \\omega}{c^2} \\left( 1 - \\frac{1}{\\beta^2 \\varepsilon(\\omega)} \\right) \\frac{e^{-\\lambda b}}{\\sqrt{\\lambda b}}" }, { "math_id": 55, "text": "E_2(\\omega) \\rightarrow \\frac{q}{v \\varepsilon(\\omega)} \\sqrt{\\frac{\\lambda}{b}} e^{-\\lambda b}" }, { "math_id": 56, "text": "B_3(\\omega) = \\varepsilon(\\omega) \\beta E_2(\\omega)" }, { "math_id": 57, "text": "\\left( \\frac{dE}{dx_{\\text{particle}}} \\right)_{\\text{rad}} = \\operatorname{Re} \\left( \\int_0^\\infty \\frac{q^2}{c^2} \\left(-i \\sqrt{\\frac{\\lambda^*}{\\lambda} }\\right) \\omega \\left( 1 - \\frac{1}{\\beta^2 \\varepsilon(\\omega) } \\right) e^{-(\\lambda + \\lambda^*) a} \\, d\\omega \\right)" }, { "math_id": 58, "text": "\\lambda " }, { "math_id": 59, "text": "\\lambda" }, { "math_id": 60, "text": "\\varepsilon(\\omega)" }, { "math_id": 61, "text": "\\beta^2 \\varepsilon(\\omega) > 1" }, { "math_id": 62, "text": "v > \\frac{c}{\\sqrt{\\varepsilon(\\omega})} = \\frac{c}{n} " }, { "math_id": 63, "text": "\\sqrt{{\\lambda^*}/{\\lambda}} = i" }, { "math_id": 64, "text": "\\left( \\frac{dE}{dx_{\\text{particle}}} \\right)_{\\text{rad}} = \\frac{ q^2}{c^2} \\int_{\\varepsilon(\\omega) > \\frac{1}{\\beta^2}} \\omega \\left( 1 - \\frac{1}{\\beta^2 \\varepsilon(\\omega)} \\right) \\, d\\omega = \\frac{ q^2}{c^2} \\int_{v > \\frac{c}{n(\\omega)}} \\omega \\left( 1 - \\frac{c^2}{v^2 n^2(\\omega)} \\right) \\, d\\omega " } ]
https://en.wikipedia.org/wiki?curid=7702975
77031966
Partial information decomposition
Partial Information Decomposition is an extension of information theory that aims to generalize the pairwise relations described by information theory to the interaction of multiple variables. Motivation. Information theory can quantify the amount of information a single source variable formula_0 has about a target variable formula_1 via the mutual information formula_2. If we now consider a second source variable formula_3, classical information theory can only describe the mutual information of the joint variable formula_4 with formula_1, given by formula_5. In general, however, it would be interesting to know how exactly the individual variables formula_0 and formula_3 and their interactions relate to formula_1. Consider that we are given two source variables formula_6 and a target variable formula_7. In this case the total mutual information is formula_8, while the individual mutual informations are formula_9. That is, there is synergistic information about formula_1 arising from the interaction of formula_10, which cannot be easily captured with classical information-theoretic quantities. Definition. Partial information decomposition further decomposes the mutual information between the source variables formula_4 and the target variable formula_1 as formula_11 Here the individual information atoms are defined as follows: formula_12 is the unique information that formula_0 holds about formula_1 and that is not available from formula_3 (and analogously for the unique information in formula_3), formula_13 is the synergistic information about formula_1 that is only available from the interaction of formula_0 and formula_3, and formula_14 is the redundant information about formula_1 that is shared by formula_0 and formula_3. There is, thus far, no universal agreement on how these terms should be defined, with different approaches that decompose information into redundant, unique, and synergistic components appearing in the literature. Applications. Despite the lack of universal agreement, partial information decomposition has been applied to diverse fields, including climatology, neuroscience, sociology, and machine learning. Partial information decomposition has also been proposed as a possible foundation on which to build a mathematically robust definition of emergence in complex systems and may be relevant to formal theories of consciousness. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
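The XOR example from the motivation section can be verified directly by brute-force computation over the joint distribution. The sketch below assumes uniform, independent binary sources, as the example implies, and confirms that each source alone carries no information about the target while the pair carries one full bit, which in the decomposition above is attributed entirely to the synergistic term.

from itertools import product
from math import log2

# Joint distribution of (x1, x2, y) with uniform independent binary sources and y = XOR(x1, x2).
joint = {}
for x1, x2 in product([0, 1], repeat=2):
    joint[(x1, x2, x1 ^ x2)] = 0.25

def marginal(dist, keep):
    """Marginal distribution over the outcome indices listed in 'keep'."""
    out = {}
    for outcome, p in dist.items():
        key = tuple(outcome[i] for i in keep)
        out[key] = out.get(key, 0.0) + p
    return out

def mutual_information(dist, a_idx, b_idx):
    """I(A;B) in bits, where A and B are index groups of the joint outcomes."""
    pa, pb = marginal(dist, a_idx), marginal(dist, b_idx)
    pab = marginal(dist, a_idx + b_idx)
    return sum(p * log2(p / (pa[k[:len(a_idx)]] * pb[k[len(a_idx):]]))
               for k, p in pab.items())

print(mutual_information(joint, (0,), (2,)))     # I(X1;Y)    = 0.0
print(mutual_information(joint, (1,), (2,)))     # I(X2;Y)    = 0.0
print(mutual_information(joint, (0, 1), (2,)))   # I(X1,X2;Y) = 1.0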
[ { "math_id": 0, "text": "X_1" }, { "math_id": 1, "text": "Y" }, { "math_id": 2, "text": "I(X_1;Y)" }, { "math_id": 3, "text": "X_2" }, { "math_id": 4, "text": "\\{X_1,X_2\\}" }, { "math_id": 5, "text": "I(X_1,X_2;Y)" }, { "math_id": 6, "text": "X_1, X_2 \\in \\{0,1\\}" }, { "math_id": 7, "text": "Y=XOR(X_1,X_2)" }, { "math_id": 8, "text": "I(X_1,X_2;Y)=1" }, { "math_id": 9, "text": "I(X_1;Y)=I(X_2;Y)=0" }, { "math_id": 10, "text": "X_1,X_2" }, { "math_id": 11, "text": "I(X_1,X_2;Y)=\\text{Unq}(X_1;Y \\setminus X_2) + \\text{Unq}(X_2;Y \\setminus X_1) + \\text{Syn}(X_1,X_2;Y) + \\text{Red}(X_1,X_2;Y)" }, { "math_id": 12, "text": "\\text{Unq}(X_1;Y \\setminus X_2)" }, { "math_id": 13, "text": "\\text{Syn}(X_1,X_2;Y)" }, { "math_id": 14, "text": "\\text{Red}(X_1,X_2;Y)" } ]
https://en.wikipedia.org/wiki?curid=77031966
77034215
Hole erosion test
Geotechnical engineering method used to quantify resistance of soil to erosion The hole erosion test (HET) is a method used in geotechnical engineering to quantify the resistance of a soil to erosion, and is specifically relevant to the topic of internal erosion in embankment dams. The test can be performed in a laboratory on a remolded soil sample, and provides estimates of both the critical shear stress for erosion of the soil sample as well as a numerical measure of soil erodibility. In the design and engineering of embankment dams, the critical shear stress provided by this test indicates the maximum shear stress that a fluid (such as water) can apply to a soil before a concentrated leak forms and erosion begins. The numerical measure of soil erodibility can be used to predict how quickly this erosion will progress, and it can be found as an input in various computer simulations for dam failure. Procedure. The standard hole erosion test consists of first compacting the soil sample in a standard mold. Then, a small hole (typically 6mm) is drilled lengthwise through the soil. Next, the downstream hydraulic head is set to a standard value, and the initial upstream hydraulic head is chosen using trial-and-error. As the liquid (typically water) flows through the hole, the soil should erode and the hole will expand. The flow rate should be measured throughout the procedure. Directly after the test, the diameter of the hole should be measured. The hydraulic shear stress along the surface of the hole at time "t" can be calculated as: formula_0 where "ρ" is the density of the liquid, "g" is the gravitational acceleration, "Δh" is the difference in hydraulic head across the sample, "L" is the length of the sample, and "Φt" is the diameter of the hole at time "t." While the diameter of the hole is not directly measured throughout the test, it can be estimated using the measured flow rate as well as an estimated friction factor. From the change in diameter of the hole over time, the rate of erosion can thus be plotted against applied hydraulic shear stress and fit to the following equation: formula_1 where "Er" is the rate of erosion over time, "kd" is the soil erodibility, and "τc" is the critical shear stress for erosion. Modified hole erosion test (HET-P). One criticism of the standard hole erosion test is that the use of the hydraulic head rather than the total head implies that the change in velocity head is negligible, which may not be a valid assumption given the sometimes high velocities downstream of the hole. The difference in hydraulic head used to calculate the shear stress also does not factor in the energy dissipated due to flow recirculation and expansion losses downstream of the test specimen. Furthermore, estimating the diameter of the hole throughout the test using an assumed friction factor has been reported as problematic. The modified hole erosion test (HET-P) seeks to rectify these issues with the addition of a pitot-static tube. This allows for the direct measurement of total hydraulic head, thus accounting for the total energy loss between the upstream and downstream ends of the soil sample. While the diameter of the hole is still not measured directly throughout the test, the pitot-static tube provides an independent estimate of the mean flow velocity, which can then be used to calculate the diameter of the hole more directly using the continuity equation. 
The modified hole erosion test results in significantly smaller values for the critical shear stress, which makes its results more consistent with those of other erosion tests, such as the rotating cylinder test or the jet erosion test.
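The data reduction described above can be sketched numerically as follows. The sample length, head difference and hole-diameter history used here are invented values, and the erosion rate is expressed simply as the rate of recession of the hole wall; in a real test the diameter history is inferred from the measured flow rate and an estimated friction factor rather than assumed.

import numpy as np

rho, g = 1000.0, 9.81          # water density (kg/m^3), gravitational acceleration (m/s^2)
L = 0.117                      # assumed sample length (m)
dh = 0.8                       # assumed hydraulic head difference across the sample (m)

# Assumed hole-diameter history (m) at 60 s intervals.
t = np.array([0.0, 60.0, 120.0, 180.0, 240.0])
phi = np.array([6.0e-3, 6.8e-3, 7.9e-3, 9.3e-3, 11.0e-3])

# Shear stress on the hole wall: tau = rho g (dh/L) (phi/4).
tau = rho * g * (dh / L) * phi / 4.0

# Erosion rate taken here as the rate of wall recession, d(phi/2)/dt.
Er = np.gradient(phi / 2.0, t)

# Fit Er = kd (tau - tau_c) by least squares.
slope, intercept = np.polyfit(tau, Er, 1)
kd, tau_c = slope, -intercept / slope
print(f"kd ~ {kd:.2e} m/s per Pa, tau_c ~ {tau_c:.1f} Pa")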
[ { "math_id": 0, "text": "\\tau = \\rho g \\frac{\\Delta h}{L} \\frac{\\Phi_t}{4}" }, { "math_id": 1, "text": "E_r = k_d(\\tau - \\tau_c)" } ]
https://en.wikipedia.org/wiki?curid=77034215
77042503
Dissociative adsorption
Type of adsorption process Dissociative adsorption is a process in which a molecule adsorbs onto a surface and simultaneously dissociates into two or more fragments. This process is the basis of many applications, particularly in heterogeneous catalysis. The dissociation involves cleaving of the molecular bonds in the adsorbate and the formation of new bonds with the substrate. Because breaking the bonds of the dissociating molecule requires a large amount of energy, dissociative adsorption is an example of chemisorption, in which strong adsorbate-substrate bonds are created. These bonds can be atomic, ionic or metallic in nature. In contrast to dissociative adsorption, in molecular adsorption the adsorbate stays intact as it bonds with the surface. Often, a molecular adsorption state can act as a precursor in the adsorption process, and the molecule can dissociate only once sufficient additional energy is available. A dissociative adsorption process may be "homolytic" or "heterolytic", depending on how the electrons participating in the molecular bond are divided in the dissociation process. In homolytic dissociative adsorption, the electrons are divided evenly between the fragments, while in heterolytic dissociation, both electrons of a bond are transferred to one fragment. Kinetic theory. In the Langmuir model. The Langmuir model of adsorption assumes a fixed number of equivalent, discrete adsorption sites, at most one adsorbed fragment per site, and no interactions between adsorbed species. This model is the simplest useful approximation that still retains the dependence of the adsorption rate on the coverage, and in the simplest case, precursor states are not considered. For dissociative adsorption to be possible, each incident molecule requires "n" available adsorption sites, where "n" is the number of dissociated fragments. The probability of an incident molecule impacting a site with a valid configuration has the form formula_0 when the existing coverage is θ and the dissociation products are mobile on the surface. The "order of the kinetics" for the process is "n". The order of kinetics has implications for the sticking coefficient formula_1, where formula_2 denotes the initial sticking coefficient, i.e. the sticking coefficient at zero coverage. The adsorption kinetics for "n" = 2 are given by formula_3, where I is the impinging flux of molecules on the surface. The shape of the coverage function over time is different for each kinetic order, so, assuming desorption is negligible, dissociative adsorption for a system following the Langmuir model can be identified by monitoring the adsorption rate as a function of time under a constant impinging flux. Precursor states. Often the adsorbing molecule does not dissociate directly upon contact with the surface, but is instead first bound to an intermediate "precursor state". The molecule can then attempt to dissociate to the final state through fluctuations. The precursor molecules can be "intrinsic", meaning they occupy an empty site, or "extrinsic", meaning they are bound on top of an already occupied site. The energies of these states can also be different, resulting in different forms of the overall sticking coefficient formula_4. If extrinsic and intrinsic sites are assumed energetically equivalent and the adsorption rate to the precursor state is assumed to follow the Langmuir model, the following expression for the coverage dependence of the overall sticking coefficient is obtained: formula_5, where K is the ratio between the rate constants of the dissociation and desorption reactions of the precursor. Temperature dependence. 
The behaviour of the sticking coefficient as a function of temperature is governed by the shape of the potential energy surface of adsorption. For the direct mechanism, the sticking coefficient is almost temperature independent, because for most systems formula_6. When a precursor state is involved, thermal fluctuations determine the probability of the weakly bound precursor either dissociating into the final state or escaping the surface. The initial sticking coefficient is related to the energy barriers for dissociation formula_7 and desorption formula_8, and to the corresponding pre-exponential factors formula_9 (dissociation) and νd (desorption), as formula_10. From this arise two distinct cases for the temperature dependence: if formula_11, the initial sticking coefficient increases with increasing temperature, whereas if formula_12, it decreases with increasing temperature. By measuring the sticking coefficient at different temperatures, it is then possible to extract the value of formula_13. Experimental techniques. The measurement of adsorption properties relies on controlling and measuring the surface coverage and conditions, including the substrate temperature and the impinging molecular flux or partial pressure. To detect dissociation on the surface, additional techniques are required that can distinguish surface ordering due to the interaction of dissociated fragments, identify desorbed particles, determine the order of kinetics or measure the chemical bond energies of the adsorbed species. In many experiments, a combination of multiple methods that probe different surface properties is used to form a complete picture of the adsorbed species. Comparisons between the experimental adsorption energy and simulated energies for dissociative and molecular adsorption can also indicate the type of adsorption for a system. For the measurement of adsorption isotherms, a controlled gas pressure and temperature determine the coverage at which adsorption and desorption rates are in balance. The coverage can then be measured with various surface-sensitive methods like AES or XPS. Often, the coverage can also be related to a change in the surface work function, which can enable faster measurements in otherwise challenging conditions. The shape of the isotherms is sensitive to the order of kinetics of the adsorption and desorption processes, and though the exact forms can be difficult to find, simulations have been used to find general functional forms for isotherms of dissociative adsorption for specific systems. XPS is a surface-sensitive method that allows the direct probing of the chemical bonds of the surface atoms, and is thus capable of differentiating bond energies corresponding to intact molecules or dissociated fragments. A challenge with this method is that the incident photons can induce surface modifications that are difficult to separate from the effects to be measured. LEED patterns are often combined with other measurements to verify the surface structure and recognize ordering of the adsorbates. Temperature programmed desorption (TPD or TDS) can be used to measure the properties of desorption, namely the desorption energy, the order of desorption kinetics and the initial surface coverage. The desorption order contains information about mechanisms, like recombination, required for the desorption process. As TPD also measures the masses of the desorbed particles, it can be used to detect individually desorbed dissociated fragments or their different combinations. The presence of masses different from the original molecules, or the detection of additional desorption peaks with higher-order kinetics, can indicate that the adsorption is dissociative. Modeling. 
Density functional theory (DFT) can be used to calculate the change in energy caused by the adsorption and dissociation of molecules. The activation energy is calculated as the highest-energy point on the optimal paths of the fragments as they transform from the initial molecular state to the dissociated state. This is the saddle point of the potential energy surface of the process. Another approach for considering the stretching and dissociation of adsorbates is through the charge transfer between the electron bands near the Fermi surface, using molecular orbital (MO) theory. A strong charge transfer caused by the overlap of unoccupied and occupied orbitals weakens the molecular bonds, which lowers or fully eliminates the barrier for dissociation. The charge transfer can be local or delocalized in terms of the substrate electrons, depending on which orbitals participate in the interaction. The simplest method used for approximating the electronic structure of systems using MO theory is the Hartree-Fock self-consistent field method, which can be extended to include electron correlations through various approximations. Applications and examples. Water and transition metals. Under atmospheric conditions, the adsorption of water and oxygen on transition metal surfaces is a well studied phenomenon. It has also been found that dissociated oxygen on a surface lowers the activation energy for the dissociation of water, which can otherwise have a high barrier on a clean metal surface. This is explained by the oxygen atoms binding with one hydrogen of the adsorbing water molecule to form an energetically favourable hydroxyl group. Likewise, pre-adsorbed molecular water can be used to lower the barrier for the dissociation of oxygen that is needed in metal-catalyzed oxidation reactions. The relevant effects for this promoting role are hydrogen bonding between the water molecule and oxygen, and the electronic modification of the surface by the adsorbed water. On clean close-packed surfaces of Ag, Au, Pt, Rh and Ni, dissociated oxygen prefers adsorption to hollow sites. Hydroxyl groups and molecular water prefer to adsorb on low-coordination top sites, while the dissociated hydrogen atoms prefer hollow sites for most transition metals. A typical dissociation pathway on these metals is that, as a top-site adsorbed molecule dissociates, at least one fragment migrates to a bridge or hollow site. The formation and dissociation of water on transition metals like palladium have important applications in reactions for obtaining hydrogen and for the operation of proton-exchange membrane fuel cells, and much research has been conducted to understand the phenomenon. The rate-determining reaction for water formation is the creation of adsorbed OH. However, details of the specific adsorption sites and preferred reaction pathways for water formation have been difficult to determine. From kinetic Monte Carlo simulations combined with DFT calculations of the reaction energetics, it has been found that water formation on Pd(111) is dominated by step edges through a combination of the reactions O + H → OH, OO + H → OOH, OOH → OH + O, OH + OH → H2O + O, and OH + H → H2O. At low temperatures and low relative pressure of H2, the dominant reaction path for hydroxyl group formation is the direct association of O and H, and the ratios of each reaction path vary significantly in different conditions. Metal-catalyzed oxidation. 
The oxidation of carbon monoxide in catalytic converters utilizes a transition metal surface as a catalyst in the reaction 2CO + O2 → 2CO2. This system has been extensively studied to minimize the emissions of toxic CO from internal combustion engines, and there is a trade-off in the preparation of the Pt catalyst surface between the dissociative adsorption of oxygen and the sticking of CO to the metal surface. A larger step density increases the dissociation of oxygen, but at the same time decreases the probability of CO oxidation. The optimal configuration for the reaction is with a CO on a flat terrace and a dissociated O at a step edge. Hydrogen economy. The most prevalent method for hydrogen production, steam reforming, relies on transition metal catalysts which dissociatively adsorb the initial molecules of the reaction to form intermediates, which then can recombine to form gaseous hydrogen. Kinetic models of the possible dissociative adsorption paths have been used to simulate the properties of the reaction. A method for hydrogen purification involves passing the gas through a thin film of Pd-Ag alloy between two gas vessels. The hydrogen gas dissociates on the surface of the film, after which the individual atoms are able to diffuse through the metal, and recombine to form a higher hydrogen content atmosphere inside the low-pressure receiving vessel. A challenge with hydrogen storage and transport through conventional steel vessels is hydrogen-induced-cracking, where a hydrogen atoms enter the container walls through dissociative adsorption. If enough partial pressure builds up inside the material, this can cause cracks, blistering or embrittlement of the walls. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
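To make the kinetic expressions of the Langmuir and precursor models concrete, the following sketch evaluates the second-order sticking coefficient, the coverage-versus-time curve for n = 2, and the precursor-mediated coverage dependence of the sticking coefficient. The values of s0, the impinging flux I and the rate-constant ratio K are arbitrary assumptions chosen only for illustration.

import numpy as np

s0 = 0.5        # assumed initial sticking coefficient
I = 0.01        # assumed impinging flux, monolayers per second
K = 5.0         # assumed ratio of precursor dissociation to desorption rate constants
n = 2           # two fragments per dissociating molecule

def sticking_langmuir(theta):
    """Direct second-order Langmuir sticking coefficient, s = s0 (1 - theta)^n."""
    return s0 * (1.0 - theta) ** n

def coverage_vs_time(t):
    """theta(t) = s0 I t / (1 + s0 I t) for n = 2 dissociative Langmuir kinetics."""
    return s0 * I * t / (1.0 + s0 * I * t)

def sticking_precursor(theta):
    """Coverage dependence f(theta) = (1 + K)(1 - theta)^n / (1 + K (1 - theta)^n)."""
    x = (1.0 - theta) ** n
    return (1.0 + K) * x / (1.0 + K * x)

for theta in (0.0, 0.25, 0.5, 0.75):
    print(theta, sticking_langmuir(theta), sticking_precursor(theta))
print(coverage_vs_time(np.array([0.0, 100.0, 1000.0])))

The printout illustrates the qualitative difference between the two mechanisms: the precursor-mediated sticking coefficient stays close to its zero-coverage value over a wider coverage range than the direct Langmuir form.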
[ { "math_id": 0, "text": "f(\\theta) = (1-\\theta)^n" }, { "math_id": 1, "text": "s = s_0(1-\\theta)^n" }, { "math_id": 2, "text": "s_0" }, { "math_id": 3, "text": "\\theta = \\frac{s_0It}{1+s_0It}" }, { "math_id": 4, "text": "f(\\theta)" }, { "math_id": 5, "text": "f(\\theta) = \\frac{(1+K)(1-\\theta)^n}{1+K(1-\\theta)^n}" }, { "math_id": 6, "text": "E_{ads}\\gg k_BT" }, { "math_id": 7, "text": "\\epsilon_{a}" }, { "math_id": 8, "text": "\\epsilon_{d}" }, { "math_id": 9, "text": "\\nu_a" }, { "math_id": 10, "text": "s_0 = \\left( 1 + \\frac{\\nu_d}{\\nu_a}exp\\left( -\\frac{\\epsilon_d-\\epsilon_a}{k_BT} \\right) \\right)^{-1}" }, { "math_id": 11, "text": "\\epsilon_d<\\epsilon_a" }, { "math_id": 12, "text": "\\epsilon_d>\\epsilon_a" }, { "math_id": 13, "text": "\\epsilon_d-\\epsilon_a" } ]
https://en.wikipedia.org/wiki?curid=77042503
770467
Tandem mass spectrometry
Type of mass spectrometry Tandem mass spectrometry, also known as MS/MS or MS2, is a technique in instrumental analysis where two or more stages of analysis using one or more mass analyzer are performed with an additional reaction step in between these analyses to increase their abilities to analyse chemical samples. A common use of tandem MS is the analysis of biomolecules, such as proteins and peptides. The molecules of a given sample are ionized and the first spectrometer (designated MS1) separates these ions by their mass-to-charge ratio (often given as m/z or m/Q). Ions of a particular m/z-ratio coming from MS1 are selected and then made to split into smaller fragment ions, e.g. by collision-induced dissociation, ion-molecule reaction, or photodissociation. These fragments are then introduced into the second mass spectrometer (MS2), which in turn separates the fragments by their m/z-ratio and detects them. The fragmentation step makes it possible to identify and separate ions that have very similar m/z-ratios in regular mass spectrometers. Structure. Typical tandem mass spectrometry instrumentation setups include triple quadrupole mass spectrometers (QqQ), multi-sector mass spectrometer, quadrupole–time of flight (Q-TOF), Fourier transform ion cyclotron resonance mass spectrometers, and hybrid mass spectrometers. Triple quadrupole mass spectrometer. Triple quadrupole mass spectrometers use the first and third quadrupoles as mass filters. When analytes pass the second quadrupole, the fragmentation proceeds through collision with gas. Quadrupole–time of flight (Q-TOF). Q-TOF mass spectrometer combines quadrupole and TOF instruments, which together enable fragmentation experiments that yield highly accurate mass quantitations for product ions. This is a method of mass spectrometry in which fragmented ion ("m"/"z") ratios are determined through a time of flight measurement. Hybrid mass spectrometer. Hybrid mass spectrometer consists of more than two mass analyzers. Instrumentation. Multiple stages of mass analysis separation can be accomplished with individual mass spectrometer elements separated in space or using a single mass spectrometer with the MS steps separated in time. For tandem mass spectrometry in space, the different elements are often noted in a shorthand, giving the type of mass selector used. Tandem in space. In tandem mass spectrometry "in space", the separation elements are physically separated and distinct, although there is a physical connection between the elements to maintain high vacuum. These elements can be sectors, transmission quadrupole, or time-of-flight. When using multiple quadrupoles, they can act as both mass analyzers and collision chambers. Common notation for mass analyzers is "Q" – quadrupole mass analyzer; "q" – radio frequency collision quadrupole; "TOF" – time-of-flight mass analyzer; "B" – magnetic sector, and "E" – electric sector. The notation can be combined to indicate various hybrid instrument, for example "QqQ"' – triple quadrupole mass spectrometer; "QTOF" – quadrupole time-of-flight mass spectrometer (also "QqTOF"); and "BEBE" – four-sector (reverse geometry) mass spectrometer. Tandem in time. By doing tandem mass spectrometry "in time", the separation is accomplished with ions trapped in the same place, with multiple separation steps taking place over time. A quadrupole ion trap or Fourier transform ion cyclotron resonance (FTICR) instrument can be used for such an analysis. 
Trapping instruments can perform multiple steps of analysis, which is sometimes referred to as MS"n" (MS to the "n"). Often the number of steps, "n", is not indicated, but occasionally the value is specified; for example MS3 indicates three stages of separation. Tandem in time MS instruments do not use the modes described next, but typically collect all of the information from a precursor ion scan and a parent ion scan of the entire spectrum. Each instrumental configuration utilizes a unique mode of mass identification. Tandem in space MS/MS modes. When tandem MS is performed with an in space design, the instrument must operate in one of a variety of modes. There are a number of different tandem MS/MS experimental setups and each mode has its own applications and provides different information. Tandem MS in space uses the coupling of two instrument components which measure the same mass spectrum range but with a controlled fractionation between them in space, while tandem MS in time involves the use of an ion trap. There are four main scan experiments possible using MS/MS: precursor ion scan, product ion scan, neutral loss scan, and selected reaction monitoring. For a precursor ion scan, the product ion is selected in the second mass analyzer, and the precursor masses are scanned in the first mass analyzer. Note that precursor ion is synonymous with parent ion and product ion with daughter ion; however the use of these anthropomorphic terms is discouraged. In a product ion scan, a precursor ion is selected in the first stage, allowed to fragment and then all resultant masses are scanned in the second mass analyzer and detected in the detector that is positioned after the second mass analyzer. This experiment is commonly performed to identify transitions used for quantification by tandem MS. In a neutral loss scan, the first mass analyzer scans all the masses. The second mass analyzer also scans, but at a set offset from the first mass analyzer. This offset corresponds to a neutral loss that is commonly observed for the class of compounds. In a constant-neutral-loss scan, all precursors that undergo the loss of a specified common neutral are monitored. To obtain this information, both mass analyzers are scanned simultaneously, but with a mass offset that correlates with the mass of the specified neutral. Similar to the precursor-ion scan, this technique is also useful in the selective identification of closely related class of compounds in a mixture. In selected reaction monitoring, both mass analyzers are set to a selected mass. This mode is analogous to selected ion monitoring for MS experiments. A selective analysis mode, which can increase sensitivity. Fragmentation. Fragmentation of gas-phase ions is essential to tandem mass spectrometry and occurs between different stages of mass analysis. There are many methods used to fragment the ions and these can result in different types of fragmentation and thus different information about the structure and composition of the molecule. In-source fragmentation. Often, the ionization process is sufficiently violent to leave the resulting ions with sufficient internal energy to fragment within the mass spectrometer. If the product ions persist in their non-equilibrium state for a moderate amount of time before auto-dissociation this process is called metastable fragmentation. 
Nozzle-skimmer fragmentation refers to the purposeful induction of in-source fragmentation by increasing the nozzle-skimmer potential on usually electrospray based instruments. Although in-source fragmentation allows for fragmentation analysis, it is not technically tandem mass spectrometry unless metastable ions are mass analyzed or selected before auto-dissociation and a second stage of analysis is performed on the resulting fragments. In-source fragmentation can be used in lieu of tandem mass spectrometry through the utilization of Enhanced in-Source Fragmentation Annotation (EISA) technology which generates fragmentation that directly matches tandem mass spectrometry data. Fragments observed by EISA have higher signal intensity than traditional fragments which suffer losses in the collision cells of tandem mass spectrometers. EISA enables fragmentation data acquisition on MS1 mass analyzers such as time-of-flight and single quadrupole instruments. In-source fragmentation is often used in addition to tandem mass spectrometry (with post-source fragmentation) to allow for two steps of fragmentation in a pseudo MS3-type of experiment. Collision-induced dissociation. Post-source fragmentation is most often what is being used in a tandem mass spectrometry experiment. Energy can also be added to the ions, which are usually already vibrationally excited, through post-source collisions with neutral atoms or molecules, the absorption of radiation, or the transfer or capture of an electron by a multiply charged ion. Collision-induced dissociation (CID), also called collisionally activated dissociation (CAD), involves the collision of an ion with a neutral atom or molecule in the gas phase and subsequent dissociation of the ion. For example, consider &lt;chem&gt;{AB+} + M -&gt; {A} + {B+} + M&lt;/chem&gt; where the ion AB+ collides with the neutral species M and subsequently breaks apart. The details of this process are described by collision theory. Due to different instrumental configuration, two main different types of CID are possible: "(i)" beam-type (in which precursor ions are fragmented on-the-flight) and "(ii)" ion trap-type (in which precursor ions are first trapped, and then fragmented). A third and more recent type of CID fragmentation is higher-energy collisional dissociation (HCD). HCD is a CID technique specific to orbitrap mass spectrometers in which fragmentation takes place external to the ion trap, it happens in the HCD cell (in some instruments named "ion routing multipole"). HCD is a trap-type fragmentation that has been shown to have beam-type characteristics. Freely available large scale high resolution tandem mass spectrometry databases exist (e.g. METLIN with 850,000 molecular standards each with experimental CID MS/MS data), and are typically used to facilitate small molecule identification. Electron capture and transfer methods. The energy released when an electron is transferred to or captured by a multiply charged ion can induce fragmentation. Electron-capture dissociation. If an electron is added to a multiply charged positive ion, the Coulomb energy is liberated. Adding a free electron is called electron-capture dissociation (ECD), and is represented by formula_0 for a multiply protonated molecule M. Electron-transfer dissociation. Adding an electron through an ion-ion reaction is called electron-transfer dissociation (ETD). Similar to electron-capture dissociation, ETD induces fragmentation of cations (e.g. peptides or proteins) by transferring electrons to them. 
It was invented by Donald F. Hunt, Joshua Coon, John E. P. Syka and Jarrod Marto at the University of Virginia. ETD does not use free electrons but employs radical anions (e.g. anthracene or azobenzene) for this purpose: formula_1 where A is the anion. ETD cleaves randomly along the peptide backbone (c and z ions) while side chains and modifications such as phosphorylation are left intact. The technique only works well for higher charge state ions (z&gt;2), however relative to collision-induced dissociation (CID), ETD is advantageous for the fragmentation of longer peptides or even entire proteins. This makes the technique important for top-down proteomics. Much like ECD, ETD is effective for peptides with modifications such as phosphorylation. Electron-transfer and higher-energy collision dissociation (EThcD) is a combination ETD and HCD where the peptide precursor is initially subjected to an ion/ion reaction with fluoranthene anions in a linear ion trap, which generates c- and z-ions. In the second step HCD all-ion fragmentation is applied to all ETD derived ions to generate b- and y- ions prior to final analysis in the orbitrap analyzer. This method employs dual fragmentation to generate ion- and thus data-rich MS/MS spectra for peptide sequencing and PTM localization. Negative electron-transfer dissociation. Fragmentation can also occur with a deprotonated species, in which an electron is transferred from the species to an cationic reagent in a negative electron transfer dissociation (NETD): formula_2 Following this transfer event, the electron-deficient anion undergoes internal rearrangement and fragments. NETD is the ion/ion analogue of electron-detachment dissociation (EDD). NETD is compatible with fragmenting peptide and proteins along the backbone at the Cα-C bond. The resulting fragments are usually a•- and x-type product ions. Electron-detachment dissociation. Electron-detachment dissociation (EDD) is a method for fragmenting anionic species in mass spectrometry. It serves as a negative counter mode to electron capture dissociation. Negatively charged ions are activated by irradiation with electrons of moderate kinetic energy. The result is ejection of electrons from the parent ionic molecule, which causes dissociation via recombination. Charge-transfer dissociation. Reaction between positively charged peptides and cationic reagents, also known as charge transfer dissociation (CTD), has recently been demonstrated as an alternative high-energy fragmentation pathway for low-charge state (1+ or 2+) peptides. The proposed mechanism of CTD using helium cations as the reagent is: formula_3 Initial reports are that CTD causes backbone Cα-C bond cleavage of peptides and provides a•- and x-type product ions. Photodissociation. The energy required for dissociation can be added by photon absorption, resulting in ion photodissociation and represented by &lt;chem&gt;{AB+} + \mathit{h\nu} -&gt; {A} + B+&lt;/chem&gt; where formula_4 represents the photon absorbed by the ion. Ultraviolet lasers can be used, but can lead to excessive fragmentation of biomolecules. Infrared multiphoton dissociation. Infrared photons will heat the ions and cause dissociation if enough of them are absorbed. This process is called infrared multiphoton dissociation (IRMPD) and is often accomplished with a carbon dioxide laser and an ion trapping mass spectrometer such as a FTMS. Blackbody infrared radiative dissociation. 
Blackbody radiation can be used for photodissociation in a technique known as blackbody infrared radiative dissociation (BIRD). In the BIRD method, the entire mass spectrometer vacuum chamber is heated to create infrared light. BIRD uses this radiation to excite increasingly more energetic vibrations of the ions, until a bond breaks, creating fragments. This is similar to infrared multiphoton dissociation which also uses infrared light, but from a different source. BIRD is most often used with Fourier transform ion cyclotron resonance mass spectrometry. Surface-induced dissociation. With surface-induced dissociation (SID), the fragmentation is a result of the collision of an ion with a surface under high vacuum. Today, SID is used to fragment a wide range of ions. Years ago, it was only common to use SID on lower mass, singly charged species because ionization methods and mass analyzer technologies weren't advanced enough to properly form, transmit, or characterize ions of high m/z. Over time, self-assembled monolayer surfaces (SAMs) composed of CF3(CF2)10CH2CH2S on gold have been the most prominently used collision surfaces for SID in a tandem spectrometer. SAMs have acted as the most desirable collision targets due to their characteristically large effective masses for the collision of incoming ions. Additionally, these surfaces are composed of rigid fluorocarbon chains, which don't significantly dampen the energy of the projectile ions. The fluorocarbon chains are also beneficial because of their ability to resist facile electron transfer from the metal surface to the incoming ions. SID's ability to produce subcomplexes that remain stable and provide valuable information on connectivity is unmatched by any other dissociation technique. Since the complexes produced from SID are stable and retain distribution of charge on the fragment, this produces a unique, spectra which the complex centers around a narrower m/z distribution. The SID products and the energy at which they form are reflective of the strengths and topology of the complex. The unique dissociation patterns help discover the Quaternary structure of the complex. The symmetric charge distribution and dissociation dependence are unique to SID and make the spectra produced distinctive from any other dissociation technique. The SID technique is also applicable to ion-mobility mass spectrometry (IM-MS). Three different methods for this technique include analyzing the characterization of topology, intersubunit connectivity, and the degree of unfolding for protein structure. Analysis of protein structure unfolding is the most commonly used application of the SID technique. For Ion-mobility mass spectrometry (IM-MS), SID is used for dissociation of the source activated precursors of three different types of protein complexes: C-reactive protein (CRP), transthyretin (TTR), and concanavalin A (Con A). This method is used to observe the unfolding degree for each of these complexes. For this observation, SID showed the precursor ions' structures that exist before the collision with the surface. IM-MS utilizes the SID as a direct measure of the conformation for each proteins' subunit. Fourier-transform ion cyclotron resonance (FTICR) are able to provide ultrahigh resolution and high mass accuracy to instruments that take mass measurements. 
These features make FTICR mass spectrometers a useful tool for a wide variety of applications, including dissociation experiments such as collision-induced dissociation (CID), electron-transfer dissociation (ETD), and others. In addition, surface-induced dissociation has been implemented with this instrument for the study of fundamental peptide fragmentation. Specifically, SID has been applied to the study of the energetics and kinetics of gas-phase fragmentation within an ICR instrument. This approach has been used to understand the gas-phase fragmentation of protonated peptides, odd-electron peptide ions, non-covalent ligand-peptide complexes, and ligated metal clusters. Quantitative proteomics. Quantitative proteomics is used to determine the relative or absolute amount of proteins in a sample. Several quantitative proteomics methods are based on tandem mass spectrometry. MS/MS has become a benchmark procedure for the structural elucidation of complex biomolecules. One method commonly used for quantitative proteomics is isobaric tag labeling. Isobaric tag labeling enables simultaneous identification and quantification of proteins from multiple samples in a single analysis. To quantify proteins, peptides are labeled with chemical tags that have the same structure and nominal mass, but vary in the distribution of heavy isotopes in their structure. These tags, commonly referred to as tandem mass tags, are designed so that the mass tag is cleaved at a specific linker region upon higher-energy collisional dissociation (HCD) during tandem mass spectrometry, yielding reporter ions of different masses. Protein quantitation is accomplished by comparing the intensities of the reporter ions in the MS/MS spectra. Two commercially available isobaric tags are iTRAQ and TMT reagents. Isobaric tags for relative and absolute quantitation (iTRAQ). An isobaric tag for relative and absolute quantitation (iTRAQ) is a reagent for tandem mass spectrometry that is used to determine the amount of proteins from different sources in a single experiment. It uses stable isotope labeled molecules that can form a covalent bond with the N-terminus and side chain amines of proteins. The iTRAQ reagents are used to label peptides from different samples that are pooled and analyzed by liquid chromatography and tandem mass spectrometry. The fragmentation of the attached tag generates a low molecular mass reporter ion that can be used to relatively quantify the peptides and the proteins from which they originated. Tandem mass tag (TMT). A tandem mass tag (TMT) is an isobaric mass tag chemical label used for protein quantification and identification. The tags contain four regions: mass reporter, cleavable linker, mass normalization, and protein reactive group. TMT reagents can be used to simultaneously analyze 2 to 11 different peptide samples prepared from cells, tissues or biological fluids. Recent developments allow up to 16 and even 18 samples (16plex or 18plex, respectively) to be analyzed. Three types of TMT reagents are available with different chemical reactivities: (1) a reactive NHS ester functional group for labeling primary amines (TMTduplex, TMTsixplex, TMT10plex plus TMT11-131C), (2) a reactive iodoacetyl functional group for labeling free sulfhydryls (iodoTMT) and (3) a reactive alkoxyamine functional group for labeling of carbonyls (aminoxyTMT). Multiplexed DIA (plexDIA). 
The progress in data-independent acquisition (DIA) enabled multiplexed quantitative proteomics with non-isobaric mass tags and a new method called plexDIA, introduced in 2021. This new approach increases the number of data points by parallelizing both samples and peptides, thus achieving multiplicative gains. It has the potential to continue scaling proteomic throughput with new mass tags and algorithms. plexDIA is applicable to both bulk and single-cell samples and is particularly powerful for single-cell proteomics. Applications. Peptides. Tandem mass spectrometry can be used for protein sequencing. When intact proteins are introduced to a mass analyzer, this is called "top-down proteomics" and when proteins are digested into smaller peptides and subsequently introduced into the mass spectrometer, this is called "bottom-up proteomics". Shotgun proteomics is a variant of bottom-up proteomics in which proteins in a mixture are digested prior to separation and tandem mass spectrometry. Tandem mass spectrometry can produce a peptide sequence tag that can be used to identify a peptide in a protein database. A notation has been developed for indicating peptide fragments that arise from a tandem mass spectrum. Peptide fragment ions are indicated by a, b, or c if the charge is retained on the N-terminus and by x, y or z if the charge is maintained on the C-terminus. The subscript indicates the number of amino acid residues in the fragment. Superscripts are sometimes used to indicate neutral losses in addition to the backbone fragmentation, * for loss of ammonia and ° for loss of water. Although peptide backbone cleavage is the most useful for sequencing and peptide identification, other fragment ions may be observed under high-energy dissociation conditions. These include the side-chain loss ions d, v, w and ammonium ions and additional sequence-specific fragment ions associated with particular amino acid residues. Oligosaccharides. Oligosaccharides may be sequenced using tandem mass spectrometry in a similar manner to peptide sequencing. Fragmentation generally occurs on either side of the glycosidic bond (b, c, y and z ions) but also, under more energetic conditions, through the sugar ring structure in a cross-ring cleavage (x ions). Again, trailing subscripts are used to indicate the position of the cleavage along the chain. For cross-ring cleavage ions the nature of the cross-ring cleavage is indicated by preceding superscripts. Oligonucleotides. Tandem mass spectrometry has been applied to DNA and RNA sequencing. A notation for gas-phase fragmentation of oligonucleotide ions has been proposed. Newborn screening. Newborn screening is the process of testing newborn babies for treatable genetic, endocrinologic, metabolic and hematologic diseases. The development of tandem mass spectrometry screening in the early 1990s led to a large expansion of potentially detectable congenital metabolic diseases that affect blood levels of organic acids. Small molecule analysis. It has been shown that tandem mass spectrometry data is highly consistent across instrument and manufacturer platforms, including quadrupole time-of-flight (QTOF) and Q Exactive instrumentation, especially at 20 eV. Limitation. Tandem mass spectrometry cannot be applied to single-cell analyses, as it is not sensitive enough to analyze such small amounts of material from a single cell. These limitations are primarily due to a combination of inefficient ion production and ion losses within the instrument caused by chemical noise from solvents. Future outlook. 
Tandem mass spectrometry will remain a useful tool for the characterization of proteins, nucleoprotein complexes, and other biological structures. However, some challenges remain, such as characterizing the proteome quantitatively and qualitatively. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. &lt;templatestyles src="Refbegin/styles.css" /&gt;
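The fragment-ion notation described above can be made concrete with a short calculation. The following Python sketch is an illustration written for this text rather than code from the cited references: it computes singly charged b- and y-ion m/z values for a peptide from standard monoisotopic residue masses, with the subscript of each ion giving the number of residues in the fragment; the helper name and the example peptide are arbitrary.

```python
# Monoisotopic residue masses (Da) for the 20 standard amino acids.
RESIDUE_MASS = {
    'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276, 'V': 99.06841,
    'T': 101.04768, 'C': 103.00919, 'L': 113.08406, 'I': 113.08406, 'N': 114.04293,
    'D': 115.02694, 'Q': 128.05858, 'K': 128.09496, 'E': 129.04259, 'M': 131.04049,
    'H': 137.05891, 'F': 147.06841, 'R': 156.10111, 'Y': 163.06333, 'W': 186.07931,
}
PROTON = 1.00728   # mass of a proton (Da)
WATER  = 18.01056  # monoisotopic mass of H2O (Da)

def b_y_ions(peptide):
    """Singly charged b- and y-ion m/z values for a peptide.

    b_i retains the charge on the N-terminal fragment of i residues;
    y_i retains it on the C-terminal fragment of i residues.
    """
    masses = [RESIDUE_MASS[aa] for aa in peptide]
    n = len(masses)
    b = {f'b{i}': sum(masses[:i]) + PROTON for i in range(1, n)}
    y = {f'y{i}': sum(masses[n - i:]) + WATER + PROTON for i in range(1, n)}
    return b, y

b, y = b_y_ions('PEPTIDE')
for name, mz in {**b, **y}.items():
    print(f'{name}: {mz:.4f}')
```

For a doubly charged fragment the same neutral mass would be combined with two protons and the total divided by two.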
[ { "math_id": 0, "text": "[\\ce M + n\\ce H]^{n+} + \\ce{e^- ->} \\left[ [\\ce M + (n-1)\\ce H]^{(n-1)+} \\right]^* \\ce{-> fragments}" }, { "math_id": 1, "text": "[\\ce M + n\\ce H]^{n+} + \\ce{A^- ->} \\left[ [\\ce M + (n-1)\\ce H]^{(n-1)+} \\right]^* + \\ce{A -> fragments}" }, { "math_id": 2, "text": "[\\ce M-n\\ce H]^{n-} + \\ce{A+ ->} \\left[ [\\ce M-n\\ce H]^{(n+1)-} \\right]^* + \\ce{A -> fragments}" }, { "math_id": 3, "text": "\\ce{{[{M}+H]^1+} + He+ ->} \\left[ \\ce{[{M}+H]^2+} \\right]^* + \\ce{He^0 -> fragments}" }, { "math_id": 4, "text": "h\\nu" } ]
https://en.wikipedia.org/wiki?curid=770467
77052373
Nonlinear dispersion relation
Relation assigning the phase velocity A nonlinear dispersion relation (NDR) is a dispersion relation that assigns the "correct" phase velocity formula_0 to a nonlinear wave structure. As an example of how diverse and intricate the underlying description can be, we deal with plane electrostatic wave structures formula_1 which propagate with formula_0 in a collisionless plasma. Such structures are ubiquitous, for example in the magnetosphere of the Earth, in fusion reactors or in the laboratory. "Correct" means that this must be done according to the governing equations, in this case the Vlasov-Poisson system, and the conditions prevailing in the plasma during the wave formation process. This means that special attention must be paid to the particle trapping processes acting on the resonant electrons and ions, which requires phase space analyses. Since the latter is stochastic, transient and rather filamentary in nature, the entire dynamic trapping process eludes mathematical treatment, so that it can be adequately taken into account "only" in the asymptotic, quiet regime of wave generation, when the structure is close to equilibrium. This is where the pseudo-potential method in the version of Schamel, also known as the Schamel method, comes into play; it is an alternative to the method described by Bernstein, Greene and Kruskal. In the Schamel method, the Vlasov equations for the species involved are solved first, and only in the second step is Poisson's equation solved to ensure self-consistency. The Schamel method is generally considered the preferred method because it is best suited to describing the immense diversity of electrostatic structures, including their phase velocities. These structures are also known as Bernstein–Greene–Kruskal modes, phase-space electron and ion holes, or double layers. With the Schamel method, the distribution functions for electrons formula_2 and ions formula_3, which solve the corresponding time-independent Vlasov equation, formula_4 and formula_5, respectively, are described as functions of the constants of motion. Thereby the unperturbed plasma conditions are adequately taken into account. Here formula_6 and formula_7 are the single-particle energies of electrons and ions, respectively; formula_8 and formula_9 are the signs of the velocity of untrapped electrons and ions, respectively. Normalized quantities have been used, formula_10, and it is assumed that formula_11, where formula_12 is the amplitude of the structure. Of particular interest is the range in phase space where particles are trapped in the potential wave trough. This area is (partially) filled via the stochastic particle dynamics during the previous creation process and is expressed and parameterized by so-called trapping scenarios. In the second step, Poisson's equation, formula_13, where formula_14 are the corresponding densities obtained by velocity integration of formula_15 and formula_16, is integrated, whereby the pseudo-potential formula_17 is introduced. The result is formula_18, which represents the pseudo-energy. In formula_17 the dots stand for the different trapping parameters formula_19. Integration of the pseudo-energy results in formula_20, which yields, by inversion, the desired formula_21. In these expressions the canonical form of formula_17 is already used. There are, however, two further trapping parameters formula_22, which are missing in the canonical pseudo-potential. 
In the extended previous version formula_23, they are eliminated by the necessary constraint that the gradient of the potential formula_21 vanishes at its maximum. This requirement is formula_24 and leads to the expression in question, the nonlinear dispersion relation (NDR). It allows the phase speed formula_25 of the structure to be determined in terms of the other parameters and formula_25 to be eliminated from formula_23 to obtain the canonical formula_17. Due to the central role the two expressions formula_26 and formula_27 are playing, the Schamel method is sometimes also called Schamel's pseudo-potential method (SPP method). For more information, see references 1 and 2. A typical example for the canonical pseudo-potential formula_26 and the NDR is given by formula_28 and by formula_29 where formula_30 and formula_31, respectively. These expressions are valid for a current-carrying, thermal background plasma described by Maxwellians (with a drift formula_32 between electrons and ions) and for the presence of the perturbative trapping scenarios formula_33, s=e,i. A well-known example is the Thumb-Teardrop dispersion relation, which is valid for single harmonic waves and is given as a simplified version of the NDR above with zero trapping parameters and a vanishing drift. It reads formula_34. It has been thoroughly discussed in the literature, but mistakenly as a linear dispersion relation. A plot of formula_35 for a more general type of structure, the periodic cnoidal electron holes, is presented in Fig. 1 for the case formula_36 and immobile ions (formula_37), showing the effect of the electron trapping parameter formula_38 for positive and negative values. Finally, it should be mentioned that an NDR has the nice property that it remains valid even if the potential formula_21 has an undisclosed form, i.e. can no longer be described by mathematically known functions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
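As a numerical illustration (not code from the cited references), the sketch below solves the Thumb-Teardrop relation formula_34 for the phase speed at a given wavenumber. It assumes the standard identity that the real part of the derivative of the plasma dispersion function satisfies Re Z'(x) = -2(1 - 2xD(x)), with D the Dawson function; the parameter values are arbitrary.

```python
import numpy as np
from scipy.special import dawsn
from scipy.optimize import brentq

def Zr_prime(x):
    # Real part of the derivative of the plasma dispersion function:
    # Re Z'(x) = -2 (1 - 2 x D(x)), with D the Dawson function.
    return -2.0 * (1.0 - 2.0 * x * dawsn(x))

def thumb_teardrop(v0, k0, theta, delta):
    # k0^2 - 1/2 Zr'(v0/sqrt(2)) - (theta/2) Zr'(sqrt(theta/(2 delta)) v0) = 0
    return (k0**2
            - 0.5 * Zr_prime(v0 / np.sqrt(2.0))
            - 0.5 * theta * Zr_prime(np.sqrt(theta / (2.0 * delta)) * v0))

def phase_velocities(k0, theta=1.0, delta=1.0 / 1836.0, vmax=8.0, n=4000):
    """Scan v0 > 0 for sign changes of the relation and refine each root with brentq."""
    v = np.linspace(1e-6, vmax, n)
    g = thumb_teardrop(v, k0, theta, delta)
    roots = []
    for i in range(n - 1):
        if g[i] * g[i + 1] < 0.0:
            roots.append(brentq(thumb_teardrop, v[i], v[i + 1],
                                args=(k0, theta, delta)))
    return roots

# Phase speeds satisfying the relation at k0 = 0.5 (illustrative parameters).
print(phase_velocities(k0=0.5))
```

Scanning over a range of wavenumbers with the same routine traces out the branches of the curve for the chosen temperature and mass ratios.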
[ { "math_id": 0, "text": "v_0" }, { "math_id": 1, "text": "\\phi(x-v_0t)" }, { "math_id": 2, "text": "f_e(x,v)=F_e(\\epsilon_e,\\sigma_e)" }, { "math_id": 3, "text": "f_i(x,u)=F_i(\\epsilon_i,\\sigma_i)" }, { "math_id": 4, "text": "(v\\partial_x +\\phi'(x)\\partial_v)f_e(x,v)=0" }, { "math_id": 5, "text": "(u\\partial_x -\\theta\\phi'(x)\\partial_u)f_i(x,u)=0" }, { "math_id": 6, "text": "\\epsilon_e=\\frac{v^2}{2}-\\phi" }, { "math_id": 7, "text": "\\epsilon_i=\\frac{u^2}{2}-\\theta(\\psi-\\phi)" }, { "math_id": 8, "text": "\\sigma_e=\\frac{v}{|v|}" }, { "math_id": 9, "text": "\\sigma_i=\\frac{u}{|u|}" }, { "math_id": 10, "text": "\\theta=\\frac{T_e}{T_i}" }, { "math_id": 11, "text": "0 \\le\\phi \\le\\psi" }, { "math_id": 12, "text": "\\psi" }, { "math_id": 13, "text": "\\phi_{xx} = n_e - n_i=: - \\mathcal {V} '(\\phi;...) " }, { "math_id": 14, "text": "n_e,n_i" }, { "math_id": 15, "text": "F_e(\\epsilon_e,\\sigma_e)" }, { "math_id": 16, "text": "F_i(\\epsilon_i,\\sigma_i)" }, { "math_id": 17, "text": "\\mathcal{V}(\\phi;...)" }, { "math_id": 18, "text": "\\frac{\\phi_x^2}{2} + \\mathcal{V(\\phi;...)} = 0" }, { "math_id": 19, "text": "B_s, D_{1s}, D_{2s}, C_s, s=e,i" }, { "math_id": 20, "text": "x(\\phi)= \\int_\\phi^\\psi \\frac{d\\xi}{\\sqrt{-2\\mathcal{V}(\\xi;...)}} " }, { "math_id": 21, "text": "\\phi(x)" }, { "math_id": 22, "text": "\\Gamma_s,s=e,i" }, { "math_id": 23, "text": "\\mathcal{V}_0(\\phi;...,\\Gamma_e,\\Gamma_i,v_0)" }, { "math_id": 24, "text": "\\mathcal{V}_0(\\psi;...,\\Gamma_e,\\Gamma_i,v_0)=0" }, { "math_id": 25, "text": "v_0 " }, { "math_id": 26, "text": "\\mathcal {V}(\\phi;...)" }, { "math_id": 27, "text": "\\mathcal {V}_0(\\phi;...,\\Gamma_e,\\Gamma_i ,v_0)" }, { "math_id": 28, "text": "-\\mathcal{V}(\\phi;B_e, B_i)/\\psi^2=\\frac{k_0^2}{2}\\varphi(1-\\varphi)\n+B_e\\frac{\\varphi^2}{2}(1-\\sqrt\\varphi) + B_i\\frac{\\theta^{3/2}}{2}(1 - (1-\\varphi)^{5/2} - \\frac{1}{2}\\varphi(5-3\\varphi))" }, { "math_id": 29, "text": "k_0^2 - \\frac{1}{2}Z_r'(\\frac{|v_D-v_0|}{\\sqrt2}) - \\frac{\\theta}{2}Z_r'(\\sqrt{\\frac{\\theta}{2\\delta}}v_0)=B_e + \\frac{3}{2}\\theta^{3/2}B_i -\\Gamma_e -\\Gamma_i" }, { "math_id": 30, "text": "\\varphi:=\\phi/\\psi" }, { "math_id": 31, "text": "\\delta:=m_e/m_i " }, { "math_id": 32, "text": "v_D" }, { "math_id": 33, "text": "(\\Gamma_s, B_s)" }, { "math_id": 34, "text": "k_0^2 - \\frac{1}{2}Z_r'(\\frac{v_0}{\\sqrt2}) - \\frac{\\theta}{2}Z_r'(\\sqrt{\\frac{\\theta}{2\\delta}}v_0)=0" }, { "math_id": 35, "text": "\\omega_0:=k_0 v_0" }, { "math_id": 36, "text": "B_e=B,B_i=0" }, { "math_id": 37, "text": "\\theta=0" }, { "math_id": 38, "text": "B" } ]
https://en.wikipedia.org/wiki?curid=77052373
77053051
Impulse oscillometry
Lung function test measuring effect of pressure oscillation on airflow Impulse oscillometry (IOS), also known as respiratory oscillometry, forced oscillatory technique (FOT), or just oscillometry, is a non-invasive lung function test that measures the mechanical properties of the respiratory system, particularly the upper and intrathoracic airways, lung tissue and chest wall, usually during the patient's tidal breathing (the way someone breathes when they are relaxed). Principle. Impulse oscillometry measures the mechanical impedance of the respiratory system (Zrs), which encompasses the resistance of the respiratory system to flow (Rrs), the reactance or stiffness of the lung parenchyma in response to changes in volume (Xrs), and the inertance of accelerating gas in the airways (Irs). The following relations hold between these parameters: formula_0, where formula_1 is the imaginary unit (formula_2), and formula_3, where formula_4 is the elastance of the respiratory system and formula_5 is the angular frequency such that formula_6, where formula_7 is the frequency of the stimulus oscillation. Zrs is measured by comparing the magnitudes of the mechanical stimuli, specifically pressure oscillations (pressure waves) transmitted into the respiratory system, with the magnitudes of their effects on tidal airflow; this is done by superimposing these oscillations on spontaneous tidal breathing. Stimulation. The stimulus is an oscillation of pressure of a particular frequency that is transmitted to the lungs of the patient. This is usually done by mouth, though direct stimulation of the chest wall is also possible. These pressure waves cause changes in the airflow during tidal breathing; the magnitudes of the pressure waves and the changes in airflow they cause are then used to determine the airways' mechanical impedance. Frequencies ranging from 4 to 50 Hz are commonly generated by a loudspeaker, while frequencies between 0.5 and 4 Hz may alternatively also be generated by a piston or pneumatic proportional solenoid valves. Different frequencies measure the mechanical properties of different parts of the respiratory system; the resistance at 5 Hz (R5) represents total airway resistance, while the resistance at 20 Hz (R20) represents the resistance of the central airways. The reactance at 5 Hz (X5) reflects the elasticity of the peripheral airways. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
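To make the principle concrete, the following Python sketch is a purely illustrative simulation, not a clinical algorithm: it synthesizes pressure and flow signals from an assumed single-compartment model with made-up resistance, inertance and elastance values, superimposes stimulus oscillations on a tidal component, and recovers Zrs at the stimulus frequencies as the ratio of the Fourier transforms of pressure and flow, split into resistance (real part) and reactance (imaginary part).

```python
import numpy as np

# Hypothetical respiratory-system parameters (illustrative values only).
R, I, E = 3.0, 0.01, 50.0              # resistance, inertance, elastance

def Z_model(f):
    """Model impedance Zrs = Rrs + i*Xrs with Xrs = w*I - E/w, w = 2*pi*f."""
    w = 2 * np.pi * f
    return R + 1j * (w * I - E / w)

fs, T = 200.0, 16.0                    # sampling rate (Hz) and record length (s)
t = np.arange(0, T, 1 / fs)
freqs = [0.25, 5.0, 11.0, 20.0]        # tidal breathing plus stimulus oscillations
amps  = [0.60, 0.05, 0.05, 0.05]       # pressure amplitudes (arbitrary units)

# Build pressure and the corresponding flow, component by component.
pressure = np.zeros_like(t)
flow = np.zeros_like(t)
for f, a in zip(freqs, amps):
    pressure += a * np.cos(2 * np.pi * f * t)
    flow     += np.real(a / Z_model(f) * np.exp(2j * np.pi * f * t))

# Estimate Zrs(f) = P(f)/Q(f) from the Fourier transforms of the signals.
P = np.fft.rfft(pressure)
Q = np.fft.rfft(flow)
fgrid = np.fft.rfftfreq(len(t), 1 / fs)
for f in [5.0, 20.0]:                  # R5/X5 and R20
    k = np.argmin(np.abs(fgrid - f))
    Z = P[k] / Q[k]
    print(f"{f:4.1f} Hz: R = {Z.real:5.2f}, X = {Z.imag:6.2f}   "
          f"(model: R = {Z_model(f).real:5.2f}, X = {Z_model(f).imag:6.2f})")
```

In practice, devices typically average the spectra over many impulses and apply coherence checks before reporting R5, R20 and X5.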
[ { "math_id": 0, "text": "Z_{rs}=R_{rs}+iX_{rs}" }, { "math_id": 1, "text": "i" }, { "math_id": 2, "text": "\\sqrt{-1}" }, { "math_id": 3, "text": "X_{rs} = \\omega I_{rs} - \\frac{E_{rs}}{\\omega}" }, { "math_id": 4, "text": "E_{rs}" }, { "math_id": 5, "text": "\\omega" }, { "math_id": 6, "text": "\\omega=2 \\pi f" }, { "math_id": 7, "text": "f" } ]
https://en.wikipedia.org/wiki?curid=77053051
7706
Cartesian coordinate system
Most common coordinate system (geometry) In geometry, a Cartesian coordinate system in a plane is a coordinate system that specifies each point uniquely by a pair of real numbers called "coordinates", which are the signed distances to the point from two fixed perpendicular oriented lines, called "coordinate lines", "coordinate axes" or just "axes" (plural of "axis") of the system. The point where they meet is called the "origin" and has (0, 0) as coordinates. Similarly, the position of any point in three-dimensional space can be specified by three "Cartesian coordinates", which are the signed distances from the point to three mutually perpendicular planes. More generally, "n" Cartesian coordinates specify the point in an "n"-dimensional Euclidean space for any dimension "n". These coordinates are the signed distances from the point to "n" mutually perpendicular fixed hyperplanes. Cartesian coordinates are named for René Descartes, whose invention of them in the 17th century revolutionized mathematics by allowing the expression of problems of geometry in terms of algebra and calculus. Using the Cartesian coordinate system, geometric shapes (such as curves) can be described by equations involving the coordinates of points of the shape. For example, a circle of radius 2, centered at the origin of the plane, may be described as the set of all points whose coordinates "x" and "y" satisfy the equation "x"^2 + "y"^2 = 4; the area, the perimeter and the tangent line at any point can be computed from this equation by using integrals and derivatives, in a way that can be applied to any curve. Cartesian coordinates are the foundation of analytic geometry, and provide enlightening geometric interpretations for many other branches of mathematics, such as linear algebra, complex analysis, differential geometry, multivariate calculus, group theory and more. A familiar example is the concept of the graph of a function. Cartesian coordinates are also essential tools for most applied disciplines that deal with geometry, including astronomy, physics, engineering and many more. They are the most common coordinate system used in computer graphics, computer-aided geometric design and other geometry-related data processing. History. The adjective "Cartesian" refers to the French mathematician and philosopher René Descartes, who published this idea in 1637 while he was resident in the Netherlands. It was independently discovered by Pierre de Fermat, who also worked in three dimensions, although Fermat did not publish the discovery. The French cleric Nicole Oresme used constructions similar to Cartesian coordinates well before the time of Descartes and Fermat. Both Descartes and Fermat used a single axis in their treatments and a variable length measured in reference to this axis. The concept of using a pair of axes was introduced later, after Descartes' "La Géométrie" was translated into Latin in 1649 by Frans van Schooten and his students. These commentators introduced several concepts while trying to clarify the ideas contained in Descartes's work. The development of the Cartesian coordinate system would play a fundamental role in the development of the calculus by Isaac Newton and Gottfried Wilhelm Leibniz. The two-coordinate description of the plane was later generalized into the concept of vector spaces. Many other coordinate systems have been developed since Descartes, such as the polar coordinates for the plane, and the spherical and cylindrical coordinates for three-dimensional space. Description. 
One dimension. An affine line with a chosen Cartesian coordinate system is called a "number line". Every point on the line has a real-number coordinate, and every real number represents some point on the line. There are two degrees of freedom in the choice of Cartesian coordinate system for a line, which can be specified by choosing two distinct points along the line and assigning them to two distinct real numbers (most commonly zero and one). Other points can then be uniquely assigned to numbers by linear interpolation. Equivalently, one point can be assigned to a specific real number, for instance an "origin" point corresponding to zero, and an oriented length along the line can be chosen as a unit, with the orientation indicating the correspondence between directions along the line and positive or negative numbers. Each point corresponds to its signed distance from the origin (a number with an absolute value equal to the distance and a + or − sign chosen based on direction). A geometric transformation of the line can be represented by a function of a real variable, for example translation of the line corresponds to addition, and scaling the line corresponds to multiplication. Any two Cartesian coordinate systems on the line can be related to each-other by a linear function (function of the form formula_0) taking a specific point's coordinate in one system to its coordinate in the other system. Choosing a coordinate system for each of two different lines establishes an affine map from one line to the other taking each point on one line to the point on the other line with the same coordinate. Two dimensions. A Cartesian coordinate system in two dimensions (also called a rectangular coordinate system or an orthogonal coordinate system) is defined by an ordered pair of perpendicular lines (axes), a single unit of length for both axes, and an orientation for each axis. The point where the axes meet is taken as the origin for both, thus turning each axis into a number line. For any point "P", a line is drawn through "P" perpendicular to each axis, and the position where it meets the axis is interpreted as a number. The two numbers, in that chosen order, are the "Cartesian coordinates" of "P". The reverse construction allows one to determine the point "P" given its coordinates. The first and second coordinates are called the "abscissa" and the "ordinate" of "P", respectively; and the point where the axes meet is called the "origin" of the coordinate system. The coordinates are usually written as two numbers in parentheses, in that order, separated by a comma, as in (3, −10.5). Thus the origin has coordinates (0, 0), and the points on the positive half-axes, one unit away from the origin, have coordinates (1, 0) and (0, 1). In mathematics, physics, and engineering, the first axis is usually defined or depicted as horizontal and oriented to the right, and the second axis is vertical and oriented upwards. (However, in some computer graphics contexts, the ordinate axis may be oriented downwards.) The origin is often labeled "O", and the two coordinates are often denoted by the letters "X" and "Y", or "x" and "y". The axes may then be referred to as the "X"-axis and "Y"-axis. The choices of letters come from the original convention, which is to use the latter part of the alphabet to indicate unknown values. The first part of the alphabet was used to designate known values. 
A Euclidean plane with a chosen Cartesian coordinate system is called a Cartesian plane. In a Cartesian plane, one can define canonical representatives of certain geometric figures, such as the unit circle (with radius equal to the length unit, and center at the origin), the unit square (whose diagonal has endpoints at (0, 0) and (1, 1)), the unit hyperbola, and so on. The two axes divide the plane into four right angles, called "quadrants". The quadrants may be named or numbered in various ways, but the quadrant where all coordinates are positive is usually called the "first quadrant". If the coordinates of a point are ("x", "y"), then its distances from the "X"-axis and from the "Y"-axis are |"y"| and |"x"|, respectively, where | · | denotes the absolute value of a number. Three dimensions. A Cartesian coordinate system for a three-dimensional space consists of an ordered triplet of lines (the "axes") that go through a common point (the "origin"), and are pair-wise perpendicular; an orientation for each axis; and a single unit of length for all three axes. As in the two-dimensional case, each axis becomes a number line. For any point "P" of space, one considers a plane through "P" perpendicular to each coordinate axis, and interprets the point where that plane cuts the axis as a number. The Cartesian coordinates of "P" are those three numbers, in the chosen order. The reverse construction determines the point "P" given its three coordinates. Alternatively, each coordinate of a point "P" can be taken as the distance from "P" to the plane defined by the other two axes, with the sign determined by the orientation of the corresponding axis. Each pair of axes defines a "coordinate plane". These planes divide space into eight "octants". The octants are: formula_1 The coordinates are usually written as three numbers (or algebraic formulas) surrounded by parentheses and separated by commas, as in (3, −2.5, 1) or ("t", "u" + "v", "π"/2). Thus, the origin has coordinates (0, 0, 0), and the unit points on the three axes are (1, 0, 0), (0, 1, 0), and (0, 0, 1). Standard names for the coordinates in the three axes are "abscissa", "ordinate" and "applicate". The coordinates are often denoted by the letters "x", "y", and "z". The axes may then be referred to as the "x"-axis, "y"-axis, and "z"-axis, respectively. Then the coordinate planes can be referred to as the "xy"-plane, "yz"-plane, and "xz"-plane. In mathematics, physics, and engineering contexts, the first two axes are often defined or depicted as horizontal, with the third axis pointing up. In that case the third coordinate may be called "height" or "altitude". The orientation is usually chosen so that the 90-degree angle from the first axis to the second axis looks counter-clockwise when seen from the point (0, 0, 1); a convention that is commonly called "the right-hand rule". Higher dimensions. Since Cartesian coordinates are unique and non-ambiguous, the points of a Cartesian plane can be identified with pairs of real numbers; that is, with the Cartesian product formula_2, where formula_3 is the set of all real numbers. In the same way, the points in any Euclidean space of dimension "n" can be identified with the tuples (lists) of "n" real numbers; that is, with the Cartesian product formula_4. Generalizations. The concept of Cartesian coordinates generalizes to allow axes that are not perpendicular to each other, and/or different units along each axis. 
In that case, each coordinate is obtained by projecting the point onto one axis along a direction that is parallel to the other axis (or, in general, to the hyperplane defined by all the other axes). In such an "oblique coordinate system" the computations of distances and angles must be modified from that in standard Cartesian systems, and many standard formulas (such as the Pythagorean formula for the distance) do not hold (see affine plane). Notations and conventions. The Cartesian coordinates of a point are usually written in parentheses and separated by commas, as in (10, 5) or (3, 5, 7). The origin is often labelled with the capital letter "O". In analytic geometry, unknown or generic coordinates are often denoted by the letters ("x", "y") in the plane, and ("x", "y", "z") in three-dimensional space. This custom comes from a convention of algebra, which uses letters near the end of the alphabet for unknown values (such as the coordinates of points in many geometric problems), and letters near the beginning for given quantities. These conventional names are often used in other domains, such as physics and engineering, although other letters may be used. For example, in a graph showing how a pressure varies with time, the graph coordinates may be denoted "p" and "t". Each axis is usually named after the coordinate which is measured along it; so one says the "x-axis", the "y-axis", the "t-axis", etc. Another common convention for coordinate naming is to use subscripts, as ("x"1, "x"2, ..., "x""n") for the "n" coordinates in an "n"-dimensional space, especially when "n" is greater than 3 or unspecified. Some authors prefer the numbering ("x"0, "x"1, ..., "x""n"−1). These notations are especially advantageous in computer programming: by storing the coordinates of a point as an array, instead of a record, the subscript can serve to index the coordinates. In mathematical illustrations of two-dimensional Cartesian systems, the first coordinate (traditionally called the abscissa) is measured along a horizontal axis, oriented from left to right. The second coordinate (the ordinate) is then measured along a vertical axis, usually oriented from bottom to top. Young children learning the Cartesian system, commonly learn the order to read the values before cementing the "x"-, "y"-, and "z"-axis concepts, by starting with 2D mnemonics (for example, 'Walk along the hall then up the stairs' akin to straight across the "x"-axis then up vertically along the "y"-axis). Computer graphics and image processing, however, often use a coordinate system with the "y"-axis oriented downwards on the computer display. This convention developed in the 1960s (or earlier) from the way that images were originally stored in display buffers. For three-dimensional systems, a convention is to portray the "xy"-plane horizontally, with the "z"-axis added to represent height (positive up). Furthermore, there is a convention to orient the "x"-axis toward the viewer, biased either to the right or left. If a diagram (3D projection or 2D perspective drawing) shows the "x"- and "y"-axis horizontally and vertically, respectively, then the "z"-axis should be shown pointing "out of the page" towards the viewer or camera. In such a 2D diagram of a 3D coordinate system, the "z"-axis would appear as a line or ray pointing down and to the left or down and to the right, depending on the presumed viewer or camera perspective. In any diagram or display, the orientation of the three axes, as a whole, is arbitrary. 
However, the orientation of the axes relative to each other should always comply with the right-hand rule, unless specifically stated otherwise. All laws of physics and math assume this right-handedness, which ensures consistency. For 3D diagrams, the names "abscissa" and "ordinate" are rarely used for "x" and "y", respectively. When they are, the "z"-coordinate is sometimes called the applicate. The words "abscissa", "ordinate" and "applicate" are sometimes used to refer to coordinate axes rather than the coordinate values. Quadrants and octants. The axes of a two-dimensional Cartesian system divide the plane into four infinite regions, called "quadrants", each bounded by two half-axes. These are often numbered from 1st to 4th and denoted by Roman numerals: I (where the coordinates both have positive signs), II (where the abscissa is negative − and the ordinate is positive +), III (where both the abscissa and the ordinate are −), and IV (abscissa +, ordinate −). When the axes are drawn according to the mathematical custom, the numbering goes counter-clockwise starting from the upper right ("north-east") quadrant. Similarly, a three-dimensional Cartesian system defines a division of space into eight regions or octants, according to the signs of the coordinates of the points. The convention used for naming a specific octant is to list its signs; for example, (+ + +) or (− + −). The generalization of the quadrant and octant to an arbitrary number of dimensions is the orthant, and a similar naming system applies. Cartesian formulae for the plane. Distance between two points. The Euclidean distance between two points of the plane with Cartesian coordinates formula_5 and formula_6 is formula_7 This is the Cartesian version of Pythagoras's theorem. In three-dimensional space, the distance between points formula_8 and formula_9 is formula_10 which can be obtained by two consecutive applications of Pythagoras' theorem. Euclidean transformations. The Euclidean transformations or Euclidean motions are the (bijective) mappings of points of the Euclidean plane to themselves which preserve distances between points. There are four types of these mappings (also called isometries): translations, rotations, reflections and glide reflections. Translation. Translating a set of points of the plane, preserving the distances and directions between them, is equivalent to adding a fixed pair of numbers ("a", "b") to the Cartesian coordinates of every point in the set. That is, if the original coordinates of a point are ("x", "y"), after the translation they will be formula_11 Rotation. To rotate a figure counterclockwise around the origin by some angle formula_12 is equivalent to replacing every point with coordinates ("x","y") by the point with coordinates ("x'","y'"), where formula_13 Thus: formula_14 Reflection. If ("x", "y") are the Cartesian coordinates of a point, then (−"x", "y") are the coordinates of its reflection across the second coordinate axis (the y-axis), as if that line were a mirror. Likewise, ("x", −"y") are the coordinates of its reflection across the first coordinate axis (the x-axis). In more generality, reflection across a line through the origin making an angle formula_12 with the x-axis, is equivalent to replacing every point with coordinates ("x", "y") by the point with coordinates ("x"′,"y"′), where formula_15 Thus: formula_16 Glide reflection. A glide reflection is the composition of a reflection across a line followed by a translation in the direction of that line. 
It can be seen that the order of these operations does not matter (the translation can come first, followed by the reflection). General matrix form of the transformations. All affine transformations of the plane can be described in a uniform way by using matrices. For this purpose, the coordinates formula_17 of a point are commonly represented as the column matrix formula_18 The result formula_19 of applying an affine transformation to a point formula_17 is given by the formula formula_20 where formula_21 is a 2×2 matrix and formula_22 is a column matrix. That is, formula_23 Among the affine transformations, the Euclidean transformations are characterized by the fact that the matrix formula_24 is orthogonal; that is, its columns are orthogonal vectors of Euclidean norm one, or, explicitly, formula_25 and formula_26 This is equivalent to saying that "A" times its transpose is the identity matrix. If these conditions do not hold, the formula describes a more general affine transformation. The transformation is a translation if and only if "A" is the identity matrix. The transformation is a rotation around some point if and only if "A" is a rotation matrix, meaning that it is orthogonal and formula_27 A reflection or glide reflection is obtained when, formula_28 Assuming that translations are not used (that is, formula_29) transformations can be composed by simply multiplying the associated transformation matrices. In the general case, it is useful to use the augmented matrix of the transformation; that is, to rewrite the transformation formula formula_30 where formula_31 With this trick, the composition of affine transformations is obtained by multiplying the augmented matrices. Affine transformation. Affine transformations of the Euclidean plane are transformations that map lines to lines, but may change distances and angles. As said in the preceding section, they can be represented with augmented matrices: formula_32 The Euclidean transformations are the affine transformations such that the 2×2 matrix of the formula_33 is orthogonal. The augmented matrix that represents the composition of two affine transformations is obtained by multiplying their augmented matrices. Some affine transformations that are not Euclidean transformations have received specific names. Scaling. An example of an affine transformation which is not Euclidean is given by scaling. To make a figure larger or smaller is equivalent to multiplying the Cartesian coordinates of every point by the same positive number "m". If ("x", "y") are the coordinates of a point on the original figure, the corresponding point on the scaled figure has coordinates formula_34 If "m" is greater than 1, the figure becomes larger; if "m" is between 0 and 1, it becomes smaller. Shearing. A shearing transformation will push the top of a square sideways to form a parallelogram. Horizontal shearing is defined by: formula_35 Shearing can also be applied vertically: formula_36 Orientation and handedness. In two dimensions. Fixing or choosing the "x"-axis determines the "y"-axis up to direction. Namely, the "y"-axis is necessarily the perpendicular to the "x"-axis through the point marked 0 on the "x"-axis. But there is a choice of which of the two half lines on the perpendicular to designate as positive and which as negative. Each of these two choices determines a different orientation (also called "handedness") of the Cartesian plane. 
The usual way of orienting the plane, with the positive "x"-axis pointing right and the positive "y"-axis pointing up (and the "x"-axis being the "first" and the "y"-axis the "second" axis), is considered the "positive" or "standard" orientation, also called the "right-handed" orientation. A commonly used mnemonic for defining the positive orientation is the "right-hand rule". Placing a somewhat closed right hand on the plane with the thumb pointing up, the fingers point from the "x"-axis to the "y"-axis, in a positively oriented coordinate system. The other way of orienting the plane is following the "left-hand rule", placing the left hand on the plane with the thumb pointing up. When pointing the thumb away from the origin along an axis towards positive, the curvature of the fingers indicates a positive rotation along that axis. Regardless of the rule used to orient the plane, rotating the coordinate system will preserve the orientation. Switching any one axis will reverse the orientation, but switching both will leave the orientation unchanged. In three dimensions. Once the "x"- and "y"-axes are specified, they determine the line along which the "z"-axis should lie, but there are two possible orientations for this line. The two possible coordinate systems, which result are called 'right-handed' and 'left-handed'. The standard orientation, where the "xy"-plane is horizontal and the "z"-axis points up (and the "x"- and the "y"-axis form a positively oriented two-dimensional coordinate system in the "xy"-plane if observed from "above" the "xy"-plane) is called right-handed or positive. The name derives from the right-hand rule. If the index finger of the right hand is pointed forward, the middle finger bent inward at a right angle to it, and the thumb placed at a right angle to both, the three fingers indicate the relative orientation of the "x"-, "y"-, and "z"-axes in a "right-handed" system. The thumb indicates the "x"-axis, the index finger the "y"-axis and the middle finger the "z"-axis. Conversely, if the same is done with the left hand, a left-handed system results. Figure 7 depicts a left and a right-handed coordinate system. Because a three-dimensional object is represented on the two-dimensional screen, distortion and ambiguity result. The axis pointing downward (and to the right) is also meant to point "towards" the observer, whereas the "middle"-axis is meant to point "away" from the observer. The red circle is "parallel" to the horizontal "xy"-plane and indicates rotation from the "x"-axis to the "y"-axis (in both cases). Hence the red arrow passes "in front of" the "z"-axis. Figure 8 is another attempt at depicting a right-handed coordinate system. Again, there is an ambiguity caused by projecting the three-dimensional coordinate system into the plane. Many observers see Figure 8 as "flipping in and out" between a convex cube and a concave "corner". This corresponds to the two possible orientations of the space. Seeing the figure as convex gives a left-handed coordinate system. Thus the "correct" way to view Figure 8 is to imagine the "x"-axis as pointing "towards" the observer and thus seeing a concave corner. Representing a vector in the standard basis. A point in space in a Cartesian coordinate system may also be represented by a position vector, which can be thought of as an arrow pointing from the origin of the coordinate system to the point. 
If the coordinates represent spatial positions (displacements), it is common to represent the vector from the origin to the point of interest as formula_37. In two dimensions, the vector from the origin to the point with Cartesian coordinates (x, y) can be written as: formula_38 where formula_39 and formula_40 are unit vectors in the direction of the "x"-axis and "y"-axis respectively, generally referred to as the "standard basis" (in some application areas these may also be referred to as versors). Similarly, in three dimensions, the vector from the origin to the point with Cartesian coordinates formula_41 can be written as: formula_42 where formula_43 formula_44 and formula_45 There is no "natural" interpretation of multiplying vectors to obtain another vector that works in all dimensions, however there is a way to use complex numbers to provide such a multiplication. In a two-dimensional cartesian plane, identify the point with coordinates ("x", "y") with the complex number "z" = "x" + "iy". Here, "i" is the imaginary unit and is identified with the point with coordinates (0, 1), so it is "not" the unit vector in the direction of the "x"-axis. Since the complex numbers can be multiplied giving another complex number, this identification provides a means to "multiply" vectors. In a three-dimensional cartesian space a similar identification can be made with a subset of the quaternions. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
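The transformation formulas above translate directly into augmented-matrix code. The following numpy sketch (an illustration, not part of the article) builds the 3×3 augmented matrices for translation, rotation and reflection and composes them by matrix multiplication.

```python
import numpy as np

def translation(a, b):
    """Augmented 3x3 matrix for the translation (x, y) -> (x + a, y + b)."""
    return np.array([[1.0, 0.0, a],
                     [0.0, 1.0, b],
                     [0.0, 0.0, 1.0]])

def rotation(theta):
    """Counterclockwise rotation about the origin by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def reflection(theta):
    """Reflection across the line through the origin at angle theta to the x-axis."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c,  s, 0.0],
                     [s, -c, 0.0],
                     [0.0, 0.0, 1.0]])

def apply(M, point):
    x, y, _ = M @ np.array([point[0], point[1], 1.0])
    return float(x), float(y)

# Composition is matrix multiplication: rotate by 90 degrees, then translate by (3, -1).
M = translation(3, -1) @ rotation(np.pi / 2)
print(apply(M, (1, 0)))          # (1, 0) -> (0, 1) -> (3, 0)

# A rotation block is orthogonal with determinant +1; a reflection block has determinant -1.
print(np.linalg.det(rotation(0.7)[:2, :2]), np.linalg.det(reflection(0.7)[:2, :2]))
```

The determinants of the 2×2 upper-left blocks match the orthogonality and sign conditions stated above for rotations and reflections.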
[ { "math_id": 0, "text": "x \\mapsto ax + b" }, { "math_id": 1, "text": "\n\\begin{align}\n(+x,+y,+z) && (-x,+y,+z) && (+x,-y,+z) && (+x,+y,-z) \\\\\n(+x,-y,-z) && (-x,+y,-z) && (-x,-y,+z) && (-x,-y,-z)\n\\end{align}\n" }, { "math_id": 2, "text": "\\R^2 = \\R\\times\\R" }, { "math_id": 3, "text": "\\R" }, { "math_id": 4, "text": "\\R^n" }, { "math_id": 5, "text": "(x_1, y_1)" }, { "math_id": 6, "text": "(x_2, y_2)" }, { "math_id": 7, "text": "d = \\sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}." }, { "math_id": 8, "text": "(x_1,y_1,z_1)" }, { "math_id": 9, "text": "(x_2,y_2,z_2)" }, { "math_id": 10, "text": "d = \\sqrt{(x_2-x_1)^2 + (y_2-y_1)^2+ (z_2-z_1)^2} ," }, { "math_id": 11, "text": "(x', y') = (x + a, y + b) ." }, { "math_id": 12, "text": "\\theta" }, { "math_id": 13, "text": "\n\\begin{align}\nx' &= x \\cos \\theta - y \\sin \\theta \\\\\ny' &= x \\sin \\theta + y \\cos \\theta .\n\\end{align}\n" }, { "math_id": 14, "text": "(x',y') = ((x \\cos \\theta - y \\sin \\theta\\,) , (x \\sin \\theta + y \\cos \\theta\\,)) ." }, { "math_id": 15, "text": "\n\\begin{align}\nx' &= x \\cos 2\\theta + y \\sin 2\\theta \\\\\ny' &= x \\sin 2\\theta - y \\cos 2\\theta .\n\\end{align}\n" }, { "math_id": 16, "text": "(x',y') = ((x \\cos 2\\theta + y \\sin 2\\theta\\,) , (x \\sin 2\\theta - y \\cos 2\\theta\\,)) ." }, { "math_id": 17, "text": "(x,y)" }, { "math_id": 18, "text": "\\begin{pmatrix}x\\\\y\\end{pmatrix}." }, { "math_id": 19, "text": "(x', y')" }, { "math_id": 20, "text": "\\begin{pmatrix}x'\\\\y'\\end{pmatrix} = A \\begin{pmatrix}x\\\\y\\end{pmatrix} + b," }, { "math_id": 21, "text": "A = \\begin{pmatrix} A_{1,1} & A_{1,2} \\\\ A_{2,1} & A_{2,2} \\end{pmatrix}" }, { "math_id": 22, "text": "b=\\begin{pmatrix}b_1\\\\b_2\\end{pmatrix}" }, { "math_id": 23, "text": "\n\\begin{align}\nx' &= x A_{1,1} + y A_{1,1} + b_{1} \\\\\ny' &= x A_{2,1} + y A_{2, 2} + b_{2}.\n\\end{align}\n" }, { "math_id": 24, "text": "A" }, { "math_id": 25, "text": "A_{1,1} A_{1, 2} + A_{2,1} A_{2, 2} = 0" }, { "math_id": 26, "text": "A_{1, 1}^2 + A_{2,1}^2 = A_{1,2}^2 + A_{2, 2}^2 = 1." }, { "math_id": 27, "text": " A_{1, 1} A_{2, 2} - A_{2, 1} A_{1, 2} = 1 ." }, { "math_id": 28, "text": " A_{1, 1} A_{2, 2} - A_{2, 1} A_{1, 2} = -1 ." }, { "math_id": 29, "text": "b_1=b_2=0" }, { "math_id": 30, "text": "\\begin{pmatrix}x'\\\\y'\\\\1\\end{pmatrix} = A' \\begin{pmatrix}x\\\\y\\\\1\\end{pmatrix}," }, { "math_id": 31, "text": "A' = \\begin{pmatrix} A_{1,1} & A_{1,2}&b_1 \\\\ A_{2,1} & A_{2,2}&b_2\\\\0&0&1 \\end{pmatrix}." }, { "math_id": 32, "text": "\\begin{pmatrix} A_{1,1} & A_{2,1} & b_{1} \\\\ A_{1,2} & A_{2,2} & b_{2} \\\\ 0 & 0 & 1 \\end{pmatrix}\n\\begin{pmatrix} x \\\\ y \\\\ 1 \\end{pmatrix}\n=\n\\begin{pmatrix} x' \\\\ y' \\\\ 1 \\end{pmatrix}." }, { "math_id": 33, "text": "A_{i,j}" }, { "math_id": 34, "text": "(x',y') = (m x, m y)." 
}, { "math_id": 35, "text": "(x',y') = (x+y s, y)" }, { "math_id": 36, "text": "(x',y') = (x, x s+y)" }, { "math_id": 37, "text": "\\mathbf{r}" }, { "math_id": 38, "text": " \\mathbf{r} = x \\mathbf{i} + y \\mathbf{j}," }, { "math_id": 39, "text": "\\mathbf{i} = \\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix}" }, { "math_id": 40, "text": "\\mathbf{j} = \\begin{pmatrix} 0 \\\\ 1 \\end{pmatrix}" }, { "math_id": 41, "text": "(x,y,z)" }, { "math_id": 42, "text": " \\mathbf{r} = x \\mathbf{i} + y \\mathbf{j} + z \\mathbf{k}," }, { "math_id": 43, "text": "\\mathbf{i} = \\begin{pmatrix} 1 \\\\ 0 \\\\ 0 \\end{pmatrix}," }, { "math_id": 44, "text": "\\mathbf{j} = \\begin{pmatrix} 0 \\\\ 1 \\\\ 0 \\end{pmatrix}," }, { "math_id": 45, "text": "\\mathbf{k} = \\begin{pmatrix} 0 \\\\ 0 \\\\ 1 \\end{pmatrix}." } ]
https://en.wikipedia.org/wiki?curid=7706
77060756
Selman's theorem
Theorem in computability theory In computability theory, Selman's theorem is a theorem relating enumeration reducibility with enumerability relative to oracles. It is named after Alan Selman, who proved it as part of his PhD thesis in 1971. Statement. Informally, a set "A" is enumeration-reducible to a set "B" if there is a Turing machine which receives an enumeration of "B" (it has a special instruction to get the next element, or none if it has not yet been provided), and produces an enumeration of "A". See enumeration reducibility for a precise account. A set "A" is computably enumerable with oracle "B" (or simply "in "B"") when there is a Turing machine with oracle "B" which enumerates the members of "A"; this is the relativized version of computable enumerability. Selman's theorem: A set "A" is enumeration-reducible to a set "B" if and only if "A" is computably enumerable with an oracle "X" whenever "B" is computably enumerable with the same oracle "X". Discussion. Informally, the hypothesis means that whenever there is a program enumerating "B" using some source of information (the oracle), there is also a program enumerating "A" using the same source of information. A priori, the program enumerating "A" could be running the program enumerating "B" as a subprogram in order to produce the elements of "A" from those of "B", but it could also be using the source of information directly, perhaps in a different way than the program enumerating "B", and it could be difficult to deduce from the program enumerating "B". However, the theorem asserts that, in fact, there exists a single program which produces an enumeration of "A" solely from an enumeration of "B", without direct access to the source of information used to enumerate "B". From a slightly different point of view, the theorem is an automatic uniformity result. Let "P" be the set of total computable functions formula_0 such that the range of "f" with ⊥ removed equals "A", and let "Q" be similarly defined for "B". A possible reformulation of the theorem is that if "P" is Mučnik-reducible to "Q", then it is also Medvedev-reducible to "Q". 5. Informally: if every enumeration of "B" can be used to compute an enumeration of "A", then there is a single (uniform) oracle Turing machine which computes some enumeration of "A" whenever it is given an enumeration of "B" as the oracle. Proof. If "A" is enumeration-reducible to "B" and "B" is computably enumerable with oracle "X", then "A" is computably enumerable with oracle "X" (it suffices to compose a machine that enumerates "A" given an enumeration of "B" with a machine that enumerates "B" with an oracle "X"). Conversely, assume that "A" is not enumeration-reducible to "B". We shall build "X" such that "B" is computably enumerable with oracle "X", but "A" is not. Let formula_1 denote some computable pairing function. We build "X" as a set of elements formula_2 where formula_3, such that for each formula_3, there is at least one pair formula_2 in "X". This ensures that "B" is computably enumerable with oracle "X" (through a semi-algorithm that takes an input "x" and searches for "y" such that formula_4). The construction of "X" is done by stages, following the priority method. It is convenient to view the eventual value of "X" as an infinite bit string ("i"-th bit is the boolean formula_5) which is constructed by incrementally appending to a finite bit string. Initially, "X" is the empty string. We describe the "n"-th step of the construction. It extends "X" in two ways. 
First, we ensure that "X" has a 1 bit at some index formula_6, where "x" is the "n"-th element of "B". If there is none yet, we choose "y" large enough such that the index formula_6 is outside the current string "X", and we add a 1 bit at this index (padding with 0 bits before it). Doing this ensures that in the eventual value of "X", there is some pair formula_6 for each formula_3. Second, let us call "admissible extension" an extension of the current "X" which respects the property that 1 bits are pairs formula_7. Denote by "M" the "n"-th oracle Turing machine. We use "M"("Z") to mean "M" associated to a specific oracle "Z" (if "Z" is a finite bit string, out of bounds requests return 0). We distinguish three cases. 1. There is an admissible extension "Y" such that "M"("Y") enumerates some "x" that is not in "A". Fix such an "x". We further extend "Y" by padding it with 0s until all oracle queries that were used by "M"("Y") before enumerating "x" become in bounds, and we set "X" to this extended "Y". This ensures that, however "X" is later extended, "M"("X") does not enumerate "A", as it enumerates "x" which is not in "A". 2. There is some value "x" in "A" which is not enumerated by any "M"("Y"), for any admissible extension "Y". In this case, we do not change "X"; it is already ensured that eventually "M"("X") will not enumerate "A", because it cannot enumerate "x"; indeed, if it did, this would be done after a finite number of oracle invocations, which would lie in some admissible extension "Y". 3. We show that the remaining case is absurd. Here, we know that all values enumerated by "M"("Y"), for "Y" an admissible extension, are in "A", and conversely, every element of "A" is enumerated by "M"("Y") for at least one admissible extension "Y". In other words, "A" is exactly the set of all values enumerated by "M"("Y") for an admissible extension "Y". We can build a machine which receives an enumeration of "B", uses it to enumerate admissible extensions "Y", runs the "M"("Y") in parallel, and enumerates the values they yield. This machine is an enumeration reduction from "A" to "B", which is absurd since we assumed no such reduction exists.
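The proof above only requires some computable pairing function formula_1. A standard concrete choice is the Cantor pairing function; the short Python sketch below (an illustration, not part of the source) implements it together with its inverse.

```python
def pair(x, y):
    """Cantor pairing function <x, y>: a computable bijection N x N -> N."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    """Inverse of the Cantor pairing function."""
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)   # index of the diagonal containing z
    t = w * (w + 1) // 2                      # first value on that diagonal
    y = z - t
    return w - y, y

# The two functions are mutually inverse on an initial segment of N x N.
assert all(unpair(pair(x, y)) == (x, y) for x in range(50) for y in range(50))
```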
[ { "math_id": 0, "text": "f : \\mathbb{N} \\rarr \\mathbb{N} \\cup \\{\\bot\\}" }, { "math_id": 1, "text": "\\langle \\bullet, \\bullet \\rangle" }, { "math_id": 2, "text": "\\langle x, y \\rangle" }, { "math_id": 3, "text": "x \\in B" }, { "math_id": 4, "text": "\\langle x, y \\rangle \\in X" }, { "math_id": 5, "text": "i \\in X" }, { "math_id": 6, "text": "\\langle x, y\\rangle" }, { "math_id": 7, "text": "\\langle x, y\\rangle, x \\in B" } ]
https://en.wikipedia.org/wiki?curid=77060756
77069742
Gower's distance
Distance measure in statistics In statistics, Gower's distance between two mixed-type objects is a measure of dissimilarity, derived from a similarity coefficient, that can handle different types of data within the same dataset and is particularly useful in cluster analysis or other multivariate statistical techniques. Data can be binary, ordinal, or continuous variables. It works by normalizing, for each variable, the difference between the two objects and then computing a weighted average of these normalized differences. The distance was defined in 1971 by Gower and takes values between 0 and 1, with smaller values indicating higher similarity. Definition. For two objects formula_0 and formula_1 having formula_2 descriptors, the similarity formula_3 is defined as: formula_4 where the formula_5 are non-negative weights usually set to formula_6 and formula_7 is the similarity between the two objects regarding their formula_8-th variable. If the variable is binary or ordinal, the values of formula_7 are 0 or 1, with 1 denoting equality. If the variable is continuous, formula_9 with formula_10 being the range of the formula_8-th variable and thus ensuring formula_11. As a result, the overall similarity formula_12 between two objects is the weighted average of the similarities calculated for all their descriptors, and Gower's distance is obtained as one minus this overall similarity. In its original exposition, the distance does not treat ordinal variables in a special manner. In the 1990s, first Kaufman and Rousseeuw and later Podani suggested extensions where the ordering of an ordinal feature is used. For example, Podani obtains relative rank differences as formula_13 with formula_14 being the ranks corresponding to the ordered categories of the formula_8-th variable. Software implementations. Many programming languages and statistical packages, such as R and Python, include implementations of Gower's distance. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
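A minimal Python sketch of the definition above follows; it is illustrative only, with made-up variable names and data. It treats categorical variables (and, as in the original formulation, ordinal ones) by simple equality, scales continuous variables by their range, and obtains the distance as one minus the similarity.

```python
def gower_similarity(x, y, kinds, ranges=None, weights=None):
    """Gower similarity between two records of mixed type.

    kinds[k] is 'cat' for binary/categorical (and ordinal, as in the original
    formulation) or 'num' for continuous; ranges[k] is the range R_k of the
    k-th continuous variable over the whole dataset.
    """
    p = len(x)
    weights = weights or [1.0] * p
    num, den = 0.0, 0.0
    for k in range(p):
        if kinds[k] == 'cat':
            s = 1.0 if x[k] == y[k] else 0.0
        else:  # continuous: s = 1 - |x_k - y_k| / R_k
            s = 1.0 - abs(x[k] - y[k]) / ranges[k]
        num += weights[k] * s
        den += weights[k]
    return num / den

# Two records: (age, smoker, blood group) -- hypothetical data.
a = (35, True, 'A')
b = (52, False, 'A')
kinds = ('num', 'cat', 'cat')
ranges = (60.0, None, None)   # range of the age variable in the full dataset
S = gower_similarity(a, b, kinds, ranges)
print('similarity:', S, ' distance:', 1 - S)
```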
[ { "math_id": 0, "text": "i" }, { "math_id": 1, "text": "j" }, { "math_id": 2, "text": "p" }, { "math_id": 3, "text": "S" }, { "math_id": 4, "text": "S_{ij} = \\frac{\\sum_{k=1}^pw_{ijk}s_{ijk}}{\\sum_{k=1}^pw_{ijk}}," }, { "math_id": 5, "text": "w_{ijk}" }, { "math_id": 6, "text": "1" }, { "math_id": 7, "text": "s_{ijk}" }, { "math_id": 8, "text": "k" }, { "math_id": 9, "text": "s_{ijk} = 1- \\frac{|x_i-x_j|}{R_k}" }, { "math_id": 10, "text": "R_k" }, { "math_id": 11, "text": "0\\leq s_{ijk}\\leq 1" }, { "math_id": 12, "text": "S_{ij}" }, { "math_id": 13, "text": "s_{ijk} = 1- \\frac{|r_i-r_j|}{\\max{\\{r\\}}- \\min{\\{r\\}}}" }, { "math_id": 14, "text": "r" } ]
https://en.wikipedia.org/wiki?curid=77069742
77073430
Helffer–Sjöstrand formula
Formula in functional analysis for computing functions of self-adjoint operators In mathematics, more specifically in functional analysis, the Helffer–Sjöstrand formula is a formula for computing a function of a self-adjoint operator. Background. If formula_0, then we can find a function formula_1 such that formula_2, and for each formula_3, there exists a formula_4 such that formula_5 Such a function formula_6 is called an almost analytic extension of formula_7. The Formula. If formula_8 and formula_9 is a self-adjoint operator on a Hilbert space, then formula_10 where formula_11 is an almost analytic extension of formula_12, and formula_13. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
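A common way to construct an almost analytic extension is a Taylor-type formula in the imaginary direction, multiplied by a cutoff in Im z. The following SymPy sketch (which omits the cutoff and uses an arbitrary smooth, not compactly supported, test function) checks symbolically that this candidate satisfies the defining estimate, since only the top-order term survives in the d-bar derivative:

```python
# Symbolic check (SymPy) that the Taylor-type candidate
#   f~(x + iy) = sum_{k=0}^{N} f^(k)(x) (iy)^k / k!
# satisfies |d-bar f~| <= C_N |Im z|^N, where d-bar = (1/2)(d/dx + i d/dy).
# The cutoff in y required for compact support is omitted, and the test
# function below is an arbitrary illustrative choice.

import sympy as sp

x, y = sp.symbols("x y", real=True)
f = sp.exp(-x**2)      # stand-in for a smooth function of a real variable
N = 3

f_tilde = sum(sp.diff(f, x, k) * (sp.I * y)**k / sp.factorial(k) for k in range(N + 1))
dbar = sp.simplify(sp.Rational(1, 2) * (sp.diff(f_tilde, x) + sp.I * sp.diff(f_tilde, y)))

# Everything cancels except the top-order term, which is O(|y|^N):
remainder = sp.diff(f, x, N + 1) * (sp.I * y)**N / (2 * sp.factorial(N))
print(sp.simplify(dbar - remainder))   # 0
```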
[ { "math_id": 0, "text": " f \\in C_0^\\infty (\\mathbb{R}) " }, { "math_id": 1, "text": " \\tilde f \\in C_0^\\infty (\\mathbb{C}) " }, { "math_id": 2, "text": " \\tilde{f}|_{\\mathbb{R}} = f " }, { "math_id": 3, "text": " N \\ge 0" }, { "math_id": 4, "text": " C_N > 0" }, { "math_id": 5, "text": " |\\bar{\\partial} \\tilde{f}| \\leq C_N |\\operatorname{Im} z|^N. " }, { "math_id": 6, "text": "\\tilde{f} " }, { "math_id": 7, "text": " f" }, { "math_id": 8, "text": "f \\in C_0^\\infty(\\mathbb{R})" }, { "math_id": 9, "text": "A" }, { "math_id": 10, "text": " f(A) = \\frac{1}{\\pi} \\int_{\\mathbb{C}} \\bar{\\partial} \\tilde{f}(z) (z - A)^{-1} \\, dx \\, dy " }, { "math_id": 11, "text": " \\tilde{f} " }, { "math_id": 12, "text": " f " }, { "math_id": 13, "text": " \\bar{\\partial}_z := \\frac{1}{2}(\\partial_{Re(z)} + i\\partial_{Im(z)}) " } ]
https://en.wikipedia.org/wiki?curid=77073430
77073703
Random subcube model
Model in statistical mechanics In statistical mechanics, the random-subcube model (RSM) is an exactly solvable model that reproduces key properties of hard constraint satisfaction problems (CSPs) and optimization problems, such as geometrical organization of solutions, the effects of frozen variables, and the limitations of various algorithms like decimation schemes. The RSM consists of a set of "N" binary variables, where solutions are defined as points in a hypercube. The model introduces clusters, which are random subcubes of the hypercube, representing groups of solutions sharing specific characteristics. As the density of constraints increases, the solution space undergoes a series of phase transitions similar to those observed in CSPs like random k-satisfiability (k-SAT) and random k-coloring (k-COL). These transitions include clustering, condensation, and ultimately the unsatisfiable phase where no solutions exist. The RSM is equivalent to these real CSPs in the limit of large constraint size. Notably, it reproduces the cluster size distribution and freezing properties of k-SAT and k-COL in the large-k limit. This is similar to how the random energy model is the large-p limit of the p-spin glass model. Setup. Subcubes. There are formula_0 particles. Each particle can be in one of two states formula_1. The state space formula_2 has formula_3 states. Not all are available. Only those satisfying the constraints are allowed. Each constraint is a subset formula_4 of the state space. Each formula_4 is a "subcube", structured like formula_5 where each formula_6 can be one of formula_7. The set of available states is the union of these subsets: formula_8 Random subcube model. Each random subcube model is defined by two parameters formula_9. To generate a random subcube formula_4, sample its components formula_6 IID according to formula_10 Now sample formula_11 random subcubes and take their union. Entropies. The entropy density of the formula_12-th cluster in bits is formula_13 The entropy density of the system in bits is formula_14 Phase structure. Cluster sizes and numbers. Let formula_15 be the number of clusters with entropy density formula_16; it is binomially distributed, thus formula_17 where formula_18 By the Chebyshev inequality, if formula_19, then formula_15 concentrates to its mean value. Otherwise, since formula_20, formula_15 also concentrates to formula_21 by the Markov inequality. Thus, formula_22 almost surely as formula_23. When formula_24 exactly, the two forces balance each other out, and formula_15 does not collapse, but instead converges in distribution to the Poisson distribution formula_25 by the law of small numbers. Liquid phase. For each state, the number of clusters it is in is also binomially distributed, with expectation formula_26 So if formula_27, then it concentrates to formula_28, and so each state is in an exponential number of clusters. Indeed, in that case, the probability that "all" states are allowed is formula_29 Thus almost surely, all states are allowed, and the entropy density is 1 bit per particle. Clustered phase. If formula_30, then it concentrates to zero exponentially, and so most states are not in any cluster. Those states that are in a cluster are exponentially unlikely to be in more than one. Thus, we find that almost all states are in zero clusters, and of those in at least one cluster, almost all are in just one cluster. The state space is thus roughly speaking the disjoint union of the clusters.
Almost surely, there are formula_31 clusters of size formula_32; therefore, the state space is dominated by clusters with optimal entropy density formula_33. Thus, in the clustered phase, the state space is almost entirely partitioned among formula_34 clusters of size formula_35 each. Roughly, the state space looks like exponentially many equally-sized clusters. Condensation phase. Another phase transition occurs when formula_36, that is, formula_37 When formula_38, the optimal entropy density becomes unreachable, as there are almost surely no clusters with entropy density formula_39. Instead, the state space is dominated by clusters with entropy close to formula_40, the larger solution to formula_41. Near formula_40, the contribution of clusters with entropy density formula_42 to the total state space is formula_43 At large formula_0, the possible entropy densities are formula_44. The contribution of each is formula_45 These contributions decay geometrically, so for any formula_46, in the formula_47 limit, over formula_48 of the total state space is covered by only a finite number of clusters. The state space looks partitioned into clusters with exponentially decaying sizes. This is the condensation phase. Unsatisfiable phase. When formula_49, the number of clusters is zero, so there are no states. Extensions. The RSM can be extended to include energy landscapes, allowing for the study of glassy behavior, temperature chaos, and the dynamic transition. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
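A small Monte Carlo sketch of the setup (illustrative parameter values, not the asymptotic computation above): for modest N one can sample the 2^((1−α)N) subcubes directly and measure what fraction of the 2^N states their union covers, which is close to 1 below α_d = log₂(2−p) and far below 1 above it:

```python
# Monte Carlo sketch of the random-subcube model for a small N (an
# illustration with arbitrarily chosen parameters, not the asymptotic
# argument above): sample 2^((1-alpha)N) random subcubes and measure the
# fraction of the 2^N states covered by their union.

import itertools
import math
import random

def covered_fraction(N, alpha, p, seed=0):
    rng = random.Random(seed)
    n_subcubes = round(2 ** ((1 - alpha) * N))
    def sample_subcube():
        # each component is {-1} or {+1} with probability p/2 each, else {-1,+1}
        comps = []
        for _ in range(N):
            u = rng.random()
            if u < p / 2:
                comps.append((-1,))
            elif u < p:
                comps.append((+1,))
            else:
                comps.append((-1, +1))
        return comps
    cubes = [sample_subcube() for _ in range(n_subcubes)]
    covered = sum(
        1
        for state in itertools.product((-1, +1), repeat=N)
        if any(all(s in cube[j] for j, s in enumerate(state)) for cube in cubes)
    )
    return covered / 2 ** N

N, p = 14, 0.5
print(math.log2(2 - p))                      # alpha_d, about 0.585 for p = 0.5
print(covered_fraction(N, alpha=0.4, p=p))   # liquid phase: close to 1
print(covered_fraction(N, alpha=0.9, p=p))   # clustered phase: far below 1
```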
[ { "math_id": 0, "text": "N" }, { "math_id": 1, "text": "-1, +1" }, { "math_id": 2, "text": "\\{-1, +1\\}^N" }, { "math_id": 3, "text": "2^N" }, { "math_id": 4, "text": "A_i" }, { "math_id": 5, "text": "A_i = \\prod_{j \\in 1:N} A_{ij}" }, { "math_id": 6, "text": "A_{ij}" }, { "math_id": 7, "text": "\\{-1\\}, \\{+1\\}, \\{-1, +1\\}" }, { "math_id": 8, "text": "S = \\cup_i A_i " }, { "math_id": 9, "text": "\\alpha, p \\in (0, 1)" }, { "math_id": 10, "text": "\n\\begin{aligned}\nPr(A_{ij} &= \\{-1\\}) &= p/2 \\\\\nPr(A_{ij} &= \\{+1\\}) &= p/2 \\\\\nPr(A_{ij} &= \\{-1, +1\\}) &= 1-p\n\\end{aligned}\n" }, { "math_id": 11, "text": "2^{(1-\\alpha)N}" }, { "math_id": 12, "text": "r" }, { "math_id": 13, "text": "s_r := \\frac 1N \\log_2 |A_r|" }, { "math_id": 14, "text": "s := \\frac 1N \\log_2 |\\cup_r A_r|" }, { "math_id": 15, "text": "n(s)" }, { "math_id": 16, "text": "s" }, { "math_id": 17, "text": "\n\\begin{aligned}\nE[n(s)] &= 2^{(1-\\alpha)N} P \\to 2^{N\\Sigma(s) + o(N)} \\\\\nVar[n(s)] &= 2^{(1-\\alpha)N} P(1-P) \\\\\n\\frac{Var[n(s)]}{E[n(s)]^2} &\\to 2^{-N\\Sigma(s)}\n\\end{aligned}\n" }, { "math_id": 18, "text": "\n\\begin{aligned}\nP &:= \\binom{N}{sN}p^{(1-s)N}(1-p)^{sN}, \\\\\n\\Sigma(s) &:= 1-\\alpha - D_{KL}(s \\| 1-p) \\\\\nD_{KL}(s \\| 1-p) &:= s\\log_2\\frac{s}{1-p} + (1-s) \\log_2\\frac{1-s}{p}\n\\end{aligned}\n" }, { "math_id": 19, "text": "\\Sigma > 0" }, { "math_id": 20, "text": "E[n(s)] \\to 0" }, { "math_id": 21, "text": "0" }, { "math_id": 22, "text": "n(s) \\to \\begin{cases}\n2^{N\\Sigma(s) + o(N)} \\quad &\\text{if }\\Sigma(s) > 0\\\\\n0 \\quad &\\text{if }\\Sigma(s) < 0\n\\end{cases}" }, { "math_id": 23, "text": "N\\to\\infty" }, { "math_id": 24, "text": "\\Sigma = 0" }, { "math_id": 25, "text": "Poisson(1)" }, { "math_id": 26, "text": "2^{(1-\\alpha)N}(1-p/2)^N = 2^{N(\\log_2(2-p) - \\alpha)}" }, { "math_id": 27, "text": "\\alpha < \\log_2(2-p)" }, { "math_id": 28, "text": "2^{N(\\log_2(2-p) - \\alpha)}" }, { "math_id": 29, "text": "[1-[1-(1 - p/2)^N]^{2^{(1-\\alpha) N}}]^{2^N}\\sim e^{-e^{-2^{N(\\log_2(2-p) - \\alpha)} + N\\ln 2}} \\to 1" }, { "math_id": 30, "text": "\\alpha > \\alpha_d := \\log_2(2-p)" }, { "math_id": 31, "text": "n(s) = 2^{N\\Sigma(s)}" }, { "math_id": 32, "text": "2^{Ns}" }, { "math_id": 33, "text": "s^* = \\arg \\max_s (\\Sigma (s) + s)" }, { "math_id": 34, "text": "2^{N\\Sigma(s^*)}" }, { "math_id": 35, "text": "2^{Ns^*}" }, { "math_id": 36, "text": "\\Sigma(s^*) = 0" }, { "math_id": 37, "text": "\\alpha = \\alpha_c := \\frac{p}{(2-p)}+\\log _2(2-p)" }, { "math_id": 38, "text": "\\alpha > \\alpha_c" }, { "math_id": 39, "text": "s^*" }, { "math_id": 40, "text": "s_c" }, { "math_id": 41, "text": "\\Sigma(s_c) = 0" }, { "math_id": 42, "text": "s = s_c - \\delta" }, { "math_id": 43, "text": "\\underbrace{2^{Ns}}_{\\text{size of clusters}} \\times \\underbrace{2^{N\\Sigma(s)}}_{\\text{number of clusters}} = 2^{N(s + \\Sigma(s))} = 2^{N(s_c - \\delta - \\Sigma'(s_c)\\delta)}" }, { "math_id": 44, "text": "s_c, s_c - 1/N, s_c - 2/N, \\dots " }, { "math_id": 45, "text": "2^{Ns_c}, 2^{Ns_c}2^{-(1+\\Sigma'(s_c))}, 2^{Ns_c}2^{-2(1+\\Sigma'(s_c))}, \\dots" }, { "math_id": 46, "text": "\\epsilon > 0" }, { "math_id": 47, "text": "N \\to \\infty" }, { "math_id": 48, "text": "1-\\epsilon" }, { "math_id": 49, "text": "\\alpha > 1" } ]
https://en.wikipedia.org/wiki?curid=77073703
770768
Liquid fuel
Liquids that can be used to create energy Liquid fuels are combustible or energy-generating molecules that can be harnessed to create mechanical energy, usually producing kinetic energy; they also must take the shape of their container. It is the fumes of liquid fuels that are flammable instead of the fluid. Most liquid fuels in widespread use are derived from fossil fuels; however, there are several types, such as hydrogen fuel (for automotive uses), ethanol, and biodiesel, which are also categorized as a liquid fuel. Many liquid fuels play a primary role in transportation and the economy. Liquid fuels are contrasted with solid fuels and gaseous fuels. General properties. Some common properties of liquid fuels are that they are easy to transport, and can be handled with relative ease. Physical properties of liquid fuels vary by temperature, though not as greatly as for gaseous fuels. Some of these properties are: flash point, the lowest temperature at which a flammable concentration of vapor is produced; fire point, the temperature at which sustained burning of vapor will occur; cloud point for diesel fuels, the temperature at which dissolved waxy compounds begin to coalesce, and pour point, the temperature below which the fuel is too thick to pour freely. These properties affect the safety and handling of the fuel. Petroleum fuels. Most liquid fuels used currently are produced from petroleum. The most notable of these is gasoline. Scientists generally accept that petroleum formed from the fossilized remains of dead plants and animals by exposure to heat and pressure in the Earth's crust. Gasoline. Gasoline is the most widely used liquid fuel. Gasoline, as it is known in United States and Canada, or petrol virtually everywhere else, is made of hydrocarbon molecules (compounds that contain hydrogen and carbon only) forming aliphatic compounds, or chains of carbons with hydrogen atoms attached. However, many aromatic compounds (carbon chains forming rings) such as benzene are found naturally in gasoline and cause the health risks associated with prolonged exposure to the fuel. Production of gasoline is achieved by distillation of crude oil. The desirable liquid is separated from the crude oil in refineries. Crude oil is extracted from the ground in several processes, the most commonly seen may be beam pumps. To create gasoline, petroleum must first be removed from crude oil. Liquid gasoline itself is not actually burned, but its fumes ignite, causing the remaining liquid to evaporate and then burn. Gasoline is extremely volatile and easily combusts, making any leakage potentially extremely dangerous. Gasoline sold in most countries carries a published octane rating. The octane number is an empirical measure of the resistance of gasoline to combusting prematurely, known as knocking. The higher the octane rating, the more resistant the fuel is to autoignition under high pressures, which allows for a higher compression ratio. Engines with a higher compression ratio, commonly used in race cars and high-performance regular-production automobiles, can produce more power; however, such engines require a higher octane fuel. Increasing the octane rating has, in the past, been achieved by adding 'anti-knock' additives such as lead-tetra-ethyl. Because of the environmental impact of lead additives, the octane rating is increased today by refining out the impurities that cause knocking. Diesel. 
Conventional diesel is similar to gasoline in that it is a mixture of aliphatic hydrocarbons extracted from petroleum. Diesel may cost more or less than gasoline, but generally costs less to produce because the extraction processes used are simpler. Some countries (particularly Canada, India and Italy) also have lower tax rates on diesel fuels. After distillation, the diesel fraction is normally processed to reduce the amount of sulfur in the fuel. Sulfur causes corrosion in vehicles, acid rain and higher emissions of soot from the tail pipe (exhaust pipe). Historically, in Europe lower sulfur levels than in the United States were legally required. However, recent US legislation reduced the maximum sulfur content of diesel from 3,000 ppm to 500 ppm in 2007, and 15 ppm by 2010. Similar changes are also underway in Canada, Australia, New Zealand and several Asian countries. See also Ultra-low-sulfur diesel. A diesel engine is a type of internal combustion engine which ignites fuel by injecting it into a combustion chamber previously compressed with air (which in turn raises the temperature) as opposed to using an outside ignition source, such as a spark plug. Kerosene. Kerosene is used in kerosene lamps and as a fuel for cooking, heating, and small engines. It displaced whale oil for lighting use. Jet fuel for jet engines is made in several grades (Avtur, Jet A, Jet A-1, Jet B, JP-4, JP-5, JP-7 or JP-8) that are kerosene-type mixtures. One form of the fuel known as RP-1 is burned with liquid oxygen as rocket fuel. These fuel grade kerosenes meet specifications for smoke points and freeze points. In the mid-20th century, kerosene or "TVO" (Tractor Vaporising Oil) was used as a cheap fuel for tractors. The engine would start on gasoline, then switch over to kerosene once the engine warmed up. A "heat valve" on the manifold would route the exhaust gases around the intake pipe, heating the kerosene to the point where it can be ignited by an electric spark. Kerosene is sometimes used as an additive in diesel fuel to prevent gelling or waxing in cold temperatures. However, this is not advisable in some recent vehicle diesel engines, as doing so may interfere with the engine's emissions regulation equipment. Liquefied petroleum gas (LPG). LP gas is a mixture of propane and butane, both of which are easily compressible gases under standard atmospheric conditions. It offers many of the advantages of compressed natural gas (CNG), but does not burn as cleanly, is denser than air and is much more easily compressed. Commonly used for cooking and space heating, LP gas and compressed propane are seeing increased use in motorized vehicles; propane is the third most commonly used motor fuel globally. Carbon dioxide formation from petroleum fuels.. Petroleum fuels, when burnt, release carbon dioxide that is necessary for plant growth, but which (given the large scale of global emissions) is potentially harmful to world climate. The amount of carbon dioxide released when one liter of fuel is combusted can be estimated: As a good approximation the chemical formula of e.g. diesel is CnH2n. Note that diesel is a mixture of different molecules. As carbon has a molar mass of 12 g/mol and hydrogen (atomic!) has a molar mass of about 1 g/mol, so the fraction by weight of carbon in diesel is roughly 12/14. The reaction of diesel combustion is given by: 2CnH2n + 3nO2 ⇌ 2nCO2 + 2nH2O Carbon dioxide has a molar mass of 44g/mol as it consists of 2 atoms of oxygen (16 g/mol) and 1 atom of carbon (12 g/mol). 
So 12 g of carbon yield 44 g of Carbon dioxide. Diesel has a density of 0.838 kg per liter. Putting everything together the mass of carbon dioxide that is produced by burning 1 liter of diesel can be calculated as: formula_0 The number of 2.63 kg of carbon dioxide from 1 liter of Diesel is close to the values found in the literature. For gasoline, with a density of 0.75 kg/L and a ratio of carbon to hydrogen atoms of about 6 to 14, the estimated value of carbon emission if 1 liter of gasoline is burnt gives: formula_1 Non-petroleum fossil fuels. When petroleum is not easily available, chemical processes such as the Fischer–Tropsch process can be used to produce liquid fuels from coal or natural gas. Synthetic fuels from coal were strategically important during World War II for the German military. Today synthetic fuels produced from natural gas are manufactured, to take advantage of the higher value of liquid fuels in transportation. Natural gas. Natural gas, composed chiefly of methane, can be compressed to a liquid and used as a substitute for other traditional liquid fuels. Its combustion is very clean compared to other hydrocarbon fuels, but the fuel's low boiling point requires the fuel to be kept at high pressures to keep it in the liquid state. Though it has a much lower flash point than fuels such as gasoline, it is in many ways safer due to its higher autoignition temperature and its low density, which causes it to dissipate when released in air. Biodiesel. Biodiesel is similar to diesel but has differences akin to those between petrol and ethanol. For instance, biodiesel has a higher cetane rating (45-60 compared to 45-50 for crude-oil-derived diesel) and it acts as a cleaning agent to get rid of dirt and deposits. It has been argued that it only becomes economically feasible above oil prices of $80 (£40 or €60 as of late February, 2007) per barrel. This does, however, depend on locality, economic situation, government stance on biodiesel and a host of other factors- and it has been proven to be viable at much lower costs in some countries. Also, it yields about 10% less energy than ordinary diesel. Analogous to the use of higher compression ratios used for engines burning higher octane alcohols and petrol in spark-ignition engines, taking advantage of biodiesel's high cetane rating can potentially overcome the energy deficit compared to ordinary Number 2 diesel. Alcohols. Generally, the term alcohol refers to ethanol, the first organic chemical produced by humans, but any alcohol can be burned as a fuel. Ethanol and methanol are the most common, being sufficiently inexpensive to be useful. Methanol. Methanol is the lightest and simplest alcohol, produced from the natural gas component methane. Its application is limited primarily due to its toxicity (similar to gasoline), but also due to its high corrosivity and miscibility with water. Small amounts are used in some types of gasoline to increase the octane rating. Methanol-based fuels are used in some race cars and model aeroplanes. Methanol is also called "methyl alcohol" or "wood alcohol", the latter because it was formerly produced from the distillation of wood. It is also known by the name "methyl hydrate". Ethanol. Ethanol, also known as grain alcohol or ethyl alcohol, is commonly found in alcoholic beverages. However, it may also be used as a fuel, most often in combination with gasoline. For the most part, it is used in a 9:1 ratio of gasoline to ethanol to reduce the negative environmental effects of gasoline. 
There is increasing interest in the use of a blend of 85% fuel ethanol with 15% gasoline. This fuel blend called E85 has a higher fuel octane than most premium types of gasoline. When used in a modern Flexible fuel vehicle, it delivers more performance than the gasoline it replaces at the expense of higher fuel consumption due to ethanol's lesser specific energy content. Ethanol for use in gasoline and industrial purposes may be considered a fossil fuel because it is often synthesized from the petroleum product ethylene, which is cheaper than production from fermentation of grains or sugarcane. Butanol. Butanol is an alcohol which can be used as a fuel in most gasoline internal combustion engines without engine modification. It is typically a product of the fermentation of biomass by the bacterium "Clostridium acetobutylicum" (also known as the Weizmann organism). This process was first delineated by Chaim Weizmann in 1916 for the production of acetone from starch for making cordite, a smokeless gunpowder. The advantages of butanol are its high octane rating (over 100) and high energy content, only about 10% lower than gasoline, and consequently about 50% more energy-dense than ethanol, 100% more so than methanol. Butanol's only major disadvantages are its high flashpoint (35 °C or 95 °F), toxicity (note that toxicity levels exist but are not precisely confirmed), and the fact that the fermentation process for renewable butanol emits a foul odour. The Weizmann organism can only tolerate butanol levels up to 2% or so, compared to 14% for ethanol and yeast. Making butanol from oil produces no such odour, but the limited supply and environmental impact of oil usage defeat the purpose of alternative fuels. The cost of butanol is about $1.25–$1.32 per kilogram ($0.57-$0.58 per pound or $4 approx. per US gallon). Butanol is much more expensive than ethanol (approximately $0.40 per litre or $1.50 per gallon) and methanol. On June 20, 2006, DuPont and BP announced that they were converting an existing ethanol plant to produce 9 million gallons (34 000 cubic meters) of butanol per year from sugar beets. DuPont stated a goal of being competitive with oil at $30–$40 per barrel ($0.19-$0.25 per liter) without subsidies, so the price gap with ethanol is narrowing. Hydrogen. Liquefied hydrogen is the liquid state of the element hydrogen. It is a common liquid rocket fuel for rocket applications and can be used as a fuel in an internal combustion engine or fuel cell. Various concept hydrogen vehicles have been built; because of hydrogen's lower volumetric energy density, the volumes needed for combustion are large. Hydrogen was liquefied for the first time by James Dewar in 1898. Ammonia. Ammonia (NH3) has been used as a fuel at times when gasoline was unavailable (e.g. for buses in Belgium during WWII). It has a volumetric energy density of 17 Megajoules per liter (compared to 10 for hydrogen, 18 for methanol, 21 for dimethyl ether and 34 for gasoline). It must be compressed or cooled to be a liquid fuel, although it does not require cryogenic cooling as hydrogen does to be liquefied. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
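The carbon-dioxide figures in the "Carbon dioxide formation from petroleum fuels" section above can be reproduced with a few lines of Python, under the same approximations used there (diesel treated as CnH2n, gasoline as roughly C6H14):

```python
# Reproducing the carbon-dioxide estimates from the section above, under the
# same approximations (diesel ~ CnH2n at 0.838 kg/L, petrol ~ C6H14 at 0.75 kg/L).

M_C, M_H, M_CO2 = 12.0, 1.0, 44.0        # molar masses in g/mol

diesel_density = 0.838                    # kg per litre
carbon_fraction_diesel = M_C / (M_C + 2 * M_H)             # 12/14 for CnH2n
print(diesel_density * carbon_fraction_diesel * M_CO2 / M_C)   # ~2.63 kg CO2 per litre

petrol_density = 0.75                     # kg per litre
carbon_fraction_petrol = (6 * M_C) / (6 * M_C + 14 * M_H)  # 72/86 for C6H14
print(petrol_density * carbon_fraction_petrol * M_CO2 / M_C)   # ~2.3 kg CO2 per litre
```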
[ { "math_id": 0, "text": "0.838 kg/L \\cdot {\\frac{12}{14}}\\cdot {\\frac{44}{12}}= 2.63 kg/L" }, { "math_id": 1, "text": " 0.75 kg/L \\cdot {\\frac{6 \\cdot 12}{6\\cdot 12 + 14}\\cdot 1} \\cdot {\\frac{44}{12}}= 2.3 kg/L " } ]
https://en.wikipedia.org/wiki?curid=770768
77099904
Brahmagupta triangle
Triangle whose side lengths are consecutive positive integers and area is a positive integer A Brahmagupta triangle is a triangle whose side lengths are consecutive positive integers and area is a positive integer. The triangle whose side lengths are 3, 4, 5 is a Brahmagupta triangle and so also is the triangle whose side lengths are 13, 14, 15. The Brahmagupta triangle is a special case of the Heronian triangle which is a triangle whose side lengths and area are all positive integers but the side lengths need not necessarily be consecutive integers. A Brahmagupta triangle is so named in honor of the Indian astronomer and mathematician Brahmagupta (c. 598 – c. 668 CE) who gave a list of the first eight such triangles without explaining the method by which he computed that list. A Brahmagupta triangle is also called a Fleenor-Heronian triangle in honor of Charles R. Fleenor who discussed the concept in a paper published in 1996. Some of the other names by which Brahmagupta triangles are known are super-Heronian triangle and almost-equilateral Heronian triangle. The problem of finding all Brahmagupta triangles is an old one. A closed-form solution of the problem was found by Reinhold Hoppe in 1880. Generating Brahmagupta triangles. Let the side lengths of a Brahmagupta triangle be formula_0, formula_1 and formula_2 where formula_3 is an integer greater than 1. Using Heron's formula, the area formula_4 of the triangle can be shown to be formula_5 Since formula_6 has to be an integer, formula_3 must be even and so it can be taken as formula_7 where formula_8 is an integer. Thus, formula_9 Since formula_10 has to be an integer, one must have formula_11 for some integer formula_12. Hence, formula_13 must satisfy the following Diophantine equation: formula_14. This is an example of the so-called Pell's equation formula_15 with formula_16. The methods for solving Pell's equation can be applied to find values of the integers formula_17 and formula_18. Obviously formula_20, formula_21 is a solution of the equation formula_14. Taking this as an initial solution formula_22 the set of all solutions formula_23 of the equation can be generated using the following recurrence relations formula_24 or by the following relations formula_25 They can also be generated using the following property: formula_26 The first eight values of formula_27 and formula_19 are (2, 1), (7, 4), (26, 15), (97, 56), (362, 209), (1351, 780), (5042, 2911) and (18817, 10864), and the corresponding Brahmagupta triangles have side lengths (3, 4, 5), (13, 14, 15), (51, 52, 53), (193, 194, 195), (723, 724, 725), (2701, 2702, 2703), (10083, 10084, 10085) and (37633, 37634, 37635). The sequence formula_29 and the sequence formula_30 are both catalogued in the Online Encyclopedia of Integer Sequences (OEIS). Generalized Brahmagupta triangles. In a Brahmagupta triangle the side lengths form an integer arithmetic progression with a common difference 1. A generalized Brahmagupta triangle is a Heronian triangle in which the side lengths form an arithmetic progression of positive integers. Generalized Brahmagupta triangles can be easily constructed from Brahmagupta triangles. If formula_31 are the side lengths of a Brahmagupta triangle then, for any positive integer formula_32, the integers formula_33 are the side lengths of a generalized Brahmagupta triangle which form an arithmetic progression with common difference formula_32. There are generalized Brahmagupta triangles which are not generated this way. A primitive generalized Brahmagupta triangle is a generalized Brahmagupta triangle in which the side lengths have no common factor other than 1.
To find the side lengths of such triangles, let the side lengths be formula_34 where formula_35 are integers satisfying formula_36. Using Heron's formula, the area formula_37 of the triangle can be shown to be formula_38. For formula_37 to be an integer, formula_3 must be even and one may take formula_7 for some integer. This makes formula_39. Since, again, formula_40 has to be an integer, formula_41 has to be in the form formula_42 for some integer formula_18. Thus, to find the side lengths of generalized Brahmagupta triangles, one has to find solutions to the following homogeneous quadratic Diophantine equation: formula_43. It can be shown that all primitive solutions of this equation are given by formula_44 where formula_45 and formula_28 are relatively prime positive integers and formula_46. If we take formula_47 we get the Brahmagupta triangle formula_48. If we take formula_49 we get the Brahmagupta triangle formula_50. But if we take formula_51 we get the generalized Brahmagupta triangle formula_52 which cannot be reduced to a Brahmagupta triangle. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
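A short Python sketch of the generation procedure described above: it iterates the Pell recurrence x_{n+1} = 2x_n + 3y_n, y_{n+1} = x_n + 2y_n from (2, 1) and checks, in exact integer arithmetic via Heron's formula, that each triangle (2x − 1, 2x, 2x + 1) has integer area 3xy:

```python
# Generating the first Brahmagupta triangles from the Pell-equation recurrence
# described above, and verifying in exact integer arithmetic (via Heron's
# formula) that each has an integer area equal to 3xy.

from math import isqrt

def sixteen_times_area_squared(a, b, c):
    # 16 A^2 = (a+b+c)(-a+b+c)(a-b+c)(a+b-c)
    s2 = a + b + c
    return s2 * (s2 - 2 * a) * (s2 - 2 * b) * (s2 - 2 * c)

x, y = 2, 1                      # fundamental solution of x^2 - 3y^2 = 1
for n in range(1, 9):
    a, b, c = 2 * x - 1, 2 * x, 2 * x + 1
    q = sixteen_times_area_squared(a, b, c)
    area = isqrt(q) // 4
    assert 16 * area * area == q          # the area is indeed an integer
    assert area == 3 * x * y              # closed form for the area
    print(f"({a}, {b}, {c}) with area {area}")
    x, y = 2 * x + 3 * y, x + 2 * y       # next Pell solution
```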
[ { "math_id": 0, "text": "t -1 " }, { "math_id": 1, "text": " t" }, { "math_id": 2, "text": " t+1 " }, { "math_id": 3, "text": "t " }, { "math_id": 4, "text": " A " }, { "math_id": 5, "text": "A=\\big(\\tfrac{t}{2}\\big)\\sqrt{3\\big[ \\big(\\tfrac{t}{2}\\big)^2 -1 \\big] } " }, { "math_id": 6, "text": " A" }, { "math_id": 7, "text": "t=2x " }, { "math_id": 8, "text": " x" }, { "math_id": 9, "text": "A = x\\sqrt{3(x^2-1) } " }, { "math_id": 10, "text": " \\sqrt{3(x^2-1) } " }, { "math_id": 11, "text": "x^2-1 =3y^2 " }, { "math_id": 12, "text": " y " }, { "math_id": 13, "text": " x " }, { "math_id": 14, "text": "x^2-3y^2=1 " }, { "math_id": 15, "text": "x^2-Ny^2=1 " }, { "math_id": 16, "text": "N=3" }, { "math_id": 17, "text": "x" }, { "math_id": 18, "text": "y" }, { "math_id": 19, "text": "y_n" }, { "math_id": 20, "text": " x=2 " }, { "math_id": 21, "text": " y=1 " }, { "math_id": 22, "text": "x_1=2, y_1=1" }, { "math_id": 23, "text": "\\{(x_n, y_n)\\}" }, { "math_id": 24, "text": "\nx_{n+1}=2x_n+3y_n, \\quad y_{n+1}= x_n+2y_n \\text{ for } n=1,2,\\ldots\n" }, { "math_id": 25, "text": "\n\\begin{align}\nx_{n+1} & = 4x_{n}-x_{n-1}\\text{ for }n=2,3,\\ldots \\text{ with } x_1=2, x_2=7\\\\\ny_{n+1} & = 4y_{n}-y_{n-1}\\text{ for }n=2,3,\\ldots \\text{ with } y_1=1, y_2=4.\n\\end{align}\n" }, { "math_id": 26, "text": "\nx_n+\\sqrt{3} y_n=(x_1+\\sqrt{3}y_1)^n\\text{ for } n=1,2, \\ldots\n" }, { "math_id": 27, "text": "x_n " }, { "math_id": 28, "text": "n" }, { "math_id": 29, "text": "\\{x_n\\}" }, { "math_id": 30, "text": "\\{y_n\\}" }, { "math_id": 31, "text": "t-1, t, t+1" }, { "math_id": 32, "text": "k" }, { "math_id": 33, "text": "k(t-1), kt, k(t+1)" }, { "math_id": 34, "text": "t-d, t, t+d" }, { "math_id": 35, "text": "b,d" }, { "math_id": 36, "text": "1\\le d\\le t" }, { "math_id": 37, "text": "A" }, { "math_id": 38, "text": " A = \\big(\\tfrac{b}{4}\\big)\\sqrt{3(t^2-4d^2)}" }, { "math_id": 39, "text": "A=x\\sqrt{3(x^2-d^2)}" }, { "math_id": 40, "text": "A " }, { "math_id": 41, "text": " x^2-d^2 " }, { "math_id": 42, "text": "3y^2" }, { "math_id": 43, "text": "x^2-3y^2=d^2" }, { "math_id": 44, "text": "\n\\begin{align}\nd & = \\vert m^2 - 3n^2\\vert /g\\\\\nx & = (m^2 + 3n^2)/g\\\\\ny & = 2mn/g\n\\end{align}\n" }, { "math_id": 45, "text": "m" }, { "math_id": 46, "text": "g = \\text{gcd}(m^2 - 3n^2, 2mn, m^2 + 3n^2) " }, { "math_id": 47, "text": " m=n=1" }, { "math_id": 48, "text": "(3,4,5)" }, { "math_id": 49, "text": " m=2, n=1" }, { "math_id": 50, "text": "(13,14,15)" }, { "math_id": 51, "text": " m=1, n=2" }, { "math_id": 52, "text": "(15, 26, 37)" } ]
https://en.wikipedia.org/wiki?curid=77099904
7710044
Cassini projection
Cylindrical equidistant map projection The Cassini projection (also sometimes known as the Cassini–Soldner projection or Soldner projection) is a map projection first described in an approximate form by César-François Cassini de Thury in 1745. Its precise formulas were found through later analysis by Johann Georg von Soldner around 1810. It is the transverse aspect of the equirectangular projection, in that the globe is first rotated so the central meridian becomes the "equator", and then the normal equirectangular projection is applied. Considering the earth as a sphere, the projection is composed of the operations: formula_0 where "λ" is the longitude from the central meridian and "φ" is the latitude. When programming these equations, the inverse tangent function used is actually the atan2 function, with the first argument sin "φ" and the second cos "φ" cos "λ". The reverse operation is composed of the operations: formula_1 In practice, the projection has always been applied to models of the earth as an ellipsoid, which greatly complicates the mathematical development but is suitable for surveying. Nevertheless, the use of the Cassini projection has largely been superseded by the transverse Mercator projection, at least with central mapping agencies. Distortions. Areas along the central meridian, and at right angles to it, are not distorted. Elsewhere, the distortion is largely in a north–south direction, and varies by the square of the distance from the central meridian. As such, the greater the longitudinal extent of the area, the worse the distortion becomes. Due to this, the Cassini projection works best for areas with greater north–south extent than east–west. For example, Ordnance Survey maps of Great Britain used the Cassini projection from 1924 until the introduction of the National Grid. Elliptical form. Cassini is known as a spherical projection, but can be generalised as an elliptical form. Considering the earth as an ellipse, the projection is composed of these operations: formula_2 formula_3 formula_4 formula_5 formula_6 formula_7 and "M" is the meridional distance function. The reverse operation is composed of the operations: formula_8 If formula_9 then formula_10 and formula_11 Otherwise calculate "T" and "N" as above with formula_12, and formula_13 formula_14 formula_15 formula_16 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
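The spherical forward and inverse formulas above translate directly into code. The following Python sketch (angles in radians on a unit sphere, with an arbitrarily chosen test point) uses atan2 as the text recommends and round-trips a coordinate pair:

```python
# Spherical Cassini projection, directly from the formulas above (unit sphere,
# angles in radians, lon measured from the central meridian); atan2 is used as
# the text recommends.  The test point is an arbitrary illustrative choice.

from math import asin, atan2, cos, degrees, radians, sin, tan

def cassini_forward(lat, lon):
    x = asin(cos(lat) * sin(lon))
    y = atan2(sin(lat), cos(lat) * cos(lon))
    return x, y

def cassini_inverse(x, y):
    lat = asin(sin(y) * cos(x))
    lon = atan2(tan(x), cos(y))
    return lat, lon

lat, lon = radians(40.0), radians(15.0)
x, y = cassini_forward(lat, lon)
lat2, lon2 = cassini_inverse(x, y)
print(degrees(lat2), degrees(lon2))   # recovers approximately (40.0, 15.0)
```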
[ { "math_id": 0, "text": "x = \\arcsin(\\cos \\varphi \\sin \\lambda) \\qquad y = \\arctan\\left(\\frac{\\tan\\varphi}{\\cos\\lambda}\\right)." }, { "math_id": 1, "text": "\\varphi = \\arcsin(\\sin y \\cos x) \\qquad \\lambda = \\operatorname{atan2}(\\tan x, \\cos y)." }, { "math_id": 2, "text": "N = (1 - e^2 \\sin^2 \\varphi)^{-1/2}" }, { "math_id": 3, "text": "T = \\tan^2 \\varphi" }, { "math_id": 4, "text": "A = \\lambda \\cos \\varphi" }, { "math_id": 5, "text": "C = \\frac{e^2}{1-e^2} \\cos^2 \\varphi" }, { "math_id": 6, "text": "x = N \\left( A - T \\frac{A^3}{6} - (8-T+8C)T\\frac{A^5}{120} \\right)" }, { "math_id": 7, "text": "y = M(\\varphi) - M(\\varphi_0) + (N \\tan \\varphi) \\left(\\frac{A^2}{2} + (5-T+6C)\\frac{A^4}{24} \\right)" }, { "math_id": 8, "text": "\\varphi' = M^{-1}(M(\\varphi_0)+y)" }, { "math_id": 9, "text": "\\varphi' = \\frac{\\pi}{2}" }, { "math_id": 10, "text": "\\varphi=\\varphi'" }, { "math_id": 11, "text": "\\lambda=0." }, { "math_id": 12, "text": "\\varphi'" }, { "math_id": 13, "text": "R = (1 - e^2)(1 - e^2 \\sin^2 \\varphi')^{-3/2}" }, { "math_id": 14, "text": "D = x/N" }, { "math_id": 15, "text": "\\varphi = \\varphi' - \\frac{N \\tan \\varphi'}{R}\\left(\\frac{D^2}{2}-(1+3T)\\frac{D^4}{24}\\right)" }, { "math_id": 16, "text": "\\lambda = \\frac{D - T\\frac{D^3}{3} + (1+3T)T\\frac{D^5}{15}}{\\cos \\varphi'}" } ]
https://en.wikipedia.org/wiki?curid=7710044
771168
Polynomial remainder theorem
On the remainder of division by x – r In algebra, the polynomial remainder theorem or little Bézout's theorem (named after Étienne Bézout) is an application of Euclidean division of polynomials. It states that, for every number formula_0 any polynomial formula_1 is the sum of formula_2 and the product by formula_3 of a polynomial in formula_4 of degree less than the degree of formula_5 In particular, formula_2 is the remainder of the Euclidean division of formula_1 by formula_6 and formula_3 is a divisor of formula_1 if and only if formula_7 a property known as the factor theorem. Examples. Example 1. Let formula_8. Polynomial division of formula_1 by formula_9 gives the quotient formula_10 and the remainder formula_11. Therefore, formula_12. Example 2. Proof that the polynomial remainder theorem holds for an arbitrary second degree polynomial formula_13 by using algebraic manipulation: formula_14 So, formula_15 which is exactly the formula of Euclidean division. The generalization of this proof to any degree is given below in the direct proof. Proofs. Using Euclidean division. The polynomial remainder theorem follows from the theorem of Euclidean division, which, given two polynomials "f"("x") (the dividend) and "g"("x") (the divisor), asserts the existence (and the uniqueness) of a quotient "Q"("x") and a remainder "R"("x") such that formula_16 If the divisor is formula_17 where r is a constant, then either "R"("x") = 0 or its degree is zero; in both cases, "R"("x") is a constant that is independent of "x"; that is formula_18 Setting formula_19 in this formula, we obtain: formula_20 Direct proof. A constructive proof—that does not involve the existence theorem of Euclidean division—uses the identity formula_21 If formula_22 denotes the large factor in the right-hand side of this identity, and formula_23 one has formula_24 (since formula_25). Adding formula_2 to both sides of this equation, one gets simultaneously the polynomial remainder theorem and the existence part of the theorem of Euclidean division for this specific case. Applications. The polynomial remainder theorem may be used to evaluate formula_2 by calculating the remainder, formula_26. Although polynomial long division is more difficult than evaluating the function itself, synthetic division is computationally easier. Thus, the function may be more "cheaply" evaluated using synthetic division and the polynomial remainder theorem. The factor theorem is another application of the remainder theorem: if the remainder is zero, then the linear divisor is a factor. Repeated application of the factor theorem may be used to factorize the polynomial. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
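A small Python sketch of the evaluation strategy described in the Applications section: synthetic division (Horner's scheme) by x − r returns the quotient together with the remainder, which by the theorem equals f(r). The data reproduce Example 1 above:

```python
# Synthetic division by (x - r), as mentioned in the Applications section:
# the remainder equals f(r) by the polynomial remainder theorem.
# Coefficients are listed from the highest power down.

def synthetic_division(coeffs, r):
    """Divide the polynomial with the given coefficients by (x - r)."""
    acc = [coeffs[0]]
    for c in coeffs[1:]:
        acc.append(acc[-1] * r + c)
    return acc[:-1], acc[-1]          # (quotient coefficients, remainder)

# Example 1 above: f(x) = x^3 - 12x^2 - 42 divided by (x - 3)
quotient, remainder = synthetic_division([1, -12, 0, -42], 3)
print(quotient)    # [1, -9, -27], i.e. x^2 - 9x - 27
print(remainder)   # -123, which is f(3)
```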
[ { "math_id": 0, "text": "r," }, { "math_id": 1, "text": "f(x)" }, { "math_id": 2, "text": "f(r)" }, { "math_id": 3, "text": "x-r" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "f." }, { "math_id": 6, "text": "x-r," }, { "math_id": 7, "text": "f(r)=0," }, { "math_id": 8, "text": "f(x) = x^3 - 12x^2 - 42" }, { "math_id": 9, "text": "(x-3)" }, { "math_id": 10, "text": "x^2 - 9x - 27" }, { "math_id": 11, "text": "-123" }, { "math_id": 12, "text": "f(3)=-123" }, { "math_id": 13, "text": "f(x) = ax^2 + bx + c" }, { "math_id": 14, "text": "\\begin{align}\nf(x)-f(r)\n &= ax^2+bx+c-(ar^2+br+c)\\\\\n &= a(x^2-r^2)+ b(x-r)\\\\\n &= a(x-r)(x+r)+b(x-r)\\\\\n &= (x-r)(ax +ar+ b)\n\\end{align}" }, { "math_id": 15, "text": "f(x) = (x - r)(ax + ar + b) + f(r), " }, { "math_id": 16, "text": "f(x)=Q(x)g(x) + R(x)\\quad \\text{and}\\quad R(x) = 0 \\ \\text{ or } \\deg(R)<\\deg(g)." }, { "math_id": 17, "text": "g(x) = x-r," }, { "math_id": 18, "text": "f(x)=Q(x)(x-r) + R." }, { "math_id": 19, "text": "x=r" }, { "math_id": 20, "text": "f(r)=R." }, { "math_id": 21, "text": "x^k-r^k=(x-r)(x^{k-1}+x^{k-2}r+\\dots+xr^{k-2}+r^{k-1})." }, { "math_id": 22, "text": "S_{k}" }, { "math_id": 23, "text": "f(x)=a_nx^n+a_{n-1}x^{n-1} + \\cdots + a_1x +a_0," }, { "math_id": 24, "text": "f(x)-f(r)=(x-r)(a_n S_n +\\cdots + a_2S_2 +a_1)," }, { "math_id": 25, "text": "S_1=1" }, { "math_id": 26, "text": "R" } ]
https://en.wikipedia.org/wiki?curid=771168
77123741
Wu manifold
In mathematics, the Wu manifold is a 5-manifold defined as a quotient space of Lie groups appearing in the mathematical area of Lie theory. Due to its special properties it is of interest in algebraic topology, cobordism theory and spin geometry. The manifold was first studied by Wu Wenjun, after whom it is named. Definition. The special orthogonal group formula_0 embeds canonically in the special unitary group formula_1. The orbit space: formula_2 is the Wu manifold.
[ { "math_id": 0, "text": "\\operatorname{SO}(n)" }, { "math_id": 1, "text": "\\operatorname{SU}(n)" }, { "math_id": 2, "text": "W:=\\operatorname{SU}(3)/\\operatorname{SO}(3)" }, { "math_id": 3, "text": "W" }, { "math_id": 4, "text": "H_0(W)\\cong\\mathbb{Z}" }, { "math_id": 5, "text": "H_2(W)\\cong\\mathbb{Z}_2" }, { "math_id": 6, "text": "H_5(W)\\cong\\mathbb{Z}" }, { "math_id": 7, "text": "H^0(W;\\mathbb{Z}_2)\n=\\mathbb{Z}_2" }, { "math_id": 8, "text": "H^1(W;\\mathbb{Z}_2)\n=1" }, { "math_id": 9, "text": "H^2(W;\\mathbb{Z}_2)\n=\\mathbb{Z}_2" }, { "math_id": 10, "text": "H^3(W;\\mathbb{Z}_2)\n=\\mathbb{Z}_2" }, { "math_id": 11, "text": "H^4(W;\\mathbb{Z}_2)\n=1" }, { "math_id": 12, "text": "H^5(W;\\mathbb{Z}_2)\n=\\mathbb{Z}_2" }, { "math_id": 13, "text": "\\Omega_5^{\\operatorname{SO}}\\cong\\mathbb{Z}_2" }, { "math_id": 14, "text": "\\operatorname{Spin}^h" }, { "math_id": 15, "text": "\\operatorname{Spin}^c" } ]
https://en.wikipedia.org/wiki?curid=77123741
77125910
Mučnik reducibility
Concept in computability theory In computability theory, a set "P" of functions formula_0 is said to be Mučnik-reducible to another set "Q" of functions formula_0 when for every function "g" in "Q", there exists a function "f" in "P" which is Turing-reducible to "g". Unlike most reducibility relations in computability, Mučnik reducibility is not defined between functions formula_0 but between sets of such functions. These sets are called "mass problems" and can be viewed as problems with more than one solution. Informally, "P" is Mučnik-reducible to "Q" when any solution of "Q" can be used to compute some solution of "P".
[ { "math_id": 0, "text": "\\mathbb{N} \\rarr \\mathbb{N}" } ]
https://en.wikipedia.org/wiki?curid=77125910
77125925
Medvedev reducibility
Concept in computability theory In computability theory, a set "P" of functions formula_0 is said to be Medvedev-reducible to another set "Q" of functions formula_0 when there exists an oracle Turing machine which computes some function of "P" whenever it is given some function from "Q" as an oracle. Medvedev reducibility is a uniform variant of Mučnik reducibility, requiring a single oracle machine that can compute some function of "P" given any oracle from "Q", instead of a family of oracle machines, one per oracle from "Q", which compute functions from "P".
[ { "math_id": 0, "text": "\\mathbb{N} \\rarr \\mathbb{N}" } ]
https://en.wikipedia.org/wiki?curid=77125925
77126934
Conjunction/disjunction duality
Properties linking logical conjunction and disjunction In propositional logic and Boolean algebra, there is a duality between conjunction and disjunction, also called the duality principle. It is, undoubtedly, the most widely known example of duality in logic. The duality consists in these metalogical theorems: for any formula formula_3 of propositional logic, the formula formula_13, obtained from formula_3 by interchanging conjunction and disjunction and replacing every atom formula_5 with its negation formula_6, is semantically equivalent to the negation formula_42; and an entailment formula_9 holds if, and only if, the entailment formula_10 between the duals holds, where the dual formula_0 of a formula is obtained by interchanging conjunction and disjunction alone (so that, for example, the dual of formula_1 is formula_2). This article will prove these results in the "Negation is semantically equivalent to dual" and "Further duality theorems" sections, respectively. Mutual definability. Because of their semantics, i.e. the way they are commonly interpreted in classical propositional logic, conjunction and disjunction can be defined in terms of each other with the aid of negation, so that consequently, only one of them needs to be taken as primitive. For example, if conjunction (∧) and negation (¬) are taken as primitives, then disjunction (∨) can be defined as follows: formula_14 (1) Alternatively, if disjunction is taken as primitive, then conjunction can be defined as follows: formula_15 (2) Also, each of these equivalences can be derived from the other one; for example, if (1) is taken as primitive, then (2) is obtained as follows: formula_16 (3) Functional completeness. Since the Disjunctive Normal Form Theorem shows that the set of connectives formula_17 is functionally complete, these results show that the sets of connectives formula_18 and formula_19 are themselves functionally complete as well. De Morgan's laws. De Morgan's laws also follow from the definitions of these connectives in terms of each other, whichever direction is taken to do it. If conjunction is taken as primitive, then (4) follows immediately from (1), while (5) follows from (1) via (3): formula_20 (4) formula_21 (5) Negation is semantically equivalent to dual. Theorem: Let formula_22 be any sentence in formula_23. (That is, the language with the propositional variables formula_24 and the connectives formula_17.) Let formula_25 be obtained from formula_22 by replacing every occurrence of formula_26 in formula_22 by formula_27, every occurrence of formula_27 by formula_26, and every occurrence of formula_28 by formula_29. Then formula_22 ⟚ formula_30. (formula_25 is called the dual of formula_22.) Proof: A sentence formula_22 of formula_31, where formula_31 is as in the theorem, will be said to have the property formula_32 if formula_22 ⟚ formula_30. We shall prove by induction on immediate predecessors that all sentences of formula_31 have formula_32. (An "immediate predecessor" of a well-formed formula is any of the formulas that are connected by its dominant connective; it follows that sentence letters have no immediate predecessors.) So we have to establish that the following two conditions are satisfied: (1) each formula_28 has formula_32; and (2) for any non-atomic formula_22, from the inductive hypothesis that the immediate predecessors of formula_22 have formula_32, it follows that formula_22 does also. Further duality theorems. Assume formula_33. Then formula_34 by uniform substitution of formula_35 for formula_36. Hence, formula_37, by contraposition; so finally, formula_38, by the property that formula_0 ⟚ formula_8, which was just proved above. And since formula_39, it is also true that formula_9 if, and only if, formula_38. And it follows, as a corollary, that if formula_40, then formula_41. Conjunctive and disjunctive normal forms. For a formula formula_3 in disjunctive normal form, the formula formula_13 will be in conjunctive normal form, and given the result that negation is semantically equivalent to the dual, it will be semantically equivalent to formula_42.
This provides a procedure for converting between conjunctive normal form and disjunctive normal form. Since the Disjunctive Normal Form Theorem shows that every formula of propositional logic is expressible in disjunctive normal form, every formula is also expressible in conjunctive normal form by means of effecting the conversion to its dual. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
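The first duality theorem can be checked mechanically by brute force over truth tables. The following Python sketch (with an arbitrarily chosen example formula) builds the variant of a formula in which conjunction and disjunction are swapped and every atom is negated, and confirms that it is equivalent to the formula's negation:

```python
# Brute-force truth-table check of the duality theorem above: for a formula X,
# swapping "and"/"or" and negating every atom yields a formula equivalent to
# "not X".  The example formula is an arbitrary illustrative choice.

from itertools import product

# formulas as nested tuples: ("atom", name), ("not", f), ("and", f, g), ("or", f, g)

def evaluate(f, valuation):
    op = f[0]
    if op == "atom":
        return valuation[f[1]]
    if op == "not":
        return not evaluate(f[1], valuation)
    if op == "and":
        return evaluate(f[1], valuation) and evaluate(f[2], valuation)
    return evaluate(f[1], valuation) or evaluate(f[2], valuation)   # "or"

def bar_dual(f):
    """Swap conjunction/disjunction and replace every atom by its negation."""
    op = f[0]
    if op == "atom":
        return ("not", f)
    if op == "not":
        return ("not", bar_dual(f[1]))
    if op == "and":
        return ("or", bar_dual(f[1]), bar_dual(f[2]))
    return ("and", bar_dual(f[1]), bar_dual(f[2]))                  # "or"

def equivalent(f, g, atoms):
    return all(
        evaluate(f, dict(zip(atoms, vals))) == evaluate(g, dict(zip(atoms, vals)))
        for vals in product((False, True), repeat=len(atoms))
    )

p, q, r = ("atom", "p"), ("atom", "q"), ("atom", "r")
X = ("or", ("and", p, ("not", q)), r)                         # (p and not q) or r
print(equivalent(("not", X), bar_dual(X), ["p", "q", "r"]))   # True
```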
[ { "math_id": 0, "text": "\\varphi^{D}" }, { "math_id": 1, "text": "p \\land q" }, { "math_id": 2, "text": "q \\lor p" }, { "math_id": 3, "text": "\\varphi" }, { "math_id": 4, "text": "\\overline{\\varphi}" }, { "math_id": 5, "text": "p" }, { "math_id": 6, "text": "\\neg p" }, { "math_id": 7, "text": "\\models" }, { "math_id": 8, "text": "\\neg \\overline{\\varphi}" }, { "math_id": 9, "text": "\\varphi \\models \\psi" }, { "math_id": 10, "text": "\\psi^{D} \\models \\varphi^{D}" }, { "math_id": 11, "text": "\\psi" }, { "math_id": 12, "text": "\\psi^{D}" }, { "math_id": 13, "text": "\\overline{\\varphi}^{D}" }, { "math_id": 14, "text": "\\varphi \\lor \\psi :\\equiv \\neg (\\neg \\varphi \\land \\neg \\psi)." }, { "math_id": 15, "text": "\\varphi \\land \\psi :\\equiv \\neg (\\neg \\varphi \\lor \\neg \\psi)." }, { "math_id": 16, "text": "\\neg (\\neg \\varphi \\lor \\neg \\psi) \\equiv \\neg \\neg (\\neg \\varphi \\land \\neg \\psi) \\equiv \\varphi \\land \\psi." }, { "math_id": 17, "text": "\\{\\land, \\lor, \\neg\\}" }, { "math_id": 18, "text": "\\{\\land, \\neg\\}" }, { "math_id": 19, "text": "\\{\\lor, \\neg\\}" }, { "math_id": 20, "text": "\\neg (\\varphi \\lor \\psi) \\equiv \\neg \\varphi \\land \\neg \\psi." }, { "math_id": 21, "text": "\\neg (\\varphi \\land \\psi) \\equiv \\neg \\varphi \\lor \\neg \\psi." }, { "math_id": 22, "text": "X" }, { "math_id": 23, "text": "\\mathcal{L}[A_1, \\ldots, A_n; \\land, \\lor, \\neg]" }, { "math_id": 24, "text": "A_1, \\ldots, A_n" }, { "math_id": 25, "text": "\\overline{X}^{D}" }, { "math_id": 26, "text": "\\land" }, { "math_id": 27, "text": "\\lor" }, { "math_id": 28, "text": "A_i" }, { "math_id": 29, "text": "\\neg A_i" }, { "math_id": 30, "text": "\\neg \\overline{X}^{D}" }, { "math_id": 31, "text": "\\mathcal{L}" }, { "math_id": 32, "text": "P" }, { "math_id": 33, "text": "\\phi \\models \\psi" }, { "math_id": 34, "text": "\\overline{\\phi} \\models \\overline{\\psi}" }, { "math_id": 35, "text": "\\neg P_i" }, { "math_id": 36, "text": "P_i" }, { "math_id": 37, "text": "\\neg \\psi \\models \\neg \\phi" }, { "math_id": 38, "text": "\\psi^D \\models \\phi^D" }, { "math_id": 39, "text": "\\varphi^{DD} = \\phi" }, { "math_id": 40, "text": "\\phi \\models \\neg \\psi" }, { "math_id": 41, "text": "\\phi^D \\models \\neg \\psi^D" }, { "math_id": 42, "text": "\\neg \\varphi" } ]
https://en.wikipedia.org/wiki?curid=77126934
7712754
Exclamation mark
Punctuation mark to show strong feelings (!) The exclamation mark (!) (also known as exclamation point in American English) is a punctuation mark usually used after an interjection or exclamation to indicate strong feelings or to show emphasis. The exclamation mark often marks the end of a sentence, for example: "Watch out!". Similarly, a bare exclamation mark (with nothing before or after) is often used in warning signs. The exclamation mark is often used in writing to make a character seem as though they are shouting, excited, or surprised. Other uses include: History. Graphically, the exclamation mark is represented by variations on the theme of a period with a vertical line above. One theory of its origin posits derivation from a Latin exclamation of joy, namely , analogous to "hooray"; copyists wrote the Latin word at the end of a sentence, to indicate expression of joy. Over time, the "i" moved above the "o"; that "o" first became smaller, and (with time) a dot. Its evolution as a punctuation symbol after the Ancient Era can be traced back to the Middle Ages, when scribes would often add various marks and symbols to manuscripts to indicate changes in tone, pauses, or emphasis. These symbols included the punctus admirativus, a symbol that was similar in shape to the modern exclamation mark and was used to indicate admiration, surprise, or other strong emotions. The modern use of the exclamation mark was supposedly first described in the 14th century by Italian scholar Alpoleio da Urbisaglia. According to 21st-century literary scholar Florence Hazrat, da Urbisaglia "felt very annoyed" that people were reading script with a flat tone, even if it was written to elicit emotions. The exclamation mark was introduced into English printing during this time to show emphasis. It was later called by many names, including "point of admiration" (1611), "note of exclamation or admiration" (1657), "sign of admiration or exclamation", "exclamation point" (1824), and finally, "exclamation mark" (1839). Many older or portable typewriters did not have the exclamation mark. Instead the user typed a period and then backspaced and overtyped an apostrophe. Slang and other names for the exclamation mark. Now obsolete, the name "ecphoneme" was documented in the early 20th century. In the 1950s, secretarial dictation and typesetting manuals in America referred to the mark as "bang", perhaps from comic books – where the ! appeared in dialogue bubbles to represent a gun being fired – although the nickname probably emerged from letterpress printing. This "bang" usage is behind the names of the interrobang, an unconventional typographic character, and a shebang, a feature of Unix computer systems. In the printing world, the exclamation mark can be called a screamer, a gasper, a slammer, a dog's cock, or a startler. In hacker culture, the exclamation mark is called "bang", "shriek", or, in the British slang known as Commonwealth Hackish, "". For example, the password communicated in the spoken phrase "Your password is em-zero-pee-aitch-bang-en-three" ("" in Commonwealth Hackish) is codice_0. Languages. The exclamation mark is mainly used in languages that use the Latin alphabet, although usage varies slightly. It has also been adopted in languages written in other scripts, such as languages written with Cyrillic or Arabic scripts, Chinese characters, and Devanagari. English. 
A sentence ending in an exclamation mark may represent an exclamation or an interjection (such as "Wow!", "Boo!"), or an imperative ("Stop!"), or may indicate astonishment or surprise: "They were the footprints of a gigantic hound!" Exclamation marks are occasionally placed mid-sentence with a function similar to a comma, for dramatic effect, although this usage is obsolete: "On the walk, oh! there was a frightful noise." Informally, exclamation marks may be repeated for additional emphasis ("That's great!!!"), but this practice is generally considered unacceptable in formal prose. The exclamation mark is sometimes used in conjunction with the question mark. This can be in protest or astonishment ("Out of all places, the squatter-camp?!"); a few writers replace this with a single, nonstandard punctuation mark, the interrobang, which is the combination of a question mark and an exclamation mark. Overly frequent use of the exclamation mark is generally considered poor writing, as it distracts the reader and decreases the mark's significance. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Cut out all these exclamation points... An exclamation point is like laughing at your own joke. Some authors, most notably Tom Wolfe, are known for unashamedly liberal use of the exclamation mark. In comic books, the very frequent use of exclamation mark is common—see Comics, below. For information on the use of spaces after an exclamation mark, see the discussion of spacing after a period. Several studies have shown that women use exclamation marks more than men do. One study suggests that, in addition to other uses, exclamation marks may also function as markers of friendly interaction, for example, by making "Hi!" or "Good luck!" seem friendlier than simply "Hi." or "Good luck." (with periods). However, use of exclamation marks in contexts that are not unambiguously positive can be misinterpreted as indicating hostility. In English writing and often subtitles, a (!) symbol (an exclamation mark within parentheses) implies that a character has made an obviously sarcastic comment e.g.: "Ooh, a sarcasm detector. That's a really useful invention(!)" It also is used to indicate surprise at one's own experience or statement. French. In French, as well as marking exclamations or indicating astonishment, the exclamation mark is also commonly used to mark orders or requests: (English: 'Come here!'). When available, a 'narrow no-break space' () is used between the last word and the exclamation mark in European French. If not, a regular non-breaking space () is currently used. In Canadian French, either no space is used or a small space () is inserted if available. One can also combine an exclamation mark with a question mark at the end of a sentence where appropriate. German. German uses the exclamation mark for several things that English conveys with other punctuation: Cantonese. Cantonese has not historically used dedicated punctuation marks, rather relying on grammatical markers to denote the end of a statement. Usage of exclamation marks is common in written Mandarin and in some Yue speaking regions. The Canton and Hong Kong regions, however, generally refused to accept the exclamation mark as it was seen as carrying with it unnecessary and confusing Western connotations; however, an exclamation mark, including in some written representations of colloquy in Cantonese, can be used informally to indicate strong feeling. Greek. 
In Modern Greek, the exclamation mark (, ) has been introduced from Latin scripts and is used identically, although without the reluctance seen in English usage. A minor grammatical difference is that, while a series of interjections each employ an exclamation mark (e.g., , , 'Oops! Oh!'), an interjection should only be separated from an extended exclamation by a comma (e.g., , , 'Oops! I left the stove on.'). Hungarian. In Hungarian, an exclamation mark is put at the end of exclamatory, imperative or prohibitive sentences, and sentences expressing a wish (e.g. – 'How beautiful!', – 'Keep off the grass', – 'If only my plan would work out.'). The use of the exclamation mark is also needed when addressing someone and the addressing is a separate sentence. (typically at the beginning of letters, e.g. – 'Dear Peter,'). Greetings are also typically terminated with an exclamation mark (e.g. – 'Good evening.'). Solomon Islands Pidgin. In Solomon Islands Pidgin, the phrase may be between admiration marks. Compare ("No.") and ("Certainly not!"). Spanish. In Spanish, a sentence or clause ending in an exclamation mark must also begin with an inverted exclamation mark (the same also applies to the question mark): , 'Are you crazy? You almost killed her!' As in British English, a bracketed exclamation mark may be used to indicate irony or surprise at a statement: , 'He said that he's not going to a party tonight(!).' Such use is not matched by an inverted opening exclamation mark. Turkish. In Turkish, an exclamation mark is used after a sentence or phrase for emphasis, and is common following both commands and the addressees of such commands. For example, in the ('Armies! Your first target is the Mediterranean') order by Atatürk, ('the armies') constitute the addressee. It is further used in parentheses, , after a sentence or phrase to indicate irony or sarcasm: , 'You've done a very good job – Not!'. Limbu. In Limbu, an exclamation mark is used after a Limbu sentence or phrase for emphasis, and is common following both commands and the addressees of such commands. For example, in the Limbu sentence "ᤐᤚᤢ᥄ ᤄᤨᤘᤑ ᤂᤥᤆᤌᤙ Mediterranean, ᤚᤦᤛᤅ᥄" — "Paṡu! Ghōwapha khōcathaśa Mediterranean, ṡausaṅa!" (Armies! Your first target is the "Mediterranean"!). It is further used in parentheses, (᥄), after a sentence or phrase to indicate irony or sarcasm: "ᤖᤥᤂᤌ ᤔᤚᤗ ᤐᤤ ᤊᤇ ᤃᤦᤄ (᥄)" — "Rōkhatha maṡala pai yancha gaugha (!)" (You did a very good job — Not!). Phonetics. In Khoisan languages, and the International Phonetic Alphabet, a symbol that looks like the exclamation mark is used as a letter to indicate the postalveolar click sound (represented as "q" in Zulu orthography). It is actually a vertical bar with underdot. In Unicode, this letter is properly coded as and distinguished from the common punctuation symbol to allow software to deal properly with word breaks. The exclamation mark has sometimes been used as a phonetic symbol to indicate that a consonant is ejective. More commonly this is represented by an apostrophe, or a superscript glottal stop symbol (). Proper names. Although not part of dictionary words, exclamation marks appear in some brand names and trade names, including Yum! Brands (parent of fast food chains like Taco Bell and KFC), Web services Yahoo! and Joomla!, and the online game Kahoot!. It appears in the titles of stage and screen works, especially comedies and musicals; examples include the game show "Jeopardy!"; the '60s musical TV show "Shindig!"; musicals "Oklahoma!", "Mamma Mia!", "Oliver!" 
and "Oh! Calcutta!"; and movies "Airplane!" and "Moulin Rouge!". Writer Elliot S! Maggin and cartoonist Scott Shaw! include exclamation marks in their names. In the 2016 United States presidential campaign, Republican candidate Jeb Bush used "Jeb!" as his campaign logo. Place names. The English town of Westward Ho!, named after the novel by Charles Kingsley, is the only place name in the United Kingdom that officially contains an exclamation mark. There is a town in Quebec called Saint-Louis-du-Ha! Ha!, which is spelled with two exclamation marks. The city of Hamilton, Ohio, changed its name to Hamilton! in 1986, but neither the United States Board on Geographic Names nor mapmakers Rand McNally recognised the change. The city of Ostrava, Czech Republic, changed its logotype to Ostrava!!! in 2008. Warnings. Exclamation marks are used to emphasize a precautionary statement. On warning signs, an exclamation mark is often used to draw attention to a warning of danger, hazards, and the unexpected. These signs are common in hazardous environments or on potentially dangerous equipment. A common type of this warning is a yellow triangle with a black exclamation mark, but a white triangle with a red border is common on European road warning signs. (In most cases, a pictogram indicating the nature of the hazard is enclosed in the triangle but an exclamation mark may be used instead as a generic symbol; a plate beneath identifies the hazard.) Use in various fields. Mathematics and formal logic. In elementary mathematics, the symbol represents the factorial operation. The expression n! means "the product of the integers from 1 to n". For example, 4! (read "four factorial") is 4 × 3 × 2 × 1 = 24. (0! is defined as 1, which is a neutral element in multiplication, not multiplied by anything.) Additionally, it can also represent the uniqueness quantifier or, if used in front of a number, it can represent a subfactorial. In linear logic, the exclamation mark denotes one of the modalities that control weakening and contraction. Computing. In computing, the exclamation mark is ASCII character 33 (21 in hexadecimal). Due to its availability on even early computers, the character was used for many purposes. The name given to "!" by programmers varies according to their background, though it was very common to give it a short name to make reading code aloud easier. "Bang" is very popular. In the UK the term pling was popular in the earlier days of computing, whilst in the United States, the term shriek was used. It is claimed that these word usages were invented in the US and "shriek" is from Stanford or MIT; however, "shriek" for the ! sign is found in the "Oxford English Dictionary" dating from the 1860s. Many computer languages using C-style syntax use "!" for logical negation; means "not A", and means "A is not equal to B". This negation principle has spread to ordinary language; for example, the word "!clue" is used as a synonym for "no-clue" or "clueless". The symbol in formal logic for negation is but, as this symbol is not present as standard on most keyboards, the C convention has spread informally to other contexts. Early e-mail systems also used the exclamation mark as a separator character between hostnames for routing information, usually referred to as "bang path" notation. In the IRC protocol, a user's nickname and ident are separated by an exclamation mark in the hostmask assigned to him or her by the server. In UNIX scripting (typically for UNIX shell or Perl), "!" 
is usually used after a "#" in the first line of a script, the interpreter directive, to tell the OS what program to use to run the script. is usually called a "hash-bang" or shebang. A similar convention for PostScript files calls for the first line to begin with , called "percent-bang". An exclamation mark starts history expansions in many Unix shells such as bash and tcsh where executes the previous command and refers to all of the arguments from the previous command. Acorn RISC OS uses filenames starting with pling to create an application directory: for instance a file called codice_1 is executed when the folder containing it is double-clicked (holding down shift prevents this). There is also codice_2 (executed the first time the application containing it comes into view of the filer), codice_3 (icons), codice_4, and others. In APL, is used for factorial of x (backwards from math notation), and also for the binomial coefficient: means formula_0 or . BBC BASIC used pling as an indirection operator, equivalent to PEEK and POKE of four bytes at once. BCPL, the precursor of C, used "!" for pointer and array indirection: is equivalent to in C, and is equivalent to in C. In the Haskell programming language, "!" is used to express strictness. In the Kotlin programming language, "!!" ("double-bang") is the not-null assertion operator, used to override null safety so as to allow a null pointer exception. In the ML programming language (including Standard ML and OCaml), "!" is the operator to get the value out of a "reference" data structure. In the Raku programming language, the "!" twigil is used to access private attributes or methods in a class (like or ). In the Scheme, Julia, and Ruby programming languages, "!" is conventionally the suffix for functions and special forms that mutate their input. In the Swift programming language, a type followed by "!" denotes an "implicitly unwrapped optional", an option type where the compiler does not enforce safe unwrapping. The "!" operator "force unwraps" an option type, causing an error if it is nil. In Geek Code version 3, "!" is used before a letter to denote that the geek refuses to participate in the topic at hand. In some cases, it has an alternate meaning, such as "G!" denoting a geek of no qualifications, "!d" denoting not wearing any clothes, "P!" denoting not being allowed to use Perl, and so on. They all share some negative connotations, however. is used to denote changed lines in output in the . In the , changes to a single line are denoted as an addition and deletion. Video games. The exclamation mark can be used in video games to signify that a character is startled or alarmed. In the "Metal Gear" and "Paper Mario" series, an exclamation mark appears over enemies' heads when they notice the player. In massively multiplayer online (MMO) games such as "World of Warcraft", an exclamation mark hovering over a character's head is often used to indicate that they are offering a quest for the player to complete. In "Dota 2", an exclamation mark is shown above the head of a unit if it is killed by means not granting enemies experience or gold (if it is "denied"). In the 2005 arcade dance simulation game "In the Groove 2", there is a song titled "!" (also referred to as "bang") by the artist Onyx. Internet culture. In Internet culture, especially where leet is used, multiple exclamation marks may be affixed with the numeral "1" as in "!!!!!!111". 
The notation originates from a common error: when typing multiple exclamation points quickly, the typist may fail to hold the combination that produces the exclamation mark on many keyboard layouts. This error, first used intentionally as a joke in the leet linguistic community, is now an accepted form of exclamation in leet and derivative dialects such as Lolspeak. Some utterances include further substitutions, for example "!!!111oneeleven". In fandom and fanfiction, ! is used to signify a defining quality in a character, usually an alternative interpretation of a character from a canonical work. Examples of this would be "Romantic!Draco" or "Vampire!Harry" from Harry Potter fandom. It is also used to clarify the current persona of a character with multiple identities or appearances, such as to distinguish "Armor!Al" from "Human!Al" in a work based on Fullmetal Alchemist. The origin of this usage is unknown. Comics. Some comic books, especially superhero comics of the mid-20th century, routinely use the exclamation point instead of the period, which means the character has just realized something; unlike when the question mark appears instead, which means the character is confused, surprised or does not know what is happening. This tends to lead to exaggerated speech, in line with the other hyperboles common in comic books. A portion of the motivation, however, was simply that a period might disappear in the printing process used at the time, whereas an exclamation point would likely remain recognizable even if there was a printing glitch. For a short period Stan Lee, as editor-in-chief of Marvel Comics, attempted to curb their overuse with a short-lived ban on exclamation points altogether, which led to an inadvertent lack of ending punctuation on many sentences. Comic book writer Elliot S! Maggin once accidentally signed his name with an exclamation due to the habit of using them when writing comic scripts; it became his professional name from then on. Similarly, comic artist Scott Shaw! has used the exclamation point after his name throughout his career. In comic books and comics in general, a large exclamation point is often used near or over a character's head to indicate surprise. A question mark can similarly be used to indicate confusion. Chess. In chess notation "!" denotes a good move, "!!" denotes an excellent move, "?!" denotes a dubious move, and "!?" denotes an interesting, risky move. In some chess variants such as large-board Shogi variants, "!" is used to record pieces capturing by stationary feeding or burning. "Scrabble". In "Scrabble", an exclamation mark written after a word is used to indicate its presence in the Official Tournament and Club Word List but its absence from the "Official Scrabble Players Dictionary", usually because the word has been judged offensive. Baseball. Exclamation points or asterisks can be used on scorecards to denote a "great defensive play". Popular music. The band !!! (pronounced "Chk Chk Chk") uses exclamation points as its name. In 2008, the pop-punk band Panic! at the Disco dropped the exclamation point in its name; this became the "most-discussed topic on [fan] message boards around the world". In 2009, the exclamation mark was re-inserted following the band's split. The band Bomb the Music Industry! utilizes an exclamation mark in its name, as well as several album and song titles and promotional material. Examples include their songs "(Shut) Up The Punx!!!" and the album "".
American musician Pink stylizes her stage name "P!NK", and uses three exclamation points in the subtitle of her 2010 release, "Greatest Hits... So Far!!!". Television. The exclamation mark was included in the title of Dinah Shore's TV series, "Dinah!" The exclamation mark was later the subject of a bitter argument between Elaine Benes and her boyfriend, Jake Jarmel, in the "Seinfeld" episode, "The Sniffing Accountant". Elaine got upset with Jake for not putting an exclamation mark at the end of a message about her friend having a baby. Jake took extreme exception to the trivial criticism and broke up with Elaine, putting an exclamation mark after his parting words: "I'm leaving!" Unicode code-points (with HTML). Related forms have these code points: Some emojis include an exclamation mark: Some scripts have their own exclamation mark: Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
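The factorial and negation readings of "!" described in the Mathematics and Computing sections above can be summarized in a few lines of Python (which spells logical negation with the keyword not rather than !, but keeps != for inequality); the snippet is purely illustrative and not drawn from any particular source in the article.

```python
from math import factorial

# n! is the product of the integers from 1 to n; 0! is the empty product, i.e. 1.
assert factorial(4) == 4 * 3 * 2 * 1 == 24
assert factorial(0) == 1

# C-style languages write logical negation as !a and inequality as a != b.
a, b = True, 3
assert (not a) is False   # Python's counterpart of !a
assert b != 4             # "b is not equal to 4"
```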
[ { "math_id": 0, "text": "\\tbinom nk" } ]
https://en.wikipedia.org/wiki?curid=7712754
7713
Chinese remainder theorem
Theorem for solving simultaneous congruences In mathematics, the Chinese remainder theorem states that if one knows the remainders of the Euclidean division of an integer "n" by several integers, then one can determine uniquely the remainder of the division of "n" by the product of these integers, under the condition that the divisors are pairwise coprime (no two divisors share a common factor other than 1). For example, if we know that the remainder of "n" divided by 3 is 2, the remainder of "n" divided by 5 is 3, and the remainder of "n" divided by 7 is 2, then without knowing the value of "n", we can determine that the remainder of "n" divided by 105 (the product of 3, 5, and 7) is 23. Importantly, this tells us that if "n" is a natural number less than 105, then 23 is the only possible value of "n". The earliest known statement of the theorem is by the Chinese mathematician Sunzi in the "Sunzi Suanjing" in the 3rd to 5th century CE. The Chinese remainder theorem is widely used for computing with large integers, as it allows replacing a computation for which one knows a bound on the size of the result by several similar computations on small integers. The Chinese remainder theorem (expressed in terms of congruences) is true over every principal ideal domain. It has been generalized to any ring, with a formulation involving two-sided ideals. History. The earliest known statement of the theorem, as a problem with specific numbers, appears in the 5th-century book "Sunzi Suanjing" by the Chinese mathematician Sunzi: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;There are certain things whose number is unknown. If we count them by threes, we have two left over; by fives, we have three left over; and by sevens, two are left over. How many things are there? Sunzi's work contains neither a proof nor a full algorithm. What amounts to an algorithm for solving this problem was described by Aryabhata (6th century). Special cases of the Chinese remainder theorem were also known to Brahmagupta (7th century) and appear in Fibonacci's Liber Abaci (1202). The result was later generalized with a complete solution called "Da-yan-shu" () in Qin Jiushao's 1247 "Mathematical Treatise in Nine Sections" which was translated into English in early 19th century by British missionary Alexander Wylie. The notion of congruences was first introduced and used by Carl Friedrich Gauss in his "Disquisitiones Arithmeticae" of 1801. Gauss illustrates the Chinese remainder theorem on a problem involving calendars, namely, "to find the years that have a certain period number with respect to the solar and lunar cycle and the Roman indiction." Gauss introduces a procedure for solving the problem that had already been used by Leonhard Euler but was in fact an ancient method that had appeared several times. Statement. Let "n"1, ..., "n""k" be integers greater than 1, which are often called "moduli" or "divisors". Let us denote by "N" the product of the "n""i". The Chinese remainder theorem asserts that if the "n""i" are pairwise coprime, and if "a"1, ..., "a""k" are integers such that 0 ≤ "a""i" &lt; "n""i" for every "i", then there is one and only one integer "x", such that 0 ≤ "x" &lt; "N" and the remainder of the Euclidean division of "x" by "n""i" is "a""i" for every "i". 
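As a concrete illustration of this statement, the following short Python sketch recovers the introductory example (remainders 2, 3, 2 modulo 3, 5, 7) by brute force, which also mirrors the "systematic search" method described further below; the helper name is purely illustrative and the moduli are assumed pairwise coprime.

```python
from math import prod

def crt_bruteforce(remainders, moduli):
    """Return the unique 0 <= x < N (N = product of the moduli) whose
    remainder modulo each n_i is a_i, assuming pairwise coprime moduli."""
    N = prod(moduli)
    solutions = [x for x in range(N)
                 if all(x % n == a for a, n in zip(remainders, moduli))]
    assert len(solutions) == 1  # existence and uniqueness asserted by the theorem
    return solutions[0]

print(crt_bruteforce([2, 3, 2], [3, 5, 7]))  # prints 23, with N = 105
```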
This may be restated as follows in terms of congruences: If the formula_0 are pairwise coprime, and if "a"1, ..., "a""k" are any integers, then the system formula_1 has a solution, and any two solutions, say "x"1 and "x"2, are congruent modulo "N", that is, "x"1 ≡ "x"2 (mod "N"&amp;hairsp;). In abstract algebra, the theorem is often restated as: if the "n""i" are pairwise coprime, the map formula_2 defines a ring isomorphism formula_3 between the ring of integers modulo "N" and the direct product of the rings of integers modulo the "n""i". This means that for doing a sequence of arithmetic operations in formula_4 one may do the same computation independently in each formula_5 and then get the result by applying the isomorphism (from the right to the left). This may be much faster than the direct computation if "N" and the number of operations are large. This is widely used, under the name "multi-modular computation", for linear algebra over the integers or the rational numbers. The theorem can also be restated in the language of combinatorics as the fact that the infinite arithmetic progressions of integers form a Helly family. Proof. The existence and the uniqueness of the solution may be proven independently. However, the first proof of existence, given below, uses this uniqueness. Uniqueness. Suppose that x and y are both solutions to all the congruences. As x and y give the same remainder, when divided by "ni", their difference "x" − "y" is a multiple of each "ni". As the "ni" are pairwise coprime, their product "N" also divides "x" − "y", and thus x and y are congruent modulo "N". If x and y are supposed to be non-negative and less than "N" (as in the first statement of the theorem), then their difference may be a multiple of "N" only if "x" = "y". Existence (first proof). The map formula_6 maps congruence classes modulo "N" to sequences of congruence classes modulo "ni". The proof of uniqueness shows that this map is injective. As the domain and the codomain of this map have the same number of elements, the map is also surjective, which proves the existence of the solution. This proof is very simple but does not provide any direct way for computing a solution. Moreover, it cannot be generalized to other situations where the following proof can. Existence (constructive proof). Existence may be established by an explicit construction of x. This construction may be split into two steps, first solving the problem in the case of two moduli, and then extending this solution to the general case by induction on the number of moduli. Case of two moduli. We want to solve the system: formula_7 where formula_8 and formula_9 are coprime. Bézout's identity asserts the existence of two integers formula_10 and formula_11 such that formula_12 The integers formula_10 and formula_11 may be computed by the extended Euclidean algorithm. A solution is given by formula_13 Indeed, formula_14 implying that formula_15 The second congruence is proved similarly, by exchanging the subscripts 1 and 2. General case. Consider a sequence of congruence equations: formula_16 where the formula_0 are pairwise coprime. The two first equations have a solution formula_17 provided by the method of the previous section. The set of the solutions of these two first equations is the set of all solutions of the equation formula_18 As the other formula_0 are coprime with formula_19 this reduces solving the initial problem of k equations to a similar problem with formula_20 equations. 
Iterating the process, one gets eventually the solutions of the initial problem. Existence (direct construction). For constructing a solution, it is not necessary to make an induction on the number of moduli. However, such a direct construction involves more computation with large numbers, which makes it less efficient and less used. Nevertheless, Lagrange interpolation is a special case of this construction, applied to polynomials instead of integers. Let formula_21 be the product of all moduli but one. As the formula_0 are pairwise coprime, formula_22 and formula_0 are coprime. Thus Bézout's identity applies, and there exist integers formula_23 and formula_24 such that formula_25 A solution of the system of congruences is formula_26 In fact, as formula_27 is a multiple of formula_0 for formula_28 we have formula_29 for every formula_30 Computation. Consider a system of congruences: formula_31 where the formula_0 are pairwise coprime, and let formula_32 In this section several methods are described for computing the unique solution for formula_33, such that formula_34 and these methods are applied on the example formula_35 Several methods of computation are presented. The two first ones are useful for small examples, but become very inefficient when the product formula_36 is large. The third one uses the existence proof given in . It is the most convenient when the product formula_36 is large, or for computer computation. Systematic search. It is easy to check whether a value of x is a solution: it suffices to compute the remainder of the Euclidean division of x by each "n""i". Thus, to find the solution, it suffices to check successively the integers from 0 to N until finding the solution. Although very simple, this method is very inefficient. For the simple example considered here, 40 integers (including 0) have to be checked for finding the solution, which is 39. This is an exponential time algorithm, as the size of the input is, up to a constant factor, the number of digits of N, and the average number of operations is of the order of N. Therefore, this method is rarely used, neither for hand-written computation nor on computers. Search by sieving. The search of the solution may be made dramatically faster by sieving. For this method, we suppose, without loss of generality, that formula_37 (if it were not the case, it would suffice to replace each formula_38 by the remainder of its division by formula_0). This implies that the solution belongs to the arithmetic progression formula_39 By testing the values of these numbers modulo formula_40 one eventually finds a solution formula_41 of the two first congruences. Then the solution belongs to the arithmetic progression formula_42 Testing the values of these numbers modulo formula_43 and continuing until every modulus has been tested eventually yields the solution. This method is faster if the moduli have been ordered by decreasing value, that is if formula_44 For the example, this gives the following computation. We consider first the numbers that are congruent to 4 modulo 5 (the largest modulus), which are 4, 9 = 4 + 5, 14 = 9 + 5, ... For each of them, compute the remainder by 4 (the second largest modulus) until getting a number congruent to 3 modulo 4. Then one can proceed by adding 20 = 5 × 4 at each step, and computing only the remainders by 3. This gives 4 mod 4 → 0. Continue 4 + 5 = 9 mod 4 →1. Continue 9 + 5 = 14 mod 4 → 2. Continue 14 + 5 = 19 mod 4 → 3. 
OK, continue by considering remainders modulo 3 and adding 5 × 4 = 20 each time 19 mod 3 → 1. Continue 19 + 20 = 39 mod 3 → 0. OK, this is the result. This method works well for hand-written computation with a product of moduli that is not too big. However, it is much slower than other methods, for very large products of moduli. Although dramatically faster than the systematic search, this method also has an exponential time complexity and is therefore not used on computers. Using the existence construction. The constructive existence proof shows that, in the case of two moduli, the solution may be obtained by the computation of the Bézout coefficients of the moduli, followed by a few multiplications, additions and reductions modulo formula_45 (for getting a result in the interval formula_46). As the Bézout's coefficients may be computed with the extended Euclidean algorithm, the whole computation, at most, has a quadratic time complexity of formula_47 where formula_48 denotes the number of digits of formula_49 For more than two moduli, the method for two moduli allows the replacement of any two congruences by a single congruence modulo the product of the moduli. Iterating this process provides eventually the solution with a complexity, which is quadratic in the number of digits of the product of all moduli. This quadratic time complexity does not depend on the order in which the moduli are regrouped. One may regroup the two first moduli, then regroup the resulting modulus with the next one, and so on. This strategy is the easiest to implement, but it also requires more computation involving large numbers. Another strategy consists in partitioning the moduli in pairs whose product have comparable sizes (as much as possible), applying, in parallel, the method of two moduli to each pair, and iterating with a number of moduli approximatively divided by two. This method allows an easy parallelization of the algorithm. Also, if fast algorithms (that is, algorithms working in quasilinear time) are used for the basic operations, this method provides an algorithm for the whole computation that works in quasilinear time. On the current example (which has only three moduli), both strategies are identical and work as follows. Bézout's identity for 3 and 4 is formula_50 Putting this in the formula given for proving the existence gives formula_51 for a solution of the two first congruences, the other solutions being obtained by adding to −9 any multiple of 3 × 4 = 12. One may continue with any of these solutions, but the solution 3 = −9 +12 is smaller (in absolute value) and thus leads probably to an easier computation Bézout identity for 5 and 3 × 4 = 12 is formula_52 Applying the same formula again, we get a solution of the problem: formula_53 The other solutions are obtained by adding any multiple of 3 × 4 × 5 = 60, and the smallest positive solution is −21 + 60 = 39. As a linear Diophantine system. The system of congruences solved by the Chinese remainder theorem may be rewritten as a system of linear Diophantine equations: formula_54 where the unknown integers are formula_33 and the formula_55 Therefore, every general method for solving such systems may be used for finding the solution of Chinese remainder theorem, such as the reduction of the matrix of the system to Smith normal form or Hermite normal form. 
However, as usual when using a general algorithm for a more specific problem, this approach is less efficient than the method of the preceding section, based on a direct use of Bézout's identity. Over principal ideal domains. In , the Chinese remainder theorem has been stated in three different ways: in terms of remainders, of congruences, and of a ring isomorphism. The statement in terms of remainders does not apply, in general, to principal ideal domains, as remainders are not defined in such rings. However, the two other versions make sense over a principal ideal domain "R": it suffices to replace "integer" by "element of the domain" and formula_56 by "R". These two versions of the theorem are true in this context, because the proofs (except for the first existence proof), are based on Euclid's lemma and Bézout's identity, which are true over every principal domain. However, in general, the theorem is only an existence theorem and does not provide any way for computing the solution, unless one has an algorithm for computing the coefficients of Bézout's identity. Over univariate polynomial rings and Euclidean domains. The statement in terms of remainders given in cannot be generalized to any principal ideal domain, but its generalization to Euclidean domains is straightforward. The univariate polynomials over a field is the typical example of a Euclidean domain which is not the integers. Therefore, we state the theorem for the case of the ring formula_57 for a field formula_58 For getting the theorem for a general Euclidean domain, it suffices to replace the degree by the Euclidean function of the Euclidean domain. The Chinese remainder theorem for polynomials is thus: Let formula_59 (the moduli) be, for formula_60, pairwise coprime polynomials in formula_57. Let formula_61 be the degree of formula_59, and formula_62 be the sum of the formula_63 If formula_64 are polynomials such that formula_65 or formula_66 for every "i", then, there is one and only one polynomial formula_67, such that formula_68 and the remainder of the Euclidean division of formula_67 by formula_59 is formula_69 for every "i". The construction of the solution may be done as in or . However, the latter construction may be simplified by using, as follows, partial fraction decomposition instead of the extended Euclidean algorithm. Thus, we want to find a polynomial formula_67, which satisfies the congruences formula_70 for formula_71 Consider the polynomials formula_72 The partial fraction decomposition of formula_73 gives k polynomials formula_74 with degrees formula_75 such that formula_76 and thus formula_77 Then a solution of the simultaneous congruence system is given by the polynomial formula_78 In fact, we have formula_79 for formula_80 This solution may have a degree larger than formula_81 The unique solution of degree less than formula_62 may be deduced by considering the remainder formula_82 of the Euclidean division of formula_83 by formula_84 This solution is formula_85 Lagrange interpolation. A special case of Chinese remainder theorem for polynomials is Lagrange interpolation. For this, consider k monic polynomials of degree one: formula_86 They are pairwise coprime if the formula_87 are all different. The remainder of the division by formula_59 of a polynomial formula_67 is formula_88, by the polynomial remainder theorem. 
Now, let formula_89 be constants (polynomials of degree 0) in formula_58 Both Lagrange interpolation and Chinese remainder theorem assert the existence of a unique polynomial formula_90 of degree less than formula_91 such that formula_92 for every formula_30 Lagrange interpolation formula is exactly the result, in this case, of the above construction of the solution. More precisely, let formula_93 The partial fraction decomposition of formula_94 is formula_95 In fact, reducing the right-hand side to a common denominator one gets formula_96 and the numerator is equal to one, as being a polynomial of degree less than formula_97 which takes the value one for formula_91 different values of formula_98 Using the above general formula, we get the Lagrange interpolation formula: formula_99 Hermite interpolation. Hermite interpolation is an application of the Chinese remainder theorem for univariate polynomials, which may involve moduli of arbitrary degrees (Lagrange interpolation involves only moduli of degree one). The problem consists of finding a polynomial of the least possible degree, such that the polynomial and its first derivatives take given values at some fixed points. More precisely, let formula_100 be formula_91 elements of the ground field formula_101 and, for formula_102 let formula_103 be the values of the first formula_104 derivatives of the sought polynomial at formula_87 (including the 0th derivative, which is the value of the polynomial itself). The problem is to find a polynomial formula_67 such that its "j"&amp;hairsp;th derivative takes the value formula_105 at formula_106 for formula_107 and formula_108 Consider the polynomial formula_109 This is the Taylor polynomial of order formula_110 at formula_87, of the unknown polynomial formula_111 Therefore, we must have formula_112 Conversely, any polynomial formula_113 that satisfies these formula_91 congruences, in particular verifies, for any formula_114 formula_115 therefore formula_59 is its Taylor polynomial of order formula_116 at formula_87, that is, formula_67 solves the initial Hermite interpolation problem. The Chinese remainder theorem asserts that there exists exactly one polynomial of degree less than the sum of the formula_117 which satisfies these formula_91 congruences. There are several ways for computing the solution formula_111 One may use the method described at the beginning of . One may also use the constructions given in or . Generalization to non-coprime moduli. The Chinese remainder theorem can be generalized to non-coprime moduli. Let formula_118 be any integers, let formula_119; formula_120, and consider the system of congruences: formula_121 If formula_122, then this system has a unique solution modulo formula_123. Otherwise, it has no solutions. If one uses Bézout's identity to write formula_124, then the solution is given by formula_125 This defines an integer, as g divides both m and n. Otherwise, the proof is very similar to that for coprime moduli. Generalization to arbitrary rings. The Chinese remainder theorem can be generalized to any ring, by using coprime ideals (also called comaximal ideals). Two ideals I and J are coprime if there are elements formula_126 and formula_127 such that formula_128 This relation plays the role of Bézout's identity in the proofs related to this generalization, which otherwise are very similar. The generalization may be stated as follows. Let "I"1, ..., "Ik" be two-sided ideals of a ring formula_129 and let "I" be their intersection. 
If the ideals are pairwise coprime, we have the isomorphism: formula_130 between the quotient ring formula_131 and the direct product of the formula_132 where "formula_133" denotes the image of the element formula_33 in the quotient ring defined by the ideal formula_134 Moreover, if formula_129 is commutative, then the ideal intersection of pairwise coprime ideals is equal to their product; that is formula_135 if Ii and Ij are coprime for all "i" ≠ "j". Interpretation in terms of idempotents. Let formula_136 be pairwise coprime two-sided ideals with formula_137 and formula_138 be the isomorphism defined above. Let formula_139 be the element of formula_140 whose components are all 0 except the i&amp;hairsp;th which is 1, and formula_141 The formula_142 are central idempotents that are pairwise orthogonal; this means, in particular, that formula_143 and formula_144 for every i and j. Moreover, one has formula_145 and formula_146 In summary, this generalized Chinese remainder theorem is the equivalence between giving pairwise coprime two-sided ideals with a zero intersection, and giving central and pairwise orthogonal idempotents that sum to 1. Applications. Sequence numbering. The Chinese remainder theorem has been used to construct a Gödel numbering for sequences, which is involved in the proof of Gödel's incompleteness theorems. Fast Fourier transform. The prime-factor FFT algorithm (also called Good-Thomas algorithm) uses the Chinese remainder theorem for reducing the computation of a fast Fourier transform of size formula_45 to the computation of two fast Fourier transforms of smaller sizes formula_8 and formula_9 (providing that formula_8 and formula_9 are coprime). Encryption. Most implementations of RSA use the Chinese remainder theorem during signing of HTTPS certificates and during decryption. The Chinese remainder theorem can also be used in secret sharing, which consists of distributing a set of shares among a group of people who, all together (but no one alone), can recover a certain secret from the given set of shares. Each of the shares is represented in a congruence, and the solution of the system of congruences using the Chinese remainder theorem is the secret to be recovered. Secret sharing using the Chinese remainder theorem uses, along with the Chinese remainder theorem, special sequences of integers that guarantee the impossibility of recovering the secret from a set of shares with less than a certain cardinality. Range ambiguity resolution. The range ambiguity resolution techniques used with medium pulse repetition frequency radar can be seen as a special case of the Chinese remainder theorem. Decomposition of surjections of finite abelian groups. Given a surjection formula_147 of finite abelian groups, we can use the Chinese remainder theorem to give a complete description of any such map. First of all, the theorem gives isomorphisms formula_148 where formula_149. In addition, for any induced map formula_150 from the original surjection, we have formula_151 and formula_152 since for a pair of primes formula_153, the only non-zero surjections formula_154 can be defined if formula_155 and formula_156. These observations are pivotal for constructing the ring of profinite integers, which is given as an inverse limit of all such maps. Dedekind's theorem. Dedekind's theorem on the linear independence of characters. Let M be a monoid and k an integral domain, viewed as a monoid by considering the multiplication on k. 
Then any finite family ( "fi" )"i"∈"I" of distinct monoid homomorphisms  "fi" : "M" → "k" is linearly independent. In other words, every family ("αi")"i"∈"I" of elements "αi" ∈ "k" satisfying formula_157 must be equal to the family (0)"i"∈"I". Proof. First assume that k is a field, otherwise, replace the integral domain k by its quotient field, and nothing will change. We can linearly extend the monoid homomorphisms  "fi" : "M" → "k" to k-algebra homomorphisms "Fi" : "k"["M"] → "k", where "k"["M"] is the monoid ring of M over k. Then, by linearity, the condition formula_158 yields formula_159 Next, for "i", "j" ∈ "I"; "i" ≠ "j" the two k-linear maps "Fi" : "k"["M"] → "k" and "Fj" : "k"["M"] → "k" are not proportional to each other. Otherwise  "fi"  and  "fj"  would also be proportional, and thus equal since as monoid homomorphisms they satisfy:  "fi"&amp;hairsp;(1) 1  "fj"&amp;hairsp;(1), which contradicts the assumption that they are distinct. Therefore, the kernels Ker "Fi" and Ker "Fj" are distinct. Since "k"["M"]/Ker "Fi" ≅ "Fi"&amp;hairsp;("k"["M"]) "k" is a field, Ker "Fi" is a maximal ideal of "k"["M"] for every i in I. Because they are distinct and maximal the ideals Ker "Fi" and Ker "Fj" are coprime whenever "i" ≠ "j". The Chinese Remainder Theorem (for general rings) yields an isomorphism: formula_160 where formula_161 Consequently, the map formula_162 is surjective. Under the isomorphisms "k"["M"]/Ker "Fi" → "Fi"&amp;hairsp;("k"["M"]) "k", the map Φ corresponds to: formula_163 Now, formula_164 yields formula_165 for every vector ("ui")"i"∈"I" in the image of the map ψ. Since ψ is surjective, this means that formula_165 for every vector formula_166 Consequently, ("αi")"i"∈"I" (0)"i"∈"I". QED. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
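Returning to the constructive existence proof and the "Using the existence construction" method above, the following Python sketch combines the congruences two at a time, obtaining the Bézout coefficients from the extended Euclidean algorithm; the function names are illustrative only, and the moduli are assumed pairwise coprime.

```python
def extended_gcd(a, b):
    """Return (g, u, v) with u*a + v*b == g == gcd(a, b) (Bézout's identity)."""
    if b == 0:
        return a, 1, 0
    g, u, v = extended_gcd(b, a % b)
    return g, v, u - (a // b) * v

def crt_pair(a1, n1, a2, n2):
    """Solve x = a1 (mod n1), x = a2 (mod n2) for coprime n1, n2."""
    g, m1, m2 = extended_gcd(n1, n2)      # m1*n1 + m2*n2 == 1
    assert g == 1, "moduli must be coprime"
    x = a1 * m2 * n2 + a2 * m1 * n1       # the two-moduli formula from the proof
    return x % (n1 * n2), n1 * n2

def crt(remainders, moduli):
    """Reduce k congruences to one by iterating the two-moduli case."""
    a, n = remainders[0], moduli[0]
    for a_i, n_i in zip(remainders[1:], moduli[1:]):
        a, n = crt_pair(a, n, a_i, n_i)
    return a

# Worked example from the Computation section: x = 0 (mod 3), 3 (mod 4), 4 (mod 5).
print(crt([0, 3, 4], [3, 4, 5]))  # prints 39
```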
[ { "math_id": 0, "text": "n_i" }, { "math_id": 1, "text": "\\begin{align}\n x &\\equiv a_1 \\pmod{n_1} \\\\\n &\\,\\,\\,\\vdots \\\\\n x &\\equiv a_k \\pmod{n_k},\n\\end{align}" }, { "math_id": 2, "text": "x \\bmod N \\;\\mapsto\\;(x \\bmod n_1,\\, \\ldots,\\, x \\bmod n_k)" }, { "math_id": 3, "text": "\\mathbb{Z}/N\\mathbb{Z} \\cong \\mathbb{Z}/n_1\\mathbb{Z} \\times \\cdots \\times \\mathbb{Z}/n_k\\mathbb{Z}" }, { "math_id": 4, "text": "\\mathbb{Z}/N\\mathbb{Z}," }, { "math_id": 5, "text": "\\mathbb{Z}/n_i\\mathbb{Z}" }, { "math_id": 6, "text": "x \\bmod N \\mapsto (x \\bmod n_1, \\ldots, x\\bmod n_k)" }, { "math_id": 7, "text": "\n\\begin{align}\n x &\\equiv a_1 \\pmod {n_1}\\\\\n x &\\equiv a_2 \\pmod {n_2},\n\\end{align}\n" }, { "math_id": 8, "text": "n_1" }, { "math_id": 9, "text": "n_2" }, { "math_id": 10, "text": "m_1" }, { "math_id": 11, "text": "m_2" }, { "math_id": 12, "text": "m_1n_1+m_2n_2=1." }, { "math_id": 13, "text": "x = a_1m_2n_2+a_2m_1n_1." }, { "math_id": 14, "text": "\\begin{align}\nx&=a_1m_2n_2+a_2m_1n_1\\\\\n&=a_1(1 - m_1n_1) + a_2m_1n_1 \\\\\n&=a_1 + (a_2 - a_1)m_1n_1,\n\\end{align}" }, { "math_id": 15, "text": "x \\equiv a_1 \\pmod {n_1}." }, { "math_id": 16, "text": "\n\\begin{align} \n x &\\equiv a_1 \\pmod{n_1} \\\\ \n &\\vdots \\\\ \n x &\\equiv a_k \\pmod{n_k},\n\\end{align}\n" }, { "math_id": 17, "text": "a_{1,2}" }, { "math_id": 18, "text": "x \\equiv a_{1,2} \\pmod{n_1n_2}." }, { "math_id": 19, "text": "n_1n_2," }, { "math_id": 20, "text": "k-1" }, { "math_id": 21, "text": "N_i = N/n_i" }, { "math_id": 22, "text": "N_i" }, { "math_id": 23, "text": "M_i" }, { "math_id": 24, "text": "m_i" }, { "math_id": 25, "text": "M_iN_i + m_in_i=1." }, { "math_id": 26, "text": "x=\\sum_{i=1}^k a_iM_iN_i." }, { "math_id": 27, "text": "N_j" }, { "math_id": 28, "text": "i\\neq j," }, { "math_id": 29, "text": "x \\equiv a_iM_iN_i \\equiv a_i(1-m_in_i) \\equiv a_i \\pmod{n_i}, " }, { "math_id": 30, "text": "i." }, { "math_id": 31, "text": "\\begin{align} \n x &\\equiv a_1 \\pmod{n_1} \\\\\n &\\vdots \\\\ \n x &\\equiv a_k \\pmod{n_k}, \\\\ \n\\end{align}" }, { "math_id": 32, "text": "N=n_1 n_2\\cdots n_k." }, { "math_id": 33, "text": "x" }, { "math_id": 34, "text": "0\\le x<N," }, { "math_id": 35, "text": "\n\\begin{align}\n x &\\equiv 0 \\pmod 3 \\\\\n x &\\equiv 3 \\pmod 4 \\\\\n x &\\equiv 4 \\pmod 5.\n\\end{align}\n" }, { "math_id": 36, "text": "n_1\\cdots n_k" }, { "math_id": 37, "text": "0\\le a_i <n_i" }, { "math_id": 38, "text": "a_i" }, { "math_id": 39, "text": "a_1, a_1 + n_1, a_1+2n_1, \\ldots" }, { "math_id": 40, "text": "n_2," }, { "math_id": 41, "text": "x_2" }, { "math_id": 42, "text": "x_2, x_2 + n_1n_2, x_2+2n_1n_2, \\ldots" }, { "math_id": 43, "text": "n_3," }, { "math_id": 44, "text": "n_1>n_2> \\cdots > n_k." }, { "math_id": 45, "text": "n_1n_2" }, { "math_id": 46, "text": "(0, n_1n_2-1)" }, { "math_id": 47, "text": "O((s_1+s_2)^2)," }, { "math_id": 48, "text": "s_i" }, { "math_id": 49, "text": "n_i." }, { "math_id": 50, "text": "1\\times 4 + (-1)\\times 3 = 1." }, { "math_id": 51, "text": "0\\times 1\\times 4 + 3\\times (-1)\\times 3 =-9" }, { "math_id": 52, "text": "5\\times 5 +(-2)\\times 12 =1." }, { "math_id": 53, "text": "5\\times 5 \\times 3 + 12\\times (-2)\\times 4 = -21." }, { "math_id": 54, "text": "\\begin{align} \n x &= a_1 +x_1n_1\\\\ \n &\\vdots \\\\ \n x &=a_k+x_kn_k, \n\\end{align}" }, { "math_id": 55, "text": "x_i." }, { "math_id": 56, "text": "\\mathbb Z" }, { "math_id": 57, "text": "R=K[X]" }, { "math_id": 58, "text": "K." 
}, { "math_id": 59, "text": "P_i(X)" }, { "math_id": 60, "text": "i = 1, \\dots, k" }, { "math_id": 61, "text": "d_i =\\deg P_i" }, { "math_id": 62, "text": "D" }, { "math_id": 63, "text": "d_i." }, { "math_id": 64, "text": "A_i(X), \\ldots,A_k(X)" }, { "math_id": 65, "text": "A_i(X)=0" }, { "math_id": 66, "text": "\\deg A_i<d_i" }, { "math_id": 67, "text": "P(X)" }, { "math_id": 68, "text": "\\deg P<D" }, { "math_id": 69, "text": "A_i(X)" }, { "math_id": 70, "text": "P(X)\\equiv A_i(X) \\pmod {P_i(X)}," }, { "math_id": 71, "text": "i=1,\\ldots,k." }, { "math_id": 72, "text": "\\begin{align}\n Q(X) &= \\prod_{i=1}^{k}P_i(X) \\\\\n Q_i(X) &= \\frac{Q(X)}{P_i(X)}.\n\\end{align}" }, { "math_id": 73, "text": "1/Q(X)" }, { "math_id": 74, "text": "S_i(X)" }, { "math_id": 75, "text": "\\deg S_i(X) < d_i," }, { "math_id": 76, "text": "\\frac{1}{Q(X)} = \\sum_{i=1}^k \\frac{S_i(X)}{P_i(X)}," }, { "math_id": 77, "text": "1 = \\sum_{i=1}^{k}S_i(X) Q_i(X)." }, { "math_id": 78, "text": "\\sum_{i=1}^k A_i(X) S_i(X) Q_i(X)." }, { "math_id": 79, "text": "\\sum_{i=1}^k A_i(X) S_i(X) Q_i(X)= A_i(X)+ \\sum_{j=1}^{k}(A_j(X) - A_i(X)) S_j(X) Q_j(X) \\equiv A_i(X)\\pmod{P_i(X)}," }, { "math_id": 80, "text": "1 \\leq i \\leq k." }, { "math_id": 81, "text": "D=\\sum_{i=1}^k d_i." }, { "math_id": 82, "text": "B_i(X)" }, { "math_id": 83, "text": "A_i(X)S_i(X)" }, { "math_id": 84, "text": "P_i(X)." }, { "math_id": 85, "text": "P(X)=\\sum_{i=1}^k B_i(X) Q_i(X)." }, { "math_id": 86, "text": "P_i(X)=X-x_i." }, { "math_id": 87, "text": "x_i" }, { "math_id": 88, "text": "P(x_i)" }, { "math_id": 89, "text": "A_1, \\ldots, A_k" }, { "math_id": 90, "text": "P(X)," }, { "math_id": 91, "text": "k" }, { "math_id": 92, "text": "P(x_i)=A_i," }, { "math_id": 93, "text": "\\begin{align}\n Q(X) &= \\prod_{i=1}^{k}(X-x_i) \\\\[6pt]\n Q_i(X) &= \\frac{Q(X)}{X-x_i}.\n\\end{align}" }, { "math_id": 94, "text": "\\frac{1}{Q(X)}" }, { "math_id": 95, "text": "\\frac{1}{Q(X)} = \\sum_{i=1}^k \\frac{1}{Q_i(x_i)(X-x_i)}." }, { "math_id": 96, "text": " \\sum_{i=1}^k \\frac{1}{Q_i(x_i)(X-x_i)}= \\frac{1}{Q(X)} \\sum_{i=1}^k \\frac{Q_i(X)}{Q_i(x_i)}," }, { "math_id": 97, "text": "k," }, { "math_id": 98, "text": "X." }, { "math_id": 99, "text": "P(X)=\\sum_{i=1}^k A_i\\frac{Q_i(X)}{Q_i(x_i)}." }, { "math_id": 100, "text": "x_1, \\ldots, x_k" }, { "math_id": 101, "text": "K," }, { "math_id": 102, "text": "i=1,\\ldots, k," }, { "math_id": 103, "text": "a_{i,0}, a_{i,1}, \\ldots, a_{i,r_i-1}" }, { "math_id": 104, "text": "r_i" }, { "math_id": 105, "text": "a_{i,j} " }, { "math_id": 106, "text": "x_i," }, { "math_id": 107, "text": "i=1,\\ldots,k" }, { "math_id": 108, "text": "j=0,\\ldots,r_j." }, { "math_id": 109, "text": "P_i(X) = \\sum_{j=0}^{r_i - 1}\\frac{a_{i,j}}{j!}(X - x_i)^j." }, { "math_id": 110, "text": "r_i-1" }, { "math_id": 111, "text": "P(X)." }, { "math_id": 112, "text": "P(X)\\equiv P_i(X) \\pmod {(X-x_i)^{r_i}}." 
}, { "math_id": 113, "text": "P(X) " }, { "math_id": 114, "text": "i=1, \\ldots, k" }, { "math_id": 115, "text": "P(X)= P_i(X) +o(X-x_i)^{r_i-1} " }, { "math_id": 116, "text": " r_i - 1" }, { "math_id": 117, "text": "r_i," }, { "math_id": 118, "text": "m, n, a, b" }, { "math_id": 119, "text": "g = \\gcd(m,n)" }, { "math_id": 120, "text": "M = \\operatorname{lcm}(m,n)" }, { "math_id": 121, "text": "\n\\begin{align}\nx &\\equiv a \\pmod m \\\\\nx &\\equiv b \\pmod n,\n\\end{align}\n" }, { "math_id": 122, "text": "a \\equiv b \\pmod g" }, { "math_id": 123, "text": "M = mn/g" }, { "math_id": 124, "text": "g = um + vn" }, { "math_id": 125, "text": " x = \\frac{avn+bum}{g}." }, { "math_id": 126, "text": "i\\in I" }, { "math_id": 127, "text": "j\\in J" }, { "math_id": 128, "text": "i+j=1." }, { "math_id": 129, "text": "R" }, { "math_id": 130, "text": "\\begin{align}\n R/I &\\to (R/I_1) \\times \\cdots \\times (R/I_k) \\\\\n x \\bmod I &\\mapsto (x \\bmod I_1,\\, \\ldots,\\, x \\bmod I_k),\n\\end{align}" }, { "math_id": 131, "text": "R/I" }, { "math_id": 132, "text": "R/I_i," }, { "math_id": 133, "text": "x \\bmod I" }, { "math_id": 134, "text": "I." }, { "math_id": 135, "text": "\nI= I_1\\cap I_2 \\cap\\cdots\\cap I_k= I_1I_2\\cdots I_k,\n" }, { "math_id": 136, "text": "I_1, I_2, \\dots, I_k" }, { "math_id": 137, "text": " \\bigcap_{i = 1}^k I_i = 0," }, { "math_id": 138, "text": "\\varphi:R\\to (R/I_1) \\times \\cdots \\times (R/I_k)" }, { "math_id": 139, "text": "f_i=(0,\\ldots,1,\\ldots, 0)" }, { "math_id": 140, "text": "(R/I_1) \\times \\cdots \\times (R/I_k)" }, { "math_id": 141, "text": "e_i=\\varphi^{-1}(f_i)." }, { "math_id": 142, "text": "e_i" }, { "math_id": 143, "text": "e_i^2=e_i" }, { "math_id": 144, "text": "e_ie_j=e_je_i=0" }, { "math_id": 145, "text": "e_1+\\cdots+e_n=1," }, { "math_id": 146, "text": "I_i=R(1-e_i)." }, { "math_id": 147, "text": "\\mathbb{Z}/n \\to \\mathbb{Z}/m" }, { "math_id": 148, "text": "\\begin{align}\n\\mathbb{Z}/n &\\cong \\mathbb{Z}/p_{n_1}^{a_1} \\times \\cdots \\times \\mathbb{Z}/p_{n_i}^{a_i} \\\\\n\\mathbb{Z}/m &\\cong \\mathbb{Z}/p_{m_1}^{b_1} \\times \\cdots \\times \\mathbb{Z}/p_{m_j}^{b_j}\n\\end{align}" }, { "math_id": 149, "text": "\\{p_{m_1},\\ldots,p_{m_j} \\} \\subseteq \\{ p_{n_1},\\ldots, p_{n_i} \\}" }, { "math_id": 150, "text": "\\mathbb{Z}/p_{n_k}^{a_k} \\to \\mathbb{Z}/p_{m_l}^{b_l}" }, { "math_id": 151, "text": "a_k \\geq b_l" }, { "math_id": 152, "text": "p_{n_k} = p_{m_l}," }, { "math_id": 153, "text": "p,q" }, { "math_id": 154, "text": "\\mathbb{Z}/p^a \\to \\mathbb{Z}/q^b" }, { "math_id": 155, "text": "p = q" }, { "math_id": 156, "text": "a \\geq b" }, { "math_id": 157, "text": "\\sum_{i \\in I}\\alpha_i f_i = 0" }, { "math_id": 158, "text": "\\sum_{i\\in I}\\alpha_i f_i = 0," }, { "math_id": 159, "text": "\\sum_{i \\in I}\\alpha_i F_i = 0." }, { "math_id": 160, "text": "\\begin{align}\n \\phi: k[M] / K &\\to \\prod_{i \\in I}k[M] / \\mathrm{Ker} F_i \\\\\n \\phi(x + K) &= \\left(x + \\mathrm{Ker} F_i\\right)_{i \\in I}\n\\end{align}" }, { "math_id": 161, "text": "K = \\prod_{i \\in I}\\mathrm{Ker} F_i = \\bigcap_{i \\in I}\\mathrm{Ker} F_i." 
}, { "math_id": 162, "text": "\\begin{align}\n \\Phi: k[M] &\\to \\prod_{i \\in I}k[M]/ \\mathrm{Ker} F_i \\\\\n \\Phi(x) &= \\left(x + \\mathrm{Ker} F_i\\right)_{i \\in I}\n\\end{align}" }, { "math_id": 163, "text": "\\begin{align}\n \\psi: k[M] &\\to \\prod_{i \\in I}k \\\\\n \\psi(x) &= \\left[F_i(x)\\right]_{i \\in I}\n\\end{align}" }, { "math_id": 164, "text": "\\sum_{i \\in I}\\alpha_i F_i = 0" }, { "math_id": 165, "text": "\\sum_{i \\in I}\\alpha_i u_i = 0" }, { "math_id": 166, "text": "\\left(u_i\\right)_{i \\in I} \\in \\prod_{i \\in I}k." } ]
https://en.wikipedia.org/wiki?curid=7713
77139118
Eleven-dimensional supergravity
Supergravity in eleven dimensions In supersymmetry, eleven-dimensional supergravity is the theory of supergravity in the highest number of dimensions allowed for a supersymmetric theory. It contains a graviton, a gravitino, and a 3-form gauge field, with their interactions uniquely fixed by supersymmetry. Discovered in 1978 by Eugène Cremmer, Bernard Julia, and Joël Scherk, it quickly became a popular candidate for a theory of everything during the 1980s. However, interest in it soon faded due to numerous difficulties that arise when trying to construct physically realistic models. It came back to prominence in the mid-1990s when it was found to be the low energy limit of M-theory, making it crucial for understanding various aspects of string theory. History. Supergravity was discovered in 1976 through the construction of pure four-dimensional supergravity with one gravitino. One important direction in the supergravity program was to try to construct four-dimensional formula_0 supergravity since this was an attractive candidate for a theory of everything, stemming from the fact that it unifies particles of all physically admissible spins into a single multiplet. The theory may additionally be UV finite. Werner Nahm showed in 1978 that supersymmetry with spin less than or equal to two is only possible in eleven dimensions or lower. Motivated by this, eleven-dimensional supergravity was constructed by Eugène Cremmer, Bernard Julia, and Joël Scherk later the same year, with the aim of dimensionally reducing it to four dimensions to acquire the formula_0 theory, which was done in 1979. During the 1980s, 11D supergravity was of great interest in its own right as a possible fundamental theory of nature. This began in 1980 when Peter Freund and Mark Ruben showed that supergravity compactifies preferentially to four or seven dimensions when using a background where the field strength tensor is turned on. Additionally, Edward Witten argued in 1981 that eleven dimensions are also the minimum number of dimensions needed to acquire the Standard Model gauge group, assuming that this arises as subgroup of the isometry group of the compact manifold. The main area of study was understanding how 11D supergravity compactifies down to four dimensions. While there are many ways to do this, depending on the choice of the compact manifold, the most popular one was using the 7-sphere. However, a number of problems were quickly identified with these approaches which eventually caused the program to be abandoned. One of the main issues was that many of the well-motivated manifolds could not yield the Standard Model gauge group. Another problem at the time was that standard Kaluza–Klein compactification made it hard to acquire chiral fermions needed to build the Standard Model. Additionally, these compactifications generally yielded very large negative cosmological constants which could be hard to remove. Lastly, quantizing the theory gave rise to quantum anomalies which were difficult to eliminate. Some of these problems can be overcome with more modern methods which were unknown at the time. For example, chiral fermions can be acquired by using singular manifolds, using noncompact manifolds, utilising the end-of-world 9-brane of the theory, or by exploiting string dualities that relate the 11D theory to chiral string theories. Similarly, the presence of branes can also be used to build larger gauge groups. 
Due to these issues, 11D supergravity was abandoned in the late 1980s, although it remained an intriguing theory. Indeed, in 1988 Michael Green, John Schwartz, and Edward Witten wrote of it that &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;It is hard to believe that its existence is just an accident, but it is difficult at the present time to state a compelling conjecture for what its role may be in the scheme of things. In 1995, Edward Witten discovered M-theory, whose low-energy limit is 11D supergravity, bringing the theory back into the forefront of physics and giving it an important place in string theory. Theory. In supersymmetry, the maximum number of real supercharges that give supermultiplets containing particles of spin less than or equal to two, is 32. Supercharges with more components result in supermultiplets that necessarily include higher spin states, making such theories unphysical. Since supercharges are spinors, supersymmetry can only be realized in dimensions that admit spinoral representations with no more than 32 components, which only occurs in eleven or fewer dimensions. Eleven-dimensional supergravity is uniquely fixed by supersymmetry, with its structure being relatively simple compared to supergravity theories in other dimensions. The only free parameter is the Planck mass, setting the scale of the theory. It has a single multiplet consisting of the graviton, a Majorana gravitino, and a 3-form gauge field. The necessity of the 3-form field is seen by noting that it provides the missing 84 bosonic degrees of freedom needed to complete the multiplet since the graviton has 44 degrees of freedom while the gravitino has 128. Superalgebra. The maximally-extended algebra for supersymmetry in eleven dimensions is given by formula_2 where formula_3 is the charge conjugation operator which ensures that the combination formula_4 is either symmetric or antisymmetric. Since the anticommutator is symmetric, the only admissible entries on the right-hand side are those which are symmetric on their spinor indices, which in eleven dimensions only occurs for one, two, and five spacetime indices, with the rest being equivalent up to Poincaré duality. The corresponding coefficients formula_5 and formula_6 are known as quasi-central charges. They aren't regular central charges in the group theoretic sense since they are not Lorentz scalars and so do not commute with the Lorentz generators, but their interpretation is the same. They indicate that there are extended objects that preserve some amount of supersymmetry, these being the M2-brane and the M5-brane. Additionally, there is no R-symmetry group. Supergravity action. The action for eleven-dimensional supergravity is given by formula_7 formula_8 formula_9 Here gravity is described using the vielbein formalism formula_10 with an eleven-dimensional gravitational coupling constant formula_11 and formula_12 formula_13 formula_14 formula_15 The torsion-free connection is given by formula_16, while formula_17 is the contorsion tensor. Meanwhile, formula_18 is the covariant derivative with a spin connection formula_19, which acting on spinors takes the form formula_20 where formula_21. The regular gamma matrices satisfying the Dirac algebra are denote by formula_22, while formula_23 are position-dependent fields. The first line in the action contains the covariantized kinetic terms given by the Einstein–Hilbert action, the Rarita–Schwinger equation, and the gauge kinetic action. 
The second line corresponds to cubic graviton-gauge field terms along with some quartic gravitino terms. The last line in the Lagrangian is a Chern–Simons term. The supersymmetry transformation rules are given by formula_24 formula_25 formula_26 where formula_27 is the supersymmetry Majorana gauge parameter. All hatted variables are supercovariant in the sense that they do not depend on the derivative of the supersymmetry parameter formula_28. The action is additionally invariant under parity, with the gauge field transforming as a pseudotensor formula_29. The equations of motion for this supergravity also have a rigid symmetry known as the trombone symmetry under which formula_30 and formula_31. Special solutions. There are a number of special solutions in 11D supergravity, with the most notable ones being the pp-wave, M2-branes, M5-branes, KK-monopoles, and the M9-brane. Brane solutions are solitonic objects within supergravity that are the low-energy limit of the corresponding M-theory branes. The 3-form gauge field couples electrically to M2-branes and magnetically to M5-branes. Explicit supergravity solitonic solutions for the M2-branes and M5-branes are known. M2-branes and M5-branes have a regular non-degenerate event horizon whose constant time cross-sections are topologically 7-spheres and 4-spheres, respectively. The near-horizon limit of the extreme M2-brane is given by an formula_32 geometry, while for the extreme M5-brane it is given by formula_33. These extreme-limit solutions preserve half of the supersymmetry of the vacuum solution, meaning that both the extreme M2-branes and the M5-branes can be seen as solitons interpolating between two maximally supersymmetric Minkowski vacua at infinity, with an formula_34 or formula_35 horizon, respectively. Compactification. The Freund–Rubin compactification of 11D supergravity shows that it preferentially compactifies to seven and four dimensions, which led to it being extensively studied throughout the 1980s. This compactification is most easily achieved by demanding that the compact and noncompact manifolds have a Ricci tensor that is proportional to the metric, meaning that they are Einstein manifolds. One additionally demands that the solution is stable against fluctuations, which in anti-de Sitter spacetimes requires that the Breitenlohner–Freedman bound is satisfied. Stability is guaranteed if there is some unbroken supersymmetry, although there also exist classically stable solutions that fully break supersymmetry. One of the main compactification manifolds studied was the 7-sphere. The manifold has 8 Killing spinors, meaning that the resulting four dimensional theory has formula_0 supersymmetry. Additionally, it results in an formula_1 gauge group, corresponding to the isometry group of the sphere. A similar widely studied compactification used a squashed 7-sphere, which can be acquired by embedding the 7-sphere in a quaternionic projective space, with this giving a gauge group of formula_36. A key property of 7-sphere Kaluza–Klein compactifications is that their truncation is consistent, which is not necessarily the case for other Einstein manifolds besides the 7-torus. An inconsistent truncation means that the resulting four dimensional theory is not consistent with the higher dimensional field equations. Physically this need not be a problem in compactifications to Minkowski spacetimes, as the inconsistent truncation merely results in additional irrelevant operators in the action.
However, most Einstein manifold compactifications are to anti-de Sitter spacetimes, which have a relatively large cosmological constant. In this case irrelevant operators can be converted to relevant ones through the equations of motion. Related theories. While eleven-dimensional supergravity is the unique supergravity in eleven dimensions at the level of an action, a related theory can be acquired at the level of the equations of motion, known as modified 11D supergravity. This is done by replacing the spin connection by one that is conformally related to the original. Such a theory is inequivalent to standard 11D supergravity only in spaces that are not simply connected. An action for a massive 11D theory can also be acquired by introducing an auxiliary nondynamical Killing vector field, with this theory reducing to massive type IIA supergravity upon dimensional reduction. This is not a proper eleven-dimensional theory since the fields explicitly do not depend on one of the coordinates, but it is nonetheless useful for studying massive branes. Dimensionally reducing 11D supergravity to ten dimensions gives rise to type IIA supergravity, while dimensionally reducing it to four dimensions can give formula_0 supergravity, which was one of the original motivations for constructing the theory. While eleven-dimensional supergravity is not UV finite, it is the low-energy limit of M-theory. The supergravity also receives corrections at the quantum level, with these corrections sometimes playing an important role in various compactification mechanisms. Unlike for supergravity in other dimensions, an extension to eleven-dimensional anti-de Sitter spacetime does not exist. While the theory is the supersymmetric theory in the highest number of dimensions, the caveat is that this only holds for spacetime signatures with one temporal dimension. If arbitrary spacetime signatures are allowed, then there also exists a supergravity in twelve dimensions with two temporal dimensions. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
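The degree-of-freedom counting quoted in the Theory section can be checked with a few lines of code. The sketch below is an illustration added here, not part of the original text; it uses the standard on-shell little-group counting, under which the graviton contributes 44 bosonic states, the 3-form 84, and the Majorana gravitino 128.

```python
from math import comb

D = 11  # spacetime dimension

# On-shell bosonic degrees of freedom (little-group SO(D-2) counting):
graviton = (D - 1) * (D - 2) // 2 - 1      # symmetric traceless tensor: 44
three_form = comb(D - 2, 3)                # antisymmetric 3-form: 84

# On-shell fermionic degrees of freedom of the Majorana gravitino:
# a Majorana spinor in 11D has 32 real components, halved on-shell,
# times (D - 3) from the gamma-traceless vector index.
gravitino = (D - 3) * 32 // 2              # 128

print(graviton, three_form, gravitino)     # 44 84 128
assert graviton + three_form == gravitino  # bosonic = fermionic, as stated above
```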
[ { "math_id": 0, "text": "\\mathcal N = 8" }, { "math_id": 1, "text": "\\text{SO}(8)" }, { "math_id": 2, "text": "\n\\{Q_\\alpha,Q_\\beta\\} = (C\\gamma)^\\mu_{\\alpha \\beta}P_\\mu + (C\\gamma)^{\\mu\\nu}_{\\alpha \\beta}Z_{\\mu\\nu} + (C\\gamma)^{\\mu\\nu\\rho\\sigma\\gamma}_{\\alpha \\beta}Z_{\\mu\\nu\\rho\\sigma\\gamma},\n" }, { "math_id": 3, "text": "C" }, { "math_id": 4, "text": "C\\gamma^{\\mu_1\\cdots \\mu_n}" }, { "math_id": 5, "text": "Z_{\\mu\\nu}" }, { "math_id": 6, "text": "Z_{\\mu\\nu\\rho\\sigma\\gamma}" }, { "math_id": 7, "text": "\nS = \\frac{1}{2\\kappa_{11}^2}\\int d^{11} x \\ e \\bigg[ R(\\omega) -\\bar \\psi_\\mu \\gamma^{\\mu\\nu\\rho}D_\\nu(\\tfrac{1}{2}(\\omega+\\hat \\omega))\\psi_\\rho - \\frac{1}{24}F_{\\mu\\nu\\rho\\sigma}F^{\\mu\\nu\\rho\\sigma}\n" }, { "math_id": 8, "text": "\n-\\frac{\\sqrt 2}{192} (\\bar \\psi_\\nu \\gamma^{\\alpha \\beta \\gamma \\delta \\nu \\rho}\\psi_\\rho+12 \\bar \\psi^\\gamma \\gamma^{\\alpha \\beta}\\psi^\\delta)(F_{\\alpha \\beta \\gamma \\delta}+\\hat F_{\\alpha \\beta \\gamma \\delta})\n" }, { "math_id": 9, "text": "\n-\\frac{2\\sqrt 2}{(144)^2}\\epsilon^{\\alpha \\beta \\gamma \\delta \\alpha'\\beta'\\gamma'\\delta'\\mu\\nu\\rho}F_{\\alpha\\beta \\gamma \\delta}F_{\\alpha'\\beta'\\gamma'\\delta'}A_{\\mu \\nu \\rho}\\bigg].\n" }, { "math_id": 10, "text": "e^a_\\mu" }, { "math_id": 11, "text": "\\kappa_{11}" }, { "math_id": 12, "text": "\n\\omega_{\\mu a b} = \\omega_{\\mu a b}(e)+K_{\\mu ab},\n" }, { "math_id": 13, "text": "\n\\hat \\omega_{\\mu a b} = \\omega_{\\mu ab} - \\tfrac{1}{8}\\bar \\psi_\\nu \\gamma^{\\nu \\rho}{}_{\\mu ab} \\psi_\\rho,\n" }, { "math_id": 14, "text": "\nK_{\\mu ab} = -\\tfrac{1}{4}(\\bar \\psi_\\mu \\gamma_b \\psi_a -\\bar \\psi_a \\gamma_\\mu \\psi_b + \\bar \\psi_b \\gamma_a \\psi_\\mu)+\\tfrac{1}{8} \\bar \\psi_\\nu \\gamma^{\\nu \\rho}{}_{\\mu ab}\\psi_\\rho,\n" }, { "math_id": 15, "text": "\n\\hat F_{\\mu \\nu \\rho \\sigma} = F_{\\mu\\nu\\rho \\sigma} +\\tfrac{3}{2}\\sqrt 2 \\bar \\psi_{[\\mu}\\gamma_{\\nu \\rho}\\psi_{\\sigma]}.\n" }, { "math_id": 16, "text": "\\omega_{\\mu ab}(e)" }, { "math_id": 17, "text": "K_{\\mu ab}" }, { "math_id": 18, "text": "D_\\nu(\\omega)" }, { "math_id": 19, "text": "\\omega" }, { "math_id": 20, "text": "\nD_\\mu(\\omega)\\psi_\\nu = \\partial_\\mu\\psi_\\nu+\\tfrac{1}{4}\\omega_\\mu^{ab}\\gamma_{ab}\\psi_\\nu,\n" }, { "math_id": 21, "text": "\\gamma_{ab} = \\gamma_{[a}\\gamma_{b]}" }, { "math_id": 22, "text": "\\gamma_a" }, { "math_id": 23, "text": "\\gamma_\\mu = e_\\mu^a\\gamma_a" }, { "math_id": 24, "text": "\n\\delta_s e^a_\\mu = \\tfrac{1}{2}\\bar \\epsilon \\gamma^a \\psi_\\mu,\n" }, { "math_id": 25, "text": "\n\\delta_s \\psi_\\mu = D_\\mu(\\hat \\omega)\\epsilon + \\tfrac{\\sqrt 2}{288}(\\gamma^{\\alpha \\beta \\gamma \\delta}{}_\\mu - 8 \\gamma^{\\beta \\gamma \\delta}\\delta^\\alpha_\\mu) \\hat F_{\\alpha \\beta \\gamma \\delta}\\epsilon,\n" }, { "math_id": 26, "text": "\n\\delta_s A_{\\mu \\nu \\rho} = -\\tfrac{3\\sqrt2}{4}\\bar \\epsilon \\gamma_{[\\mu\\nu}\\psi_{\\rho]},\n" }, { "math_id": 27, "text": "\\epsilon" }, { "math_id": 28, "text": "\\partial_\\mu \\epsilon" }, { "math_id": 29, "text": "A\\rightarrow -A" }, { "math_id": 30, "text": "g_{\\mu\\nu}\\rightarrow \\alpha^2 g_{\\mu\\nu}" }, { "math_id": 31, "text": "A_{\\mu\\nu\\rho}\\rightarrow \\alpha^3 A_{\\mu\\nu\\rho}" }, { "math_id": 32, "text": "AdS_4 \\times S^7" }, { "math_id": 33, "text": "AdS_7 \\times S^4" }, { "math_id": 34, "text": "AdS_4\\times S^7" }, { "math_id": 35, "text": "AdS_6 
\\times S^4" }, { "math_id": 36, "text": "\\text{SO}(5)\\times \\text{SU}(2)" } ]
https://en.wikipedia.org/wiki?curid=77139118
77142804
Metric projection
In mathematics, a metric projection is a function that maps each element of a metric space to the set of points nearest to that element in some fixed subset. Formal definition. Formally, let "X" be a metric space with distance metric "d", and let "M" be a fixed subset of "X". Then the metric projection associated with "M", denoted "pM", is the following set-valued function from "X" to "M": formula_0 Equivalently: formula_1 The elements in the set formula_2 are also called elements of best approximation. This term comes from constrained optimization: we want to find an element nearest to "x", under the constraint that the solution must be an element of "M". The function "pM" is also called an operator of best approximation. Chebyshev sets. In general, "pM" is set-valued, as for every "x" there may be many elements in "M" that attain the minimal distance to "x". In the special case in which "pM" is single-valued, the set "M" is called a Chebyshev set. As an example, if ("X","d") is a Euclidean space (Rn with the Euclidean distance), then a set "M" is a Chebyshev set if and only if it is closed and convex. Continuity. If "M" is a non-empty compact set, then the metric projection "pM" is upper semi-continuous, but might not be lower semi-continuous. But if "X" is a normed space and "M" is a finite-dimensional Chebyshev set, then "pM" is continuous. Moreover, if "X" is a Hilbert space and "M" is closed and convex, then "pM" is Lipschitz continuous with Lipschitz constant 1. Applications. Metric projections are used both to investigate theoretical questions in functional analysis and for practical approximation methods. They are also used in constrained optimization. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
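As a concrete illustration of the definition, the following sketch (the function name and data are chosen here for illustration, not taken from the article) computes the metric projection of a point onto a finite subset of the Euclidean plane; when the point is equidistant from two elements of "M", the projection is genuinely set-valued.

```python
import numpy as np

def metric_projection(x, M, tol=1e-12):
    """Return all points of the finite set M at minimal Euclidean distance from x."""
    M = np.asarray(M, dtype=float)
    d = np.linalg.norm(M - np.asarray(x, dtype=float), axis=1)
    return M[np.isclose(d, d.min(), atol=tol)]

# (0, 0) is equidistant from two points of M, so the projection is set-valued there.
M = [(1.0, 0.0), (-1.0, 0.0), (3.0, 4.0)]
print(metric_projection((0.0, 0.0), M))   # both (1, 0) and (-1, 0)
print(metric_projection((2.5, 3.0), M))   # single nearest point (3, 4)
```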
[ { "math_id": 0, "text": "p_M(x) = \\arg\\min_{y\\in M} d(x,y)" }, { "math_id": 1, "text": "p_M(x) = \\{y \\in M : d(x,y) \\leq d(x,y') \\forall y'\\in M \\}\n= \\{y \\in M : d(x,y) = d(x,M) \\}" }, { "math_id": 2, "text": "\\arg\\min_{y\\in M} d(x,y)" } ]
https://en.wikipedia.org/wiki?curid=77142804
7715335
Harpoon reaction
A harpoon reaction is a type of chemical reaction, first proposed by Michael Polanyi in 1920, whose mechanism (also called the harpooning mechanism) involves two neutral reactants undergoing an electron transfer over a relatively long distance to form ions that then attract each other closer together. For example, a metal atom and a halogen might react to form a cation and anion, respectively, leading to a combined metal halide. The main feature of these redox reactions is that, unlike most reactions, they have steric factors greater than unity; that is, they take place faster than predicted by collision theory. This is explained by the fact that the colliding particles have greater cross sections than the purely geometrical ones calculated from their radii, because when the particles are close enough, an electron "jumps" (hence the name) from one of the particles to the other, forming an anion and a cation which subsequently attract each other. Harpoon reactions usually take place in the gas phase, but they are also possible in condensed media. The predicted rate constant can be improved by using a better estimation of the steric factor. As a rough approximation, the largest separation Rx at which charge transfer can take place on energetic grounds is the distance at which the Coulombic attraction between the two oppositely charged ions is sufficient to provide the energy formula_0; it can be estimated by solving the equation formula_1 with formula_2, where formula_3 is the ionization potential of the metal and formula_4 is the electron affinity of the halogen. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
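A rough numerical illustration of this estimate is sketched below. The equation above gives Rx = qe²/ΔE0, which in practical units is about 14.4 eV·Å divided by ΔE0 in eV. The K + Br2 values used here are approximate textbook numbers quoted only for illustration; they are not taken from this article.

```python
# Illustrative estimate of the harpoon radius R_x = e^2 / (E_i - E_ea).
E_COULOMB = 14.40      # e^2/(4*pi*eps0) in eV·Angstrom
E_i  = 4.34            # ionization potential of K, eV (approximate)
E_ea = 2.55            # electron affinity of Br2, eV (approximate)

R_x = E_COULOMB / (E_i - E_ea)
print(f"harpoon radius R_x ≈ {R_x:.1f} Å")   # ≈ 8 Å, well beyond typical hard-sphere radii

# A crude steric factor would compare this with a gas-kinetic collision radius d:
d = 4.0                # assumed collision radius in Å, for illustration only
print(f"steric factor ≈ (R_x/d)^2 ≈ {(R_x/d)**2:.1f}")
```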
[ { "math_id": 0, "text": "\\Delta E_0" }, { "math_id": 1, "text": "\\frac{-q_e^2}{R_x}+\\Delta E_0 = 0" }, { "math_id": 2, "text": "\\Delta E_0 = E_i - E_{ea} " }, { "math_id": 3, "text": "E_{i}" }, { "math_id": 4, "text": "E_{ea}" } ]
https://en.wikipedia.org/wiki?curid=7715335
771562
Mikhael Gromov (mathematician)
Russian-French mathematician Mikhael Leonidovich Gromov (also Mikhail Gromov, Michael Gromov or Misha Gromov; ; born 23 December 1943) is a Russian-French mathematician known for his work in geometry, analysis and group theory. He is a permanent member of Institut des Hautes Études Scientifiques in France and a professor of mathematics at New York University. Gromov has won several prizes, including the Abel Prize in 2009 "for his revolutionary contributions to geometry". Biography. Mikhail Gromov was born on 23 December 1943 in Boksitogorsk, Soviet Union. His father Leonid Gromov was Russian-Slavic and his mother Lea was of Jewish heritage. Both were pathologists. His mother was the cousin of World Chess Champion Mikhail Botvinnik, as well as of the mathematician Isaak Moiseevich Rabinovich. Gromov was born during World War II, and his mother, who worked as a medical doctor in the Soviet Army, had to leave the front line in order to give birth to him. When Gromov was nine years old, his mother gave him the book "The Enjoyment of Mathematics" by Hans Rademacher and Otto Toeplitz, a book that piqued his curiosity and had a great influence on him. Gromov studied mathematics at Leningrad State University where he obtained a master's degree in 1965, a doctorate in 1969 and defended his postdoctoral thesis in 1973. His thesis advisor was Vladimir Rokhlin. Gromov married in 1967. In 1970, he was invited to give a presentation at the International Congress of Mathematicians in Nice, France. However, he was not allowed to leave the USSR. Still, his lecture was published in the conference proceedings. Disagreeing with the Soviet system, he had been thinking of emigrating since the age of 14. In the early 1970s he ceased publication, hoping that this would help his application to move to Israel. He changed his last name to that of his mother. He received a coded letter saying that, if he could get out of the Soviet Union, he could go to Stony Brook, where a position had been arranged for him. When the request was granted in 1974, he moved directly to New York and worked at Stony Brook. In 1981 he left Stony Brook University to join the faculty of University of Paris VI and in 1982 he became a permanent professor at the Institut des Hautes Études Scientifiques where he remains today. At the same time, he has held professorships at the University of Maryland, College Park from 1991 to 1996, and at the Courant Institute of Mathematical Sciences in New York since 1996. He adopted French citizenship in 1992. Work. Gromov's style of geometry often features a "coarse" or "soft" viewpoint, analyzing asymptotic or large-scale properties.[G00] He is also interested in mathematical biology, the structure of the brain and the thinking process, and the way scientific ideas evolve. Motivated by Nash and Kuiper's isometric embedding theorems and the results on immersions by Morris Hirsch and Stephen Smale, Gromov introduced the h-principle in various formulations. Modeled upon the special case of the Hirsch–Smale theory, he introduced and developed the general theory of "microflexible sheaves", proving that they satisfy an h-principle on open manifolds.[G69] As a consequence (among other results) he was able to establish the existence of positively curved and negatively curved Riemannian metrics on any open manifold whatsoever. 
His result is in counterpoint to the well-known topological restrictions (such as the Cheeger–Gromoll soul theorem or Cartan–Hadamard theorem) on "geodesically complete" Riemannian manifolds of positive or negative curvature. After this initial work, he developed further h-principles partly in collaboration with Yakov Eliashberg, including work building upon Nash and Kuiper's theorem and the Nash–Moser implicit function theorem. There are many applications of his results, including topological conditions for the existence of exact Lagrangian immersions and similar objects in symplectic and contact geometry. His well-known book "Partial Differential Relations" collects most of his work on these problems.[G86] Later, he applied his methods to complex geometry, proving certain instances of the "Oka principle" on deformation of continuous maps to holomorphic maps.[G89] His work initiated a renewed study of the Oka–Grauert theory, which had been introduced in the 1950s. Gromov and Vitali Milman gave a formulation of the concentration of measure phenomena.[GM83] They defined a "Lévy family" as a sequence of normalized metric measure spaces in which any asymptotically nonvanishing sequence of sets can be metrically thickened to include almost every point. This closely mimics the phenomena of the law of large numbers, and in fact the law of large numbers can be put into the framework of Lévy families. Gromov and Milman developed the basic theory of Lévy families and identified a number of examples, most importantly coming from sequences of Riemannian manifolds in which the lower bound of the Ricci curvature or the first eigenvalue of the Laplace–Beltrami operator diverge to infinity. They also highlighted a feature of Lévy families in which any sequence of continuous functions must be asymptotically almost constant. These considerations have been taken further by other authors, such as Michel Talagrand. Since the seminal 1964 publication of James Eells and Joseph Sampson on harmonic maps, various rigidity phenomena had been deduced from the combination of an existence theorem for harmonic mappings together with a vanishing theorem asserting that (certain) harmonic mappings must be totally geodesic or holomorphic. Gromov had the insight that the extension of this program to the setting of mappings into metric spaces would imply new results on discrete groups, following Margulis superrigidity. Richard Schoen carried out the analytical work to extend the harmonic map theory to the metric space setting; this was subsequently done more systematically by Nicholas Korevaar and Schoen, establishing extensions of most of the standard Sobolev space theory. A sample application of Gromov and Schoen's methods is the fact that lattices in the isometry group of the quaternionic hyperbolic space are arithmetic.[GS92] Riemannian geometry. In 1978, Gromov introduced the notion of almost flat manifolds.[G78] The famous quarter-pinched sphere theorem in Riemannian geometry says that if a complete Riemannian manifold has sectional curvatures which are all sufficiently close to a given positive constant, then M must be finitely covered by a sphere. In contrast, it can be seen by scaling that every closed Riemannian manifold has Riemannian metrics whose sectional curvatures are arbitrarily close to zero. 
Gromov showed that if the scaling possibility is broken by only considering Riemannian manifolds of a fixed diameter, then a closed manifold admitting such a Riemannian metric, with sectional curvatures sufficiently close to zero, must be finitely covered by a nilmanifold. The proof works by replaying the proofs of the Bieberbach theorem and Margulis lemma. Gromov's proof was given a careful exposition by Peter Buser and Hermann Karcher. In 1979, Richard Schoen and Shing-Tung Yau showed that the class of smooth manifolds which admit Riemannian metrics of positive scalar curvature is topologically rich. In particular, they showed that this class is closed under the operation of connected sum and of surgery in codimension at least three. Their proof used elementary methods of partial differential equations, in particular to do with the Green's function. Gromov and Blaine Lawson gave another proof of Schoen and Yau's results, making use of elementary geometric constructions.[GL80b] They also showed how purely topological results such as Stephen Smale's h-cobordism theorem could then be applied to draw conclusions such as the fact that every closed and simply-connected smooth manifold of dimension 5, 6, or 7 has a Riemannian metric of positive scalar curvature. They further introduced the new class of "enlargeable manifolds", distinguished by a condition in homotopy theory.[GL80a] They showed that Riemannian metrics of positive scalar curvature "cannot" exist on such manifolds. A particular consequence is that the torus cannot support any Riemannian metric of positive scalar curvature, which had been a major conjecture previously resolved by Schoen and Yau in low dimensions. In 1981, Gromov identified topological restrictions, based upon Betti numbers, on manifolds which admit Riemannian metrics of nonnegative sectional curvature.[G81a] The principal idea of his work was to combine Karsten Grove and Katsuhiro Shiohama's Morse theory for the Riemannian distance function, with control of the distance function obtained from the Toponogov comparison theorem, together with the Bishop–Gromov inequality on volume of geodesic balls. This resulted in topologically controlled covers of the manifold by geodesic balls, to which spectral sequence arguments could be applied to control the topology of the underlying manifold. The topology of lower bounds on sectional curvature is still not fully understood, and Gromov's work remains as a primary result. As an application of Hodge theory, Peter Li and Yau were able to apply their gradient estimates to find similar Betti number estimates which are weaker than Gromov's but allow the manifold to have convex boundary. In Jeff Cheeger's fundamental compactness theory for Riemannian manifolds, a key step in constructing coordinates on the limiting space is an injectivity radius estimate for closed manifolds. Cheeger, Gromov, and Michael Taylor localized Cheeger's estimate, showing how to use Bishop−Gromov volume comparison to control the injectivity radius in absolute terms by curvature bounds and volumes of geodesic balls.[CGT82] Their estimate has been used in a number of places where the construction of coordinates is an important problem. A particularly well-known instance of this is to show that Grigori Perelman's "noncollapsing theorem" for Ricci flow, which controls volume, is sufficient to allow applications of Richard Hamilton's compactness theory. 
Cheeger, Gromov, and Taylor applied their injectivity radius estimate to prove Gaussian control of the heat kernel, although these estimates were later improved by Li and Yau as an application of their gradient estimates. Gromov made foundational contributions to systolic geometry. Systolic geometry studies the relationship between size invariants (such as volume or diameter) of a manifold M and its topologically non-trivial submanifolds (such as non-contractible curves). In his 1983 paper "Filling Riemannian manifolds"[G83] Gromov proved that every essential manifold formula_0 with a Riemannian metric contains a closed non-contractible geodesic of length at most formula_1. Gromov−Hausdorff convergence and geometric group theory. In 1981, Gromov introduced the Gromov–Hausdorff metric, which endows the set of all metric spaces with the structure of a metric space.[G81b] More generally, one can define the Gromov-Hausdorff distance between two metric spaces, relative to the choice of a point in each space. Although this does not give a metric on the space of all metric spaces, it is sufficient in order to define "Gromov-Hausdorff convergence" of a sequence of pointed metric spaces to a limit. Gromov formulated an important compactness theorem in this setting, giving a condition under which a sequence of pointed and "proper" metric spaces must have a subsequence which converges. This was later reformulated by Gromov and others into the more flexible notion of an ultralimit.[G93] Gromov's compactness theorem had a deep impact on the field of geometric group theory. He applied it to understand the asymptotic geometry of the word metric of a group of polynomial growth, by taking the limit of well-chosen rescalings of the metric. By tracking the limits of isometries of the word metric, he was able to show that the limiting metric space has unexpected continuities, and in particular that its isometry group is a Lie group.[G81b] As a consequence he was able to settle the Milnor-Wolf conjecture as posed in the 1960s, which asserts that any such group is virtually nilpotent. Using ultralimits, similar asymptotic structures can be studied for more general metric spaces.[G93] Important developments on this topic were given by Bruce Kleiner, Bernhard Leeb, and Pierre Pansu, among others. Another consequence is Gromov's compactness theorem, stating that the set of compact Riemannian manifolds with Ricci curvature ≥ "c" and diameter ≤ "D" is relatively compact in the Gromov–Hausdorff metric.[G81b] The possible limit points of sequences of such manifolds are Alexandrov spaces of curvature ≥ "c", a class of metric spaces studied in detail by Burago, Gromov and Perelman in 1992.[BGP92] Along with Eliyahu Rips, Gromov introduced the notion of hyperbolic groups.[G87] Symplectic geometry. Gromov's theory of pseudoholomorphic curves is one of the foundations of the modern study of symplectic geometry.[G85] Although he was not the first to consider pseudo-holomorphic curves, he uncovered a "bubbling" phenomenon paralleling Karen Uhlenbeck's earlier work on Yang–Mills connections, and Uhlenbeck and Jonathan Sacks's work on harmonic maps. In the time since Sacks, Uhlenbeck, and Gromov's work, such bubbling phenomena have been found in a number of other geometric contexts. The corresponding compactness theorem encoding the bubbling allowed Gromov to arrive at a number of analytically deep conclusions on existence of pseudo-holomorphic curves. 
A particularly famous result of Gromov's, arrived at as a consequence of the existence theory and the monotonicity formula for minimal surfaces, is the "non-squeezing theorem," which provided a striking qualitative feature of symplectic geometry. Following ideas of Edward Witten, Gromov's work is also fundamental for Gromov-Witten theory, which is a widely studied topic reaching into string theory, algebraic geometry, and symplectic geometry. From a different perspective, Gromov's work was also inspirational for much of Andreas Floer's work. Yakov Eliashberg and Gromov developed some of the basic theory for symplectic notions of convexity.[EG91] They introduce various specific notions of convexity, all of which are concerned with the existence of one-parameter families of diffeomorphisms which contract the symplectic form. They show that convexity is an appropriate context for an h-principle to hold for the problem of constructing certain symplectomorphisms. They also introduced analogous notions in contact geometry; the existence of convex contact structures was later studied by Emmanuel Giroux. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Publications. Books. Major articles &lt;templatestyles src="Refbegin/styles.css" /&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt; External links. Media related to Mikhael Gromov at Wikimedia Commons
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "C(n)\\operatorname{Vol}(M)^{1/n}" } ]
https://en.wikipedia.org/wiki?curid=771562
77173325
Equivalent radius
Radius of a circle or sphere equivalent to a non-circular or non-spherical object In applied sciences, the equivalent radius (or mean radius) is the radius of a circle or sphere with the same perimeter, area, or volume as a non-circular or non-spherical object. The equivalent diameter (or mean diameter) (formula_0) is twice the equivalent radius. Perimeter equivalent. The perimeter of a circle of radius "R" is formula_1. Given the perimeter of a non-circular object "P", one can calculate its perimeter-equivalent radius by setting formula_2 or, alternatively: formula_3 For example, a square of side "L" has a perimeter of formula_4. Setting that perimeter to be equal to that of a circle implies that formula_5 Applications: Area equivalent. The area of a circle of radius "R" is formula_6. Given the area of a non-circular object "A", one can calculate its area-equivalent radius by setting formula_7 or, alternatively: formula_8 Often the area considered is that of a cross section. For example, a square of side length "L" has an area of formula_9. Setting that area to be equal to that of a circle implies that formula_10 Similarly, an ellipse with semi-major axis formula_11 and semi-minor axis formula_12 has mean radius formula_13. For a circle, where formula_14, this simplifies to formula_15. Applications: The hydraulic diameter of a circular pipe of radius "R" is formula_16, as one would expect. This is equivalent to the above definition of the 2D mean radius. However, for historical reasons, the hydraulic radius is defined as the cross-sectional area of a pipe "A", divided by its wetted perimeter "P", which leads to formula_17, and the hydraulic radius is "half" of the 2D mean radius. Volume equivalent. The volume of a sphere of radius "R" is formula_19. Given the volume of a non-spherical object "V", one can calculate its volume-equivalent radius by setting formula_20 or, alternatively: formula_21 For example, a cube of side length "L" has a volume of formula_22. Setting that volume to be equal to that of a sphere implies that formula_23 Similarly, a tri-axial ellipsoid with axes formula_11, formula_12 and formula_24 has mean radius formula_25. The formula for a rotational ellipsoid is the special case where formula_14. Likewise, an oblate spheroid or rotational ellipsoid with axes formula_11 and formula_24 has a mean radius of formula_26. For a sphere, where formula_27, this simplifies to formula_15. Applications: Other equivalences. The "authalic radius" is a surface-area-equivalent radius for solid figures such as an ellipsoid. The osculating circle and osculating sphere define curvature-equivalent radii at a particular point of tangency for plane figures and solid figures, respectively. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
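The equivalent radii defined above are straightforward to compute directly. The short script below is an illustrative sketch added here, not part of the article; it reproduces the square and cube examples and evaluates the volumetric mean radius of an oblate spheroid using Earth's equatorial and polar radii.

```python
from math import pi, sqrt

def perimeter_equivalent_radius(P):
    return P / (2 * pi)

def area_equivalent_radius(A):
    return sqrt(A / pi)

def volume_equivalent_radius(V):
    return (3 * V / (4 * pi)) ** (1 / 3)

L = 1.0
print(perimeter_equivalent_radius(4 * L))   # ≈ 0.6366 L
print(area_equivalent_radius(L ** 2))       # ≈ 0.5642 L
print(volume_equivalent_radius(L ** 3))     # ≈ 0.6204 L

# Volumetric mean radius of an oblate spheroid, (a^2 c)^(1/3):
# with Earth's equatorial and polar radii this gives the familiar 6371 km.
a, c = 6378.1, 6356.8
print((a * a * c) ** (1 / 3))               # ≈ 6371.0
```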
[ { "math_id": 0, "text": "D" }, { "math_id": 1, "text": "2 \\pi R" }, { "math_id": 2, "text": "P = 2\\pi R_\\text{mean}" }, { "math_id": 3, "text": "R_\\text{mean} = \\frac{P}{2\\pi}" }, { "math_id": 4, "text": "4L" }, { "math_id": 5, "text": "R_\\text{mean} = \\frac{2L}{\\pi} \\approx 0.6366 L" }, { "math_id": 6, "text": "\\pi R^2" }, { "math_id": 7, "text": "A = \\pi R^2_\\text{mean}" }, { "math_id": 8, "text": "R_\\text{mean} = \\sqrt{\\frac{A}{\\pi}}" }, { "math_id": 9, "text": "L^2" }, { "math_id": 10, "text": "R_\\text{mean} = \\sqrt{\\frac{1}{\\pi}} L \\approx 0.3183 L" }, { "math_id": 11, "text": "a" }, { "math_id": 12, "text": "b" }, { "math_id": 13, "text": "R_\\text{mean}=\\sqrt{a \\cdot b}" }, { "math_id": 14, "text": "a=b" }, { "math_id": 15, "text": "R_\\text{mean}=a" }, { "math_id": 16, "text": "D_\\text{H} = \\frac{4 \\pi R^2}{2 \\pi R} = 2R" }, { "math_id": 17, "text": "D_\\text{H} = 4 R_\\H" }, { "math_id": 18, "text": "D = 2 \\sqrt{\\frac{A}{\\pi}}" }, { "math_id": 19, "text": "\\frac{4}{3}\\pi R^3" }, { "math_id": 20, "text": "V = \\frac{4}{3}\\pi R^3_\\text{mean}" }, { "math_id": 21, "text": "R_\\text{mean} = \\sqrt[3]{\\frac{3V}{4\\pi}}" }, { "math_id": 22, "text": "L^3" }, { "math_id": 23, "text": "R_\\text{mean} = \\sqrt[3]{\\frac{3}{4\\pi}} L \\approx 0.6204 L" }, { "math_id": 24, "text": "c" }, { "math_id": 25, "text": "R_\\text{mean}=\\sqrt[3]{a \\cdot b \\cdot c}" }, { "math_id": 26, "text": "R_\\text{mean}=\\sqrt[3]{a^{2} \\cdot c }" }, { "math_id": 27, "text": "a=b=c" }, { "math_id": 28, "text": "R=\\sqrt[3]{6378.1^{2}\\cdot6356.8}=6371.0\\text{ km}" } ]
https://en.wikipedia.org/wiki?curid=77173325
77175631
Kompaneyets equation
Kompaneyets equation refers to a non-relativistic, Fokker–Planck-type kinetic equation for the photon number density of photons interacting with an electron gas via Compton scattering, first derived by Alexander Kompaneyets in 1949 and published in 1957 after declassification. The Kompaneyets equation describes how an initial photon distribution relaxes to the equilibrium Bose–Einstein distribution. Kompaneyets pointed out that the radiation field on its own cannot reach the equilibrium distribution, since Maxwell's equations are linear, but must exchange energy with the electron gas. The Kompaneyets equation has been used as a basis for analysis of the Sunyaev–Zeldovich effect. Mathematical description. Consider a non-relativistic electron bath that is at an equilibrium temperature formula_0, i.e., formula_1, where formula_2 is the electron mass. Let there be a low-frequency radiation field that satisfies the soft-photon approximation, i.e., formula_3 where formula_4 is the photon frequency. Then, the energy exchange in any collision between photon and electron will be small. Assuming homogeneity and isotropy and expanding the collision integral of the Boltzmann equation in terms of small energy exchange, one obtains the Kompaneyets equation. The Kompaneyets equation for the photon number density formula_5 reads formula_6 where formula_7 is the total Thomson cross-section and formula_8 is the electron number density; formula_9 is the Compton range or the scattering mean free path. As is evident, the equation can be written in the form of the continuity equation formula_10 If we introduce the rescalings formula_11 the equation can be brought to the form formula_12 The Kompaneyets equation conserves the photon number formula_13 where formula_14 is a sufficiently large volume, since the energy exchange between photon and electron is small. Furthermore, the equilibrium distribution of the Kompaneyets equation is the Bose–Einstein distribution for the photon gas, formula_15 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
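A quick symbolic check, added here as an illustration rather than taken from the source, confirms that the Bose–Einstein distribution formula_15 makes the bracketed flux in the rescaled equation vanish, so it is indeed a stationary solution.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
n_eq = 1 / (sp.exp(x) - 1)          # Bose-Einstein occupation number

# The bracket in the rescaled Kompaneyets equation: dn/dx + n^2 + n.
flux_bracket = sp.diff(n_eq, x) + n_eq**2 + n_eq
print(sp.simplify(flux_bracket))    # 0, so n_eq gives zero photon flux
```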
[ { "math_id": 0, "text": "T_e" }, { "math_id": 1, "text": "k_B T_e\\ll m_e c^2" }, { "math_id": 2, "text": "m_e" }, { "math_id": 3, "text": "\\hbar\\omega \\ll m_ec^2" }, { "math_id": 4, "text": "\\omega" }, { "math_id": 5, "text": "n(\\omega,t)" }, { "math_id": 6, "text": "\\frac{\\partial n}{\\partial t} = \\frac{\\sigma_Tn_e\\hbar}{m_ec}\\frac{1}{\\omega^2}\\frac{\\partial}{\\partial\\omega}\\left[\\omega^4\\left(\\frac{k_BT_e}{\\hbar}\\frac{\\partial n}{\\partial\\omega}+ n^2+n\\right)\\right]" }, { "math_id": 7, "text": "\\sigma_T" }, { "math_id": 8, "text": "n_e" }, { "math_id": 9, "text": "\\lambda_e = 1/(n_e\\sigma_T)" }, { "math_id": 10, "text": "\\frac{\\partial n}{\\partial t} + \\frac{1}{\\omega^2}\\frac{\\partial }{\\partial \\omega}(\\omega^2 j)=0,\\quad j = -\\frac{\\sigma_Tn_e\\hbar}{m_ec}\\omega^2\\left(\\frac{k_BT_e}{\\hbar}\\frac{\\partial n}{\\partial\\omega}+ n^2+n\\right)." }, { "math_id": 11, "text": "\\tau = \\frac{\\sigma_Tn_ek_B T_e}{m_e c} t, \\quad x = \\frac{\\hbar\\omega}{k_B T_e}" }, { "math_id": 12, "text": "\\frac{\\partial n}{\\partial \\tau} = \\frac{1}{x^2}\\frac{\\partial}{\\partial x}\\left[x^4\\left(\\frac{\\partial n}{\\partial x}+ n^2+n\\right)\\right]." }, { "math_id": 13, "text": "N= \\frac{Vk_B^3T_e^3}{\\pi^2c^3\\hbar^3}\\int_0^\\infty n\\,x^2dx" }, { "math_id": 14, "text": "V" }, { "math_id": 15, "text": "n_{\\mathrm{eq}} = \\frac{1}{e^{x}-1}." } ]
https://en.wikipedia.org/wiki?curid=77175631
7717738
Contact resistance
Electrical resistance attributed to contacting interfaces Electrical contact resistance (ECR, or simply contact resistance) is resistance to the flow of electric current caused by incomplete contact of the surfaces through which the current is flowing, and by films or oxide layers on the contacting surfaces. It occurs at electrical connections such as switches, connectors, breakers, contacts, and measurement probes. Contact resistance values are typically small (in the microohm to milliohm range). Contact resistance can cause significant voltage drops and heating in circuits with high current. Because contact resistance adds to the intrinsic resistance of the conductors, it can cause significant measurement errors when exact resistance values are needed. Contact resistance may vary with temperature. It may also vary with time (most often decreasing) in a process known as resistance creep. Electrical contact resistance is also called "interface resistance", "transitional resistance", or the "correction term". "Parasitic resistance" is a more general term, of which contact resistance is usually assumed to be a major component. William Shockley introduced the idea of a potential drop on an injection electrode to explain the difference between experimental results and the gradual channel approximation model. Measurement methods. Because contact resistance is usually comparatively small, it can be difficult to measure, and four-terminal measurement gives better results than a simple two-terminal measurement made with an ohmmeter. Specific contact resistance can be obtained by multiplying by contact area. Experimental characterization. For experimental characterization, a distinction must be made between contact resistance evaluation in two-electrode systems (for example, diodes) and three-electrode systems (for example, transistors). In two-electrode systems, specific contact resistivity is experimentally defined as the slope of the I–V curve at "V" = 0: formula_0 where formula_1 is the current density, or current per area. The units of specific contact resistivity are therefore typically ohm-square metres, or Ω⋅m². When the current is a linear function of the voltage, the device is said to have ohmic contacts. Inductive and capacitive methods could be used in principle to measure an intrinsic impedance without the complication of contact resistance. In practice, direct current methods are more typically used to determine resistance. Three-electrode systems such as transistors require more complicated methods for approximating the contact resistance. The most common approach is the transmission line model (TLM). Here, the total device resistance formula_2 is plotted as a function of the channel length: formula_3 where formula_4 and formula_5 are contact and channel resistances, respectively, formula_6 is the ratio of channel length to width, formula_7 is gate insulator capacitance (per unit of area), formula_8 is carrier mobility, and formula_9 and formula_10 are gate-source and drain-source voltages. Therefore, the linear extrapolation of total resistance to zero channel length provides the contact resistance. The slope of the linear function is related to the channel transconductance and can be used for estimation of the "contact-resistance-free" carrier mobility. The approximations used here (linear potential drop across the channel region, constant contact resistance, ...) sometimes lead to a channel-dependent contact resistance. 
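The TLM extraction described above amounts to a straight-line fit of total resistance against channel length. The sketch below uses synthetic, made-up data purely for illustration; the intercept at zero channel length estimates the contact resistance and the slope the channel contribution.

```python
import numpy as np

# Synthetic TLM data (illustrative only): R_tot = R_c + slope * L, plus noise.
L_um  = np.array([2.0, 5.0, 10.0, 20.0, 50.0])            # channel lengths, um
R_tot = np.array([120.0, 176.0, 272.0, 465.0, 1040.0])    # measured resistance, ohm

slope, intercept = np.polyfit(L_um, R_tot, 1)
print(f"contact resistance R_c ~ {intercept:.0f} ohm")     # intercept at L = 0
print(f"channel resistance per unit length ~ {slope:.1f} ohm/um")
```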
Besides the TLM, the gated four-probe measurement and the modified time-of-flight (TOF) method have also been proposed. Direct methods able to measure the potential drop on the injection electrode are Kelvin probe force microscopy (KFM) and electric-field-induced second harmonic generation. In the semiconductor industry, Cross-Bridge Kelvin Resistor (CBKR) structures are the most widely used test structures for characterizing metal–semiconductor contacts in the planar devices of VLSI technology. During the measurement, a current (formula_11) is forced between contacts 1 and 2 and the potential difference is measured between contacts 3 and 4. The contact resistance formula_12 can then be calculated as formula_13. Mechanisms. For given physical and mechanical material properties, parameters that govern the magnitude of electrical contact resistance (ECR) and its variation at an interface relate primarily to surface structure and applied load (Contact mechanics). Surfaces of metallic contacts generally exhibit an external layer of oxide material and adsorbed water molecules, which lead to capacitor-type junctions at weakly contacting asperities and resistor-type contacts at strongly contacting asperities, where sufficient pressure is applied for asperities to penetrate the oxide layer, forming metal-to-metal contact patches. If a contact patch is sufficiently small, with dimensions comparable to or smaller than the mean free path of electrons, resistance at the patch can be described by the Sharvin mechanism, whereby electron transport can be described by ballistic conduction. Generally, over time, contact patches expand and the contact resistance at an interface relaxes, particularly at weakly contacting surfaces, through current-induced welding and dielectric breakdown. This process is also known as resistance creep. The coupling of surface chemistry, contact mechanics and charge transport mechanisms needs to be considered in the mechanistic evaluation of ECR phenomena. Quantum limit. When a conductor has spatial dimensions close to formula_14, where formula_15 is the Fermi wavevector of the conducting material, Ohm's law no longer holds. These small devices are called quantum point contacts. Their conductance must be an integer multiple of the value formula_16, where formula_17 is the elementary charge and formula_18 is the Planck constant. Quantum point contacts behave more like waveguides than the classical wires of everyday life and may be described by the Landauer scattering formalism. Point-contact tunneling is an important technique for characterizing superconductors. Other forms of contact resistance. Measurements of thermal conductivity are also subject to contact resistance, with particular significance in heat transport through granular media. Similarly, a drop in hydrostatic pressure (analogous to electrical voltage) occurs when fluid flow transitions from one channel to another. Significance. Bad contacts are the cause of failure or poor performance in a wide variety of electrical devices. For example, corroded jumper cable clamps can frustrate attempts to start a vehicle that has a low battery. Dirty or corroded contacts on a fuse or its holder can give the false impression that the fuse is blown. A sufficiently high contact resistance can cause substantial heating in a high-current device. Unpredictable or noisy contacts are a major cause of the failure of electrical equipment. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "r_\\text{c} = \\left\\{ \\frac{\\partial V}{\\partial J} \\right\\}_{V=0}" }, { "math_id": 1, "text": "J" }, { "math_id": 2, "text": "R_\\text{tot}" }, { "math_id": 3, "text": "R_\\text{tot} = R_\\text{c} + R_\\text{ch} = R_\\text{c} + \\frac{L}{W C \\mu \\left(V_\\text{gs} - V_\\text{ds}\\right)}" }, { "math_id": 4, "text": "R_\\text{c}" }, { "math_id": 5, "text": "R_\\text{ch}" }, { "math_id": 6, "text": "L/W" }, { "math_id": 7, "text": "C" }, { "math_id": 8, "text": "\\mu" }, { "math_id": 9, "text": "V_\\text{gs}" }, { "math_id": 10, "text": "V_\\text{ds}" }, { "math_id": 11, "text": "I" }, { "math_id": 12, "text": "R_\\text{k}" }, { "math_id": 13, "text": "R_\\text{k}=V_{34}/I" }, { "math_id": 14, "text": "2\\pi/k_\\text{F}" }, { "math_id": 15, "text": "k_\\text{F}" }, { "math_id": 16, "text": "2e^2/h" }, { "math_id": 17, "text": "e" }, { "math_id": 18, "text": "h" } ]
https://en.wikipedia.org/wiki?curid=7717738
77183860
Zeldovich regularization
Zeldovich regularization refers to a regularization method for calculating divergent integrals and divergent series that was first introduced by Yakov Zeldovich in 1961. Zeldovich was originally interested in calculating the norm of the Gamow wave function, which is divergent since there is an outgoing spherical wave. Zeldovich regularization uses a Gaussian-type regularization and is defined, for divergent integrals, by formula_0 and, for divergent series, by formula_1 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
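As a small numerical illustration, added here and not taken from the article, applying the series form of the regularization to Grandi's series 1 − 1 + 1 − ... gives a regularized value that approaches 1/2 as the regulator tends to zero from above.

```python
import numpy as np

def zeldovich_sum(coeffs, alpha):
    """Gaussian-regularized sum: sum over n of c_n * exp(-alpha * n^2)."""
    n = np.arange(len(coeffs))
    return np.sum(np.asarray(coeffs) * np.exp(-alpha * n**2))

# Illustrative example: Grandi's series 1 - 1 + 1 - ...
# The regularized value tends to 1/2 as alpha -> 0+.
c = [(-1) ** n for n in range(20000)]
for alpha in (1e-1, 1e-2, 1e-3, 1e-4):
    print(alpha, zeldovich_sum(c, alpha))
```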
[ { "math_id": 0, "text": " \\int_0^\\infty f(x) dx \\equiv \\lim_{\\alpha\\to 0^+}\\int_0^\\infty f(x) e^{-\\alpha x^2} dx." }, { "math_id": 1, "text": "\\sum_n c_n \\equiv \\lim_{\\alpha\\to 0^+}\\sum_n c_n e^{-\\alpha n^2}." } ]
https://en.wikipedia.org/wiki?curid=77183860
7719010
Armitage–Doll multistage model of carcinogenesis
Statistical model in biology The Armitage–Doll model is a statistical model of carcinogenesis, proposed in 1954 by Peter Armitage and Richard Doll, in which a series of discrete mutations result in cancer. The original paper has recently been reprinted with a set of commentary articles. The model. The rate of incidence and mortality from a wide variety of common cancers follows a power law: someone's risk of developing a cancer increases with a power of their age. The model is very simple, and reads formula_0 in Ashley's notation. Their interpretation was that a series of formula_1 mutations were required to initiate a tumour. This is now widely accepted, and part of the mainstream view of carcinogenesis. In their original paper, they found that formula_1 was typically between 5 and 7. Other cancers were later discovered to require fewer mutations: retinoblastoma, typically emerging in early childhood, can emerge from as few as 1 or 2 mutations, depending on pre-existing genetic factors. History. This was some of the earliest strong evidence that cancer was the result of an accumulation of mutations. With their 1954 paper, Armitage and Doll began a line of research that led to Knudson's two-hit hypothesis and thus the discovery of tumour suppressor genes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
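Because the model predicts that incidence grows as a power of age, the exponent can be read off as the slope of a log-log fit, with the number of rate-limiting mutations being that slope plus one. The sketch below runs such a fit on synthetic data generated with r = 6; both the data and the code are purely illustrative and not from the original paper.

```python
import numpy as np

# Synthetic incidence data following rate ~ t^(r-1) with r = 6 (illustration only).
rng = np.random.default_rng(0)
age = np.array([40, 45, 50, 55, 60, 65, 70, 75], dtype=float)
rate = 1e-10 * age**5 * rng.lognormal(0.0, 0.05, size=age.size)

# On log-log axes the model is a straight line with slope r - 1.
slope, intercept = np.polyfit(np.log(age), np.log(rate), 1)
print(f"estimated number of rate-limiting mutations r ~ {slope + 1:.1f}")
```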
[ { "math_id": 0, "text": "\n\\mathrm{rate} = \\frac{N p_1 p_2 p_3 \\cdots p_r}{(r-1)!} t^{r-1}\n" }, { "math_id": 1, "text": "r" } ]
https://en.wikipedia.org/wiki?curid=7719010
77190923
Sullivan vortex
Solution to the Navier–Stokes equations In fluid dynamics, the Sullivan vortex is an exact solution of the Navier–Stokes equations describing a two-celled vortex in an axially strained flow that was discovered by Roger D. Sullivan in 1959. At large radial distances, the Sullivan vortex resembles a Burgers vortex; however, it exhibits a two-cell structure near the center, creating a downdraft at the axis and an updraft at a finite radial location. Specifically, in the outer cell the fluid spirals inward and upward, while in the inner cell the fluid spirals down at the axis and spirals upwards at the boundary with the outer cell. Due to its multi-celled structure, the vortex is used to model tornadoes and large-scale complex vortex structures in turbulent flows. Flow description. Consider the velocity components formula_0 of an incompressible fluid in cylindrical coordinates in the form formula_1 formula_2 formula_3 where formula_4 and formula_5 is the strain rate of the axisymmetric stagnation-point flow. The Burgers vortex solution is simply given by formula_6 and formula_7. Sullivan showed that there exists a non-trivial solution for formula_8 from the Navier–Stokes equations, accompanied by a function formula_9, that is not the Burgers vortex. The solution is given by formula_10 formula_11 where formula_12 is the exponential integral. For formula_13, the function formula_9 behaves like formula_14 with formula_15 being the Euler–Mascheroni constant, whereas for large values of formula_16, we have formula_17. The boundary between the inner cell and the outer cell is given by formula_18, which is obtained by solving the equation formula_19 Within the inner cell, the transition between the downdraft and the updraft occurs at formula_20, which is obtained by solving the equation formula_21 The vorticity components of the Sullivan vortex are given by formula_22 The pressure field formula_23 with respect to its central value formula_24 is given by formula_25 where formula_26 is the fluid density. The first term on the right-hand side corresponds to the potential flow motion, i.e., formula_27, whereas the remaining two terms originate from the motion associated with the Sullivan vortex. Sullivan vortex in cylindrical stagnation surfaces. An explicit solution of the Navier–Stokes equations for the Sullivan vortex in stretched cylindrical stagnation surfaces was obtained by P. Rajamanickam and A. D. Weiss and is given by formula_28 formula_2 formula_3 where formula_29, formula_30 formula_31 Note that the location of the stagnation cylindrical surface is no longer given by formula_32 (or equivalently formula_33), but is given by formula_34 where formula_35 is the principal branch of the Lambert W function. Thus, formula_36 here should be interpreted as a measure of the volumetric source strength formula_37 and not as the location of the stagnation surface. Here, the vorticity components of the Sullivan vortex are given by formula_38 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
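The two radial locations quoted above are easy to reproduce numerically. The sketch below is an illustration added here, not part of the article: it solves v_r = 0, which with the given f reduces to η = 3(1 − e^−η) ≈ 2.821, and evaluates η = ln 3 ≈ 1.099, the radius at which the axial velocity 2αz[1 − 3e^−η] changes sign inside the inner cell.

```python
import numpy as np
from scipy.optimize import brentq

# Boundary between inner and outer cells: v_r = 0 with f(eta) = 3*(1 - exp(-eta))
# reduces to eta = 3*(1 - exp(-eta)); bracket chosen to exclude the trivial root 0.
eta_cell = brentq(lambda eta: eta - 3.0 * (1.0 - np.exp(-eta)), 1.0, 5.0)
print(f"cell boundary: eta = {eta_cell:.3f}")                  # about 2.821

# Inside the inner cell the axial velocity 2*alpha*z*(1 - 3*exp(-eta)) changes
# sign where exp(-eta) = 1/3, i.e. eta = ln 3.
print(f"downdraft/updraft transition: eta = {np.log(3.0):.3f}")  # about 1.099
```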
[ { "math_id": 0, "text": "(v_r,v_\\theta,v_z)" }, { "math_id": 1, "text": "v_r=- \\alpha r + \\frac{2\\nu}{r} f(\\eta)," }, { "math_id": 2, "text": "v_z=2\\alpha z\\left[1-f'(\\eta)\\right]," }, { "math_id": 3, "text": "v_\\theta=\\frac{\\Gamma}{2\\pi r}\\frac{g(\\eta)}{g(\\infty)}," }, { "math_id": 4, "text": "\\eta =\\alpha r^2/(2\\nu)" }, { "math_id": 5, "text": "\\alpha>0" }, { "math_id": 6, "text": "f(\\eta)=0" }, { "math_id": 7, "text": "g(\\eta)/g(\\infty)=1-e^{-\\eta}" }, { "math_id": 8, "text": "f(\\eta)" }, { "math_id": 9, "text": "g(\\eta)" }, { "math_id": 10, "text": "f(\\eta) = 3 (1-e^{-\\eta})," }, { "math_id": 11, "text": "g(\\eta)= \\int_0^\\eta t^3 e^{-t- 3\\operatorname{Ei}(-t)} \\, \\mathrm{d} t" }, { "math_id": 12, "text": "\\operatorname{Ei}" }, { "math_id": 13, "text": "\\eta\\ll 1" }, { "math_id": 14, "text": "g=e^{-3\\gamma}(\\eta+\\eta^2+\\cdots)" }, { "math_id": 15, "text": "\\gamma" }, { "math_id": 16, "text": "\\eta" }, { "math_id": 17, "text": "g(\\infty)=6.7088" }, { "math_id": 18, "text": "\\eta=2.821" }, { "math_id": 19, "text": "v_r=0." }, { "math_id": 20, "text": "\\eta=1.099" }, { "math_id": 21, "text": "\\partial v_z/\\partial r=0." }, { "math_id": 22, "text": "\\omega_r=0,\\quad \\omega_\\theta= - \\frac{6\\alpha^2}{\\nu} rz e^{-\\alpha r^2/2\\nu}, \\quad \\omega_z=\\frac{\\alpha\\Gamma}{2\\pi\\nu} \\frac{\\eta^3e^{-\\eta- 3\\operatorname{Ei}(-\\eta)}}{g(\\infty)}." }, { "math_id": 23, "text": "p" }, { "math_id": 24, "text": "p_0" }, { "math_id": 25, "text": "\\frac{p-p_0}{\\rho} = - \\frac{\\alpha^2}{2}(r^2+4z^2) - \\frac{18\\nu^2}{r^2}(1-e^{-\\alpha r^2/2\\nu}) + \\int_0^r \\frac{v_\\theta^2}{r}dr," }, { "math_id": 26, "text": "\\rho" }, { "math_id": 27, "text": "(v_r,v_\\theta,v_z) = (-\\alpha r,0,2\\alpha z)" }, { "math_id": 28, "text": "v_r=- \\alpha \\left(r-\\frac{r_s^2}{r}\\right) + \\frac{2\\nu}{r} f(\\eta)," }, { "math_id": 29, "text": "\\eta=\\alpha r^2/(2\\nu)" }, { "math_id": 30, "text": "f(\\eta) = (3-\\eta_s) (1-e^{-\\eta})," }, { "math_id": 31, "text": "g(\\eta)=\\int_0^\\eta t^3 e^{-t-(3-\\eta_s) \\operatorname{Ei}(-t)} \\, \\mathrm{d} t." }, { "math_id": 32, "text": "r=r_s" }, { "math_id": 33, "text": "\\eta=\\eta_s" }, { "math_id": 34, "text": "\\eta_{\\operatorname{stag}} = 3 + W_0[e^{-3}(\\eta_s-3)]" }, { "math_id": 35, "text": "W_0" }, { "math_id": 36, "text": "r_s" }, { "math_id": 37, "text": "Q=2\\pi \\alpha r_s^2" }, { "math_id": 38, "text": "\\omega_r=0,\\quad \\omega_\\theta= - \\frac{2\\alpha^2}{\\nu}\\left(3-\\frac{\\alpha r_s^2}{2\\nu}\\right) rz e^{-\\alpha r^2/2\\nu}, \\quad \\omega_z=\\frac{\\alpha\\Gamma}{2\\pi\\nu} \\frac{\\eta^3 e^{-\\eta+(\\eta_s- 3)\\operatorname{Ei}(-\\eta)}}{g(\\infty)}." } ]
https://en.wikipedia.org/wiki?curid=77190923
77192372
Persistent random walk
Modification of the random walk model The persistent random walk is a modification of the random walk model. A population of particles is distributed on a line, each moving with constant speed formula_0, and each particle's velocity may be reversed at any moment. If the reversal time is exponentially distributed as formula_1, then the population density formula_2 evolves according to formula_3 which is the telegrapher's equation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
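A direct Monte Carlo simulation of the process, included here only as an illustrative sketch, shows the long-time diffusive behaviour implied by the telegrapher's equation, whose diffusion limit has diffusivity c0²τ/2.

```python
import numpy as np

# Monte Carlo sketch (illustrative, not from the article): particles move at
# speed +/- c0 and reverse velocity at Poisson times with mean waiting time tau.
rng = np.random.default_rng(1)
c0, tau, dt, T, N = 1.0, 0.5, 0.01, 50.0, 20000

x = np.zeros(N)
v = c0 * rng.choice([-1.0, 1.0], size=N)
for _ in range(int(T / dt)):
    x += v * dt
    flip = rng.random(N) < dt / tau        # reversal probability per time step
    v[flip] = -v[flip]

# Long-time behaviour is diffusive with D = c0**2 * tau / 2 (from the
# telegrapher's equation above), so Var[x] is approximately 2*D*T.
print(np.var(x), c0**2 * tau * T)
```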
[ { "math_id": 0, "text": "c_0" }, { "math_id": 1, "text": "e^{-t/\\tau}/\\tau" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "(2\\tau^{-1} \\partial_t + \\partial_{tt} - c_0^2 \\partial_{xx}) n = 0" } ]
https://en.wikipedia.org/wiki?curid=77192372
771965
Strassen algorithm
Recursive algorithm for matrix multiplication In linear algebra, the Strassen algorithm, named after Volker Strassen, is an algorithm for matrix multiplication. It is faster than the standard matrix multiplication algorithm for large matrices, with a better asymptotic complexity, although the naive algorithm is often better for smaller matrices. The Strassen algorithm is slower than the fastest known algorithms for extremely large matrices, but such galactic algorithms are not useful in practice, as they are much slower for matrices of practical size. For small matrices even faster algorithms exist. Strassen's algorithm works over any ring, such as the usual plus/multiply arithmetic, but not over all semirings, such as the min-plus semiring or Boolean algebra, where only the naive algorithm and so-called combinatorial matrix multiplication still work. History. Volker Strassen first published this algorithm in 1969 and thereby proved that the formula_0 general matrix multiplication algorithm was not optimal. The Strassen algorithm's publication resulted in more research about matrix multiplication that led to both asymptotically lower bounds and improved computational upper bounds. Algorithm. Let formula_1, formula_2 be two square matrices over a ring formula_3, for example matrices whose entries are integers or the real numbers. The goal of matrix multiplication is to calculate the matrix product formula_4. The following exposition of the algorithm assumes that all of these matrices have sizes that are powers of two (i.e., formula_5), but this is only conceptually necessary — if the matrices formula_1, formula_2 are not of type formula_6, the "missing" rows and columns can be filled with zeros to obtain matrices with sizes of powers of two — though real implementations of the algorithm do not do this in practice. The Strassen algorithm partitions formula_1, formula_2 and formula_7 into equally sized block matrices formula_8 with formula_9. The naive algorithm would be: formula_10 This construction does not reduce the number of multiplications: 8 multiplications of matrix blocks are still needed to calculate the formula_11 matrices, the same number of multiplications needed when using standard matrix multiplication. The Strassen algorithm defines instead new values: formula_12 using only 7 multiplications (one for each formula_13) instead of 8. We may now express the formula_11 in terms of formula_13: formula_14 We recursively iterate this division process until the submatrices degenerate into numbers (elements of the ring formula_3). If, as mentioned above, the original matrix had a size that was not a power of 2, then the resulting product will have zero rows and columns just like formula_1 and formula_2, and these will then be stripped at this point to obtain the (smaller) matrix formula_7 we really wanted. Practical implementations of Strassen's algorithm switch to standard methods of matrix multiplication for small enough submatrices, for which those algorithms are more efficient. The particular crossover point for which Strassen's algorithm is more efficient depends on the specific implementation and hardware. Earlier authors had estimated that Strassen's algorithm is faster for matrices with widths from 32 to 128 for optimized implementations. 
However, it has been observed that this crossover point has been increasing in recent years, and a 2010 study found that even a single step of Strassen's algorithm is often not beneficial on current architectures, compared to a highly optimized traditional multiplication, until matrix sizes exceed 1000 or more, and even for matrix sizes of several thousand the benefit is typically marginal at best (around 10% or less). A more recent study (2016) observed benefits for matrices as small as 512 and a benefit around 20%. Winograd form. It is possible to reduce the number of matrix additions by instead using the following form discovered by Winograd: formula_15 where formula_16. This reduces the number of matrix additions and subtractions from 18 to 15. The number of matrix multiplications is still 7, and the asymptotic complexity is the same. Asymptotic complexity. The outline of the algorithm above showed that one can get away with just 7, instead of the traditional 8, matrix-matrix multiplications for the sub-blocks of the matrix. On the other hand, one has to do additions and subtractions of blocks, though this is of no concern for the overall complexity: Adding matrices of size formula_17 requires only formula_18 operations whereas multiplication is substantially more expensive (traditionally formula_19 addition or multiplication operations). The question then is how many operations exactly one needs for Strassen's algorithms, and how this compares with the standard matrix multiplication that takes approximately formula_20 (where formula_21) arithmetic operations, i.e. an asymptotic complexity formula_22. The number of additions and multiplications required in the Strassen algorithm can be calculated as follows: let formula_23 be the number of operations for a formula_6 matrix. Then by recursive application of the Strassen algorithm, we see that formula_24, for some constant formula_25 that depends on the number of additions performed at each application of the algorithm. Hence formula_26, i.e., the asymptotic complexity for multiplying matrices of size formula_21 using the Strassen algorithm is formula_27. The reduction in the number of arithmetic operations however comes at the price of a somewhat reduced numerical stability, and the algorithm also requires significantly more memory compared to the naive algorithm. Both initial matrices must have their dimensions expanded to the next power of 2, which results in storing up to four times as many elements, and the seven auxiliary matrices each contain a quarter of the elements in the expanded ones. Strassen's algorithm needs to be compared to the "naive" way of doing the matrix multiplication that would require 8 instead of 7 multiplications of sub-blocks. This would then give rise to the complexity one expects from the standard approach: formula_28. The comparison of these two algorithms shows that "asymptotically", Strassen's algorithm is faster: There exists a size formula_29 so that matrices that are larger are more efficiently multiplied with Strassen's algorithm than the "traditional" way. However, the asymptotic statement does not imply that Strassen's algorithm is "always" faster even for small matrices, and in practice this is in fact not the case: For small matrices, the cost of the additional additions of matrix blocks outweighs the savings in the number of multiplications. 
There are also other factors not captured by the analysis above, such as the difference in cost on today's hardware between loading data from memory onto processors vs. the cost of actually doing operations on this data. As a consequence of these sorts of considerations, Strassen's algorithm is typically only used on "large" matrices. This kind of effect is even more pronounced with alternative algorithms such as the one by Coppersmith and Winograd: While "asymptotically" even faster, the cross-over point formula_29 is so large that the algorithm is not generally used on matrices one encounters in practice. Rank or bilinear complexity. The bilinear complexity or rank of a bilinear map is an important concept in the asymptotic complexity of matrix multiplication. The rank of a bilinear map formula_30 over a field F is defined as (somewhat of an abuse of notation) formula_31 In other words, the rank of a bilinear map is the length of its shortest bilinear computation. The existence of Strassen's algorithm shows that the rank of formula_32 matrix multiplication is no more than seven. To see this, let us express this algorithm (alongside the standard algorithm) as such a bilinear computation. In the case of matrices, the dual spaces A* and B* consist of maps into the field F induced by a scalar double-dot product, (i.e. in this case the sum of all the entries of a Hadamard product.) It can be shown that the total number of elementary multiplications formula_33 required for matrix multiplication is tightly asymptotically bound to the rank formula_34, i.e. formula_35, or more specifically, since the constants are known, formula_36. One useful property of the rank is that it is submultiplicative for tensor products, and this enables one to show that formula_37 matrix multiplication can be accomplished with no more than formula_38 elementary multiplications for any formula_39. (This formula_39-fold tensor product of the formula_40 matrix multiplication map with itself — an formula_39-th tensor power—is realized by the recursive step in the algorithm shown.) Cache behavior. Strassen's algorithm is cache oblivious. Analysis of its cache behavior algorithm has shown it to incur formula_41 cache misses during its execution, assuming an idealized cache of size formula_42 (i.e. with formula_43 lines of length formula_44). Implementation considerations. The description above states that the matrices are square, and the size is a power of two, and that padding should be used if needed. This restriction allows the matrices to be split in half, recursively, until limit of scalar multiplication is reached. The restriction simplifies the explanation, and analysis of complexity, but is not actually necessary; and in fact, padding the matrix as described will increase the computation time and can easily eliminate the fairly narrow time savings obtained by using the method in the first place. A good implementation will observe the following: Furthermore, there is no need for the matrices to be square. Non-square matrices can be split in half using the same methods, yielding smaller non-square matrices. 
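One concrete way of avoiding the wasteful padding all the way to a power of two is to pad by at most one zero row or column per recursion level and strip the padding from the result; the same seven block products then also cover non-square shapes. The sketch below is an illustration of this idea only (NumPy, the function name and the threshold are assumptions, and production libraries use peeling or more careful blocking):

```python
# Illustrative Strassen variant for arbitrary (possibly non-square) shapes,
# using per-level padding to even dimensions instead of power-of-two padding.
import numpy as np

def strassen_rect(A, B, threshold=64):
    n, m = A.shape
    _, p = B.shape
    if min(n, m, p) <= threshold:
        return A @ B                                   # conventional kernel for small blocks
    # Pad each dimension by at most one zero row/column so it splits evenly.
    A = np.pad(A, ((0, n % 2), (0, m % 2)))
    B = np.pad(B, ((0, m % 2), (0, p % 2)))
    h, k, w = A.shape[0] // 2, A.shape[1] // 2, B.shape[1] // 2
    A11, A12, A21, A22 = A[:h, :k], A[:h, k:], A[h:, :k], A[h:, k:]
    B11, B12, B21, B22 = B[:k, :w], B[:k, w:], B[k:, :w], B[k:, w:]
    M1 = strassen_rect(A11 + A22, B11 + B22, threshold)
    M2 = strassen_rect(A21 + A22, B11, threshold)
    M3 = strassen_rect(A11, B12 - B22, threshold)
    M4 = strassen_rect(A22, B21 - B11, threshold)
    M5 = strassen_rect(A11 + A12, B22, threshold)
    M6 = strassen_rect(A21 - A11, B11 + B12, threshold)
    M7 = strassen_rect(A12 - A22, B21 + B22, threshold)
    C = np.block([[M1 + M4 - M5 + M7, M3 + M5],
                  [M2 + M4,           M1 - M2 + M3 + M6]])
    return C[:n, :p]                                   # strip any padding rows/columns

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.integers(-3, 3, (123, 77))
    B = rng.integers(-3, 3, (77, 205))
    assert np.array_equal(strassen_rect(A, B), A @ B)
```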
If the matrices are sufficiently non-square it will be worthwhile reducing the initial operation to more square products, using simple methods which are essentially formula_45, for instance: These techniques will make the implementation more complicated, compared to simply padding to a power-of-two square; however, it is a reasonable assumption that anyone undertaking an implementation of Strassen, rather than conventional multiplication, will place a higher priority on computational efficiency than on simplicity of the implementation. In practice, Strassen's algorithm can be implemented to attain better performance than conventional multiplication even for matrices as small as formula_61, for matrices that are not at all square, and without requiring workspace beyond buffers that are already needed for a high-performance conventional multiplication. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n^3" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "B" }, { "math_id": 3, "text": "\\mathcal{R}" }, { "math_id": 4, "text": "C = AB" }, { "math_id": 5, "text": "A, \\, B, \\, C \\in \\operatorname{Matr}_{2^n \\times 2^n} (\\mathcal{R})" }, { "math_id": 6, "text": "2^n \\times 2^n" }, { "math_id": 7, "text": "C" }, { "math_id": 8, "text": " \nA =\n\\begin{bmatrix}\nA_{11} & A_{12} \\\\\nA_{21} & A_{22}\n\\end{bmatrix}, \\quad\nB =\n\\begin{bmatrix}\nB_{11} & B_{12} \\\\\nB_{21} & B_{22}\n\\end{bmatrix}, \\quad\nC =\n\\begin{bmatrix}\nC_{11} & C_{12} \\\\\nC_{21} & C_{22}\n\\end{bmatrix}, \\quad\n" }, { "math_id": 9, "text": "A_{ij}, B_{ij}, C_{ij} \\in \\operatorname{Mat}_{2^{n-1} \\times 2^{n-1}} (\\mathcal{R})" }, { "math_id": 10, "text": "\n\\begin{bmatrix}\nC_{11} & C_{12} \\\\\nC_{21} & C_{22}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\nA_{11} {\\color{red}\\times} B_{11} + A_{12} {\\color{red}\\times} B_{21} \\quad &\nA_{11} {\\color{red}\\times} B_{12} + A_{12} {\\color{red}\\times} B_{22} \\\\\nA_{21} {\\color{red}\\times} B_{11} + A_{22} {\\color{red}\\times} B_{21} \\quad &\nA_{21} {\\color{red}\\times} B_{12} + A_{22} {\\color{red}\\times} B_{22}\n\\end{bmatrix}.\n" }, { "math_id": 11, "text": "C_{ij}" }, { "math_id": 12, "text": "\n\\begin{align}\nM_1 &= (A_{11} + A_{22}) {\\color{red}\\times} (B_{11} + B_{22}); \\\\\nM_2 &= (A_{21} + A_{22}) {\\color{red}\\times} B_{11}; \\\\\nM_3 &= A_{11} {\\color{red}\\times} (B_{12} - B_{22}); \\\\\nM_4 &= A_{22} {\\color{red}\\times} (B_{21} - B_{11}); \\\\\nM_5 &= (A_{11} + A_{12}) {\\color{red}\\times} B_{22}; \\\\\nM_6 &= (A_{21} - A_{11}) {\\color{red}\\times} (B_{11} + B_{12}); \\\\\nM_7 &= (A_{12} - A_{22}) {\\color{red}\\times} (B_{21} + B_{22}), \\\\\n\\end{align}\n" }, { "math_id": 13, "text": "M_k" }, { "math_id": 14, "text": "\n\\begin{bmatrix}\nC_{11} & C_{12} \\\\\nC_{21} & C_{22}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\nM_1 + M_4 - M_5 + M_7 \\quad &\nM_3 + M_5 \\\\\nM_2 + M_4 \\quad &\nM_1 - M_2 + M_3 + M_6\n\\end{bmatrix}.\n" }, { "math_id": 15, "text": "\n\\begin{bmatrix}\na & b \\\\\nc & d\n\\end{bmatrix}\n\\begin{bmatrix}\nA & C \\\\\nB & D\n\\end{bmatrix}\n=\n\\begin{bmatrix}\nt + b{\\color{red}\\times}B & w + v + (a + b - c - d){\\color{red}\\times}D \\\\\nw + u + d{\\color{red}\\times}(B + C - A - D) & w + u + v\n\\end{bmatrix}\n" }, { "math_id": 16, "text": "t = a{\\color{red}\\times}A, \\; u = (c - a){\\color{red}\\times}(C - D), \\; v = (c + d){\\color{red}\\times}(C - A), \\; w = t + (c + d - a){\\color{red}\\times}(A + D - C)" }, { "math_id": 17, "text": "N/2" }, { "math_id": 18, "text": "(N/2)^2" }, { "math_id": 19, "text": "2 (N/2)^3" }, { "math_id": 20, "text": "2 N^3" }, { "math_id": 21, "text": "N = 2^n" }, { "math_id": 22, "text": "\\Theta (N^3)" }, { "math_id": 23, "text": "f(n)" }, { "math_id": 24, "text": "f(n) = 7 f(n-1) + l 4^n" }, { "math_id": 25, "text": "l" }, { "math_id": 26, "text": "f(n) = (7 + o(1))^n" }, { "math_id": 27, "text": "O([7+o(1)]^n) = O(N^{\\log_{2}7+o(1)}) \\approx O(N^{2.8074})" }, { "math_id": 28, "text": "O(8^n) = O(N^{\\log_{2}8}) = O(N^3)" }, { "math_id": 29, "text": "N_\\text{threshold}" }, { "math_id": 30, "text": "\\phi:\\mathbf A \\times \\mathbf B \\rightarrow \\mathbf C" }, { "math_id": 31, "text": "R(\\phi/\\mathbf F) = \\min \\left\\{r\\left|\\exists f_i\\in \\mathbf A^*,g_i\\in\\mathbf B^*,w_i\\in\\mathbf C , \\forall \\mathbf a\\in\\mathbf A, \\mathbf b\\in\\mathbf B, \\phi(\\mathbf a,\\mathbf b) = \\sum_{i=1}^r f_i(\\mathbf 
a)g_i(\\mathbf b)w_i \\right.\\right\\}" }, { "math_id": 32, "text": "2 \\times 2" }, { "math_id": 33, "text": "L" }, { "math_id": 34, "text": "R" }, { "math_id": 35, "text": "L = \\Theta(R)" }, { "math_id": 36, "text": "R / 2 \\le L \\le R" }, { "math_id": 37, "text": "2^n \\times 2^n \\times 2^n" }, { "math_id": 38, "text": "7n" }, { "math_id": 39, "text": "n" }, { "math_id": 40, "text": "2 \\times 2 \\times 2" }, { "math_id": 41, "text": "\\Theta \\left(1 + \\frac{n^2}{b} + \\frac{n^{\\log_2 7}}{b\\sqrt{M}} \\right)" }, { "math_id": 42, "text": "M" }, { "math_id": 43, "text": "M / b" }, { "math_id": 44, "text": "b" }, { "math_id": 45, "text": "O(n^{2})" }, { "math_id": 46, "text": "1600 \\times 1600" }, { "math_id": 47, "text": "2048 \\times 2048" }, { "math_id": 48, "text": "25 \\times 25" }, { "math_id": 49, "text": "199 \\times 199" }, { "math_id": 50, "text": "100 \\times 100" }, { "math_id": 51, "text": "99 \\times 99" }, { "math_id": 52, "text": "99" }, { "math_id": 53, "text": "100" }, { "math_id": 54, "text": "M_2" }, { "math_id": 55, "text": "A_{21} + A_{22}" }, { "math_id": 56, "text": "A_{22}" }, { "math_id": 57, "text": "A_{21}" }, { "math_id": 58, "text": "[2N \\times N] \\ast [N \\times 10N]" }, { "math_id": 59, "text": "[N \\times N] \\ast [N \\times N]" }, { "math_id": 60, "text": "[N \\times 10N] \\ast [10N \\times N]" }, { "math_id": 61, "text": "500 \\times 500" }, { "math_id": 62, "text": "O(n^{\\log_2 3})" }, { "math_id": 63, "text": "O(n^2)" } ]
https://en.wikipedia.org/wiki?curid=771965
77197868
Trairāśika
Sanskrit word for "rule of three" Trairāśika is the Sanskrit term used by Indian astronomers and mathematicians of the pre-modern era to denote what is known as the "rule of three" in elementary mathematics and algebra. In the contemporary mathematical literature, the term "rule of three" refers to the principle of cross-multiplication which states that if formula_0 then formula_1 or formula_2. The antiquity of the term "trairāśika" is attested by its presence in the Bakhshali manuscript, a document believed to have been composed in the early centuries of the Common Era. The "trairāśika" rule. Basically "trairāśika" is a rule which helps to solve the following problem: "If formula_3 produces formula_4 what would formula_5 produce?" Here formula_6 is referred to as "pramāṇa" ("argument"), formula_7 as "phala" ("fruit") and formula_8 as "ichcā" ("requisition"). The "pramāṇa" and "icchā" must be of the same denomination, that is, of the same kind or type like weights, money, time, or numbers of the same objects. "Phala" can be a of a different denomination. It is also assumed that "phala" increases in proportion to "pramāṇa". The unknown quantity is called "icchā-phala", that is, the "phala" corresponding to the "icchā". Āryabhaṭa gives the following solution to the problem: "In "trairāśika", the "phala" is multiplied by "ichcā" and then divided by "pramāṇa". The result is "icchā-phala"." In modern mathematical notations, formula_9 The four quantities can be presented in a row like this: "pramāṇa" | "phala" | "ichcā" | "icchā-phala" (unknown) Then the rule to get "icchā-phala" can be stated thus: "Multiply the middle two and divide by the first." Illustrative examples. 1. This example is taken from "Bījagaṇita", a treatise on algebra by the Indian mathematician Bhāskara II (c. 1114–1185). Problem: "If two and a half "pala"-s (a unit of weight) of saffron be obtained for three-sevenths of a "nishca" (a unit of money); say instantly, best of merchants, how much is got for nine "nishca"-s?" Solution: "pramāṇa" = formula_10 "nishca", "phala" = formula_11 "pala"-s of saffron, "icchā" = formula_12 "nishca"-s and we have to find the "icchā-phala". formula_13 "pala"-s of safron. 2. This example is taken from Yuktibhāṣā, a work on mathematics and astronomy, composed by Jyesthadeva of the Kerala school of astronomy and mathematics around 1530. Problem: "When 5 measures of paddy is known to yield 2 measures of rice how many measures of rice will be obtained from 12 measures of paddy?" Solution: "pramāṇa" = 5 measures of paddy, "phala" = 2 measures of rice, "icchā" = 12 measures of rice and we have to find the "icchā-phala". formula_14 measures of rice. "Vyasta-trairāśika": Inverse rule of three. The four quantities associated with "trairāśika" are presented in a row as follows: "pramāṇa" | "phala" | "ichcā" | "icchā-phala" (unknown) In "trairāśika" it was assumed that the "phala" increases with "pramāṇa". If it is assumed that "phala" decreases with increases in "pramāṇa", the rule for finding "icchā-phala" is called "vyasta-trairāśika" (or, "viloma-trairāśika") or "inverse rule of three". In "vyasta-trairāśika" the rule for finding the "icchā-phala" may be stated as follows assuming that the relevant quantities are written in a row as indicated above. "In the three known quantities, multiply the middle term by the first and divide by the last." In modern mathematical notations we have, formula_15 Illustrative example. 
This example is from "Bījagaṇita": Problem: "If a female slave sixteen years of age, bring thirty-two "nishca"-s, what will one aged twenty cost?" Solution: "pramāṇa" = 16 years, "phala" 32 = "nishca"-s, "ichcā" = 20 years. It is assumed that "phala" decreases with "pramāṇa". Hence formula_16 "nishca"-s. Compound proportion. In "trairāśika" there is only one "pramāṇa" and the corresponding "phala". We are required to find the "phala" corresponding to a given value of "ichcā" for the "pramāṇa". The relevant quantities may also be represented in the following form: Indian mathematicians have generalized this problem to the case where there are more than one "pramāṇa". Let there be "n" "pramāṇa"-s "pramāṇa"-1, "pramāṇa"-2, . . ., "pramāṇa"-"n" and the corresponding "phala". Let the "iccha"-s corresponding to the "pramāṇa"-s be "iccha"-1, "iccha"-2, . . ., "iccha"-"n". The problem is to find the "phala" corresponding to these "iccha"-s. This may be represented in the following tabular form: This is the problem of compound proportion. The "ichcā-phala" is given by formula_17 Since there are formula_18 quantities, the method for solving the problem may be called the "rule of formula_18". In his "Bǐjagaṇita" Bhāskara II has discussed some special cases of this general principle, like, "rule of five" ("pañjarāśika"), "rule of seven" ("saptarāśika"), "rule of nine" ("navarāśika") and "rule of eleven" ("ekādaśarāśika"). Illustrative example. This example for rule of nine is taken from "Bǐjagaṇita": Problem: If thirty benches, twelve fingers thick, square of four wide, and fourteen cubits long, cost a hundred [nishcas]; tell me, my friend, what price will fourteen benches fetch, which are four less in every dimension? Solution: The data is presented in the following tabular form: "iccha-phala = formula_19. Importance of the "trairāśika". All Indian astronomers and mathematicians have placed the "trairāśika" principle on a high pedestal. For example, Bhaskara II in his "Līlāvatī" even compares the "trairāśika" to God himself! "As the being, who relieves the minds of his worshipers from suffering, and who is the sole cause of the production of this universe, pervades the whole, and does so with his various manifestations, as worlds, paradises, mountains, rivers, gods, demons, men, trees," and cities; so is all this collection of instructions for computations pervaded by the rule of three terms." References. &lt;templatestyles src="Reflist/styles.css" /&gt;
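The rules above reduce to one-line computations. The following sketch is an editorial illustration rather than part of the source (the function names and the use of Python fractions are assumptions); it reproduces the worked examples: the saffron problem, the paddy problem (whose icchā is the 12 measures of paddy), the inverse-rule example, and the rule-of-nine bench problem.

```python
# Illustrative computations for trairasika, vyasta-trairasika and compound proportion.
from fractions import Fraction as F
from math import prod

def trairasika(pramana, phala, iccha):
    """Rule of three: multiply phala by iccha and divide by pramana."""
    return F(phala) * F(iccha) / F(pramana)

def vyasta_trairasika(pramana, phala, iccha):
    """Inverse rule of three: multiply phala by pramana and divide by iccha."""
    return F(phala) * F(pramana) / F(iccha)

def compound(pramanas, phala, icchas):
    """Rule of 2n+1 quantities: phala times the product of icchas over the product of pramanas."""
    return F(phala) * prod(icchas) / prod(pramanas)

print(trairasika(F(3, 7), F(5, 2), 9))                    # 105/2 -> 52 1/2 palas of saffron
print(trairasika(5, 2, 12))                               # 24/5 measures of rice
print(vyasta_trairasika(16, 32, 20))                      # 128/5 -> 25 3/5 nishcas
print(compound([30, 12, 16, 14], 100, [14, 8, 12, 10]))   # 50/3 -> 16 2/3
```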
[ { "math_id": 0, "text": "\\tfrac{a}{b}=\\tfrac{c}{d}" }, { "math_id": 1, "text": "ad=bc" }, { "math_id": 2, "text": "a=\\tfrac{bc}{d}" }, { "math_id": 3, "text": " p " }, { "math_id": 4, "text": " h " }, { "math_id": 5, "text": "i " }, { "math_id": 6, "text": "p " }, { "math_id": 7, "text": "h " }, { "math_id": 8, "text": " i " }, { "math_id": 9, "text": "\\text{icchā-phala }=\\tfrac{\\text{phala}\\times\\text{icchā}}{\\text{pramāṇa}}." }, { "math_id": 10, "text": "\\tfrac{3}{7}" }, { "math_id": 11, "text": "2\\tfrac{1}{2}" }, { "math_id": 12, "text": "9" }, { "math_id": 13, "text": "\\text{icchā-phala }=\\tfrac{\\text{phala}\\times\\text{icchā}}{\\text{pramāṇa}}=\\tfrac{(2\\tfrac{1}{2})\\times 9}{\\tfrac{3}{7}} = 52\\tfrac{1}{2}" }, { "math_id": 14, "text": "\\text{icchā-phala }=\\tfrac{\\text{phala}\\times\\text{icchā}}{\\text{pramāṇa}}=\\tfrac{2\\times 12}{5} = \\tfrac{24}{5}" }, { "math_id": 15, "text": "\\text{icchā-phala } = \\tfrac{\\text{phala}\\times \\text{pramāṇa}}{\\text{icchā}}." }, { "math_id": 16, "text": "\\text{icchā-phala } = \\tfrac{\\text{phala}\\times \\text{pramāṇa}}{\\text{icchā}} =\\tfrac{32\\times 16}{20}=25\\tfrac{3}{5}" }, { "math_id": 17, "text": "\\text{ ichcā-phala } = \\tfrac{ ( \\text{ ichcā-1 } \\times \\text{ ichcā-2 } \\times \\cdots \\times \\text{ ichcā-n }) \\times\\text{ phala }}{ \\text{ pramāṇa-1 }\\times \\text{ pramāṇa-2 }\\times\\cdots \\times \\text{ pramāṇa-n }}. " }, { "math_id": 18, "text": "2n+1" }, { "math_id": 19, "text": "\\tfrac{(14\\times 8 \\times 12 \\times 10)\\times 100}{30\\times 12\\times 16 \\times 14}=\\tfrac{100}{6}=16\\tfrac{2}{3}" } ]
https://en.wikipedia.org/wiki?curid=77197868
772
Ampere
SI base unit of electric current &lt;templatestyles src="Template:Infobox/styles-images.css" /&gt; The ampere ( , ; symbol: A), often shortened to amp, is the unit of electric current in the International System of Units (SI). One ampere is equal to 1 coulomb (C) moving past a point per second. It is named after French mathematician and physicist André-Marie Ampère (1775–1836), considered the father of electromagnetism along with Danish physicist Hans Christian Ørsted. As of the 2019 redefinition of the SI base units, the ampere is defined by fixing the elementary charge e to be exactly , which means an ampere is an electric current equivalent to elementary charges moving every seconds or elementary charges moving in a second. Prior to the redefinition the ampere was defined as the current passing through two parallel wires 1 metre apart that produces a magnetic force of newtons per metre. The earlier CGS system has two units of current, one structured similarly to the SI's and the other using Coulomb's law as a fundamental relationship, with the CGS unit of charge defined by measuring the force between two charged metal plates. The CGS unit of current is then defined as one unit of charge per second. History. The ampere is named for French physicist and mathematician André-Marie Ampère (1775–1836), who studied electromagnetism and laid the foundation of electrodynamics. In recognition of Ampère's contributions to the creation of modern electrical science, an international convention, signed at the 1881 International Exposition of Electricity, established the ampere as a standard unit of electrical measurement for electric current. The ampere was originally defined as one tenth of the unit of electric current in the centimetre–gram–second system of units. That unit, now known as the abampere, was defined as the amount of current that generates a force of two dynes per centimetre of length between two wires one centimetre apart. The size of the unit was chosen so that the units derived from it in the MKSA system would be conveniently sized. The "international ampere" was an early realization of the ampere, defined as the current that would deposit of silver per second from a silver nitrate solution. Later, more accurate measurements revealed that this current is . Since power is defined as the product of current and voltage, the ampere can alternatively be expressed in terms of the other units using the relationship "I" = "P"/"V", and thus 1 A = 1 W/V. Current can be measured by a multimeter, a device that can measure electrical voltage, current, and resistance. Former definition in the SI. Until 2019, the SI defined the ampere as follows: The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed one metre apart in vacuum, would produce between these conductors a force equal to newtons per metre of length. Ampère's force law states that there is an attractive or repulsive force between two parallel wires carrying an electric current. This force is used in the formal definition of the ampere. The SI unit of charge, the coulomb, was then defined as "the quantity of electricity carried in 1 second by a current of 1 ampere". Conversely, a current of one ampere is one coulomb of charge going past a given point per second: formula_0 In general, charge Q was determined by steady current I flowing for a time t as "Q" = "It". 
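As a small numerical illustration of the relations just stated (the script and names are illustrative, not from the source), the charge carried by a steady current follows from Q = I·t, and dividing by the elementary charge e = 1.602176634 × 10⁻¹⁹ C fixed by the 2019 definition gives the corresponding number of elementary charges:

```python
# Illustrative arithmetic only: Q = I * t, and the count of elementary charges.
e = 1.602176634e-19              # coulombs per elementary charge (exact since 2019)

def charge(current_amperes, seconds):
    return current_amperes * seconds      # Q = I * t, in coulombs

Q = charge(1.0, 1.0)                      # one ampere flowing for one second
print(Q, "C")                             # 1.0 C
print(Q / e, "elementary charges")        # about 6.24e18 charges per coulomb
```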
This definition of the ampere was most accurately realised using a Kibble balance, but in practice the unit was maintained via Ohm's law from the units of electromotive force and resistance, the volt and the ohm, since the latter two could be tied to physical phenomena that are relatively easy to reproduce, the Josephson effect and the quantum Hall effect, respectively. Techniques to establish the realisation of an ampere had a relative uncertainty of approximately a few parts in 107, and involved realisations of the watt, the ohm and the volt. Present definition. The 2019 redefinition of the SI base units defined the ampere by taking the fixed numerical value of the elementary charge e to be when expressed in the unit C, which is equal to A⋅s, where the second is defined in terms of ∆"ν"Cs, the unperturbed ground state hyperfine transition frequency of the caesium-133 atom. The SI unit of charge, the coulomb, "is the quantity of electricity carried in 1 second by a current of 1 ampere". Conversely, a current of one ampere is one coulomb of charge going past a given point per second: formula_1 In general, charge Q is determined by steady current I flowing for a time t as "Q" = "I" "t". Constant, instantaneous and average current are expressed in amperes (as in "the charging current is 1.2 A") and the charge accumulated (or passed through a circuit) over a period of time is expressed in coulombs (as in "the battery charge is "). The relation of the ampere (C/s) to the coulomb is the same as that of the watt (J/s) to the joule. Units derived from the ampere. The international system of units (SI) is based on seven SI base units the second, metre, kilogram, kelvin, ampere, mole, and candela representing seven fundamental types of physical quantity, or "dimensions", (time, length, mass, temperature, electric current, amount of substance, and luminous intensity respectively) with all other SI units being defined using these. These SI derived units can either be given special names e.g. watt, volt, lux, etc. or defined in terms of others, e.g. metre per second. The units with special names derived from the ampere are: There are also some SI units that are frequently used in the context of electrical engineering and electrical appliances, but are defined independently of the ampere, notably the hertz, joule, watt, candela, lumen, and lux. SI prefixes. Like other SI units, the ampere can be modified by adding a prefix that multiplies it by a power of 10. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rm 1\\ A=1\\frac C s." }, { "math_id": 1, "text": "\\rm 1\\ A=1\\,\\text{C/s}." } ]
https://en.wikipedia.org/wiki?curid=772
77207161
Erdős–Delange theorem
Distribution of primes The Erdős–Delange theorem is a theorem in number theory concerning the distribution of prime numbers. It is named after Paul Erdős and Hubert Delange. Let formula_0 denote the number of prime factors of an integer formula_1, counted with multiplicity, and formula_2 be any irrational number. The theorem states that the real numbers formula_3 are asymptotically uniformly distributed modulo 1. It implies the prime number theorem. The theorem was stated without proof in 1946 by Paul Erdős, with a remark that "the proof is not easy". Hubert Delange found a simpler proof and published it in 1958, together with two other ways of deducing it from results of Erdős and of Atle Selberg. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\omega(n)" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "\\lambda" }, { "math_id": 3, "text": "\\lambda\\omega(n)" } ]
https://en.wikipedia.org/wiki?curid=77207161
7720970
Femtosecond pulse shaping
In optics, femtosecond pulse shaping refers to manipulation of the temporal profile of an ultrashort laser pulse. Pulse shaping can be used to shorten or lengthen the duration of an optical pulse, or to generate complex pulses. Introduction. Generation of sequences of ultrashort optical pulses is key to realizing ultra-high-speed optical networks, Optical Code Division Multiple Access (OCDMA) systems, and the triggering and monitoring of chemical and biological reactions, among other applications. Depending on the requirement, pulse shapers may be designed to stretch, compress or produce a train of pulses from a single input pulse. The ability to produce trains of pulses with femtosecond or picosecond separation implies transmission of optical information at very high speeds. In ultrafast laser science, pulse shapers are often used as a complement to pulse compressors in order to fine-tune high-order dispersion compensation and achieve transform-limited few-cycle optical pulses. Techniques. A pulse shaper may be visualized as a modulator. The input pulse is multiplied by a modulating function to obtain the desired output pulse. The modulating function may be defined in the time domain or in the frequency domain (obtained by Fourier transform of the pulse's time profile). However, direct pulse shaping on a femtosecond time scale faces the same problem as direct femtosecond pulse measurement: the speed limitations of electronics. A Michelson interferometer can be regarded as a direct space-to-time pulse shaper, since the position of the moving mirror is directly transferred to the inter-pulse delay of the output pulse pair. Fourier transform pulse shaping. An ultrashort pulse with a well-defined electric field formula_0 can be modified with an appropriate filter acting in the frequency domain. Mathematically, the pulse is Fourier transformed, filtered, and back-transformed to yield a new pulse: formula_1 It is possible to design an optical setup with an arbitrary filter function formula_2, which can be complex-valued. For example, a bandwidth-limited pulse can be transformed into a chirped pulse (with a filter acting only on the phase) or into a more complex pulse (with the filter acting on both phase and amplitude). Design. One can distinguish Fourier transform pulse shapers by their optical design, i.e., collinear shapers and transverse shapers, and by their programmability, i.e., static (or manually adjustable) shapers and programmable shapers. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
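The filtering relation formula_1 can be demonstrated numerically. The sketch below is illustrative only: the pulse duration, grid and the purely quadratic spectral phase are assumed values, not taken from the text. It builds a Gaussian input field, applies a phase-only filter (|f(ω)| = 1) via FFT, and shows that the output pulse is stretched, i.e. chirped, as described for a filter acting only on the phase.

```python
# Numerical sketch of E'(t) = IFFT{ FFT{E(t)} * f(omega) } with a chirp filter.
import numpy as np

t = np.linspace(-2e-12, 2e-12, 4096)            # time axis, seconds
dt = t[1] - t[0]
tau = 30e-15                                     # 30 fs Gaussian field envelope (assumed)
E = np.exp(-t**2 / (2 * tau**2))                 # input field (envelope only)

omega = 2 * np.pi * np.fft.fftfreq(t.size, dt)   # angular frequency axis
gdd = 5e-27                                      # group-delay dispersion, s^2 (assumed)
f = np.exp(1j * 0.5 * gdd * omega**2)            # phase-only filter, |f(omega)| = 1

E_out = np.fft.ifft(np.fft.fft(E) * f)           # filtered (chirped) output pulse

def fwhm(x, y):
    half = np.abs(y)**2 >= 0.5 * np.max(np.abs(y)**2)
    return x[half][-1] - x[half][0]

print(f"input FWHM  ~ {fwhm(t, E) * 1e15:.1f} fs")
print(f"output FWHM ~ {fwhm(t, E_out) * 1e15:.1f} fs   (stretched by the chirp)")
```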
[ { "math_id": 0, "text": "E(t)" }, { "math_id": 1, "text": "E'(t)=\\mathcal{F}^{-1}\\{\\mathcal{F}\\{E(t)\\}(\\omega)f(\\omega)\\}(t)." }, { "math_id": 2, "text": "f(\\omega)" } ]
https://en.wikipedia.org/wiki?curid=7720970
77210919
Becker–Morduchow–Libby solution
Becker–Morduchow–Libby solution is an exact solution of the compressible Navier–Stokes equations, that describes the structure of one-dimensional shock waves. The solution was discovered in a restrictive form by Richard Becker in 1922, which was generalized by Morris Morduchow and Paul A. Libby in 1949. The solution was also discovered independently by M. Roy and L. H. Thomas in 1944The solution showed that there is a non-monotonic variation of the entropy across the shock wave. Before these works, Lord Rayleigh obtained solutions in 1910 for fluids with viscosity but without heat conductivity and for fluids with heat conductivity but without viscosity. Following this, in the same year G. I. Taylor solved the whole problem for weak shock waves by taking both viscosity and heat conductivity into account. Mathematical description. In a frame fixed with a planar shock wave, the shock wave is steady. In this frame, the steady Navier–Stokes equations for a viscous and heat conducting gas can be written as formula_0 where formula_1 is the density, formula_2 is the velocity, formula_3 is the pressure, formula_4 is the internal energy per unit mass, formula_5 is the temperature, formula_6 is an effective coefficient of viscosity, formula_7 is the coefficient of viscosity, formula_8 is the second viscosity and formula_9 is the thermal conductivity. To this set of equations, one has to prescribe an equation of state formula_10 and an expression for the energy in terms of any two thermodynamics variables, say formula_11. Instead of formula_4, it is convenient to work with the specific enthalpy formula_12 Let us denote properties pertaining upstream of the shock with the subscript "formula_13" and downstream with "formula_14". The shock wave speed itself is denoted by formula_15. The first integral of the governing equations, after imposing the condition that all gradients vanish upstream, are found to be formula_16 By evaluating these on the downstream side where all gradients vanish, one recovers the familiar Rankine–Hugoniot conditions, formula_17, formula_18 and formula_19 Further integration of the above equations require numerical computations, except in one special case where integration can be carried out analytically. Analytical solution. Two assumptions has to be made to facilitate explicit integration of the third equation. First, assume that the gas is ideal (polytropic since we shall assume constant values for the specific heats) in which case the equation of state is formula_20 and further formula_21, where formula_22 is the specific heat at constant pressure and formula_23 is the specific heat ratio. The third equation then becomes formula_24 where formula_25 is the Prandtl number based on formula_26; when formula_27, say as in monoatomic gases, this Prandtl number is just the ordinary Prandtl number formula_28. The second assumption made is formula_29 so that the terms inside the parenthesis becomes a total derivative, i.e., formula_30. This is a reasonably good approximation since in normal gases, Pradntl number is approximately equal to formula_31. With this approximation and integrating once more by imposing the condition that formula_32 is bounded downstream, we find formula_33 This above relation indicates that the quantity formula_32 is conserved everywhere, not just on the upstream and downstream side. 
Since for the polytropic gas formula_34, where formula_35 is the specific volume and formula_36 is the sound speed, the above equation provides the relation between the ratio formula_37 and the corresponding velocity (or density or specific volume) ratio formula_38, i.e., formula_39 where formula_40 is the Mach number of the wave with respect to upstream and formula_41. Combining this with momentum and continuity integrals, we obtain the equation for formula_42 as follows formula_43 We can introduce the reciprocal-viscosity-weighted coordinate formula_44 where formula_45, so that formula_46 The equation clearly exhibits the translation invariant in the formula_47-direction which can be fixed, say, by fixing the origin to be the location where the intermediate value formula_48 is reached. Using this last condition, the solution to this equation is found to be formula_49 As formula_50 (or, formula_51), we have formula_52 and as formula_53 (or, formula_54), we have formula_55 This ends the search for the analytical solution. From here, other thermodynamics variables of interest can be evaluated. For instance, the temperature ratio formula_56 is esaily to found to given by formula_57 and the specific entropy formula_58, by formula_59 The analytical solution is plotted in the figure for formula_60 and formula_61. The notable feature is that the entropy does not monotonically increase across the shock wave, but it increases to a larger value and then decreases to a constant behind the shock wave. Such scenario is possible because of the heat conduction, as it will become apparent by looking at the entropy equation which is obtained from the original energy equation by substituting the thermodynamic relation formula_62, i.e., formula_63 While the viscous dissipation associated with the term formula_64 always increases the entropy, heat conduction increases the entropy in the colder layers where formula_65, whereas it decreases the entropy in the hotter layers where formula_66. Taylor's solution: Weak shock waves. When formula_67, analytical solution is possible only in the weak shock-wave limit, as first shown by G. I. Taylor in 1910. In the weak shock-wave limit, all terms such as formula_68, formula_69 etc., will be small. The thickness formula_70 of the shock wave is of the order formula_71 so that differentiation with respect to formula_47 increases the order smallness by one; e.g. formula_72 is a second-order small quantity. Without going into the details and treating the gas to a generic gas (not just polytropic), the solution for formula_73 is found to be related to the steady travelling-wave solution of the Burgers' equation and is given by formula_74 where formula_75 in which formula_76 is the Landau derivative (for polytropic gas formula_77) and formula_78 is a constant which when multiplied by some characteristic frequency squared provides the acoustic absorption coefficient. The specific entropy is found to be proportional to formula_79 and is given by formula_80 Note that formula_81 is a second-order small quantity, although formula_82 is a third-order small quantity as can be inferred from the above expression which shows that formula_83 for both formula_84. This is allowed since formula_85, unlike formula_73, passes through a maximum within the shock wave. 
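Returning to the exact solution above, the implicit relation between η and ξ is easy to evaluate numerically. The sketch below is an editorial illustration (the bisection tolerance and the sampled range of ξ are arbitrary choices): it solves for η at each ξ by bisection for γ = 1.4 and M0 = 2, the case plotted in the article, and then evaluates the temperature, pressure and entropy ratios quoted above. The printed entropy rises above its downstream value inside the wave, reproducing the non-monotonic behaviour noted above.

```python
# Illustrative evaluation of the Becker-Morduchow-Libby profile for gamma = 1.4, M0 = 2.
from math import exp, log

gamma, M0 = 1.4, 2.0
eta1 = (gamma - 1) / (gamma + 1) + 2 / ((gamma + 1) * M0**2)   # downstream velocity ratio

def eta_of_xi(xi, tol=1e-12):
    """Solve (1-eta)/(eta-eta1)^eta1 = ((1-eta1)/2)^(1-eta1) * exp((1-eta1)*xi)."""
    rhs = ((1 - eta1) / 2) ** (1 - eta1) * exp((1 - eta1) * xi)
    lo, hi = eta1 + 1e-14, 1 - 1e-14       # eta lies between eta1 and 1
    for _ in range(200):                   # bisection; the left-hand side decreases with eta
        mid = 0.5 * (lo + hi)
        lhs = (1 - mid) / (mid - eta1) ** eta1
        if lhs > rhs:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

for xi in range(-10, 11, 2):
    eta = eta_of_xi(xi)
    T_ratio = 1 + 0.5 * (gamma - 1) * M0**2 * (1 - eta**2)      # T/T0 from the text
    p_ratio = T_ratio / eta                                      # p/p0 from the text
    ds_cp = log(T_ratio) - (gamma - 1) / gamma * log(p_ratio)    # (s - s0)/cp
    print(f"xi={xi:+3d}  eta={eta:.4f}  T/T0={T_ratio:.4f}  "
          f"p/p0={p_ratio:.4f}  (s-s0)/cp={ds_cp:+.5f}")
```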
Validity of continuum hypothesis: since the thermal velocity of the molecules is of the order formula_36 and the kinematic viscosity is of the order formula_86, where formula_87 is the mean free path of the gas molecules, we have formula_88; an estimation based on heat conduction gives the same result. Combining this with the relation formula_89, shows that formula_90 i.e., the shock-wave thickness is of the order the mean free path of the molecules. However, in the continuum hypothesis, the mean free path is taken to be zero. It follows that the continuum equations alone cannot be strictly used to describe the internal structure of strong shock waves; in weak shock waves, formula_91 can be made as small as possible to make formula_70 large. Rayleigh's solution. Two problems that were originally considered by Lord Rayleigh is given here. Fluids with heat conduction and without viscosity formula_92. The problem when viscosity is neglected but heat conduction is allowed is of significant interest in astrophysical context due to presence of other heat exchange mechanisms such as radiative heat transfer, electron heat transfer in plasmas, etc. Neglect of viscosity means viscous forces in the momentum equation and the viscous dissipation in the energy equation disappear. Hence the first integral of the governing equations are simply given by formula_93 All the required ratios can be expreses in terms of formula_94 immediately, formula_95 By eliminating formula_94 from the last two equations, one can obtain equation formula_96, which can be integrated. It turns out there is no continuous solution for strong shock waves, precisely when formula_97 for formula_60 this condition becomes formula_98 Fluids with viscosity and without heat conduction formula_99. Here continuous solutions can be found for all shock wave strengths. Further, here the entropy increases monotonically across the shock wave due to the absence of heat conduction. Here the first integrals are given by formula_100 One can eliminate the viscous terms in the last two equations and obtain a relation between formula_101 and formula_94. Substituting this back in any one of the equations, we obtain an equation for formula_42, which can be integrated. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\begin{align}\n\\frac{d}{dx}(\\rho u) &= 0,\\\\\n\\rho u \\frac{du}{dx} + \\frac{dp}{dx} - \\frac{4}{3}\\frac{d}{dx}\\left(\\mu'\\frac{du}{dx}\\right) &= 0,\\\\\n\\rho u \\frac{d\\varepsilon}{dx} + p\\frac{du}{dx} - \\frac{d}{dx}\\left(\\lambda\\frac{dT}{dx}\\right) - \\frac{4}{3}\\mu' \\left(\\frac{du}{dx}\\right)^2&=0,\n\\end{align}\n" }, { "math_id": 1, "text": "\\rho" }, { "math_id": 2, "text": "u" }, { "math_id": 3, "text": "p" }, { "math_id": 4, "text": "\\varepsilon" }, { "math_id": 5, "text": "T" }, { "math_id": 6, "text": "\\mu'=\\mu+3\\zeta/4" }, { "math_id": 7, "text": "\\mu" }, { "math_id": 8, "text": "\\zeta" }, { "math_id": 9, "text": "\\lambda" }, { "math_id": 10, "text": "f(p,\\rho,T)=0" }, { "math_id": 11, "text": "\\varepsilon=\\varepsilon(p,\\rho)" }, { "math_id": 12, "text": "h=\\varepsilon + p/\\rho." }, { "math_id": 13, "text": "0" }, { "math_id": 14, "text": "1" }, { "math_id": 15, "text": "D=u_0" }, { "math_id": 16, "text": "\\begin{align}\n\\rho u &= \\rho_0 D,\\\\\np + \\rho u^2 - \\frac{4}{3}\\mu' \\frac{du}{dx} &= p_0 +\\rho_0 D^2,\\\\\nh + \\frac{u^2}{2} - \\frac{1}{\\rho_0 D} \\left(\\lambda \\frac{dT}{dx} + \\frac{4}{3} \\mu' \n u\\frac{du}{dx}\\right)&=h_0 +\\frac{D^2}{2}.\n\\end{align}\n" }, { "math_id": 17, "text": "\\rho_1u_1=\\rho_0 D" }, { "math_id": 18, "text": "p_1 + \\rho_1 u_1^2=p_0+\\rho_0 D^2" }, { "math_id": 19, "text": "h_1+u_1^2/2 = h_0 + D^2/2." }, { "math_id": 20, "text": "p/\\rho T = c_p(\\gamma-1)/\\gamma" }, { "math_id": 21, "text": "h=c_pT" }, { "math_id": 22, "text": "c_p" }, { "math_id": 23, "text": "\\gamma" }, { "math_id": 24, "text": "h + \\frac{u^2}{2} - \\frac{4\\mu'}{3\\rho_0 D} \\left( \\frac{3}{4Pr'}\\frac{dh}{dx} + \\frac{1}{2} \\frac{du^2}{dx}\\right) =h_0 +\\frac{D^2}{2}" }, { "math_id": 25, "text": "Pr'=\\mu'c_p/\\lambda" }, { "math_id": 26, "text": "\\mu'" }, { "math_id": 27, "text": "\\zeta=0" }, { "math_id": 28, "text": "Pr=\\mu c_p/\\lambda" }, { "math_id": 29, "text": "Pr'=3/4" }, { "math_id": 30, "text": "d(h+u^2/2)/dx" }, { "math_id": 31, "text": "0.72" }, { "math_id": 32, "text": "h+u^2/2" }, { "math_id": 33, "text": "h+\\frac{u^2}{2} = h_0 +\\frac{D^2}{2}." }, { "math_id": 34, "text": "h=c_p T = \\gamma p\\upsilon/(\\gamma-1)=c^2/(\\gamma-1)" }, { "math_id": 35, "text": "\\upsilon=1/\\rho" }, { "math_id": 36, "text": "c" }, { "math_id": 37, "text": "p(x)/p_0" }, { "math_id": 38, "text": "\\eta(x)=\\frac{u}{D}=\\frac{\\rho_0}{\\rho}=\\frac{\\upsilon}{\\upsilon_0}" }, { "math_id": 39, "text": "\\frac{p}{p_0} = \\frac{1}{\\eta}\\left[1+\\frac{\\gamma-1}{2}M_0^2(1-\\eta^2)\\right]= \\frac{\\eta_1(\\gamma+1)/(\\gamma-1)-\\eta^2}{[\\eta_1(\\gamma+1)/(\\gamma-1)-1]\\eta}, \\quad \\eta_1 = \\frac{\\gamma-1}{\\gamma+1} + \\frac{2}{(\\gamma+1)M_0^2}," }, { "math_id": 40, "text": "M_0=D/c_0" }, { "math_id": 41, "text": "\\eta_1=u_1/D=\\rho_0/\\rho_1=\\upsilon_1/\\upsilon_0" }, { "math_id": 42, "text": "\\eta(x)" }, { "math_id": 43, "text": "\\frac{8\\gamma}{3(\\gamma+1)}\\frac{\\mu'}{\\rho_0 D} \\eta \\frac{d\\eta}{dx} = -(1-\\eta)(\\eta-\\eta_1)." }, { "math_id": 44, "text": "\\xi = \\frac{3(\\gamma+1)}{8\\gamma} \\rho_0 D \\int_0^x \\frac{dt}{\\mu'(t)}" }, { "math_id": 45, "text": "\\mu'(x)=\\mu'[T(x)]" }, { "math_id": 46, "text": "\\eta \\frac{d\\eta}{d\\xi} = -(1-\\eta)(\\eta-\\eta_1)." 
}, { "math_id": 47, "text": "x" }, { "math_id": 48, "text": "(\\eta_1+1)/2" }, { "math_id": 49, "text": "\\frac{1-\\eta}{(\\eta-\\eta_1)^{\\eta_1}}= \\left(\\frac{1-\\eta_1}{2}\\right)^{1-\\eta_1}e^{(1-\\eta_1)\\xi}." }, { "math_id": 50, "text": "\\xi\\to-\\infty" }, { "math_id": 51, "text": "x\\to-\\infty" }, { "math_id": 52, "text": "\\eta\\to 1" }, { "math_id": 53, "text": "\\xi\\to+\\infty" }, { "math_id": 54, "text": "x\\to+\\infty" }, { "math_id": 55, "text": "\\eta\\to \\eta_1." }, { "math_id": 56, "text": "T/T_0" }, { "math_id": 57, "text": "\\frac{T}{T_0} = 1 + \\frac{\\gamma-1}{2}M_0^2 (1-\\eta^2)" }, { "math_id": 58, "text": "s = c_p \\ln (p^{1/\\gamma}/\\rho) = c_p\\{\\ln(p/\\rho) - [(\\gamma-1)/\\gamma]\\ln p\\} " }, { "math_id": 59, "text": "\\frac{s-s_0}{c_p} = \\ln \\frac{T}{T_0} - \\frac{\\gamma-1}{\\gamma} \\ln \\frac{p}{p_0}." }, { "math_id": 60, "text": "\\gamma=1.4" }, { "math_id": 61, "text": "M_0=2" }, { "math_id": 62, "text": "Tds=d\\varepsilon+pd\\upsilon = dh-\\rho^{-1}dp" }, { "math_id": 63, "text": "\\rho u T \\frac{ds}{dx} = \\frac{4}{3}\\mu' \\left(\\frac{du}{dx}\\right)^2 + \\frac{d}{dx}\\left(\\lambda\\frac{dT}{dx}\\right)." }, { "math_id": 64, "text": "(du/dx)^2" }, { "math_id": 65, "text": "d(\\lambda dT/dx)/dx>0" }, { "math_id": 66, "text": "d(\\lambda dT/dx)/dx<0" }, { "math_id": 67, "text": "Pr'\\neq 3/4" }, { "math_id": 68, "text": "p-p_0" }, { "math_id": 69, "text": "\\rho-\\rho_0" }, { "math_id": 70, "text": "\\delta" }, { "math_id": 71, "text": "\\delta \\sim p_1-p_0" }, { "math_id": 72, "text": "dp/dx" }, { "math_id": 73, "text": "p(x)" }, { "math_id": 74, "text": "p(x) = \\frac{1}{2}(p_1+p_0) + \\frac{1}{2}(p_1-p_0) \\tanh \\frac{x}{\\delta}" }, { "math_id": 75, "text": "\\delta = \\frac{8a\\rho_0 c_0^4}{(p_1-p_0)\\Gamma}, \\quad a = \\frac{\\lambda_0}{2\\rho_0 c_0^3c_p}\\left(\\frac{4}{3}Pr'+ \\gamma-1\\right)," }, { "math_id": 76, "text": "\\Gamma" }, { "math_id": 77, "text": "\\Gamma=(\\gamma+1)/2" }, { "math_id": 78, "text": "a" }, { "math_id": 79, "text": "(1/T)dp/dx" }, { "math_id": 80, "text": "\\frac{s(x) - s_0}{c_p} = \\frac{\\lambda_0 \\Gamma}{8a c_p c_0T_0} \\left(\\frac{\\partial T}{\\partial p}\\right)_{s_0}\\frac{(p_1-p_0)^2}{\\rho_0^2c_0^4} \\frac{1}{\\cosh^2 (x/\\delta)}." }, { "math_id": 81, "text": "s(x)-s_0" }, { "math_id": 82, "text": "s_1-s_0" }, { "math_id": 83, "text": "s=s_0" }, { "math_id": 84, "text": "x\\to\\pm\\infty" }, { "math_id": 85, "text": "s(x)" }, { "math_id": 86, "text": "lc" }, { "math_id": 87, "text": "k" }, { "math_id": 88, "text": "a\\sim l/c^2" }, { "math_id": 89, "text": "p/\\rho\\sim c^2" }, { "math_id": 90, "text": "\\delta \\sim l," }, { "math_id": 91, "text": "p_2-p_1" }, { "math_id": 92, "text": "(Pr'\\to 0)" }, { "math_id": 93, "text": "\\begin{align}\n\\rho u &= \\rho_0 D,\\\\\np + \\rho u^2 &= p_0 +\\rho_0 D^2,\\\\\nh + \\frac{u^2}{2} - \\frac{\\lambda}{\\rho_0 D} \\frac{dT}{dx} &=h_0 +\\frac{D^2}{2}.\n\\end{align}\n" }, { "math_id": 94, "text": "\\eta" }, { "math_id": 95, "text": "\\begin{align}\n\\frac{p}{p_0} &= 1 + \\gamma M_0^2(1-\\eta),\\\\\n\\frac{T}{T_0} &= 1 + (1-\\eta) (\\gamma M_0^2 \\eta-1),\\\\\n\\frac{\\lambda}{\\rho_0 D^3}\\frac{dT}{dx} & = \\frac{1}{2}\\frac{\\gamma-1}{\\gamma+1}(1-\\eta)(\\eta-\\eta_1).\n\\end{align}" }, { "math_id": 96, "text": "dT/dx=f(T)" }, { "math_id": 97, "text": "M_0^2 > \\frac{3\\gamma-1}{\\gamma(3-\\gamma)};" }, { "math_id": 98, "text": "M_0>1.2." 
}, { "math_id": 99, "text": "(Pr'\\to \\infty)" }, { "math_id": 100, "text": "\\begin{align}\n\\rho u &= \\rho_0 D,\\\\\np + \\rho u^2 - \\frac{4}{3}\\mu' \\frac{du}{dx} &= p_0 +\\rho_0 D^2,\\\\\nh + \\frac{u^2}{2} - \\frac{4\\mu'}{3\\rho_0 D} u\\frac{du}{dx}&=h_0 +\\frac{D^2}{2}.\n\\end{align}\n" }, { "math_id": 101, "text": "p/p_0" } ]
https://en.wikipedia.org/wiki?curid=77210919
77212117
Agnew's theorem
Agnew's theorem characterizes term rearrangements that preserve convergence of series. It was proposed by American mathematician Ralph Palmer Agnew. Statement. Let "p" be a permutation of formula_0, i.e., a bijective function formula_1. Then the following two statements are equivalent: Examples. Let us split formula_0 in intervals: formula_5 where formula_6 and formula_7 for any formula_8. Let us also consider a permutation formula_9 composed of an infinite number of permutations formula_10 that permute numbers within corresponding intervals: formula_11 Since each formula_10 maps formula_12 to itself, it follows that formula_13 maps formula_14 to: Hence, the total number of intervals in the image under formula_13 of formula_14 equals 1 plus whatever number of additional intervals is created by formula_10. Bounded intervals. Permutation formula_10 can create at most formula_20 additional intervals by mapping the first half of its interval, formula_21, in an interleaving fashion: formula_22 If the lengths of the intervals are bounded, i.e., formula_23, then permutation formula_10 can create at most formula_24 additional intervals, fulfilling the criterion in Agnew's theorem. Therefore, any formula_10 may be used. This means that the terms of any convergent series formula_2 may be rearranged freely within groups, if the lengths of these groups are bounded by a constant. Unbounded intervals. Permutations formula_10 that mirror their interval: formula_25 permutations formula_10 that perform right circular shifts of their interval by formula_26 positions (formula_27): formula_28 and permutations formula_10 that are the inverses of the interleaving permutations described above: formula_29 all create 1 additional interval, fulfilling the criterion in Agnew's theorem. Permutations formula_10 that rearrange their interval as formula_30 blocks can create at most formula_31 additional intervals. If the number of these blocks is bounded, then the criterion in Agnew's theorem is fulfilled. This means that within groups of arbitrary unbounded length the terms of any convergent series formula_2 may be mirrored, circularly shifted and rearranged in blocks (if the number of these blocks is bounded by a constant); terms at even positions within groups may be gathered at the beginning of the group (in the same order). Dealing with unknown series. The permutations described by Agnew's theorem can transform a divergent series into a convergent one. Let us consider a permutation formula_13 as described above with intervals increasing and formula_10 being interleaving permutations described above. Such formula_13 does not fulfill the criterion in Agnew's theorem, therefore, there exists a convergent series formula_32 such that formula_33 is either divergent or converges to a different sum. But it can't converge to a different sum: the inverse permutation formula_34 is composed of inverses of interleaving permutations formula_35, which all fulfill the criterion in Agnew's theorem, therefore formula_36 would converge to the same sum as formula_33. This means that formula_33 must be divergent. However, if we require both formula_13 and formula_34 to satisfy the criterion in Agnew's theorem, then formula_13 will preserve both convergence (with the same sum) and divergence. (If it didn't preserve divergence, then the inverse wouldn't preserve convergence.) In fact, such permutations preserve absolute convergence (with the same sum), conditional convergence (with the same sum) and divergence. 
(All permutations preserve absolute convergence with the same sum; a conditionally convergent series can't be turned into an absolutely convergent one because the reverse permutation wouldn't preserve absolute convergence.) This means that, when dealing with a series for which it is unknown whether it converges and what type of convergence it has, its terms may be rearranged using permutations formula_13, such that both formula_13 and formula_34 map formula_14 to at most formula_37 intervals, without changing the type of convergence/divergence of the series. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
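A quick numerical check of the bounded-interval case described above (an illustration, not a proof; the block length, the series and the number of terms are arbitrary choices): reversing the terms of the conditionally convergent alternating harmonic series within fixed blocks of ten is a permutation of the kind covered by Agnew's theorem, and the rearranged partial sums converge to the same value, ln 2.

```python
# Rearranging the alternating harmonic series within fixed-length blocks.
from math import log

def a(n):                                    # terms of the alternating harmonic series
    return (-1) ** (n + 1) / n

def block_mirror(n, block=10):
    """Permutation that reverses each block [k*block + 1, (k+1)*block]."""
    k, r = divmod(n - 1, block)              # zero-based block index and offset
    return k * block + (block - r)           # mirrored position within the block

N = 1_000_000                                 # a multiple of the block length
original = sum(a(n) for n in range(1, N + 1))
rearranged = sum(a(block_mirror(n)) for n in range(1, N + 1))
print(original, rearranged, log(2))           # all three agree to about 1e-6
```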
[ { "math_id": 0, "text": "\\mathbb{N}" }, { "math_id": 1, "text": "p: \\mathbb{N} \\to \\mathbb{N}" }, { "math_id": 2, "text": "\\sum_{n=1}^\\infty a_n" }, { "math_id": 3, "text": "\\sum_{n=1}^\\infty a_{p(n)}" }, { "math_id": 4, "text": "n \\in \\mathbb{N}" }, { "math_id": 5, "text": "[g_0+1,\\,g_1],\\,\\ldots,\\,[g_{k-1}+1,\\,g_k],\\,\\ldots\\;," }, { "math_id": 6, "text": "g_0=0" }, { "math_id": 7, "text": "g_k>g_{k-1}" }, { "math_id": 8, "text": "k \\in \\mathbb{N}" }, { "math_id": 9, "text": "p=p_1 \\circ \\cdots \\circ p_k \\circ \\cdots" }, { "math_id": 10, "text": "p_k" }, { "math_id": 11, "text": "\\begin{cases}\np_k(n) \\in [g_{k-1}+1,\\,g_k] &\\text{if}\\;\\; n \\in [g_{k-1}+1,\\,g_k]\\\\\np_k(n) = n &\\text{otherwise}\n\\end{cases}" }, { "math_id": 12, "text": "[g_{k-1}+1,\\,g_k]" }, { "math_id": 13, "text": "p" }, { "math_id": 14, "text": "[1,n]" }, { "math_id": 15, "text": "n=g_k" }, { "math_id": 16, "text": "k" }, { "math_id": 17, "text": "[1,\\,g_{k-1}]" }, { "math_id": 18, "text": "[g_{k-1}+1,\\,n]" }, { "math_id": 19, "text": "n \\in [g_{k-1}+1,\\,g_k-1]" }, { "math_id": 20, "text": "\\left\\lfloor\\frac{g_k-g_{k-1}}{2}\\right\\rfloor" }, { "math_id": 21, "text": "[g_{k-1}+1,\\,g_{k-1}+\\left\\lfloor\\frac{g_k-g_{k-1}}{2}\\right\\rfloor]" }, { "math_id": 22, "text": "p_k(g_{k-1}+n) = g_{k-1}+2n\\;." }, { "math_id": 23, "text": "g_k-g_{k-1} \\le L" }, { "math_id": 24, "text": "\\left\\lfloor\\frac{L}{2}\\right\\rfloor" }, { "math_id": 25, "text": "p_k(g_{k-1}+n) = g_k+1-n\\;," }, { "math_id": 26, "text": "S" }, { "math_id": 27, "text": "0 < S < g_k-g_{k-1}" }, { "math_id": 28, "text": "p_k(g_{k-1}+n) = g_{k-1}+1+\\left((n-1+S) \\bmod (g_k-g_{k-1})\\right)\\;," }, { "math_id": 29, "text": "p_k(g_{k-1}+n) = \\begin{cases}\ng_{k-1}+\\left\\lfloor\\frac{g_k-g_{k-1}}{2}\\right\\rfloor+\\frac{n+1}{2} &\\text{if}\\;n\\;\\text{odd}\\\\\ng_{k-1}+\\frac{n}{2} &\\text{if}\\;n\\;\\text{even}\n\\end{cases}" }, { "math_id": 30, "text": "B > 1" }, { "math_id": 31, "text": "\\min(\\left\\lceil\\frac{B}{2}\\right\\rceil,\\left\\lfloor\\frac{g_k-g_{k-1}}{2}\\right\\rfloor)" }, { "math_id": 32, "text": "\\sum_{i=1}^\\infty a_n" }, { "math_id": 33, "text": "\\sum_{i=1}^\\infty a_{p(n)}" }, { "math_id": 34, "text": "p^{-1}" }, { "math_id": 35, "text": "p_k^{-1}" }, { "math_id": 36, "text": "\\sum_{i=1}^\\infty a_{p^{-1}(p(n))} = \\sum_{i=1}^\\infty a_n" }, { "math_id": 37, "text": "K" } ]
https://en.wikipedia.org/wiki?curid=77212117
772136
Vigorish
Fee charged by a bookmaker for accepting a gambler's wager Vigorish (also known as juice, under-juice, the cut, the take, the margin, the house edge or the vig) is the fee charged by a bookmaker for accepting a gambler's wager. In American English, it can also refer to the interest owed a loanshark in consideration for credit. The term came to English usage via Yiddish slang () which was itself a loanword from Russian (). As a business practice it is an example of risk management; by doing so bookmakers can guarantee turning a profit regardless of the underlying event's outcome. As a rule, bookmakers do not want to have a financial interest creating a preference for one result over another in any given sporting event. This is accomplished by incentivizing their clientele to wager offsetting amounts on all potential outcomes of the event. The normal method by which this is achieved is by adjusting the payouts for each outcome (collectively called the line) as imbalances of total amounts wagered between them occur. Within the mathematical disciplines of probability and statistics this is analogous to an overround, though the two are not synonymous but are related by the connecting formulae below. Over round occurs when the sum of the implied probabilities for all possible event results is above 100%, whereas the vigorish is the bookmaker's percentage profit on the total stakes made on the event. For example, an overround of 20% results in 16.66% vigorish. The connecting formulae are formula_0 where v represents vigorish and o represents over round. Proportionality. It is simplest to assume that vigorish is factored in proportionally to the true odds, although this need not be the case. Under proportional vigorish, a "fair odds" betting line of 2.00/2.00 without vigorish would decrease the payouts of all outcomes equally, perhaps to 1.95/1.95, once it was added. More commonly though, disproportional vigorish will be applied as part of the efforts to keep the amounts wagered balanced, such as 1.90/2.00, making the outcome with fewer dollars wagered appear more attractive due to the larger payout. Example. In the context of betting, two individuals may choose to place a wager on opposite outcomes of an event, agreeing on "fair odds" or evens. This arrangement involves each party risking an equal amount, such as $100, with the potential to win the same amount. The arrangement is made directly between the individuals, bypassing a bookmaker. Consequently, the winner is entitled to the total amount staked by both parties, while the loser forfeits their stake. This direct betting approach implies that both parties accept the counterparty risk, acknowledging the possibility that the losing party may not honor the payment upon the event's conclusion, a risk typically mitigated by a bookmaker through the payment of vigorish. In sports betting, vigorish is applied in scenarios with a 50/50 probability outcome, such as a coin toss, where the bookmaker adjusts the odds to ensure a profit regardless of the bet outcome. For a practical illustration of how vigorish is calculated in sports betting, consider an NBA game with odds set at +210 (32.26%) and -250 (71.43%), where the combined implied probabilities equal 103.69%, resulting in a vigorish of 3.69%. By contrast, when using a sportsbook with the odds set at 1.90/2.00 (10 to 11) with vigorish factored in, each person would have to risk or lay $110 to win $100 (the sportsbook collects $220 "in the pot"). 
The extra $10 per person is, in effect, a bookmaker's commission for taking the action. This $10 is not in play and cannot be doubled by the winning bettor; it can only be lost. A losing bettor simply loses his $110. A winning bettor wins back his original $110, plus his $100 winnings, for a total of $210. From the $220 collected, the sportsbook keeps the remaining $10 after paying out the winner. Theory versus practice. Vigorish can be defined independent of the outcome of the event and of bettors' behaviors, by defining it as the percentage of total dollars wagered retained by the bookmaker in a risk-free wager. This definition is largely theoretical in practice as it makes the assumption that the bookmaker has balanced the wagers perfectly, such that they make equal profit regardless of the contest result. For a two-outcome event, the vigorish percentage, v is formula_1 where the p and q are the decimal payouts for each outcome. This should not be confused with the percentage a bettor pays due to vigorish. No consistent definition of the percentage a bettor pays due to vigorish can be made without first defining the bettor's behavior under juiced odds and assuming a win-percentage for the bettor. These factors are discussed under the debate section. For example, 1.90/2.00 pricing of an even match is 4.55% vigorish, and 1.95/1.95 pricing is 2.38% vigorish. Vigorish percentage for three-way events may be calculated using the following formula: formula_2 where p, q and t are the decimal payouts for each outcome. For comparison, for over round calculation only the upper part of the equation is used, leading to slightly higher percentage results than the vigorish calculation. Other kinds of vigorish. Casino games. More generically, vigorish can refer to the bookmaker/casino's theoretical advantage from all possible wagers on any Baccarat, in the house-banked version of baccarat (also mini-baccarat) commonly played in North American casinos, vigorish refers to the 5% commission (called the "cagnotte") charged to players who win a bet on the banker hand. The rules of the game are structured so that the banker hand wins slightly more often than the player hand; the 5% vigorish restores the house advantage to the casino for both bets. In most casinos, a winning banker bet is paid at even money, with a running count of the commission owed kept by special markers in a commission box in front of the dealer. This commission must be paid when all the cards are dealt from the shoe or when the player leaves the game. Some casinos do not keep a running commission amount, and instead withdraw the commission directly from the winnings; a few require the commission to be posted along with the bet, in a separate space on the table. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
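The formulas above are simple to put into code. The helpers below are an illustration (the function names are assumptions): they compute the over round from decimal payouts, convert it to vigorish with v = o/(1 + o), and evaluate the closed two-outcome form. At decimal odds of 21/11 ≈ 1.909 on each side, the 10-to-11 pricing of the $110/$100 example, they reproduce the 4.55% figure ($10 kept out of $220), and a 20% over round gives the roughly 16.7% vigorish quoted in the lead.

```python
# Vigorish from decimal payouts, using the formulas given above.
def overround(*decimal_odds):
    """Sum of implied probabilities minus one."""
    return sum(1 / d for d in decimal_odds) - 1

def vig(*decimal_odds):
    """Vigorish from the over round: v = o / (1 + o)."""
    o = overround(*decimal_odds)
    return o / (1 + o)

def vig_two_way(p, q):
    """Closed form for a two-outcome event: v = 1 - p*q / (p + q)."""
    return 1 - p * q / (p + q)

line = 21 / 11                            # decimal equivalent of 10-to-11 (risk 110 to win 100)
print(f"{vig(line, line):.2%}")           # 4.55%, matching the $10-of-$220 example
print(f"{vig_two_way(line, line):.2%}")   # same value from the closed form
print(f"{0.20 / 1.20:.2%}")               # a 20% over round gives roughly 16.7% vigorish
```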
[ { "math_id": 0, "text": "v = \\frac{o}{1 + o} \\quad \\text{ and } \\quad o = \\frac{v}{1 - v}" }, { "math_id": 1, "text": "v = 100\\left(1 - {pq \\over p + q}\\right)" }, { "math_id": 2, "text": " v = 100 \\cdot \\frac{ 1/p + 1/q + 1/t-1}{1/p + 1/q + 1/t}" } ]
https://en.wikipedia.org/wiki?curid=772136
77214642
IUCN Green Status of Species
Conservation assessment system The IUCN Green Status of Species is a conservation assessment system published by the International Union for Conservation of Nature (IUCN) that grades the impact of recovery and conservation efforts for individual species. The first version of the Green Status assessment guidelines was published in 2018, and integration of Green statuses into Red List assessments was formalized as an optional component in 2020. The second version of the framework was published in 2021. History. The creation of the Green Status system began with the formal call of the World Conservation Congress (WCC) in 2012 for the creation of a "Green List" of ecosystems, nature preserves and species based on a set of measurement systems for conservation success. In Resolution 41, the WCC noted that merely preventing extinction of species or loss of ecosystems, the goal of the Red Lists, was insufficient to retain biodiversity, preserve the valuable ecological services provided by ecosystems and species and maintain their resilience in the face of threats like those posed by climate change. Ultimately, the Green List of Species was developed separately from what became the IUCN Green List of Protected and Conserved Areas. In 2020, the IUCN decided to rename the Green List of Species the IUCN Green Status of Species due to methodological differences between it and the Green List of Protected and Conserved Areas and concern that having a species receive a Green "listing" might be perceived as implying that it is not at risk of extinction. The Green Status complements the Red List assessment but does not replace it: both assessments are performed by the IUCN for a given species and, with the exception of species extinct in the wild that would require reintroduction as a conservation measure and whose current Green Score is by definition 0%, one status does not determine the other. Pilot program. As of April 2020, preliminary IUCN Green Status assessments had been performed for 179 species. Among the IUCN Species Survival Commission Specialist Groups and IUCN Red List Authorities in existence in 2018, 52 out of the 135 working groups chose to contribute to the Green Status pilot. In interviews of stakeholders performed by the IUCN, it was suggested that Green List assessments may be most effective if performed at multiple spatial scales, such as in a regional assessment. Interviewees expressed concerns over the difficulty of establishing baseline Green Scores, especially for species that live in places difficult to survey, like the ocean, and in places, such as Europe, where human change has been occurring for a long time. They were also concerned about the cost of producing the new, complex assessments. The pilot was judged successful by the IUCN, leading to the launch of the program in mid-2021 and publication of Green Status assessments in the IUCN Red List using the updated Green Status of Species standard. Assessment. The score (Green Score) is an average of spatial units currently occupied or occupied in the past by a species weighted by their integrity. Representative values of assigned weights are 0, if the species is not present in the area, 3, if the species is present, 6, if the population is viable and 9, if the population is assessed as functional, although depending on the exact criteria used by the assessor, the functional weight can be assigned to 8 or 10 and decimal weights may be used. 
The exact meaning of these terms varies by assessor and species, but the IUCN suggests conducting the assessment as would be done when assigning a regional Red List status, with the exception of assessing functionality, which is based on the ability of the population within the spatial unit to carry out natural processes, such as migration, the integrity of its interactions within its habitat, such as predator-prey relationships with other species, and its contributions to ecosystem processes within the unit, such as seed dispersal. Spatial units can represent reproductively isolated populations or subspecies, areas where the species faces a unique threat, division by ecosystem types the species inhabits or may be based on geographical features with some barrier to dispersal. National borders may also be considered when delineating spatial units. The definition and number of spatial units chosen by the assessor directly influences the Green Score and conservation metrics that are obtained. The Green Score is expressed as a percentage equal to: formula_0 Where WS is the weight (integrity) of the spatial unit, N is the total number of spatial units and WF is the weight of a functional unit (highest weight possible). A Green Score of 100% is defined for a fully recovered or non-depleted species that is present in all parts of its historic range (prior to any major human disturbance), each with viable populations that are ecologically functional, a score that may not be realistically attainable for many species even if they achieve their Recovery Potential. Conservation metrics. A Green Status assessment also includes four conservation metrics that represent changes in Green Score in different conservation scenarios over periods of time. In the first version of the assessment, the first metric is "Conservation Legacy", which measures the difference in the estimated change in Green Score from 1950 to present if no conservation actions had been undertaken (counterfactual scenario) to the actual change to present. If no conservation actions had occurred in this time, the Conservation Legacy would be 0%. "Conservation Dependence", the second metric, assesses the change in Green Scores between the present and the short-term future, defined as three generations of the species or 10 years, whichever is longer, in the first version of the assessment, or 10 years alone in the second version, if no conservation actions are undertaken. "Conservation Gain", the third metric, is the change in Green Scores between the present and short-term future with conservation action. Finally, "Recovery Potential" is the change in Green Score between the present and the long-term future, defined as 100 years after present, in an optimal conservation scenario. In the case of assessing Conservation Legacy, the spatial units used for calculating Green Score reflect indigenous range, such as the range the species occupied before 1500 (estimate of the beginning of European expansion) or 1750 (approximate beginning of the Industrial Revolution). Expected additional range, such as habitats that a species may begin to occupy under anticipated climate warming scenarios, is used in calculating long-term future Green scores. 
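The weighted-average formula above is straightforward to evaluate. The following Python sketch computes a Green Score from a list of spatial-unit weights and derives a Conservation Legacy value from two such scores; the function names and example weights are hypothetical, and the simple 0/3/6/9 scheme is only one of the weightings described above.

```python
def green_score(unit_weights, functional_weight=9):
    """Green Score: the sum of spatial-unit weights over the maximum attainable
    total (the functional weight in every unit), expressed as a percentage."""
    return 100 * sum(unit_weights) / (functional_weight * len(unit_weights))

# Hypothetical species assessed over five spatial units:
# absent (0), present (3), viable (6), functional (9), viable (6).
current = [0, 3, 6, 9, 6]
print(round(green_score(current), 1))            # 53.3

# Conservation Legacy (first version): present score minus the score under a
# counterfactual "no conservation since 1950" scenario (weights hypothetical).
counterfactual = [0, 0, 3, 6, 3]
print(round(green_score(current) - green_score(counterfactual), 1))   # 26.7
```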
While the Green Status and the Red List statuses showed a moderate negative correlation among species assessed in a Green Status pilot, with progressively more depleted species being more likely to be threatened with extinction, among conservation metrics, only Recovery Potential showed differences between IUCN Red List categories, with currently imperiled species generally possessing a higher Recovery Potential than the species of Least Concern. Green Status. The Green Status or Species Recovery Category is expressed in words in the second version of the Green Status Assessment. It is based on the Green Score, also known as the Species Recovery Score, which is a point estimate (SRSbest), with a corresponding confidence interval (bounded by SRSmax and SRSmin). The present-day Green Status or Species Recovery Category is defined as follows: The conservation metrics are also expressed as point estimates with their own confidence intervals and verbal descriptors. They assess the effectiveness of conservation measures as measured by predicted changes in species' Green Scores over time. The verbal descriptors have criteria based on absolute change in Green Score (magnitude of the conservation metric), change relative to the baseline present-day Green Score or any benefit that prevents extinction, in cases of species with high conservation needs. The metrics can have zero and negative values. In the case of the Conservation Dependence and Conservation Gain metrics, "false negative" values have been attributed to use of a static Green Score baseline that does not indicate whether the species is projected to decline or recover if threats to a species change in the short term, and future versions of the Green Score assessments are expected to instead use a dynamic baseline to compensate. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{\\sum_{S=1}^N W_S}{W_F*N} * 100" } ]
https://en.wikipedia.org/wiki?curid=77214642
772150
Degenerate bilinear form
In mathematics, specifically linear algebra, a degenerate bilinear form "f"("x", "y") on a vector space "V" is a bilinear form such that the map from "V" to "V"∗ (the dual space of "V") given by "v" ↦ ("x" ↦ "f"("x", "v")) is not an isomorphism. An equivalent definition when "V" is finite-dimensional is that it has a non-trivial kernel: there exists some non-zero "x" in "V" such that formula_0 for all formula_1 Nondegenerate forms. A nondegenerate or nonsingular form is a bilinear form that is not degenerate, meaning that formula_2 is an isomorphism, or equivalently in finite dimensions, if and only if formula_3 for all formula_4 implies that formula_5. The most important examples of nondegenerate forms are inner products and symplectic forms. Symmetric nondegenerate forms are important generalizations of inner products, in that often all that is required is that the map formula_6 be an isomorphism, not positivity. For example, a manifold with an inner product structure on its tangent spaces is a Riemannian manifold, while relaxing this to a symmetric nondegenerate form yields a pseudo-Riemannian manifold. Using the determinant. If "V" is finite-dimensional then, relative to some basis for "V", a bilinear form is degenerate if and only if the determinant of the associated matrix is zero – if and only if the matrix is "singular", and accordingly degenerate forms are also called singular forms. Likewise, a nondegenerate form is one for which the associated matrix is non-singular, and accordingly nondegenerate forms are also referred to as non-singular forms. These statements are independent of the chosen basis. Related notions. If for a quadratic form "Q" there is a non-zero vector "v" ∈ "V" such that "Q"("v") = 0, then "Q" is an isotropic quadratic form. If "Q" has the same sign for all non-zero vectors, it is a definite quadratic form or an anisotropic quadratic form. There is the closely related notion of a unimodular form and a perfect pairing; these agree over fields but not over general rings. Examples. The study of real, quadratic algebras shows the distinction between types of quadratic forms. The product "zz"* is a quadratic form for each of the complex numbers, split-complex numbers, and dual numbers. For "z" = "x" + ε "y", the dual number form is "x"2 which is a degenerate quadratic form. The split-complex case is an isotropic form, and the complex case is a definite form. Infinite dimensions. Note that in an infinite-dimensional space, we can have a bilinear form ƒ for which formula_2 is injective but not surjective. For example, on the space of continuous functions on a closed bounded interval, the form formula_7 is not surjective: for instance, the Dirac delta functional is in the dual space but not of the required form. On the other hand, this bilinear form satisfies formula_8 for all formula_9 implies that formula_10 In such a case where ƒ satisfies injectivity (but not necessarily surjectivity), ƒ is said to be "weakly nondegenerate". Terminology. 
If "f" vanishes identically on all vectors it is said to be totally degenerate. Given any bilinear form "f" on "V" the set of vectors formula_11 forms a totally degenerate subspace of "V". The map "f" is nondegenerate if and only if this subspace is trivial. Geometrically, an isotropic line of the quadratic form corresponds to a point of the associated quadric hypersurface in projective space. Such a line is additionally isotropic for the bilinear form if and only if the corresponding point is a singularity. Hence, over an algebraically closed field, Hilbert's Nullstellensatz guarantees that the quadratic form always has isotropic lines, while the bilinear form has them if and only if the surface is singular. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f(x,y)=0\\," }, { "math_id": 1, "text": "\\,y \\in V." }, { "math_id": 2, "text": "v \\mapsto (x \\mapsto f(x,v))" }, { "math_id": 3, "text": "f(x,y)=0" }, { "math_id": 4, "text": "y \\in V" }, { "math_id": 5, "text": "x = 0" }, { "math_id": 6, "text": "V \\to V^*" }, { "math_id": 7, "text": " f(\\phi,\\psi) = \\int\\psi(x)\\phi(x) \\,dx" }, { "math_id": 8, "text": "f(\\phi,\\psi)=0" }, { "math_id": 9, "text": "\\phi" }, { "math_id": 10, "text": "\\psi=0.\\," }, { "math_id": 11, "text": "\\{x\\in V \\mid f(x,y) = 0 \\mbox{ for all } y \\in V\\}" } ]
https://en.wikipedia.org/wiki?curid=772150
7721927
Siegel modular form
Major type of automorphic form in mathematics In mathematics, Siegel modular forms are a major type of automorphic form. These generalize conventional "elliptic" modular forms which are closely related to elliptic curves. The complex manifolds constructed in the theory of Siegel modular forms are Siegel modular varieties, which are basic models for what a moduli space for abelian varieties (with some extra level structure) should be and are constructed as quotients of the Siegel upper half-space rather than the upper half-plane by discrete groups. Siegel modular forms are holomorphic functions on the set of symmetric "n" × "n" matrices with positive definite imaginary part; the forms must satisfy an automorphy condition. Siegel modular forms can be thought of as multivariable modular forms, i.e. as special functions of several complex variables. Siegel modular forms were first investigated by Carl Ludwig Siegel (1939) for the purpose of studying quadratic forms analytically. These primarily arise in various branches of number theory, such as arithmetic geometry and elliptic cohomology. Siegel modular forms have also been used in some areas of physics, such as conformal field theory and black hole thermodynamics in string theory. Definition. Preliminaries. Let formula_0 and define formula_1 the Siegel upper half-space. Define the symplectic group of level formula_2, denoted by formula_3 as formula_4 where formula_5 is the formula_6 identity matrix. Finally, let formula_7 be a rational representation, where formula_8 is a finite-dimensional complex vector space. Siegel modular form. Given formula_9 and formula_10 define the notation formula_11 Then a holomorphic function formula_12 is a "Siegel modular form" of degree formula_13 (sometimes called the genus), weight formula_14, and level formula_2 if formula_15 for all formula_16. In the case that formula_17, we further require that formula_18 be holomorphic 'at infinity'. This assumption is not necessary for formula_19 due to the Koecher principle, explained below. Denote the space of weight formula_14, degree formula_13, and level formula_2 Siegel modular forms by formula_20 Examples. Some methods for constructing Siegel modular forms include: Level 1, small degree. For degree 1, the level 1 Siegel modular forms are the same as level 1 modular forms. The ring of such forms is a polynomial ring C["E"4,"E"6] in the (degree 1) Eisenstein series "E"4 and "E"6. For degree 2, (Igusa 1962, 1967) showed that the ring of level 1 Siegel modular forms is generated by the (degree 2) Eisenstein series "E"4 and "E"6 and 3 more forms of weights 10, 12, and 35. The ideal of relations between them is generated by the square of the weight 35 form minus a certain polynomial in the others. For degree 3, described the ring of level 1 Siegel modular forms, giving a set of 34 generators. For degree 4, the level 1 Siegel modular forms of small weights have been found. There are no cusp forms of weights 2, 4, or 6. The space of cusp forms of weight 8 is 1-dimensional, spanned by the Schottky form. The space of cusp forms of weight 10 has dimension 1, the space of cusp forms of weight 12 has dimension 2, the space of cusp forms of weight 14 has dimension 3, and the space of cusp forms of weight 16 has dimension 7 . For degree 5, the space of cusp forms has dimension 0 for weight 10, dimension 2 for weight 12. The space of forms of weight 12 has dimension 5. For degree 6, there are no cusp forms of weights 0, 2, 4, 6, 8. 
The space of Siegel modular forms of weight 2 has dimension 0, and those of weights 4 or 6 both have dimension 1. Level 1, small weight. For small weights and level 1, explicit dimension results are known for any positive degree. Table of dimensions of spaces of level 1 Siegel modular forms. The following table combines the results above with further information from the literature. Koecher principle. The theorem known as the "Koecher principle" states that if formula_18 is a Siegel modular form of weight formula_14, level 1, and degree formula_19, then formula_18 is bounded on subsets of formula_21 of the form formula_22 where formula_23. A corollary of this theorem is that Siegel modular forms of degree formula_19 have Fourier expansions and are thus holomorphic at infinity. Applications to physics. In the D1D5P system of supersymmetric black holes in string theory, the function that naturally captures the microstates of black hole entropy is a Siegel modular form. In general, Siegel modular forms have been described as having the potential to describe black holes or other gravitational systems. Siegel modular forms also have uses as generating functions for families of CFT2 with increasing central charge in conformal field theory, particularly the hypothetical AdS/CFT correspondence.
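The membership conditions for the level-N symplectic group in the definition above are mechanical to check for a concrete integer matrix. The following Python sketch is an illustrative check (the function name and the example matrices are assumptions, not part of any standard library); it tests the symplectic relation and the congruence condition.

```python
import numpy as np

def in_gamma_g(gamma, g, N=1):
    """Check the two defining conditions of the level-N symplectic group:
    gamma^T J gamma = J for J = [[0, I_g], [-I_g, 0]], and gamma ≡ I (mod N)."""
    gamma = np.asarray(gamma, dtype=int)
    I = np.eye(g, dtype=int)
    Z = np.zeros((g, g), dtype=int)
    J = np.block([[Z, I], [-I, Z]])
    symplectic = np.array_equal(gamma.T @ J @ gamma, J)
    congruent = np.array_equal(gamma % N, np.eye(2 * g, dtype=int) % N)
    return symplectic and congruent

# g = 1, N = 1: the level-1 group in degree 1 is SL_2(Z); S = [[0, 1], [-1, 0]] belongs to it.
print(in_gamma_g([[0, 1], [-1, 0]], g=1))          # True
print(in_gamma_g([[1, 1], [0, 2]], g=1))           # False (not symplectic)
print(in_gamma_g([[1, 2], [0, 1]], g=1, N=2))      # True: also congruent to I mod 2
```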
[ { "math_id": 0, "text": "g, N \\in \\mathbb{N}" }, { "math_id": 1, "text": "\\mathcal{H}_g=\\left\\{\\tau \\in M_{g \\times g}(\\mathbb{C}) \\ \\big| \\ \\tau^{\\mathrm{T}}=\\tau, \\textrm{Im}(\\tau) \\text{ positive definite} \\right\\}," }, { "math_id": 2, "text": "N" }, { "math_id": 3, "text": "\\Gamma_g(N)," }, { "math_id": 4, "text": "\\Gamma_g(N)=\\left\\{ \\gamma \\in GL_{2g}(\\mathbb{Z}) \\ \\big| \\ \\gamma^{\\mathrm{T}} \\begin{pmatrix} 0 & I_g \\\\ -I_g & 0 \\end{pmatrix} \\gamma= \\begin{pmatrix} 0 & I_g \\\\ -I_g & 0 \\end{pmatrix} , \\ \\gamma \\equiv I_{2g}\\mod N\\right\\}," }, { "math_id": 5, "text": "I_g" }, { "math_id": 6, "text": "g \\times g" }, { "math_id": 7, "text": "\\rho:\\textrm{GL}_g(\\mathbb{C}) \\rightarrow \\textrm{GL}(V)" }, { "math_id": 8, "text": "V" }, { "math_id": 9, "text": "\\gamma=\\begin{pmatrix} A & B \\\\ C & D \\end{pmatrix}" }, { "math_id": 10, "text": "\\gamma \\in \\Gamma_g(N)," }, { "math_id": 11, "text": "(f\\big|\\gamma)(\\tau)=(\\rho(C\\tau+D))^{-1}f(\\gamma\\tau)." }, { "math_id": 12, "text": "f:\\mathcal{H}_g \\rightarrow V" }, { "math_id": 13, "text": "g" }, { "math_id": 14, "text": "\\rho" }, { "math_id": 15, "text": "(f\\big|\\gamma)=f" }, { "math_id": 16, "text": "\\gamma \\in \\Gamma_g(N)" }, { "math_id": 17, "text": "g=1" }, { "math_id": 18, "text": "f" }, { "math_id": 19, "text": "g>1" }, { "math_id": 20, "text": "M_{\\rho}(\\Gamma_g(N))." }, { "math_id": 21, "text": "\\mathcal{H}_g" }, { "math_id": 22, "text": "\\left\\{\\tau \\in \\mathcal{H}_g \\ | \\textrm{Im}(\\tau) > \\epsilon I_g \\right\\}," }, { "math_id": 23, "text": "\\epsilon>0" } ]
https://en.wikipedia.org/wiki?curid=7721927
772241
Hurwitz quaternion
Generalization of algebraic integers In mathematics, a Hurwitz quaternion (or Hurwitz integer) is a quaternion whose components are "either" all integers "or" all half-integers (halves of odd integers; a mixture of integers and half-integers is excluded). The set of all Hurwitz quaternions is formula_0 That is, either "a", "b", "c", "d" are all integers, or they are all half-integers. "H" is closed under quaternion multiplication and addition, which makes it a subring of the ring of all quaternions H. Hurwitz quaternions were introduced by Adolf Hurwitz (1919). A Lipschitz quaternion (or Lipschitz integer) is a quaternion whose components are all integers. The set of all Lipschitz quaternions formula_1 forms a subring of the Hurwitz quaternions "H". Hurwitz integers have the advantage over Lipschitz integers that it is possible to perform Euclidean division on them, obtaining a small remainder. Both the Hurwitz and Lipschitz quaternions are examples of noncommutative domains which are not division rings. Structure of the ring of Hurwitz quaternions. As an additive group, "H" is free abelian with generators {(1 + "i" + "j" + "k")&amp;hairsp;/&amp;hairsp;2, "i", "j", "k"}. It therefore forms a lattice in R4. This lattice is known as the "F"4 lattice since it is the root lattice of the semisimple Lie algebra "F"4. The Lipschitz quaternions "L" form an index 2 sublattice of "H". The group of units in "L" is the order 8 quaternion group "Q" = {±1, ±"i", ±"j", ±"k"}. The group of units in "H" is a nonabelian group of order 24 known as the binary tetrahedral group. The elements of this group include the 8 elements of "Q" along with the 16 quaternions {(±1 ± "i" ± "j" ± "k")&amp;hairsp;/&amp;hairsp;2}, where signs may be taken in any combination. The quaternion group is a normal subgroup of the binary tetrahedral group U("H"). The elements of U("H"), which all have norm 1, form the vertices of the 24-cell inscribed in the 3-sphere. The Hurwitz quaternions form an order (in the sense of ring theory) in the division ring of quaternions with rational components. It is in fact a maximal order; this accounts for its importance. The Lipschitz quaternions, which are the more obvious candidate for the idea of an "integral quaternion", also form an order. However, this latter order is not a maximal one, and therefore (as it turns out) less suitable for developing a theory of left ideals comparable to that of algebraic number theory. What Adolf Hurwitz realised, therefore, was that this definition of Hurwitz integral quaternion is the better one to operate with. For a non-commutative ring such as H, maximal orders need not be unique, so one needs to fix a maximal order, in carrying over the concept of an algebraic integer. The lattice of Hurwitz quaternions. The (arithmetic, or field) norm of a Hurwitz quaternion "a" + "bi" + "cj" + "dk", given by "a"2 + "b"2 + "c"2 + "d"2, is always an integer. By a theorem of Lagrange every nonnegative integer can be written as a sum of at most four squares. Thus, every nonnegative integer is the norm of some Lipschitz (or Hurwitz) quaternion. More precisely, the number "c"("n") of Hurwitz quaternions of given positive norm "n" is 24 times the sum of the odd divisors of "n". The generating function of the numbers "c"("n") is given by the level 2 weight 2 modular form formula_2 OEIS:  where formula_3 and formula_4 is the weight 2 level 1 Eisenstein series (which is a quasimodular form) and "σ"1("n") is the sum of the divisors of "n". 
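The count of Hurwitz quaternions of a given norm can be verified by brute force for small "n". The Python sketch below (function names are illustrative, and the search is deliberately naive) enumerates quaternions in doubled coordinates, where the four components are either all even or all odd integers, and compares the count with 24 times the sum of the odd divisors of "n".

```python
from math import isqrt

def count_hurwitz(n):
    """Count Hurwitz quaternions of norm n by brute force, using doubled
    coordinates (2a, 2b, 2c, 2d): the four entries must all be even (Lipschitz
    case) or all odd (half-integer case) and their squares must sum to 4n."""
    target = 4 * n
    bound = 2 * isqrt(n) + 1
    rng = range(-bound, bound + 1)
    count = 0
    for x in rng:
        for y in rng:
            for z in rng:
                for w in rng:
                    if (x * x + y * y + z * z + w * w == target
                            and len({x % 2, y % 2, z % 2, w % 2}) == 1):
                        count += 1
    return count

def predicted(n):
    """24 times the sum of the odd divisors of n."""
    return 24 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 2 == 1)

for n in range(1, 6):
    assert count_hurwitz(n) == predicted(n)
print([predicted(n) for n in range(1, 6)])   # [24, 24, 96, 24, 144]
```

The printed values agree with the first coefficients of the generating function quoted above.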
Factorization into irreducible elements. A Hurwitz integer is called irreducible if it is not 0 or a unit and is not a product of non-units. A Hurwitz integer is irreducible if and only if its norm is a prime number. The irreducible quaternions are sometimes called prime quaternions, but this can be misleading as they are not primes in the usual sense of commutative algebra: it is possible for an irreducible quaternion to divide a product "ab" without dividing either "a" or "b". Every Hurwitz quaternion can be factored as a product of irreducible quaternions. This factorization is not in general unique, even up to units and order, because a positive odd prime "p" can be written in 24("p"+1) ways as a product of two irreducible Hurwitz quaternions of norm "p", and for large "p" these cannot all be equivalent under left and right multiplication by units as there are only 24 units. However, if one excludes this case then there is a version of unique factorization. More precisely, every Hurwitz quaternion can be written uniquely as the product of a positive integer and a primitive quaternion (a Hurwitz quaternion not divisible by any integer greater than 1). The factorization of a primitive quaternion into irreducibles is unique up to order and units in the following sense: if "p"0"p"1..."p""n" and "q"0"q"1..."q""n" are two factorizations of some primitive Hurwitz quaternion into irreducible quaternions where "p""k" has the same norm as "q""k" for all "k", then formula_5 for some units "u""k". Division with remainder. The ordinary integers and the Gaussian integers allow a division with remainder or Euclidean division. For positive integers "N" and "D", there is always a quotient "Q" and a nonnegative remainder "R" such that "N" = "QD" + "R" and "R" < "D". For complex or Gaussian integers "N" = "a" + i"b" and "D" = "c" + i"d", with the norm N("D") > 0, there always exist "Q" = "p" + i"q" and "R" = "r" + i"s" such that "N" = "QD" + "R" and N("R") < N("D"). However, for Lipschitz integers "N" = ("a", "b", "c", "d") and "D" = ("e", "f", "g", "h") it can happen that N("R") = N("D"). This motivated a switch to Hurwitz integers, for which the condition N("R") < N("D") is guaranteed. Many algorithms depend on division with remainder, for example, Euclid's algorithm for the greatest common divisor. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
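Division with remainder on Hurwitz integers can be sketched directly from the description above: compute the exact quotient, round it to a nearest Hurwitz integer, and take the remainder. The Python sketch below uses exact rational arithmetic; the rounding strategy (compare the nearest all-integer point with the nearest all-half-integer point) is one standard choice, and the helper names and example operands are arbitrary.

```python
from fractions import Fraction

def qmul(p, q):
    """Quaternion product, components ordered (1, i, j, k)."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qnorm(p):
    return sum(x * x for x in p)

def hurwitz_round(q):
    """Nearest Hurwitz integer to a rational quaternion: the better of the
    nearest all-integer point and the nearest all-half-integer point."""
    best, best_dist = None, None
    for offset in (Fraction(0), Fraction(1, 2)):
        cand = tuple(Fraction(round(x - offset)) + offset for x in q)
        dist = qnorm(tuple(x - y for x, y in zip(q, cand)))
        if best is None or dist < best_dist:
            best, best_dist = cand, dist
    return best

def divmod_hurwitz(n, d):
    """Return (Q, R) with n = Q*d + R and N(R) < N(d)."""
    d_conj = (d[0], -d[1], -d[2], -d[3])
    exact = tuple(Fraction(x) / qnorm(d) for x in qmul(n, d_conj))  # n * d^{-1}
    q = hurwitz_round(exact)
    r = tuple(x - y for x, y in zip(n, qmul(q, d)))
    return q, r

N, D = (7, 3, -2, 5), (2, 1, 1, 0)
Q, R = divmod_hurwitz(N, D)
assert qnorm(R) < qnorm(D)
print(Q, R)
```

Because every point of R4 lies within squared distance 1/2 of a Hurwitz integer, the remainder produced this way satisfies N("R") ≤ N("D")/2 &lt; N("D"), which is what makes a Euclidean algorithm possible.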
[ { "math_id": 0, "text": "H = \\left\\{a+bi+cj+dk \\in \\mathbb{H} \\mid a,b,c,d \\in \\mathbb{Z} \\;\\mbox{ or }\\, a,b,c,d \\in \\mathbb{Z} + \\tfrac{1}{2}\\right\\}." }, { "math_id": 1, "text": "L = \\left\\{a+bi+cj+dk \\in \\mathbb{H} \\mid a,b,c,d \\in \\mathbb{Z}\\right\\}" }, { "math_id": 2, "text": "2E_2(2\\tau)-E_2(\\tau) = \\sum_nc(n)q^n = 1 + 24q + 24q^2 + 96q^3 + 24q^4 + 144q^5 + \\dots" }, { "math_id": 3, "text": "q=e^{2\\pi i \\tau}" }, { "math_id": 4, "text": "E_2(\\tau) = 1-24\\sum_n\\sigma_1(n)q^n" }, { "math_id": 5, "text": "\\begin{align}\nq_0 & = p_0 u_1 \\\\\nq_1 & = u_1^{-1} p_1 u_2 \\\\\n& \\,\\,\\,\\vdots \\\\\nq_n & = u_n^{-1} p_n\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=772241
7723
Carmichael number
Composite number in number theory In number theory, a Carmichael number is a composite number &amp;NoBreak;&amp;NoBreak; which in modular arithmetic satisfies the congruence relation: formula_0 for all integers &amp;NoBreak;&amp;NoBreak;. The relation may also be expressed in the form: formula_1 for all integers formula_2 that are relatively prime to &amp;NoBreak;&amp;NoBreak;. They are infinite in number. They constitute the comparatively rare instances where the strict converse of Fermat's Little Theorem does not hold. This fact precludes the use of that theorem as an absolute test of primality. The Carmichael numbers form the subset "K"1 of the Knödel numbers. The Carmichael numbers were named after the American mathematician Robert Carmichael by Nicolaas Beeger, in 1950. Øystein Ore had referred to them in 1948 as numbers with the "Fermat property", or ""F" numbers" for short. Overview. Fermat's little theorem states that if formula_3 is a prime number, then for any integer &amp;NoBreak;&amp;NoBreak;, the number formula_4 is an integer multiple of &amp;NoBreak;&amp;NoBreak;. Carmichael numbers are composite numbers which have the same property. Carmichael numbers are also called Fermat pseudoprimes or absolute Fermat pseudoprimes. A Carmichael number will pass a Fermat primality test to every base formula_2 relatively prime to the number, even though it is not actually prime. This makes tests based on Fermat's Little Theorem less effective than strong probable prime tests such as the Baillie–PSW primality test and the Miller–Rabin primality test. However, no Carmichael number is either an Euler–Jacobi pseudoprime or a strong pseudoprime to every base relatively prime to it so, in theory, either an Euler or a strong probable prime test could prove that a Carmichael number is, in fact, composite. Arnault gives a 397-digit Carmichael number formula_5 that is a "strong" pseudoprime to all "prime" bases less than 307: formula_6 where formula_7 2 9674495668 6855105501 5417464290 5332730771 9917998530 4335099507 5531276838 7531717701 9959423859 6428121188 0336647542 1834556249 3168782883 is a 131-digit prime. formula_3 is the smallest prime factor of &amp;NoBreak;&amp;NoBreak;, so this Carmichael number is also a (not necessarily strong) pseudoprime to all bases less than &amp;NoBreak;&amp;NoBreak;. As numbers become larger, Carmichael numbers become increasingly rare. For example, there are 20,138,200 Carmichael numbers between 1 and 1021 (approximately one in 50 trillion (5·1013) numbers). Korselt's criterion. An alternative and equivalent definition of Carmichael numbers is given by Korselt's criterion. Theorem (A. Korselt 1899): A positive composite integer formula_8 is a Carmichael number if and only if formula_8 is square-free, and for all prime divisors formula_3 of &amp;NoBreak;&amp;NoBreak;, it is true that &amp;NoBreak;&amp;NoBreak;. It follows from this theorem that all Carmichael numbers are odd, since any even composite number that is square-free (and hence has only one prime factor of two) will have at least one odd prime factor, and thus formula_9 results in an even dividing an odd, a contradiction. (The oddness of Carmichael numbers also follows from the fact that formula_10 is a Fermat witness for any even composite number.) From the criterion it also follows that Carmichael numbers are cyclic. Additionally, it follows that there are no Carmichael numbers with exactly two prime divisors. Discovery. 
The first seven Carmichael numbers, from 561 to 8911, were all found by the Czech mathematician Václav Šimerka in 1885 (thus preceding not just Carmichael but also Korselt, although Šimerka did not find anything like Korselt's criterion). His work, published in Czech scientific journal "Časopis pro pěstování matematiky a fysiky", however, remained unnoticed. Korselt was the first who observed the basic properties of Carmichael numbers, but he did not give any examples. That 561 is a Carmichael number can be seen with Korselt's criterion. Indeed, formula_11 is square-free and &amp;NoBreak;&amp;NoBreak;, formula_12 and &amp;NoBreak;&amp;NoBreak;. The next six Carmichael numbers are (sequence in the OEIS): formula_13 formula_14 formula_15 formula_16 formula_17 formula_18 In 1910, Carmichael himself also published the smallest such number, 561, and the numbers were later named after him. Jack Chernick proved a theorem in 1939 which can be used to construct a subset of Carmichael numbers. The number formula_19 is a Carmichael number if its three factors are all prime. Whether this formula produces an infinite quantity of Carmichael numbers is an open question (though it is implied by Dickson's conjecture). Paul Erdős heuristically argued there should be infinitely many Carmichael numbers. In 1994 W. R. (Red) Alford, Andrew Granville and Carl Pomerance used a bound on Olson's constant to show that there really do exist infinitely many Carmichael numbers. Specifically, they showed that for sufficiently large formula_8, there are at least formula_20 Carmichael numbers between 1 and &amp;NoBreak;&amp;NoBreak;. Thomas Wright proved that if formula_21 and formula_22 are relatively prime, then there are infinitely many Carmichael numbers in the arithmetic progression &amp;NoBreak;&amp;NoBreak;, where &amp;NoBreak;&amp;NoBreak;. Löh and Niebuhr in 1992 found some very large Carmichael numbers, including one with 1,101,518 factors and over 16 million digits. This has been improved to 10,333,229,505 prime factors and 295,486,761,787 digits, so the largest known Carmichael number is much greater than the largest known prime. Properties. Factorizations. Carmichael numbers have at least three positive prime factors. The first Carmichael numbers with formula_23 prime factors are (sequence in the OEIS): The first Carmichael numbers with 4 prime factors are (sequence in the OEIS): The second Carmichael number (1105) can be expressed as the sum of two squares in more ways than any smaller number. The third Carmichael number (1729) is the Hardy-Ramanujan Number: the smallest number that can be expressed as the sum of two cubes (of positive numbers) in two different ways. Distribution. Let formula_24 denote the number of Carmichael numbers less than or equal to &amp;NoBreak;&amp;NoBreak;. The distribution of Carmichael numbers by powers of 10 (sequence in the OEIS): In 1953, Knödel proved the upper bound: formula_25 for some constant &amp;NoBreak;&amp;NoBreak;. In 1956, Erdős improved the bound to formula_26 for some constant &amp;NoBreak;&amp;NoBreak;. He further gave a heuristic argument suggesting that this upper bound should be close to the true growth rate of &amp;NoBreak;&amp;NoBreak;. In the other direction, Alford, Granville and Pomerance proved in 1994 that for sufficiently large "X", formula_27 In 2005, this bound was further improved by Harman to formula_28 who subsequently improved the exponent to &amp;NoBreak;&amp;NoBreak;. 
Regarding the asymptotic distribution of Carmichael numbers, there have been several conjectures. In 1956, Erdős conjectured that there were formula_29 Carmichael numbers for "X" sufficiently large. In 1981, Pomerance sharpened Erdős' heuristic arguments to conjecture that there are at least formula_30 Carmichael numbers up to &amp;NoBreak;&amp;NoBreak;, where &amp;NoBreak;}&amp;NoBreak;. However, inside current computational ranges (such as the counts of Carmichael numbers performed by Pinch up to 1021), these conjectures are not yet borne out by the data. In 2021, Daniel Larsen proved an analogue of Bertrand's postulate for Carmichael numbers first conjectured by Alford, Granville, and Pomerance in 1994. Using techniques developed by Yitang Zhang and James Maynard to establish results concerning small gaps between primes, his work yielded the much stronger statement that, for any formula_31 and sufficiently large formula_32 in terms of formula_33, there will always be at least formula_34 Carmichael numbers between formula_32 and formula_35 Generalizations. The notion of Carmichael number generalizes to a Carmichael ideal in any number field &amp;NoBreak;&amp;NoBreak;. For any nonzero prime ideal formula_36 in &amp;NoBreak;&amp;NoBreak;, we have formula_37 for all formula_38 in &amp;NoBreak;&amp;NoBreak;, where formula_39 is the norm of the ideal &amp;NoBreak;&amp;NoBreak;. (This generalizes Fermat's little theorem, that formula_40 for all integers &amp;NoBreak;&amp;NoBreak; when &amp;NoBreak;&amp;NoBreak; is prime.) Call a nonzero ideal formula_41 in formula_42 Carmichael if it is not a prime ideal and formula_43 for all &amp;NoBreak;&amp;NoBreak;, where formula_44 is the norm of the ideal &amp;NoBreak;&amp;NoBreak;. When &amp;NoBreak;&amp;NoBreak; is &amp;NoBreak;&amp;NoBreak;, the ideal formula_41 is principal, and if we let &amp;NoBreak;&amp;NoBreak; be its positive generator then the ideal formula_45 is Carmichael exactly when &amp;NoBreak;&amp;NoBreak; is a Carmichael number in the usual sense. When &amp;NoBreak;&amp;NoBreak; is larger than the rationals it is easy to write down Carmichael ideals in &amp;NoBreak;&amp;NoBreak;: for any prime number &amp;NoBreak;&amp;NoBreak; that splits completely in &amp;NoBreak;&amp;NoBreak;, the principal ideal formula_46 is a Carmichael ideal. Since infinitely many prime numbers split completely in any number field, there are infinitely many Carmichael ideals in &amp;NoBreak;&amp;NoBreak;. For example, if &amp;NoBreak;&amp;NoBreak; is any prime number that is 1 mod 4, the ideal &amp;NoBreak;&amp;NoBreak; in the Gaussian integers formula_47 is a Carmichael ideal. Both prime and Carmichael numbers satisfy the following equality: formula_48 Lucas–Carmichael number. A positive composite integer formula_8 is a Lucas–Carmichael number if and only if formula_8 is square-free, and for all prime divisors formula_3 of &amp;NoBreak;&amp;NoBreak;, it is true that &amp;NoBreak;&amp;NoBreak;. The first Lucas–Carmichael numbers are: 399, 935, 2015, 2915, 4991, 5719, 7055, 8855, 12719, 18095, 20705, 20999, 22847, 29315, 31535, 46079, 51359, 60059, 63503, 67199, 73535, 76751, 80189, 81719, 88559, 90287, ... (sequence in the OEIS) Quasi–Carmichael number. 
Quasi–Carmichael numbers are squarefree composite numbers &amp;NoBreak;&amp;NoBreak; with the property that for every prime factor &amp;NoBreak;&amp;NoBreak; of &amp;NoBreak;&amp;NoBreak;, &amp;NoBreak;&amp;NoBreak; divides &amp;NoBreak;&amp;NoBreak; positively with &amp;NoBreak;&amp;NoBreak; being any integer besides 0. If &amp;NoBreak;&amp;NoBreak;, these are Carmichael numbers, and if &amp;NoBreak;&amp;NoBreak;, these are Lucas–Carmichael numbers. The first Quasi–Carmichael numbers are: 35, 77, 143, 165, 187, 209, 221, 231, 247, 273, 299, 323, 357, 391, 399, 437, 493, 527, 561, 589, 598, 713, 715, 899, 935, 943, 989, 1015, 1073, 1105, 1147, 1189, 1247, 1271, 1295, 1333, 1517, 1537, 1547, 1591, 1595, 1705, 1729, ... (sequence in the OEIS) Knödel number. An "n"-Knödel number for a given positive integer "n" is a composite number "m" with the property that each &amp;NoBreak;&amp;NoBreak; coprime to "m" satisfies &amp;NoBreak;}&amp;NoBreak;. The &amp;NoBreak;&amp;NoBreak; case are Carmichael numbers. Higher-order Carmichael numbers. Carmichael numbers can be generalized using concepts of abstract algebra. The above definition states that a composite integer "n" is Carmichael precisely when the "n"th-power-raising function "p""n" from the ring Z"n" of integers modulo "n" to itself is the identity function. The identity is the only Z"n"-algebra endomorphism on Z"n" so we can restate the definition as asking that "p""n" be an algebra endomorphism of Z"n". As above, "p""n" satisfies the same property whenever "n" is prime. The "n"th-power-raising function "p""n" is also defined on any Z"n"-algebra A. A theorem states that "n" is prime if and only if all such functions "p""n" are algebra endomorphisms. In-between these two conditions lies the definition of Carmichael number of order m for any positive integer "m" as any composite number "n" such that "p""n" is an endomorphism on every Z"n"-algebra that can be generated as Z"n"-module by "m" elements. Carmichael numbers of order 1 are just the ordinary Carmichael numbers. An order-2 Carmichael number. According to Howe, 17 · 31 · 41 · 43 · 89 · 97 · 167 · 331 is an order 2 Carmichael number. This product is equal to 443,372,888,629,441. Properties. Korselt's criterion can be generalized to higher-order Carmichael numbers, as shown by Howe. A heuristic argument, given in the same paper, appears to suggest that there are infinitely many Carmichael numbers of order "m", for any "m". However, not a single Carmichael number of order 3 or above is known. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
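Korselt's criterion translates directly into a short computational test. The Python sketch below (function names are illustrative and the factorization is naive trial division) recovers the first seven Carmichael numbers listed above.

```python
def prime_factors(n):
    """Prime factorization of n as a dict {prime: exponent} (naive trial division)."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def is_carmichael(n):
    """Korselt's criterion: n is a Carmichael number iff it is composite,
    squarefree, and p - 1 divides n - 1 for every prime p dividing n."""
    f = prime_factors(n)
    if len(f) < 2:          # excludes primes and prime powers
        return False
    return all(e == 1 and (n - 1) % (p - 1) == 0 for p, e in f.items())

print([n for n in range(2, 10000) if is_carmichael(n)])
# [561, 1105, 1729, 2465, 2821, 6601, 8911]
```

Applied to Chernick's construction with k = 1, the same test confirms that (6·1+1)(12·1+1)(18·1+1) = 7 · 13 · 19 = 1729 is a Carmichael number.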
[ { "math_id": 0, "text": "b^n\\equiv b\\pmod{n}" }, { "math_id": 1, "text": "b^{n-1}\\equiv 1\\pmod{n}" }, { "math_id": 2, "text": "b" }, { "math_id": 3, "text": "p" }, { "math_id": 4, "text": "b^p-b" }, { "math_id": 5, "text": "N" }, { "math_id": 6, "text": "N = p \\cdot (313(p - 1) + 1) \\cdot (353(p - 1) + 1 )" }, { "math_id": 7, "text": "p = " }, { "math_id": 8, "text": "n" }, { "math_id": 9, "text": "p-1 \\mid n-1" }, { "math_id": 10, "text": "-1" }, { "math_id": 11, "text": "561 = 3 \\cdot 11 \\cdot 17" }, { "math_id": 12, "text": "10 \\mid 560" }, { "math_id": 13, "text": "1105 = 5 \\cdot 13 \\cdot 17 \\qquad (4 \\mid 1104;\\quad 12 \\mid 1104;\\quad 16 \\mid 1104)" }, { "math_id": 14, "text": "1729 = 7 \\cdot 13 \\cdot 19 \\qquad (6 \\mid 1728;\\quad 12 \\mid 1728;\\quad 18 \\mid 1728)" }, { "math_id": 15, "text": "2465 = 5 \\cdot 17 \\cdot 29 \\qquad (4 \\mid 2464;\\quad 16 \\mid 2464;\\quad 28 \\mid 2464)" }, { "math_id": 16, "text": "2821 = 7 \\cdot 13 \\cdot 31 \\qquad (6 \\mid 2820;\\quad 12 \\mid 2820;\\quad 30 \\mid 2820)" }, { "math_id": 17, "text": "6601 = 7 \\cdot 23 \\cdot 41 \\qquad (6 \\mid 6600;\\quad 22 \\mid 6600;\\quad 40 \\mid 6600)" }, { "math_id": 18, "text": "8911 = 7 \\cdot 19 \\cdot 67 \\qquad (6 \\mid 8910;\\quad 18 \\mid 8910;\\quad 66 \\mid 8910)." }, { "math_id": 19, "text": "(6k + 1)(12k + 1)(18k + 1)" }, { "math_id": 20, "text": "n^{2/7}" }, { "math_id": 21, "text": "a" }, { "math_id": 22, "text": "m" }, { "math_id": 23, "text": "k = 3, 4, 5, \\ldots" }, { "math_id": 24, "text": "C(X)" }, { "math_id": 25, "text": "C(X) < X \\exp\\left({-k_1 \\left( \\log X \\log \\log X\\right)^\\frac{1}{2}}\\right)" }, { "math_id": 26, "text": "C(X) < X \\exp\\left(\\frac{-k_2 \\log X \\log \\log \\log X}{\\log \\log X}\\right)" }, { "math_id": 27, "text": "C(X) > X^\\frac{2}{7}." }, { "math_id": 28, "text": "C(X) > X^{0.332}" }, { "math_id": 29, "text": "X^{1-o(1)}" }, { "math_id": 30, "text": " X \\cdot L(X)^{-1 + o(1)} " }, { "math_id": 31, "text": "\\delta>0" }, { "math_id": 32, "text": "x" }, { "math_id": 33, "text": "\\delta" }, { "math_id": 34, "text": "\\exp{\\left(\\frac{\\log{x}}{(\\log \\log{x})^{2+\\delta}}\\right)} " }, { "math_id": 35, "text": "x+\\frac{x}{(\\log{x})^{\\frac{1}{2+\\delta}}}." }, { "math_id": 36, "text": "\\mathfrak p" }, { "math_id": 37, "text": "\\alpha^{{\\rm N}(\\mathfrak p)} \\equiv \\alpha \\bmod {\\mathfrak p}" }, { "math_id": 38, "text": "\\alpha" }, { "math_id": 39, "text": "{\\rm N}(\\mathfrak p)" }, { "math_id": 40, "text": "m^p \\equiv m \\bmod p" }, { "math_id": 41, "text": "\\mathfrak a" }, { "math_id": 42, "text": "{\\mathcal O}_K" }, { "math_id": 43, "text": "\\alpha^{{\\rm N}(\\mathfrak a)} \\equiv \\alpha \\bmod {\\mathfrak a}" }, { "math_id": 44, "text": "{\\rm N}(\\mathfrak a)" }, { "math_id": 45, "text": "\\mathfrak a = (a)" }, { "math_id": 46, "text": "p{\\mathcal O}_K" }, { "math_id": 47, "text": "\\mathbb Z[i]" }, { "math_id": 48, "text": "\\gcd \\left(\\sum_{x=1}^{n-1} x^{n-1}, n\\right) = 1." }, { "math_id": 49, "text": "10^{18}" } ]
https://en.wikipedia.org/wiki?curid=7723
772441
Memorylessness
Waiting time property of certain probability distributions In probability and statistics, memorylessness is a property of certain probability distributions. It describes situations where the time you've already waited for an event doesn't affect how much longer you'll have to wait. To model memoryless situations accurately, we have to disregard the past state of the system – the probabilities remain unaffected by the history of the process. Only two kinds of distributions are memoryless: geometric and exponential probability distributions. Waiting time examples. With memory. Most phenomena are not memoryless, which means that observers will obtain information about them over time. For example, suppose that X is a random variable, the lifetime of a car engine, expressed in terms of "number of miles driven until the engine breaks down". It is clear, based on our intuition, that an engine which has already been driven for 300,000 miles will have a much lower X than would a second (equivalent) engine which has only been driven for 1,000 miles. Hence, this random variable would not have the memorylessness property. Without memory. In contrast, let us examine a situation which would exhibit memorylessness. Imagine a long hallway, lined on one wall with thousands of safes. Each safe has a dial with 500 positions, and each has been assigned an opening position at random. Imagine that an eccentric person walks down the hallway, stopping once at each safe to make a single random attempt to open it. In this case, we might define random variable X as the lifetime of their search, expressed in terms of "number of attempts the person must make until they successfully open a safe". In this case, E["X"] will always be equal to the value of 500, regardless of how many attempts have already been made. Each new attempt has a (1/500) chance of succeeding, so the person is likely to open exactly one safe sometime in the next 500 attempts – but with each new failure they make no "progress" toward ultimately succeeding. Even if the safe-cracker has just failed 499 consecutive times (or 4,999 times), we expect to wait 500 more attempts until we observe the next success. If, instead, this person focused their attempts on a single safe, and "remembered" their previous attempts to open it, they would be guaranteed to open the safe after, at most, 500 attempts (and, in fact, at onset would only expect to need 250 attempts, not 500). The universal law of radioactive decay, which describes the time until a given radioactive particle decays, is a real-life example of memorylessness. An often used (theoretical) example of memorylessness in queueing theory is the time a storekeeper must wait before the arrival of the next customer. Discrete memorylessness. If a discrete random variable formula_0 is memoryless, then it satisfies formula_1where formula_2 and formula_3 are natural numbers. The equality is still true when formula_4 is substituted for formula_5 on the left hand side of the equation. The only discrete random variable that is memoryless is the geometric random variable taking values in formula_6. This random variable describes when the first success in an infinite sequence of independent and identically distributed Bernoulli trials occurs. The memorylessness property asserts that the number of previously failed trials has no effect on the number of future trials needed for a success. 
Geometric random variables can also be defined as taking values in formula_7, which describes the number of failed trials before the first success in a sequence of independent and identically distributed Bernoulli trials. These random variables do not satisfy the memoryless condition stated above; however, they do satisfy a slightly modified memoryless condition: formula_8 As with the first definition, the only discrete random variables that satisfy this memoryless condition are geometric random variables taking values in formula_7. In the continuous case, these two definitions of memorylessness are equivalent. Continuous memorylessness. If a continuous random variable formula_0 is memoryless, then it satisfies formula_9 where formula_10 and formula_11 are nonnegative real numbers. The equality is still true when formula_4 is substituted. The only continuous random variable that is memoryless is the exponential random variable. It models random processes like the time between consecutive events. The memorylessness property asserts that the amount of time since the previous event has no effect on the future time until the next event occurs. Exponential distribution and memorylessness proof. The only memoryless continuous probability distribution is the exponential distribution, as shown in the following proof: First, define formula_12, also known as the distribution's survival function. From the memorylessness property and the definition of conditional probability, it follows that formula_13 This gives the functional equation formula_14 which implies formula_15 where formula_16 is a natural number. Similarly, formula_17 where formula_18 is a natural number, excluding formula_19. Therefore, all rational numbers formula_20 satisfy formula_21 Since formula_22 is continuous and the set of rational numbers is dense in the set of real numbers, formula_23 where formula_24 is a nonnegative real number. When formula_25, formula_26 As a result, formula_27 where formula_28. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
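The discrete memoryless property is easy to check numerically for a geometric distribution. In the sketch below (variable names and parameter values are arbitrary, chosen to mirror the safe-cracker example above), the conditional tail probability P(X > m + n | X > m) matches P(X > n) for a geometric variable on {1, 2, ...}.

```python
def tail(n, p):
    """P(X > n) for a geometric variable on {1, 2, ...}: the first n trials all fail."""
    return (1 - p) ** n

p, m, n = 1 / 500, 499, 500                 # one-in-500 chance of success per attempt
conditional = tail(m + n, p) / tail(m, p)   # P(X > m + n | X > m)
print(conditional, tail(n, p))              # both are about 0.3675
assert abs(conditional - tail(n, p)) < 1e-12
```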
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\Pr(X>m+n \\mid X>m)=\\Pr(X>n)" }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "\\ge" }, { "math_id": 5, "text": ">" }, { "math_id": 6, "text": "\\mathbb{N}" }, { "math_id": 7, "text": "\\mathbb{N}_0" }, { "math_id": 8, "text": "\\Pr(X>m+n \\mid X\\geq m)=\\Pr(X>n)." }, { "math_id": 9, "text": "\\Pr(X>s+t \\mid X>t )=\\Pr(X>s)" }, { "math_id": 10, "text": "s" }, { "math_id": 11, "text": "t" }, { "math_id": 12, "text": "S(t) = \\Pr(X > t)" }, { "math_id": 13, "text": "\\frac{\\Pr(X > t + s)}{\\Pr(X > t)} = \\Pr(X > s)" }, { "math_id": 14, "text": "S(t + s) = S(t) S(s)" }, { "math_id": 15, "text": "S(pt) = S(t)^p" }, { "math_id": 16, "text": "p" }, { "math_id": 17, "text": "S\\left(\\frac{t}{q}\\right) = S(t)^\\frac{1}{q}" }, { "math_id": 18, "text": "q" }, { "math_id": 19, "text": "0" }, { "math_id": 20, "text": "a=\\tfrac{p}{q}" }, { "math_id": 21, "text": "S(at) = S(t)^a" }, { "math_id": 22, "text": "S" }, { "math_id": 23, "text": "S(xt) = S(t)^x" }, { "math_id": 24, "text": "x" }, { "math_id": 25, "text": "t=1" }, { "math_id": 26, "text": "S(x) = S(1)^x" }, { "math_id": 27, "text": "S(x) = e^{-\\lambda x}" }, { "math_id": 28, "text": "\\lambda = -\\ln S(1) \\geq 0" } ]
https://en.wikipedia.org/wiki?curid=772441
77244347
Ball divergence
Nonparametric two-sample test methods Ball divergence is a non-parametric two-sample statistical test method in metric spaces. It measures the difference between two population probability distributions by integrating the difference over all balls in the space. Therefore, its value is zero if and only if the two probability measures are the same. Similar to common non-parametric test methods, ball divergence calculates the p-value through permutation tests. Background. Distinguishing between two unknown samples in multivariate data is an important and challenging task. An earlier and more common non-parametric two-sample test is the energy distance test. However, the effectiveness of the energy distance test relies on moment conditions, making it less effective for extremely imbalanced data (where one sample size is disproportionately larger than the other). To address this issue, Chen, Dou, and Qiao proposed a non-parametric multivariate test method using ensemble subsampling nearest neighbors (ESS-NN) for imbalanced data. This method effectively handles imbalanced data and increases the test's power by fixing the size of the smaller group while increasing the size of the larger group. Additionally, Gretton et al. introduced the maximum mean discrepancy (MMD) for the two-sample problem. Both methods require additional parameter settings, such as the number of groups k in ESS-NN and the kernel function in MMD. Ball divergence addresses the two-sample test problem for extremely imbalanced samples without introducing other parameters. Definition. We begin with the population ball divergence. Suppose that we have a metric space (formula_0), where the norm formula_1 induces a metric formula_2 between two points formula_3 in the space formula_4 via formula_5. We write formula_6 for the closed ball with center formula_7 and radius formula_8. Then, the population ball divergence of Borel probability measures formula_9 is formula_10 For convenience, the ball divergence can be decomposed into two parts: formula_11 and formula_12 Thus formula_13 Next, we introduce the sample ball divergence. Let formula_14 indicate whether the point formula_15 lies in the ball formula_16. Given two independent samples formula_17 from formula_18 and formula_19 from formula_20, define formula_21 where formula_22 is the proportion of samples from the probability measure formula_18 located in the ball formula_23 and formula_24 is the proportion of samples from the probability measure formula_20 located in the ball formula_23. Similarly, formula_25 and formula_26 are the proportions of samples from the probability measures formula_18 and formula_20, respectively, located in the ball formula_27. The sample versions of formula_28 and formula_29 are as follows: formula_30 Finally, the sample ball divergence is formula_31 Properties. 1. Given two Borel probability measures formula_18 and formula_20 on a finite-dimensional Banach space formula_33, we have formula_34 where the equality holds if and only if formula_35. 2. Suppose formula_18 and formula_20 are two Borel probability measures in a separable Banach space formula_33. Denote their supports by formula_36 and formula_37. If formula_38 or formula_37 is the whole space, then formula_34 where the equality holds if and only if formula_35. 3. Consistency: we have formula_39 where formula_40 for some formula_41. 
Define formula_42, and then let formula_43 where formula_44 The function formula_45 has spectral decomposition: formula_46 where formula_47 and formula_48 are the eigenvalues and eigenfunctions of formula_49. For formula_50, formula_51 are i.i.d. formula_52, and formula_53 4.Asymptotic distribution under the null hypothesis: Suppose that both formula_54 and formula_55 in such a way that formula_56. Under the null hypothesis, we have formula_57 5. Distribution under the alternative hypothesis: let formula_58 Suppose that both formula_54 and formula_55 in such a way that formula_56. Under the alternative hypothesis, we have formula_59 6. The test based on formula_60 is consistent against any general alternative formula_61. More specifically, formula_62 and formula_63 More importantly, formula_64 can also be expressed as formula_65 which is independent of formula_66. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
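The sample statistic defined above maps directly onto array operations. The following Python/NumPy sketch (function and variable names are illustrative) computes BD_{n,m} for two Euclidean samples from the A and C proportions exactly as written; in practice a permutation test over the pooled sample would be wrapped around it to obtain a p-value.

```python
import numpy as np

def ball_divergence(x, y):
    """Sample ball divergence BD_{n,m} for samples x (n x d) and y (m x d),
    using closed Euclidean balls whose centres and radii come from one sample."""
    x, y = np.atleast_2d(np.asarray(x, float)), np.atleast_2d(np.asarray(y, float))

    def pair_stat(centers, own, other):
        # radius[i, j] = distance between the i-th and j-th centre points
        radius = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
        d_own = np.linalg.norm(centers[:, None, :] - own[None, :, :], axis=-1)
        d_other = np.linalg.norm(centers[:, None, :] - other[None, :, :], axis=-1)
        # proportions of each sample inside the closed ball B(center_i, radius[i, j])
        prop_own = (d_own[:, None, :] <= radius[:, :, None]).mean(axis=-1)
        prop_other = (d_other[:, None, :] <= radius[:, :, None]).mean(axis=-1)
        return ((prop_own - prop_other) ** 2).mean()

    return pair_stat(x, x, y) + pair_stat(y, y, x)   # A_{n,m} + C_{n,m}

rng = np.random.default_rng(0)
same = ball_divergence(rng.normal(size=(40, 2)), rng.normal(size=(40, 2)))
shifted = ball_divergence(rng.normal(size=(40, 2)), rng.normal(size=(40, 2)) + 2.0)
print(same, shifted)   # the statistic is much larger for the shifted pair
```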
[ { "math_id": 0, "text": "V, \\|\\cdot\\|" }, { "math_id": 1, "text": "\\|\\cdot\\|" }, { "math_id": 2, "text": "\\rho" }, { "math_id": 3, "text": "u,v" }, { "math_id": 4, "text": "V" }, { "math_id": 5, "text": "\\rho(u,v)= \\|u-v\\|" }, { "math_id": 6, "text": " \\bar{B}(u, \\rho(u, v)) " }, { "math_id": 7, "text": " u " }, { "math_id": 8, "text": " \\rho(u, v) " }, { "math_id": 9, "text": " \\mu,\\nu " }, { "math_id": 10, "text": "\nBD(\\mu, \\nu)=\\iint_{\\mathrm{V} \\times \\mathrm{V}}[\\mu-\\nu]^2(\\bar{B}(u, \\rho(u, v)))(\\mu(d u) \\mu(d v)+\\nu(d u) \\nu(d u)).\n" }, { "math_id": 11, "text": "\nA=\\iint_{V \\times V}[\\mu-\\nu]^2(\\bar{B}(u, \\rho(u, v))) \\mu(d u) \\mu(d v),\n" }, { "math_id": 12, "text": "\nC=\\iint_{V \\times V}[\\mu-\\nu]^2(\\bar{B}(u, \\rho(u, v))) \\nu(d u) \\nu(d v) .\n" }, { "math_id": 13, "text": "\nBD(\\mu, \\nu)=A+C .\n" }, { "math_id": 14, "text": " \\delta(x, y, z)=I(z \\in \\bar{B}(x, \\rho(x, y))) " }, { "math_id": 15, "text": " z " }, { "math_id": 16, "text": " \\bar{B}(x, \\rho(x, y)) " }, { "math_id": 17, "text": " \\{ X_1,\\ldots, X_n \\} " }, { "math_id": 18, "text": " \\mu " }, { "math_id": 19, "text": " \\{ Y_1,\\ldots, Y_m \\} " }, { "math_id": 20, "text": " \\nu " }, { "math_id": 21, "text": "\n\\begin{aligned}\n& A_{i j}^X=\\frac{1}{n} \\sum_{u=1}^n \\delta\\left(X_i, X_j, X_u\\right), A_{i j}^Y=\\frac{1}{m} \\sum_{v=1}^m \\delta\\left(X_i, X_j, Y_v\\right), \\\\\n& C_{k l}^X=\\frac{1}{n} \\sum_{u=1}^n \\delta\\left(Y_k, Y_l, X_u\\right), C_{i j}^Y=\\frac{1}{m} \\sum_{v=1}^m \\delta \\left(Y_k, Y_l, Y_v\\right) ,\n\\end{aligned}\n" }, { "math_id": 22, "text": " A_{i j}^X " }, { "math_id": 23, "text": " \\bar{B}\\left(X_i, \\rho\\left(X_i, X_j\\right)\\right) " }, { "math_id": 24, "text": " A_{i j}^Y " }, { "math_id": 25, "text": " C_{i j}^X " }, { "math_id": 26, "text": " C_{i j}^Y " }, { "math_id": 27, "text": " \\bar{B}\\left(Y_i, \\rho\\left(Y_i, Y_j\\right)\\right) " }, { "math_id": 28, "text": " A " }, { "math_id": 29, "text": " C " }, { "math_id": 30, "text": "\nA_{n, m}=\\frac{1}{n^2} \\sum_{i, j=1}^n\\left(A_{i j}^X-A_{i j}^Y\\right)^2, \\qquad C_{n, m}=\\frac{1}{m^2} \\sum_{k, l=1}^m\\left(C_{k l}^X-C_{k l}^Y\\right)^2.\n" }, { "math_id": 31, "text": "\nBD_{n, m}=A_{n, m}+C_{n, m}.\n" }, { "math_id": 32, "text": " " }, { "math_id": 33, "text": " V " }, { "math_id": 34, "text": " BD(\\mu, \\nu) \\geq 0 " }, { "math_id": 35, "text": " \\mu = \\nu " }, { "math_id": 36, "text": " S_\\mu " }, { "math_id": 37, "text": " S_\\nu " }, { "math_id": 38, "text": " S_\\mu=V " }, { "math_id": 39, "text": "\nD_{n, m} \\xrightarrow[n, m \\rightarrow \\infty]{\\text { a.s. 
}} D(\\mu, v),\n" }, { "math_id": 40, "text": "\\frac{n}{n+m} \\rightarrow \\tau" }, { "math_id": 41, "text": " \\tau \\in[0,1] " }, { "math_id": 42, "text": " \\xi(x,y,z_1,z_2) = \\delta(x,y,z_1) \\cdot \\delta(x,y,z_2) " }, { "math_id": 43, "text": " Q\\left(x, y ; x^{\\prime}, y^{\\prime}\\right)=\\left(\\phi_A^{(2,0)}\\left(x, x^{\\prime}\\right)+\\phi_A^{(1,1)}(x, y)+\\phi_A^{(1,1)}\\left(x^{\\prime}, y^{\\prime}\\right)+\\phi_A^{(0,2)}\\left(y, y^{\\prime}\\right)\\right), " }, { "math_id": 44, "text": "\n\\begin{aligned}\n\\phi_A^{(2,0)}\\left(x, x^{\\prime}\\right)= & E\\left[\\xi\\left(X_1, X_2, x, x^{\\prime}\\right)\\right]+E\\left[\\xi\\left(X_1, X_2, Y, Y_3\\right)\\right] \\\\\n& -E\\left[\\xi\\left(X_1, X_2, x, Y\\right)\\right]-E\\left[\\xi\\left(X_1, X_2, x^{\\prime}, Y_3\\right)\\right] \\\\\n\\phi_A^{(1,1)}(x, y)= & E\\left[\\xi\\left(X_1, X_2, x, X_3\\right)\\right]+E\\left[\\xi\\left(X_1, X_2, y, Y_3\\right)\\right] \\\\\n& -E\\left[\\xi\\left(X_1, X_2, x, y\\right)\\right]-E\\left[\\xi\\left(X_1, X_2, X_3, Y_3\\right)\\right] \\\\\n\\phi_A^{(0,2)}\\left(y, y^{\\prime}\\right)= & E\\left[\\xi\\left(X_1, X_2, X, X_3\\right)\\right]+E\\left[\\xi\\left(X_1, X_2, y, y^{\\prime}\\right)\\right] \\\\\n& -E\\left[\\xi\\left(X_1, X_2, X, y\\right)\\right]-E\\left[\\xi\\left(X_1, X_2, X, y^{\\prime}\\right)\\right].\n\\end{aligned}\n" }, { "math_id": 45, "text": " Q\\left(x, y ; x^{\\prime}, y^{\\prime}\\right) " }, { "math_id": 46, "text": "\nQ\\left(x, y ; x^{\\prime}, y^{\\prime}\\right)=\\sum_{k=1}^{\\infty} \\lambda_k f_k(x, y) f_k\\left(x^{\\prime}, y^{\\prime}\\right),\n" }, { "math_id": 47, "text": "\\lambda_k" }, { "math_id": 48, "text": "f_k" }, { "math_id": 49, "text": "Q" }, { "math_id": 50, "text": "k=1,2, \\ldots" }, { "math_id": 51, "text": " Z_{1 k} , Z_{2 k} " }, { "math_id": 52, "text": "N(0,1)" }, { "math_id": 53, "text": "\n\\begin{aligned}\na_k^2(\\tau) & =(1-\\tau) E_X\\left[E_Y f_k(X, Y)\\right]^2, \\quad b_k^2(\\tau)=\\tau E_Y\\left[E_X f_k(X, Y)\\right]^2, \\\\\n\\theta & =2 E\\left[E\\left(\\delta\\left(X_1, X_2, X\\right)\\left(1-\\delta\\left(X_1, X_2, Y\\right)\\right) \\mid X_1, X_2\\right)\\right] .\n\\end{aligned}\n" }, { "math_id": 54, "text": "n" }, { "math_id": 55, "text": "m \\rightarrow \\infty" }, { "math_id": 56, "text": "\\frac{n}{n+m} \\rightarrow \\tau, 0 \\leq \\tau \\leq 1" }, { "math_id": 57, "text": "\n\\frac{n m}{n+m} BD_{n, m} \\xrightarrow[n \\rightarrow \\infty]{d} \\sum_{k=1}^{\\infty} 2 \\lambda_k\\left[\\left(a_k(\\tau) Z_{1 k}+b_k(\\tau) Z_{2 k}\\right)^2-\\left(a_k^2(\\tau)+b_k^2(\\tau)\\right)\\right]+\\theta \\text {. }\n" }, { "math_id": 58, "text": "\\delta_{1,0}^2=\\operatorname{Var}\\left(g^{(1,0)}(X)\\right) \\quad \\text { and } \\quad \\delta_{0,1}^2=\\operatorname{Var}\\left(g^{(0,1)}(Y)\\right) ." }, { "math_id": 59, "text": "\n\\sqrt{\\frac{n m}{n+m}}\\left(BD_{n, m}-BD(\\mu, \\nu)\\right) \\underset{n \\rightarrow \\infty}{d} N\\left(0,(1-\\tau) \\delta_{1,0}^2+\\tau \\delta_{0,1}^2\\right) .\n" }, { "math_id": 60, "text": "D_{n, m}" }, { "math_id": 61, "text": "H_1" }, { "math_id": 62, "text": "\n\\lim _{n \\rightarrow \\infty} \\operatorname{Var}_{H_1}\\left(D_{n, m}\\right)=0\n" }, { "math_id": 63, "text": "\n\\Delta(\\eta):=\\liminf _{n \\rightarrow \\infty}\\left(E_{H_1} D_{n, m}-E_{H_0} D_{n, m}\\right)>0 .\n" }, { "math_id": 64, "text": "\\Delta(\\eta)" }, { "math_id": 65, "text": "\n\\Delta(\\eta) \\equiv D(\\mu, \\nu),\n" }, { "math_id": 66, "text": "\\eta" } ]
https://en.wikipedia.org/wiki?curid=77244347
772517
Tsiolkovsky rocket equation
Mathematical equation describing the motion of a rocket The classical rocket equation, or ideal rocket equation is a mathematical equation that describes the motion of vehicles that follow the basic principle of a rocket: a device that can apply acceleration to itself using thrust by expelling part of its mass with high velocity and can thereby move due to the conservation of momentum. It is credited to Konstantin Tsiolkovsky, who independently derived it and published it in 1903, although it had been independently derived and published by William Moore in 1810, and later published in a separate book in 1813. Robert Goddard also developed it independently in 1912, and Hermann Oberth derived it independently about 1920. The maximum change of velocity of the vehicle, formula_0 (with no external forces acting) is: formula_1 where: Given the effective exhaust velocity determined by the rocket motor's design, the desired delta-v (e.g., orbital speed or escape velocity), and a given dry mass formula_7, the equation can be solved for the required propellant mass formula_8: formula_9 The necessary wet mass grows exponentially with the desired delta-v. History. The equation is named after Russian scientist Konstantin Tsiolkovsky who independently derived it and published it in his 1903 work. The equation had been derived earlier by the British mathematician William Moore in 1810, and later published in a separate book in 1813. American Robert Goddard independently developed the equation in 1912 when he began his research to improve rocket engines for possible space flight. German engineer Hermann Oberth independently derived the equation about 1920 as he studied the feasibility of space travel. While the derivation of the rocket equation is a straightforward calculus exercise, Tsiolkovsky is honored as being the first to apply it to the question of whether rockets could achieve speeds necessary for space travel. Experiment of the Boat by Tsiolkovsky. In order to understand the principle of rocket propulsion, Konstantin Tsiolkovsky proposed the famous experiment of "the boat". A person is in a boat away from the shore without oars. They want to reach this shore. They notice that the boat is loaded with a certain quantity of stones and have the idea of throwing, one by one and as quickly as possible, these stones in the opposite direction to the bank. Effectively, the quantity of movement of the stones thrown in one direction corresponds to an equal quantity of movement for the boat in the other direction (ignoring friction / drag). Derivation. Most popular derivation. Consider the following system: In the following derivation, "the rocket" is taken to mean "the rocket and all of its unexpended propellant". 
Newton's second law of motion relates external forces (formula_10) to the change in linear momentum of the whole system (including rocket and exhaust) as follows: formula_11 where formula_12 is the momentum of the rocket at time formula_13: formula_14 and formula_15 is the momentum of the rocket and exhausted mass at time formula_16: formula_17 and where, with respect to the observer, formula_18 is the velocity of the rocket at time formula_13, formula_19 is the velocity of the rocket at time formula_16, formula_20 is the velocity of the mass added to the exhaust (and lost by the rocket) during the interval formula_21, formula_22 is the mass of the rocket at time formula_13, and formula_23 is the mass of the rocket at time formula_16. The velocity of the exhaust formula_20 in the observer frame is related to the velocity of the exhaust in the rocket frame formula_24 by: formula_25 thus, formula_26 Solving this yields: formula_27 If formula_18 and formula_28 are opposite, formula_29 have the same direction as formula_18, formula_30 are negligible (since formula_31), and using formula_32 (since ejecting a positive formula_33 results in a decrease in rocket mass in time), formula_34 If there are no external forces then formula_35 (conservation of linear momentum) and formula_36 Assuming that formula_24 is constant (known as Tsiolkovsky's hypothesis), so it is not subject to integration, then the above equation may be integrated as follows: formula_37 This then yields formula_38 or equivalently formula_39 or formula_40 or formula_41 where formula_6 is the initial total mass including propellant, formula_7 the final mass, and formula_24 the velocity of the rocket exhaust with respect to the rocket (the specific impulse, or, if measured in time, that multiplied by gravity-on-Earth acceleration). If formula_24 is not constant, the rocket equation need not take the simple forms above; much research in rocket dynamics has been based on Tsiolkovsky's constant-formula_24 hypothesis. The value formula_8 is the total working mass of propellant expended. formula_42 (delta v) is the integration over time of the magnitude of the acceleration produced by using the rocket engine (what would be the actual acceleration if external forces were absent). In free space, for the case of acceleration in the direction of the velocity, this is the increase of the speed. In the case of an acceleration in opposite direction (deceleration) it is the decrease of the speed. Of course gravity and drag also accelerate the vehicle, and they can add to or subtract from the change in velocity experienced by the vehicle. Hence delta-v may not always be the actual change in speed or velocity of the vehicle. Other derivations. Impulse-based. The equation can also be derived from the basic integral of acceleration in the form of force (thrust) over mass. By representing the delta-v equation as the following: formula_43 where T is thrust, formula_6 is the initial (wet) mass and formula_33 is the initial mass minus the final (dry) mass, and realising that the integral of a resultant force over time is total impulse, assuming thrust is the only force involved, formula_44 The integral is found to be: formula_45 Realising that impulse over the change in mass is equivalent to force over propellant mass flow rate (p), which is itself equivalent to exhaust velocity, formula_46 the integral can be equated to formula_47 Acceleration-based. Imagine a rocket at rest in space with no forces exerted on it (Newton's First Law of Motion). From the moment its engine is started (clock set to 0) the rocket expels gas mass at a "constant mass flow rate R" (kg/s) and at "exhaust velocity relative to the rocket ve" (m/s). This creates a constant force "F" propelling the rocket that is equal to "R" × "ve". 
The rocket is subject to a constant force, but its total mass is decreasing steadily because it is expelling gas. According to Newton's Second Law of Motion, its acceleration at any time "t" is its propelling force "F" divided by its current mass "m": formula_48 Now, the mass of fuel the rocket initially has on board is equal to "m"0 – "mf". For the constant mass flow rate "R" it will therefore take a time "T" = ("m"0 – "mf")/"R" to burn all this fuel. Integrating both sides of the equation with respect to time from "0" to "T" (and noting that "R = dm/dt" allows a substitution on the right) obtains: formula_49 Limit of finite mass "pellet" expulsion. The rocket equation can also be derived as the limiting case of the speed change for a rocket that expels its fuel in the form of formula_50 pellets consecutively, as formula_51, with an effective exhaust speed formula_52 such that the mechanical energy gained per unit fuel mass is given by formula_53. In the rocket's center-of-mass frame, if a pellet of mass formula_54 is ejected at speed formula_55 and the remaining mass of the rocket is formula_22, the amount of energy converted to increase the rocket's and pellet's kinetic energy is formula_56 Using momentum conservation in the rocket's frame just prior to ejection, formula_57, from which we find formula_58 Let formula_59 be the initial fuel mass fraction on board and formula_60 the initial fueled-up mass of the rocket. Divide the total mass of fuel formula_61 into formula_50 discrete pellets each of mass formula_62. The remaining mass of the rocket after ejecting formula_63 pellets is then formula_64. The overall speed change after ejecting formula_63 pellets is the sum formula_65 Notice that for large formula_50 the last term in the denominator formula_66 and can be neglected to give formula_67 where formula_68 and formula_69. As formula_70 this Riemann sum becomes the definite integral formula_71 since the final remaining mass of the rocket is formula_72. Special relativity. If special relativity is taken into account, the following equation can be derived for a relativistic rocket, with formula_0 again standing for the rocket's final velocity (after expelling all its reaction mass and being reduced to a rest mass of formula_73) in the inertial frame of reference where the rocket started at rest (with the rest mass including fuel being formula_6 initially), and formula_74 standing for the speed of light in vacuum: formula_75 Writing formula_76 as formula_77 allows this equation to be rearranged as formula_78 Then, using the identity formula_79 (here "exp" denotes the exponential function; "see also" Natural logarithm as well as the "power" identity at Logarithmic identities) and the identity formula_80 ("see" Hyperbolic function), this is equivalent to formula_81 Terms of the equation. Delta-"v". Delta-"v" (literally "change in velocity"), symbolised as Δ"v" and pronounced "delta-vee", as used in spacecraft flight dynamics, is a measure of the impulse that is needed to perform a maneuver such as launching from, or landing on a planet or moon, or an in-space orbital maneuver. It is a scalar that has the units of speed. As used in this context, it is "not" the same as the physical change in velocity of the vehicle. Delta-"v" is produced by reaction engines, such as rocket engines, is proportional to the thrust per unit mass and burn time, and is used to determine the mass of propellant required for the given manoeuvre through the rocket equation. 
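The relation between a required delta-"v" and the propellant that must be carried follows directly from the equation formula_9 given above. The short Python sketch below evaluates it for a single burn; the vehicle numbers are purely illustrative assumptions, not values taken from this article.

```python
import math

def propellant_mass(delta_v, v_e, m_f):
    """Propellant needed for a burn of size delta_v.

    delta_v : required change in velocity (m/s)
    v_e     : effective exhaust velocity (m/s)
    m_f     : final (dry) mass of the vehicle (kg)
    Returns the propellant mass m_0 - m_f implied by the rocket equation.
    """
    m_0 = m_f * math.exp(delta_v / v_e)   # wet mass from m_0 = m_f * e^(dv/ve)
    return m_0 - m_f

# Illustrative numbers only: a 1000 kg dry spacecraft performing a 2 km/s
# manoeuvre with a 3 km/s effective exhaust velocity.
print(propellant_mass(2000.0, 3000.0, 1000.0))  # ~948 kg of propellant
```

Because the wet mass grows exponentially with delta-"v", doubling the manoeuvre in this illustrative case (to 4 km/s) would nearly triple the propellant required.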
For multiple manoeuvres, delta-"v" sums linearly. For interplanetary missions delta-"v" is often plotted on a porkchop plot which displays the required mission delta-"v" as a function of launch date. Mass fraction. In aerospace engineering, the propellant mass fraction is the portion of a vehicle's mass which does not reach the destination, usually used as a measure of the vehicle's performance. In other words, the propellant mass fraction is the ratio between the propellant mass and the initial mass of the vehicle. In a spacecraft, the destination is usually an orbit, while for aircraft it is their landing location. A higher mass fraction represents less weight in a design. Another related measure is the payload fraction, which is the fraction of initial weight that is payload. Effective exhaust velocity. The effective exhaust velocity is often specified as a specific impulse and they are related to each other by: formula_82 where formula_3 is the specific impulse expressed as a time period and formula_4 is standard gravity. Applicability. The rocket equation captures the essentials of rocket flight physics in a single short equation. It also holds true for rocket-like reaction vehicles whenever the effective exhaust velocity is constant, and can be summed or integrated when the effective exhaust velocity varies. The rocket equation only accounts for the reaction force from the rocket engine; it does not include other forces that may act on a rocket, such as aerodynamic or gravitational forces. As such, when using it to calculate the propellant requirement for launch from (or powered descent to) a planet with an atmosphere, the effects of these forces must be included in the delta-V requirement (see Examples below). In what has been called "the tyranny of the rocket equation", there is a limit to the amount of payload that the rocket can carry, as higher amounts of propellant increase the overall weight, and thus also increase the fuel consumption. The equation does not apply to non-rocket systems such as aerobraking, gun launches, space elevators, launch loops, tether propulsion or light sails. The rocket equation can be applied to orbital maneuvers in order to determine how much propellant is needed to change to a particular new orbit, or to find the new orbit as the result of a particular propellant burn. When applied to orbital maneuvers, one assumes an impulsive maneuver, in which the propellant is discharged and delta-v applied instantaneously. This assumption is relatively accurate for short-duration burns such as for mid-course corrections and orbital insertion maneuvers. As the burn duration increases, the result is less accurate due to the effect of gravity on the vehicle over the duration of the maneuver. For low-thrust, long duration propulsion, such as electric propulsion, more complicated analysis based on the propagation of the spacecraft's state vector and the integration of thrust are used to predict orbital motion. Examples. Assume an exhaust velocity of 4.5 km/s and a formula_0 of 9.7 km/s (Earth to LEO, including formula_0 to overcome gravity and aerodynamic drag). A single-stage-to-orbit rocket would then need a propellant mass fraction of formula_83, about 88% of its initial total mass; a two-stage vehicle splitting the formula_0 into 5.0 km/s for the first stage and 4.7 km/s for the second would need stage propellant fractions of formula_84 (about 67%) and formula_85 (about 65%) respectively. Stages. In the case of sequentially thrusting rocket stages, the equation applies for each stage, where for each stage the initial mass in the equation is the total mass of the rocket after discarding the previous stage, and the final mass in the equation is the total mass of the rocket just before discarding the stage concerned. For each stage the specific impulse may be different. 
For example, if 80% of the mass of a rocket is the fuel of the first stage, and 10% is the dry mass of the first stage, and 10% is the remaining rocket, then formula_86 Three similar, subsequently smaller stages with the same formula_24 for each stage then give: formula_87 and the payload is 10% × 10% × 10% = 0.1% of the initial mass. A comparable SSTO rocket, also with a 0.1% payload, could have a mass of 11.1% for fuel tanks and engines, and 88.8% for fuel. This would give formula_88 If the motor of a new stage is ignited before the previous stage has been discarded and the simultaneously working motors have a different specific impulse (as is often the case with solid rocket boosters and a liquid-fuel stage), the situation is more complicated.
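The staging comparison above is easy to reproduce numerically. A minimal Python sketch, working in units of the common effective exhaust velocity and following the mass fractions given in the example:

```python
import math

def delta_v(v_e, m0_frac, mf_frac):
    """Delta-v from the rocket equation, using mass fractions of the
    stage's starting mass rather than absolute masses."""
    return v_e * math.log(m0_frac / mf_frac)

v_e = 1.0  # work in units of the (common) effective exhaust velocity

# Three similar stages: each stage is 80% fuel, 10% stage dry mass and
# 10% everything above it, so each burn goes from mass 1.0 to 0.2 of
# that stage's starting mass.
three_stage = 3 * delta_v(v_e, 1.0, 0.2)       # 3 * ln 5  ~ 4.83 v_e

# Single-stage-to-orbit with 0.1% payload: 88.8% fuel burned, leaving
# 11.2% (tanks, engines and payload).
ssto = delta_v(v_e, 1.0, 0.112)                # ln(100/11.2) ~ 2.19 v_e

print(round(three_stage, 2), round(ssto, 2))   # 4.83 2.19
```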
[ { "math_id": 0, "text": "\\Delta v" }, { "math_id": 1, "text": "\\Delta v = v_\\text{e} \\ln \\frac{m_0}{m_f} = I_\\text{sp} g_0 \\ln \\frac{m_0}{m_f}," }, { "math_id": 2, "text": "v_\\text{e} = I_\\text{sp} g_0" }, { "math_id": 3, "text": "I_\\text{sp}" }, { "math_id": 4, "text": "g_0" }, { "math_id": 5, "text": "\\ln" }, { "math_id": 6, "text": "m_0" }, { "math_id": 7, "text": "m_f" }, { "math_id": 8, "text": "m_0 - m_f" }, { "math_id": 9, "text": "m_0 = m_f e^{\\Delta v / v_\\text{e}}." }, { "math_id": 10, "text": "\\vec{F}_i" }, { "math_id": 11, "text": "\\sum_i \\vec{F}_i = \\lim_{\\Delta t \\to 0} \\frac{\\vec{P}_2 - \\vec{P}_1}{\\Delta t}" }, { "math_id": 12, "text": "\\vec{P}_1" }, { "math_id": 13, "text": "t = 0" }, { "math_id": 14, "text": "\\vec{P}_1 = m \\vec{V}" }, { "math_id": 15, "text": "\\vec{P}_2" }, { "math_id": 16, "text": "t = \\Delta t" }, { "math_id": 17, "text": "\\vec{P}_2 = \\left(m - \\Delta m \\right) \\left(\\vec{V} + \\Delta \\vec{V} \\right) + \\Delta m \\vec{V}_\\text{e}" }, { "math_id": 18, "text": "\\vec{V}" }, { "math_id": 19, "text": "\\vec{V} + \\Delta \\vec{V}" }, { "math_id": 20, "text": "\\vec{V}_\\text{e}" }, { "math_id": 21, "text": "\\Delta t" }, { "math_id": 22, "text": "m" }, { "math_id": 23, "text": "\\left( m - \\Delta m \\right)" }, { "math_id": 24, "text": "v_\\text{e}" }, { "math_id": 25, "text": "\\vec {v}_\\text{e} = \\vec{V}_\\text{e} - \\vec{V} " }, { "math_id": 26, "text": "\\vec {V}_\\text{e} = \\vec{V} + \\vec{v}_\\text{e} " }, { "math_id": 27, "text": "\\vec{P}_2 - \\vec{P}_1 = m\\Delta \\vec{V} + \\vec{v}_\\text{e} \\Delta m - \\Delta m \\Delta \\vec{V}" }, { "math_id": 28, "text": "\\vec{v}_\\text{e}" }, { "math_id": 29, "text": "\\vec{F}_\\text{i}" }, { "math_id": 30, "text": "\\Delta m \\Delta \\vec{V}" }, { "math_id": 31, "text": "dm \\, d\\vec{v} \\to 0" }, { "math_id": 32, "text": "dm = -\\Delta m" }, { "math_id": 33, "text": "\\Delta m" }, { "math_id": 34, "text": "\\sum_i F_i = m \\frac{dV}{dt} + v_\\text{e} \\frac{dm}{dt}" }, { "math_id": 35, "text": "\\sum_i F_i = 0" }, { "math_id": 36, "text": "-m\\frac{dV}{dt} = v_\\text{e}\\frac{dm}{dt}" }, { "math_id": 37, "text": "-\\int_{V}^{V + \\Delta V} \\, dV = {v_e} \\int_{m_0}^{m_f} \\frac{dm}{m} " }, { "math_id": 38, "text": "\\Delta V = v_\\text{e} \\ln \\frac{m_0}{m_f}" }, { "math_id": 39, "text": "m_f = m_0 e^{-\\Delta V\\ / v_\\text{e}}" }, { "math_id": 40, "text": "m_0 = m_f e^{\\Delta V / v_\\text{e}}" }, { "math_id": 41, "text": "m_0 - m_f = m_f \\left(e^{\\Delta V / v_\\text{e}} - 1\\right)" }, { "math_id": 42, "text": "\\Delta V" }, { "math_id": 43, "text": "\\Delta v = \\int^{t_f}_{t_0} \\frac{|T|}{{m_0}-{t} \\Delta{m}} ~ dt" }, { "math_id": 44, "text": "\\int^{t_f}_{t_0} F ~ dt = J" }, { "math_id": 45, "text": "J ~ \\frac{\\ln({m_0}) - \\ln({m_f})}{\\Delta m}" }, { "math_id": 46, "text": " \\frac{J}{\\Delta m} = \\frac{F}{p} = V_\\text{exh}" }, { "math_id": 47, "text": "\\Delta v = V_\\text{exh} ~ \\ln\\left({\\frac{m_0}{m_f}}\\right)" }, { "math_id": 48, "text": " ~ a = \\frac{dv}{dt} = - \\frac{F}{m(t)} = - \\frac{R v_\\text{e}}{m(t)}" }, { "math_id": 49, "text": " ~ \\Delta v = v_f - v_0 = - v_\\text{e} \\left[ \\ln m_f - \\ln m_0 \\right] = ~ v_\\text{e} \\ln\\left(\\frac{m_0}{m_f}\\right)." 
}, { "math_id": 50, "text": "N" }, { "math_id": 51, "text": "N \\to \\infty" }, { "math_id": 52, "text": "v_\\text{eff}" }, { "math_id": 53, "text": "\\tfrac{1}{2} v_\\text{eff}^2 " }, { "math_id": 54, "text": "m_p" }, { "math_id": 55, "text": "u" }, { "math_id": 56, "text": "\\tfrac{1}{2} m_p v_\\text{eff}^2 = \\tfrac{1}{2}m_p u^2 + \\tfrac{1}{2}m (\\Delta v)^2. " }, { "math_id": 57, "text": " u = \\Delta v \\tfrac{m}{m_p}" }, { "math_id": 58, "text": "\\Delta v = v_\\text{eff} \\frac{m_p}{\\sqrt{m(m+m_p)}}." }, { "math_id": 59, "text": "\\phi" }, { "math_id": 60, "text": " m_0" }, { "math_id": 61, "text": "\\phi m_0" }, { "math_id": 62, "text": "m_p = \\phi m_0/N" }, { "math_id": 63, "text": "j" }, { "math_id": 64, "text": "m = m_0(1 - j\\phi/N)" }, { "math_id": 65, "text": " \\Delta v = v_\\text{eff} \\sum ^{j=N}_{j=1} \\frac{\\phi/N}{\\sqrt{(1-j\\phi/N)(1-j\\phi/N+\\phi/N)}} " }, { "math_id": 66, "text": "\\phi/N\\ll 1" }, { "math_id": 67, "text": " \\Delta v \\approx v_\\text{eff} \\sum^{j=N}_{j=1}\\frac{\\phi/N}{1-j\\phi/N} = v_\\text{eff} \\sum ^{j=N}_{j=1} \\frac{\\Delta x}{1-x_j} " }, { "math_id": 68, "text": " \\Delta x = \\frac{\\phi}{N}" }, { "math_id": 69, "text": " x_j = \\frac{j\\phi}{N} " }, { "math_id": 70, "text": " N\\rightarrow \\infty" }, { "math_id": 71, "text": " \\lim_{N\\to\\infty}\\Delta v = v_\\text{eff} \\int_{0}^{\\phi} \\frac{dx}{1-x} = v_\\text{eff}\\ln \\frac{1}{1-\\phi} = v_\\text{eff} \\ln \\frac{m_0}{m_f} ," }, { "math_id": 72, "text": " m_f = m_0(1-\\phi)" }, { "math_id": 73, "text": "m_1" }, { "math_id": 74, "text": "c" }, { "math_id": 75, "text": "\\frac{m_0}{m_1} = \\left[\\frac{1 + {\\frac{\\Delta v}{c}}}{1 - {\\frac{\\Delta v}{c}}}\\right]^{\\frac{c}{2v_\\text{e}}}" }, { "math_id": 76, "text": "\\frac{m_0}{m_1}" }, { "math_id": 77, "text": "R" }, { "math_id": 78, "text": "\\frac{\\Delta v}{c} = \\frac{R^{\\frac{2v_\\text{e}}{c}} - 1}{R^{\\frac{2v_\\text{e}}{c}} + 1}" }, { "math_id": 79, "text": "R^{\\frac{2v_\\text{e}}{c}} = \\exp \\left[ \\frac{2v_\\text{e}}{c} \\ln R \\right]" }, { "math_id": 80, "text": "\\tanh x = \\frac{e^{2x} - 1} {e^{2x} + 1}" }, { "math_id": 81, "text": "\\Delta v = c \\tanh\\left(\\frac {v_\\text{e}}{c} \\ln \\frac{m_0}{m_1} \\right)" }, { "math_id": 82, "text": "v_\\text{e} = g_0 I_\\text{sp}," }, { "math_id": 83, "text": "1-e^{-9.7/4.5}" }, { "math_id": 84, "text": "1-e^{-5.0/4.5}" }, { "math_id": 85, "text": "1-e^{-4.7/4.5}" }, { "math_id": 86, "text": "\n\\begin{align}\n\\Delta v \\ & = v_\\text{e} \\ln { 100 \\over 100 - 80 }\\\\\n & = v_\\text{e} \\ln 5 \\\\\n & = 1.61 v_\\text{e}. \\\\\n\\end{align}\n" }, { "math_id": 87, "text": "\\Delta v \\ = 3 v_\\text{e} \\ln 5 \\ = 4.83 v_\\text{e} " }, { "math_id": 88, "text": "\\Delta v \\ = v_\\text{e} \\ln(100/11.2) \\ = 2.19 v_\\text{e}. " } ]
https://en.wikipedia.org/wiki?curid=772517
7725229
Thermal death time
Thermal death time is how long it takes to kill a specific bacterium at a specific temperature. It was originally developed for food canning and has found applications in cosmetics, producing salmonella-free feeds for animals (e.g. poultry) and pharmaceuticals. History. In 1895, William Lyman Underwood of the Underwood Canning Company, a food company founded in 1822 at Boston, Massachusetts and later relocated to Watertown, Massachusetts, approached William Thompson Sedgwick, chair of the biology department at the Massachusetts Institute of Technology, about losses his company was suffering due to swollen and burst cans despite the newest retort technology available. Sedgwick gave his assistant, Samuel Cate Prescott, a detailed assignment on what needed to be done. Prescott and Underwood worked on the problem every afternoon from late 1895 to late 1896, focusing on canned clams. They first discovered that the clams contained heat-resistant bacterial spores that were able to survive the processing; then that these spores' presence depended on the clams' living environment; and finally that these spores would be killed if processed at 250 ˚F (121 ˚C) for ten minutes in a retort. These studies prompted the similar research of canned lobster, sardines, peas, tomatoes, corn, and spinach. Prescott and Underwood's work was first published in late 1896, with further papers appearing from 1897 to 1926. This research, though important to the growth of food technology, was never patented. It would pave the way for thermal death time research that was pioneered by Bigelow and C. Olin Ball from 1921 to 1936 at the National Canners Association (NCA). Bigelow and Ball's research focused on the thermal death time of "Clostridium botulinum" ("C. botulinum") that was determined in the early 1920s. Research continued with inoculated canning pack studies that were published by the NCA in 1968. Mathematical formulas. Thermal death time can be determined in one of two ways: 1) by using graphs or 2) by using mathematical formulas. Graphical method. This is usually expressed in minutes at the temperature of 250 °F (121 °C). This is designated as "F"0. Each 18 °F or 10 °C change results in a time change by a factor of 10. This would be shown either as F10121 = 10 minutes (Celsius) or F18250 = 10 minutes (Fahrenheit). A lethal ratio ("L") expresses the sterilizing effect of 1 minute at another temperature "T" relative to the reference temperature. formula_0 where "T"Ref is the reference temperature, usually 250 °F (121 °C); "z" is the z-value, and "T" is the product temperature at its slowest-heating point. Formula method. Prior to the advent of computers, this was plotted on semilogarithmic paper, though it can also be done on spreadsheet programs. The time would be shown on the x-axis while the temperature would be shown on the "y"-axis. This simple heating curve can also determine the lag factor ("j") and the slope ("f""h"). It also measures the product temperature rather than the can temperature. formula_1 where "I" = RT (Retort Temperature) − IT (Initial Temperature) and where "j" is constant for a given product. It is also determined in the equation shown below: formula_2 where "g" is the number of degrees below the retort temperature on a simple heating curve at the end of the heating period, "B""B" is the time in minutes from the beginning of the process to the end of the heating period, and "f""h" is the time in minutes required for the straight-line portion of the heating curve plotted semilogarithmically on paper or a computer spreadsheet to pass through a log cycle. 
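The simple heating curve relation above can be rearranged to give the heating time "B""B" once "f""h", "j", "I" and "g" are known. A minimal Python sketch; all of the numbers fed in are illustrative assumptions rather than values from this article.

```python
import math

def process_time(f_h, j, I, g):
    """Heating time B_B (minutes) from the simple heating curve formula,
    rearranged from log g = log(j*I) - B_B / f_h."""
    return f_h * (math.log10(j * I) - math.log10(g))

# Illustrative numbers only: retort at 250 F, product initially at 140 F,
# so I = 110 F; lag factor j = 1.4; f_h = 40 min per log cycle; heating
# stopped when the product is g = 10 F below retort temperature.
print(round(process_time(40.0, 1.4, 110.0, 10.0), 1))  # ~47.5 minutes
```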
A broken heating curve is also used in this method when dealing with different products in the same process, such as chicken noodle soup, where the meat and the noodles have different cooking times. It is more complex to process than the simple heating curve. Applications. In the food industry, it is important to reduce the number of microbes in products to ensure proper food safety. This is usually done by thermal processing and finding ways to reduce the number of bacteria in the product. Time-temperature measurements of bacterial reduction are expressed by a D-value, the time it takes to reduce the bacterial population by 90% (one log10) at a given temperature. This D-value reference (DR) point is 250 °F (121 °C). The z-value relates "D"-values at different temperatures; its equation is shown below: formula_3 where "T" is temperature in °F or °C. The "D"-value is affected by the pH of the product; low pH gives shorter (faster) "D"-values in various foods. The "D"-value at an unknown temperature can be calculated knowing the "D"-value at a given temperature provided the "z"-value is known. The target of reduction in canning is the 12-"D" reduction of "C. botulinum," which means that processing time will reduce the amount of this bacterium by a factor of 10^12. The DR for "C. botulinum" is 0.21 minute (12.6 seconds). A 12-D reduction will take 2.52 minutes (151 seconds). This is taught in university courses in food science and microbiology and is applicable to cosmetic and pharmaceutical manufacturing. In 2001, the Purdue University Computer Integrated Food Manufacturing Center and Pilot Plant put Ball's formula online for use.
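The D-value and z-value relations above lend themselves to direct calculation. The Python sketch below reproduces the 12-"D" figure quoted for "C. botulinum" and illustrates the z-value formula with hypothetical D-values chosen only for the example.

```python
import math

def z_value(T1, D1, T2, D2):
    """z-value: temperature change needed for a tenfold change in D-value."""
    return (T2 - T1) / (math.log10(D1) - math.log10(D2))

def reduction_time(D, log_reductions):
    """Time to achieve a given number of decimal (log10) reductions."""
    return D * log_reductions

# 12-D process for C. botulinum using the D-value quoted above.
print(reduction_time(0.21, 12))          # 2.52 minutes

# Hypothetical D-values used only to illustrate the z-value formula:
# D = 10 min at 110 C and D = 1 min at 120 C gives z = 10 C.
print(z_value(110.0, 10.0, 120.0, 1.0))  # 10.0
```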
[ { "math_id": 0, "text": "L = 10^{(T - T_\\mathrm{Ref})/z}" }, { "math_id": 1, "text": "j = {jI \\over I} " }, { "math_id": 2, "text": "\\log g = \\log jI - {B_B \\over f_h}" }, { "math_id": 3, "text": "z = \\frac{T_2 - T_1}{\\log D_1 - \\log D_2}" } ]
https://en.wikipedia.org/wiki?curid=7725229
7726067
Markus–Yamabe conjecture
In mathematics, the Markus–Yamabe conjecture is a conjecture on global asymptotic stability. If the Jacobian matrix of a dynamical system at a fixed point is Hurwitz, then the fixed point is asymptotically stable. The Markus–Yamabe conjecture asks whether a similar result holds "globally". Precisely, the conjecture states that if a continuously differentiable map on an formula_0-dimensional real vector space has a fixed point, and its Jacobian matrix is everywhere Hurwitz, then the fixed point is globally stable. The conjecture is true for the two-dimensional case. However, counterexamples have been constructed in higher dimensions. Hence, in the two-dimensional case "only", it can also be referred to as the Markus–Yamabe theorem. Related mathematical results concerning global asymptotic stability, which "are" applicable in dimensions higher than two, include various autonomous convergence theorems. The analog of the conjecture for nonlinear control systems with scalar nonlinearity is known as Kalman's conjecture. Mathematical statement of conjecture. Let formula_1 be a formula_2 map with formula_3 and Jacobian formula_4 which is Hurwitz stable for every formula_5. Then formula_6 is a global attractor of the dynamical system formula_7. The conjecture is true for formula_8 and false in general for formula_9.
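Since the two-dimensional case is a theorem, it can be illustrated numerically with a concrete planar vector field whose Jacobian is Hurwitz at every point. In the Python sketch below, the particular map and the initial points are illustrative assumptions; the integration (using SciPy) simply checks that a few trajectories approach the fixed point at the origin, as the theorem guarantees.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Planar example: f(x, y) = (-x + y, -y - x**3), so f(0) = 0 and the
# Jacobian [[-1, 1], [-3x^2, -1]] has trace -2 < 0 and det 1 + 3x^2 > 0
# everywhere, i.e. it is Hurwitz at every point.  By the two-dimensional
# Markus-Yamabe theorem the origin is globally asymptotically stable.
def f(t, s):
    x, y = s
    return [-x + y, -y - x**3]

for start in [(5.0, -3.0), (-8.0, 8.0), (0.5, 10.0)]:
    sol = solve_ivp(f, (0.0, 50.0), start, rtol=1e-8, atol=1e-10)
    print(start, "->", np.round(sol.y[:, -1], 6))  # every trajectory ends near (0, 0)
```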
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "f:\\mathbb{R}^n\\rightarrow\\mathbb{R}^n" }, { "math_id": 2, "text": "C^1" }, { "math_id": 3, "text": "f(0) = 0" }, { "math_id": 4, "text": "Df(x)" }, { "math_id": 5, "text": "x \\in \\mathbb{R}^n" }, { "math_id": 6, "text": "0" }, { "math_id": 7, "text": "\\dot{x}= f(x)" }, { "math_id": 8, "text": "n=2" }, { "math_id": 9, "text": "n>2" } ]
https://en.wikipedia.org/wiki?curid=7726067
77265896
Cyclohexane-1,2,3,4,5,6-hexol
Family of sugars with a six-carbon ring Cyclohexane-1,2,3,4,5,6-hexol is a family of chemical compounds with formula C6H12O6, whose molecule consists of a ring of six carbon atoms, each bound to one hydrogen atom and one hydroxyl group (–OH). There are nine stereoisomers, which differ in the position of the hydroxyl groups relative to the mean plane of the ring. All these compounds are sometimes called inositol, although this name (especially in biochemistry and related sciences) most often refers to a particular isomer, "myo"-inositol, which has many important physiological roles and medical uses. These compounds are classified as sugars, specifically carbocyclic sugars or sugar alcohols, to distinguish them from the more common aldoses like glucose. They generally have a sweet taste. These compounds form several esters with biochemical and industrial importance, such as phytic acid and phosphatidylinositol phosphate. Isomers and structure. The nine stereoisomers of cyclohexane-1,2,3,4,5,6-hexol are distinguished by prefixes: "myo"-, "scyllo"-, "muco"-, D-"chiro"-, L-"chiro"-, "neo"-, "allo"-, "epi"-, and "cis"-inositol. As their names indicate, L- and D-"chiro" inositol are chiral, a pair of enantiomers (mirror-image forms). All the others are meso compounds (indistinguishable from their mirror images). Racemate. The designation "rac"-"chiro"-inositol has been used for the racemic mixture (racemate) of equal parts of the two "chiro" isomers. It crystallizes as a single phase, rather than as separate D and L crystals, which melts at 250 °C (which is 4–5 °C higher than the melting point of the pure enantiomers) and decomposes between 308 and 344 °C. The crystal structure is monoclinic with space group formula_0. The crystal cell parameters are "a" = 1014.35 pm, "b" = 815.42 pm, "c" = 862.39 pm, β = 92.3556°, "Z" = 4. The cell volume is 0.71270 nm3, or about 0.178 nm3 per molecule (which is a bit smaller than the typical volumes of other isomers). Ring conformation. As in cyclohexane, the C6 ring of these compounds can be in two conformations, "boat" and "chair". The relative stability of the two forms varies with the isomer, generally favoring the conformation where the hydroxyls are farthest apart from each other. Melting points. Some of the stereoisomers crystallize in more than one polymorph, with different densities and melting points — which range from 225 °C for "myo"-inositol to about 360 °C for polymorph "B" of "scyllo"-inositol. There is a clear correlation between the melting points and the number and type of chains of hydrogen-bonded hydroxyls. Biochemistry. All isomers except "allo-" and "cis-" occur in nature, although "myo"-inositol is substantially more abundant and important than the others. In humans, "myo"-inositol is synthesized mostly in the kidneys, from glucose 6-phosphate. Small amounts of "myo"-inositol are then converted by a specific epimerase to D-"chiro"-inositol, which is an important messenger molecule in insulin signaling. A 2020 study found detectable amounts of "epi"-, "neo"-, "chiro"-, "scyllo"-, and "myo"-inositol in the urine of women, pregnant or not. Concentrations of "myo" and "scyllo" increased significantly in the third trimester of pregnancy, with "scyllo" varying between 20% to 40% of "myo". Concentrations of "epi", "neo", and "chiro" were always a few percent of those of "myo", except that "chiro"- reached 20% of "myo" in the second trimester of pregnancy. 
The bacterium "Bacillus subtilis" can metabolize "myo"-, "scyllo"-, and D-"chiro"-inositol, and can interconvert these three isomers. Phytic acids. Plants synthesize inositol hexakis-dihydrogenphosphate, also called phytic acid or IP6, as a store of phosphorus. Inositol penta- (IP5), tetra- (IP4), and triphosphates (IP3) are also called "phytates".
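As a check on the crystal data quoted above for the racemate, the monoclinic cell volume follows from the standard relation V = a·b·c·sin β; a minimal Python sketch:

```python
import math

# Monoclinic cell volume V = a * b * c * sin(beta), using the cell
# parameters quoted above for the rac-chiro-inositol crystal.
a, b, c = 1.01435, 0.81542, 0.86239          # nm (1014.35, 815.42, 862.39 pm)
beta = math.radians(92.3556)
Z = 4                                        # molecules per cell

V = a * b * c * math.sin(beta)
print(round(V, 5), "nm^3 per cell")          # ~0.7127, matching the quoted 0.71270
print(round(V / Z, 3), "nm^3 per molecule")  # ~0.178
```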
[ { "math_id": 0, "text": "P2_1/c" } ]
https://en.wikipedia.org/wiki?curid=77265896
7726759
Intersection graph
Graph representing intersections between given sets In graph theory, an intersection graph is a graph that represents the pattern of intersections of a family of sets. Any graph can be represented as an intersection graph, but some important special classes of graphs can be defined by the types of sets that are used to form an intersection representation of them. Formal definition. Formally, an intersection graph G is an undirected graph formed from a family of sets formula_0 by creating one vertex vi for each set Si, and connecting two vertices vi and vj by an edge whenever the corresponding two sets have a nonempty intersection, that is, formula_1 All graphs are intersection graphs. Any undirected graph G may be represented as an intersection graph. For each vertex vi of G, form a set Si consisting of the edges incident to vi; then two such sets have a nonempty intersection if and only if the corresponding vertices share an edge. Therefore, G is the intersection graph of the sets Si. provide a construction that is more efficient, in the sense that it requires a smaller total number of elements in all of the sets Si combined. For it, the total number of set elements is at most n²/4, where n is the number of vertices in the graph. They credit the observation that all graphs are intersection graphs to , but say to see also . The intersection number of a graph is the minimum total number of elements in any intersection representation of the graph. Classes of intersection graphs. Many important graph families can be described as intersection graphs of more restricted types of set families, for instance sets derived from some kind of geometric configuration: characterized the intersection classes of graphs, families of finite graphs that can be described as the intersection graphs of sets drawn from a given family of sets. It is necessary and sufficient that the family have the following properties: If the intersection graph representations have the additional requirement that different vertices must be represented by different sets, then the clique expansion property can be omitted. Related concepts. An order-theoretic analog to the intersection graphs is given by the inclusion orders. In the same way that an intersection representation of a graph labels every vertex with a set so that vertices are adjacent if and only if their sets have nonempty intersection, so an inclusion representation "f" of a poset labels every element with a set so that for any "x" and "y" in the poset, "x" ≤ "y" if and only if "f"("x") ⊆ "f"("y").
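Both constructions described above, building the intersection graph of a family of sets and representing an arbitrary graph by the sets of edges incident to its vertices, are straightforward to express in code. A minimal Python sketch with a made-up family of sets:

```python
from itertools import combinations

def intersection_graph(sets):
    """Edges of the intersection graph of a family of sets,
    given as a dict mapping vertex names to sets."""
    return {(u, v) for u, v in combinations(sets, 2) if sets[u] & sets[v]}

# A small family of sets and its intersection graph.
family = {"A": {1, 2}, "B": {2, 3}, "C": {4}, "D": {3, 4}}
print(sorted(intersection_graph(family)))   # [('A', 'B'), ('B', 'D'), ('C', 'D')]

def edge_representation(vertices, edges):
    """Converse construction: represent an arbitrary graph as an intersection
    graph by assigning to each vertex its set of incident edges."""
    return {v: {e for e in edges if v in e} for v in vertices}

G_vertices = ["A", "B", "C", "D"]
G_edges = [("A", "B"), ("B", "D"), ("C", "D")]
rep = edge_representation(G_vertices, G_edges)
print(sorted(intersection_graph(rep)))      # recovers the same edge set
```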
[ { "math_id": 0, "text": "S_i, \\,\\,\\, i = 0, 1, 2, \\dots" }, { "math_id": 1, "text": "E(G) = \\{ \\{ v_i, v_j \\} \\mid i \\neq j, S_i \\cap S_j \\neq \\empty \\}." } ]
https://en.wikipedia.org/wiki?curid=7726759
7726870
Query (complexity)
In descriptive complexity, a query is a mapping from structures of one signature to structures of another vocabulary. Neil Immerman, in his book Descriptive Complexity, "use[s] the concept of query as the fundamental paradigm of computation" (p. 17). Given signatures formula_0 and formula_1, we define the set of structures on each language, formula_2 and formula_3. A query is then any mapping formula_4 Computational complexity theory can then be phrased in terms of the power of the mathematical logic necessary to express a given query. Order-independent queries. A query is order-independent if the ordering of objects in the structure does not affect the results of the query. In databases, these queries correspond to generic queries (Immerman 1999, p. 18). A query is order-independent iff formula_5 for any isomorphic structures formula_6 and formula_7.
[ { "math_id": 0, "text": "\\sigma" }, { "math_id": 1, "text": "\\tau" }, { "math_id": 2, "text": "\\mbox{STRUC}[\\sigma]" }, { "math_id": 3, "text": "\\mbox{STRUC}[\\tau]" }, { "math_id": 4, "text": "I : \\mbox{STRUC}[\\sigma] \\to \\mbox{STRUC}[\\tau]" }, { "math_id": 5, "text": " I(\\mathfrak{A}) \\equiv I(\\mathfrak{B})" }, { "math_id": 6, "text": "\\mathfrak{A}" }, { "math_id": 7, "text": "\\mathfrak{B}" } ]
https://en.wikipedia.org/wiki?curid=7726870
77276707
Transport ecology
Transport ecology is the science of the human-transport-environment system. There are two chairs of transport ecology in Germany, in Dresden and Karlsruhe. Vocabulary. Mobility is about satisfying the need to travel. To achieve mobility, means of transport are needed. Mobility corresponds to the human need to travel - recognised by article 13 of the Universal Declaration of Human Rights - while transport is a means of achieving mobility. In public debate, mobility is often confused with transport. The "Dresden Declaration" calls for people's mobility needs to be met in a cost-effective and environmentally friendly way. Suggested measures. Proposed measures (whether they involve transport modes, the concept of "traffic avoidance, change of transport mode, technical improvements", the tautology of transport ecology or the "4 E", i.e. Enforcement, Education, Engineering, Economy/Encouragement) are then scrutinised for transparency, fairness (the polluter pays), unwanted side-effects and applicability ("are there other examples of application elsewhere?"). Traffic avoidance, modal shift and finally technical improvements. The concept of "traffic avoidance, modal shift and technical improvements" involves firstly reducing the volume of transport, then promoting intermodality and finally making technical improvements to vehicles and making the energy they consume sustainable. This means in fact implementing the Kaya identity applied to transport (see below). Enforcement, Education, Engineering, Economy/Encouragement. These methods are also known as "4E". "Enforcement" refers to regulatory measures, whether obligations or prohibitions. "Education" refers to training and communication. "Engineering" is of a purely technical nature, whereas "Economy/Encouragement" refers to incentive systems, which may well be financial. Tautology of transport ecology. As long as pollution is proportional to the distance travelled, Udo Becker defines the tautology of transport ecology (in German "verkehrsökologische Tautologie") as follows: formula_0 with: Demand can be decomposed according to: formula_6 with: Pollution can therefore be expressed as the sum of pollution over the modes of transport: formula_10 with: Kaya identity applied to transport. The general formulation takes on a more specific form when it comes to decarbonising transport, following the Kaya identity. With pollution identified with CO2, formula_13 is replaced by formula_14, with: CO2 emissions can be decomposed according to: formula_17
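The decomposition formula_17 can be turned into a small calculation. In the Python sketch below, the physical reading of the symbols (transport demand formula_2 in passenger-kilometres, traffic formula_3 in vehicle-kilometres, energy use per unit of traffic formula_15, and carbon content of the energy formula_16) and all of the numbers are assumptions made only for illustration.

```python
# Sketch of the emissions decomposition given above,
#   CO2 = D * sum_i (D_i/D) * (C_i/D_i) * (E_i/C_i) * (CO2_i/E_i).
# The interpretation of the symbols and every number below are
# illustrative assumptions, not data from the article.

modes = {
    #        D_i/D, C_i/D_i (veh-km per p-km), E_i/C_i (MJ/veh-km), CO2_i/E_i (kg/MJ)
    "car":  (0.70,  0.70,                      2.5,                 0.07),
    "bus":  (0.10,  0.05,                      12.0,                0.07),
    "rail": (0.20,  0.02,                      60.0,                0.02),
}

D = 1_000_000  # total demand, passenger-km

co2 = D * sum(share * c_per_d * e_per_c * co2_per_e
              for share, c_per_d, e_per_c, co2_per_e in modes.values())
print(round(co2), "kg CO2")  # 94750 kg with these made-up numbers (~95 g per passenger-km)
```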
[ { "math_id": 0, "text": "pollution = D \\times \\frac {C} {D} \\times \\frac {pollution} {C}" }, { "math_id": 1, "text": "pollution" }, { "math_id": 2, "text": "D" }, { "math_id": 3, "text": "C" }, { "math_id": 4, "text": "\\frac {C} {D}" }, { "math_id": 5, "text": "\\frac {pollution} {C}" }, { "math_id": 6, "text": "D= Population \\times \\frac {journey} {Population} \\times \\frac {distance} {journey}" }, { "math_id": 7, "text": "Population" }, { "math_id": 8, "text": "\\frac {journey} {Population}" }, { "math_id": 9, "text": "\\frac {distance} {journey}" }, { "math_id": 10, "text": "pollution = D \\times \\sum \\frac {D_i} {D} \\times \\frac {C_i} {D_i} \\times \\frac {pollution_i} {C_i}" }, { "math_id": 11, "text": "\\frac {D_i} {D}" }, { "math_id": 12, "text": "\\frac {C_i} {D_i}" }, { "math_id": 13, "text": "\\frac {pollution_i} {C_i}" }, { "math_id": 14, "text": "\\frac {{CO_2}_i} {E_i} \\times \\frac {E_i} {C_i}" }, { "math_id": 15, "text": "\\frac {E_i} {C_i}" }, { "math_id": 16, "text": "\\frac {{CO_2}_i} {E_i}" }, { "math_id": 17, "text": "CO_2 = D \\times \\sum \\frac {D_i} {D} \\times \\frac {C_i} {D_i} \\times \\frac {E_i} {C_i} \\times \\frac {{CO_2}_i} {E_i}" } ]
https://en.wikipedia.org/wiki?curid=77276707
7728392
Entropy (order and disorder)
Interpretation of entropy as the change in arrangement of a system's particles In thermodynamics, entropy is often associated with the amount of order or disorder in a thermodynamic system. This stems from Rudolf Clausius' 1862 assertion that any thermodynamic process always "admits to being reduced [reduction] to the alteration in some way or another of the "arrangement" of the constituent parts of the working body" and that internal work associated with these alterations is quantified energetically by a measure of "entropy" change, according to the following differential expression: formula_0 where Q = motional energy ("heat") that is transferred reversibly to the system from the surroundings and T = the absolute temperature at which the transfer occurs. In the years to follow, Ludwig Boltzmann translated these 'alterations of arrangement' into a probabilistic view of order and disorder in gas-phase molecular systems. In the context of entropy, "perfect internal disorder" has often been regarded as describing thermodynamic equilibrium, but since the thermodynamic concept is so far from everyday thinking, the use of the term in physics and chemistry has caused much confusion and misunderstanding. In recent years, to interpret the concept of entropy, by further describing the 'alterations of arrangement', there has been a shift away from the words 'order' and 'disorder', to words such as 'spread' and 'dispersal'. History. This "molecular ordering" entropy perspective traces its origins to molecular movement interpretations developed by Rudolf Clausius in the 1850s, particularly with his 1862 visual conception of molecular disgregation. Similarly, in 1859, after reading a paper on the diffusion of molecules by Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics. In 1864, Ludwig Boltzmann, a young student in Vienna, came across Maxwell's paper and was so inspired by it that he spent much of his long and distinguished life developing the subject further. Later, Boltzmann, in efforts to develop a kinetic theory for the behavior of a gas, applied the laws of probability to Maxwell's and Clausius' molecular interpretation of entropy so as to begin to interpret entropy in terms of order and disorder. Similarly, in 1882 Hermann von Helmholtz used the word "Unordnung" (disorder) to describe entropy. Overview. To highlight the fact that order and disorder are commonly understood to be measured in terms of entropy, below are current science encyclopedia and science dictionary definitions of entropy: Entropy and disorder also have associations with equilibrium. Technically, "entropy", from this perspective, is defined as a thermodynamic property which serves as a measure of how close a system is to equilibrium—that is, to perfect internal disorder. Likewise, the value of the entropy of a distribution of atoms and molecules in a thermodynamic system is a measure of the disorder in the arrangements of its particles. In a stretched out piece of rubber, for example, the arrangement of the molecules of its structure has an "ordered" distribution and has zero entropy, while the "disordered" kinky distribution of the atoms and molecules in the rubber in the non-stretched state has positive entropy. 
Similarly, in a gas, the order is perfect and the measure of entropy of the system has its lowest value when all the molecules are in one place, whereas when more points are occupied the gas is all the more disorderly and the measure of the entropy of the system has its largest value. In systems ecology, as another example, the entropy of a collection of items comprising a system is defined as a measure of their disorder or equivalently the relative likelihood of the instantaneous configuration of the items. Moreover, according to theoretical ecologist and chemical engineer Robert Ulanowicz, "that entropy might provide a quantification of the heretofore subjective notion of disorder has spawned innumerable scientific and philosophical narratives." In particular, many biologists have taken to speaking in terms of the entropy of an organism, or about its antonym negentropy, as a measure of the structural order within an organism. The mathematical basis with respect to the association entropy has with order and disorder began, essentially, with the famous Boltzmann formula, formula_1, which relates entropy "S" to the number of possible states "W" in which a system can be found. As an example, consider a box that is divided into two sections. What is the probability that a certain number, or all of the particles, will be found in one section versus the other when the particles are randomly allocated to different places within the box? If you only have one particle, then that system of one particle can subsist in two states, one side of the box versus the other. If you have more than one particle, or define states as being further locational subdivisions of the box, the entropy is larger because the number of states is greater. The relationship between entropy, order, and disorder in the Boltzmann equation is so clear among physicists that according to the views of thermodynamic ecologists Sven Jorgensen and Yuri Svirezhev, "it is obvious that entropy is a measure of order or, most likely, disorder in the system." In this direction, the second law of thermodynamics, as famously enunciated by Rudolf Clausius in 1865, states that: Thus, if entropy is associated with disorder and if the entropy of the universe is headed towards maximal entropy, then many are often puzzled as to the nature of the "ordering" process and operation of evolution in relation to Clausius' most famous version of the second law, which states that the universe is headed towards maximal "disorder". In the recent 2003 book "SYNC – the Emerging Science of Spontaneous Order" by Steven Strogatz, for example, we find "Scientists have often been baffled by the existence of spontaneous order in the universe. The laws of thermodynamics seem to dictate the opposite, that nature should inexorably degenerate toward a state of greater disorder, greater entropy. Yet all around us we see magnificent structures—galaxies, cells, ecosystems, human beings—that have all somehow managed to assemble themselves." The common argument used to explain this is that, locally, entropy can be lowered by external action, e.g. solar heating action, and that this applies to machines, such as a refrigerator, where the entropy in the cold chamber is being reduced, to growing crystals, and to living organisms. This local increase in order is, however, only possible at the expense of an entropy increase in the surroundings; here more disorder must be created. 
This statement is conditioned on the fact that living systems are open systems, in which heat, mass, and/or work may transfer into or out of the system. Unlike temperature, the putative entropy of a living system would drastically change if the organism were thermodynamically isolated. If an organism were in this type of "isolated" situation, its entropy would increase markedly as the once-living components of the organism decayed to an unrecognizable mass. Phase change. Owing to these early developments, the typical example of entropy change Δ"S" is that associated with phase change. Solids, for example, which are typically ordered on the molecular scale, usually have smaller entropy than liquids; liquids have smaller entropy than gases, and colder gases have smaller entropy than hotter gases. Moreover, according to the third law of thermodynamics, at absolute zero temperature, crystalline structures are approximated to have perfect "order" and zero entropy. This correlation occurs because the numbers of different microscopic quantum energy states available to an ordered system are usually much smaller than the number of states available to a system that appears to be disordered. In his famous 1896 "Lectures on Gas Theory", Boltzmann diagrams the structure of a solid body by postulating that each molecule in the body has a "rest position". According to Boltzmann, if it approaches a neighbor molecule it is repelled by it, but if it moves farther away there is an attraction. This, of course, was a revolutionary perspective in its time; many, during these years, did not believe in the existence of either atoms or molecules (see: history of the molecule). According to these early views, and others such as those developed by William Thomson, if energy in the form of heat is added to a solid, so as to make it into a liquid or a gas, a common depiction is that the ordering of the atoms and molecules becomes more random and chaotic with an increase in temperature. Thus, according to Boltzmann, owing to increases in thermal motion, whenever heat is added to a working substance, the rest positions of molecules will be pushed apart, the body will expand, and this will create more "molar-disordered" distributions and arrangements of molecules. These disordered arrangements, subsequently, correlate, via probability arguments, to an increase in the measure of entropy. 
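Returning to the divided-box picture and the Boltzmann formula formula_1 discussed above, the contrast between an "ordered" and a "disordered" arrangement can be made quantitative by counting arrangements. A minimal Python sketch, for an illustrative count of distinguishable particles:

```python
import math
from math import comb

k_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(W):
    """S = k_B ln W for a number of microstates W."""
    return k_B * math.log(W)

# Divided-box illustration: N distinguishable particles, each of which
# may sit in the left or right half of the box.
N = 100
W_all_left   = 1                # every particle in one half: a single arrangement
W_even_split = comb(N, N // 2)  # number of ways to put 50 particles in each half

print(boltzmann_entropy(W_even_split) - boltzmann_entropy(W_all_left))
# ~9.2e-22 J/K: the evenly spread ("disordered") distribution has the higher entropy
```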
However, there is a broad class of systems that manifest entropy-driven order, in which phases with organization or structural regularity, e.g. crystals, have higher entropy than structurally disordered (e.g. fluid) phases under the same thermodynamic conditions. In these systems phases that would be labeled as disordered by virtue of their higher entropy (in the sense of Clausius or Helmholtz) are ordered in both the everyday sense and in Landau theory. Under suitable thermodynamic conditions, entropy has been predicted or discovered to induce systems to form ordered liquid-crystals, crystals, and quasicrystals. In many systems, directional entropic forces drive this behavior. More recently, it has been shown it is possible to precisely engineer particles for target ordered structures. Adiabatic demagnetization. In the quest for ultra-cold temperatures, a temperature-lowering technique called adiabatic demagnetization is used, where atomic entropy considerations are utilized which can be described in order-disorder terms. In this process, a sample of solid such as chrome-alum salt, whose molecules are equivalent to tiny magnets, is inside an insulated enclosure cooled to a low temperature, typically 2 or 4 kelvins, with a strong magnetic field being applied to the container using a powerful external magnet, so that the tiny molecular magnets are aligned forming a well-ordered "initial" state at that low temperature. This magnetic alignment means that the magnetic energy of each molecule is minimal. The external magnetic field is then reduced, a removal that is considered to be closely reversible. Following this reduction, the atomic magnets then assume random less-ordered orientations, owing to thermal agitations, in the "final" state. The "disorder" and hence the entropy associated with the change in the atomic alignments has clearly increased. In terms of energy flow, the movement from a magnetically aligned state requires energy from the thermal motion of the molecules, converting thermal energy into magnetic energy. Yet, according to the second law of thermodynamics, because no heat can enter or leave the container, due to its adiabatic insulation, the system should exhibit no change in entropy, i.e. Δ"S" = 0. The increase in disorder, however, associated with the randomizing directions of the atomic magnets represents an entropy "increase"? To compensate for this, the disorder (entropy) associated with the temperature of the specimen must "decrease" by the same amount. The temperature thus falls as a result of this process of thermal energy being converted into magnetic energy. If the magnetic field is then increased, the temperature rises and the magnetic salt has to be cooled again using a cold material such as liquid helium. Difficulties with the term "disorder". In recent years the long-standing use of the term "disorder" to discuss entropy has met with some criticism. Critics of the terminology state that entropy is not a measure of 'disorder' or 'chaos', but rather a measure of energy's diffusion or dispersal to more microstates. Shannon's use of the term 'entropy' in information theory refers to the most compressed, or least dispersed, amount of code needed to encompass the content of a signal.
[ { "math_id": 0, "text": "\\int\\! \\frac{\\delta Q}{T} \\ge 0" }, { "math_id": 1, "text": "S = k_\\mathrm{B} \\ln W \\! " } ]
https://en.wikipedia.org/wiki?curid=7728392
77284585
Clavin–Garcia equation
The Clavin–Garcia equation, or Clavin–Garcia dispersion relation, provides the relation between the growth rate and the wave number of the perturbation superposed on a planar premixed flame, named after Paul Clavin and Pedro Luis Garcia Ybarra, who derived the dispersion relation in 1983. The dispersion relation accounts for Darrieus–Landau instability, Rayleigh–Taylor instability and diffusive–thermal instability and also accounts for the temperature dependence of transport coefficients. Dispersion relation. Let formula_0 and formula_1 be the wavenumber (measured in units of planar laminar flame thickness formula_2) and the growth rate (measured in units of the residence time formula_3 of the planar laminar flame) of the perturbations to the planar premixed flame. Then the Clavin–Garcia dispersion relation is given by formula_4 where formula_5 and formula_6 Here, the function formula_7, in most cases, is simply given by formula_8, where formula_9, in which case, we have formula_10, formula_11 In the constant transport coefficient assumption, formula_12, in which case, we have formula_13
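The dispersion relation formula_4 can be solved numerically for the growth rate once the coefficients formula_5 are evaluated. The Python sketch below transcribes those coefficients together with formula_11 for the case formula_8; the input parameters (written here as r, M, Ra and Pr, and read as the gas-expansion ratio, Markstein number, gravity parameter and Prandtl number) are treated as given inputs, their physical identification being an interpretive assumption, and the sample values are purely illustrative.

```python
import numpy as np

def growth_rate(k, r, M, Ra, Pr, m=0.7):
    """Root of a(k) s^2 + b(k) s + c(k) = 0 with the largest real part,
    for lambda(theta) = theta**m (so L = r**m)."""
    L = r**m
    J = (r**m - 1.0) / m
    H = r**m - (r**(1.0 + m) - 1.0) / ((1.0 + m) * (r - 1.0))

    a = (r + 1.0)/r + (r - 1.0)/r * k * (M - r/(r - 1.0) * J)
    b = 2.0*k + 2.0*r*k**2 * (M - J)
    c = (-(r - 1.0)/r * Ra * k
         - (r - 1.0) * k**2 * (1.0 - Ra/r * (M - r/(r - 1.0) * J))
         + (r - 1.0) * k**3 * (L + (3.0*r - 1.0)/(r - 1.0) * M
                               - 2.0*r/(r - 1.0) * J + (2.0*Pr - 1.0) * H))

    roots = np.roots([a, b, c])
    return roots[np.argmax(roots.real)]

# Purely illustrative parameter values; growth rates are positive at small
# wavenumbers (Darrieus-Landau instability) and stabilize at larger ones.
for k in (0.01, 0.05, 0.1, 0.2):
    print(k, growth_rate(k, r=5.0, M=4.0, Ra=0.0, Pr=0.7))
```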
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "\\sigma" }, { "math_id": 2, "text": "\\delta_L" }, { "math_id": 3, "text": "\\delta_L^2/D_{T,u}" }, { "math_id": 4, "text": "a(k)\\sigma^2 + b(k) \\sigma + c(k)=0" }, { "math_id": 5, "text": "\\begin{align}\na(k) &= \\frac{r+1}{r} + \\frac{r-1}{r} k \\left(\\mathcal{M} - \\frac{r}{r-1}\\mathcal{J}\\right),\\\\\nb(k) &= 2k + 2rk^2 (\\mathcal{M}-\\mathcal{J}),\\\\\nc(k) &= - \\frac{r-1}{r} Ra \\, k - (r-1) k^2\\left[1 -\\frac{Ra}{r} \\left(\\mathcal{M}- \\frac{r}{r-1}\\mathcal{J}\\right)\\right] + (r-1) k^3 \\left[L + \\frac{3r-1}{r-1}\\mathcal{M} - \\frac{2r}{r-1}\\mathcal{J} + (2Pr-1) \\mathcal H\\right],\n\\end{align}" }, { "math_id": 6, "text": "\\mathcal{J} = \\int_1^{r} \\frac{\\lambda(\\theta)}{\\theta}d\\theta, \\quad \\mathcal H = \\frac{1}{r-1}\\int_{1}^{r} [L -\\lambda(\\theta)]d\\theta." }, { "math_id": 7, "text": "\\lambda(\\theta)" }, { "math_id": 8, "text": "\\lambda =\\theta^m" }, { "math_id": 9, "text": "m=0.7" }, { "math_id": 10, "text": "L=r^m" }, { "math_id": 11, "text": "\\mathcal{J} = \\frac{1}{m} (r^m-1), \\quad \\mathcal H = r^m - \\frac{r^{1+m}-1}{(1+m)(r-1)}." }, { "math_id": 12, "text": "\\lambda=1" }, { "math_id": 13, "text": "\\mathcal{J} =\\ln r , \\quad \\mathcal H = 0." } ]
https://en.wikipedia.org/wiki?curid=77284585
7729301
Debye–Hückel theory
Model describing the departures from ideality in solutions of electrolytes and plasmas The Debye–Hückel theory was proposed by Peter Debye and Erich Hückel as a theoretical explanation for departures from ideality in solutions of electrolytes and plasmas. It is a linearized Poisson–Boltzmann model, which assumes an extremely simplified model of electrolyte solution but nevertheless gave accurate predictions of mean activity coefficients for ions in dilute solution. The Debye–Hückel equation provides a starting point for modern treatments of non-ideality of electrolyte solutions. Overview. In the chemistry of electrolyte solutions, an ideal solution is a solution whose colligative properties are proportional to the concentration of the solute. Real solutions may show departures from this kind of ideality. In order to accommodate these effects in the thermodynamics of solutions, the concept of activity was introduced: the properties are then proportional to the activities of the ions. Activity, "a", is proportional to concentration, "c". The proportionality constant is known as an activity coefficient, formula_0. formula_1 In an ideal electrolyte solution the activity coefficients for all the ions are equal to one. Ideality of an electrolyte solution can be achieved only in very dilute solutions. Non-ideality of more concentrated solutions arises principally (but not exclusively) because ions of opposite charge attract each other due to electrostatic forces, while ions of the same charge repel each other. In consequence ions are not randomly distributed throughout the solution, as they would be in an ideal solution. Activity coefficients of single ions cannot be measured experimentally because an electrolyte solution must contain both positively charged ions and negatively charged ions. Instead, a mean activity coefficient, formula_2 is defined. For example, with the electrolyte NaCl formula_3 In general, the mean activity coefficient of a fully dissociated electrolyte of formula AnBm is given by formula_4 Activity coefficients are themselves functions of concentration as the amount of inter-ionic interaction increases as the concentration of the electrolyte increases. Debye and Hückel developed a theory with which single ion activity coefficients could be calculated. By calculating the mean activity coefficients from them the theory could be tested against experimental data. It was found to give excellent agreement for "dilute" solutions. The model. A description of Debye–Hückel theory includes a very detailed discussion of the assumptions and their limitations as well as the mathematical development and applications. A snapshot of a 2-dimensional section of an idealized electrolyte solution is shown in the picture. The ions are shown as spheres with unit electrical charge. The solvent (pale blue) is shown as a uniform medium, without structure. On average, each ion is surrounded more closely by ions of opposite charge than by ions of like charge. These concepts were developed into a quantitative theory involving ions of charge "z"1"e"+ and "z"2"e"−, where "z" can be any integer. The principal assumption is that departure from ideality is due to electrostatic interactions between ions, mediated by Coulomb's law: the force of interaction between two electric charges, separated by a distance, "r" in a medium of relative permittivity εr is given by formula_5 It is also assumed that The last assumption means that each cation is surrounded by a spherically symmetric cloud of other ions. 
The cloud has a net negative charge. Similarly each anion is surrounded by a cloud with net positive charge. Mathematical development. The deviation from ideality is taken to be a function of the potential energy resulting from the electrostatic interactions between ions and their surrounding clouds. To calculate this energy two steps are needed. The first step is to specify the electrostatic potential for ion "j" by means of Poisson's equation formula_6 ψ("r") is the total potential at a distance, "r", from the central ion and ρ("r") is the averaged charge density of the surrounding cloud at that distance. To apply this formula it is essential that the cloud has spherical symmetry, that is, the charge density is a function only of distance from the central ion as this allows the Poisson equation to be cast in terms of spherical coordinates with no angular dependence. The second step is to calculate the charge density by means of a Boltzmann distribution. formula_7 where "k"B is Boltzmann constant and "T" is the temperature. This distribution also depends on the potential ψ("r") and this introduces a serious difficulty in terms of the superposition principle. Nevertheless, the two equations can be combined to produce the Poisson–Boltzmann equation. formula_8 Solution of this equation is far from straightforward. Debye and Hückel expanded the exponential as a truncated Taylor series to first order. The zeroth order term vanishes because the solution is on average electrically neutral (so that Σ ni zi = 0), which leaves us with only the first order term. The result has the form of the Helmholtz equation formula_9, which has an analytical solution. This equation applies to electrolytes with equal numbers of ions of each charge. Nonsymmetrical electrolytes require another term with ψ2. For symmetrical electrolytes, this reduces to the modified spherical Bessel equation formula_10 The coefficients formula_11 and formula_12 are fixed by the boundary conditions. As formula_13, formula_14 must not diverge, so formula_15. At formula_16, which is the distance of the closest approach of ions, the force exerted by the charge should be balanced by the force of other ions, imposing formula_17, from which formula_11 is found, yielding formula_18 The electrostatic potential energy, formula_19, of the ion at formula_20 is formula_21 This is the potential energy of a single ion in a solution. The multiple-charge generalization from electrostatics gives an expression for the potential energy of the entire solution. The mean activity coefficient is given by the logarithm of this quantity as follows formula_23 formula_24 formula_25 where "I" is the ionic strength and "a"0 is a parameter that represents the distance of closest approach of ions. For aqueous solutions at 25 °C "A" = 0.51 mol−1/2dm3/2 and "B" = 3.29 nm−1mol−1/2dm3/2 formula_26 is a constant that depends on temperature. If formula_27 is expressed in terms of molality, instead of molarity (as in the equation above and in the rest of this article), then an experimental value for formula_26 "of water" is formula_28 at 25 °C. It is common to use a base-10 logarithm, in which case we factor formula_29, so "A" is formula_30. The multiplier formula_31 before formula_32 in the equation is for the case when the dimensions of formula_27 are formula_33. 
When the dimensions of formula_27 are formula_34, the multiplier formula_31 must be dropped from the equation The most significant aspect of this result is the prediction that the mean activity coefficient is a function of "ionic strength" rather than the electrolyte concentration. For very low values of the ionic strength the value of the denominator in the expression above becomes nearly equal to one. In this situation the mean activity coefficient is proportional to the square root of the ionic strength. This is known as the Debye–Hückel limiting law. In this limit the equation is given as follows formula_35 The excess osmotic pressure obtained from Debye–Hückel theory is in cgs units: formula_36 Therefore, the total pressure is the sum of the excess osmotic pressure and the ideal pressure formula_37. The osmotic coefficient is then given by formula_38 Nondimensionalization. Taking the differential equation from earlier (as stated above, the equation only holds for low concentrations): formula_39 Using the Buckingham π theorem on this problem results in the following dimensionless groups: formula_40 formula_41 is called the reduced scalar electric potential field. formula_42 is called the reduced radius. The existing groups may be recombined to form two other dimensionless groups for substitution into the differential equation. The first is what could be called the square of the reduced inverse screening length, formula_43. The second could be called the reduced central ion charge, formula_44 (with a capital Z). Note that, though formula_45 is already dimensionless, without the substitution given below, the differential equation would still be dimensional. formula_46 formula_47 To obtain the nondimensionalized differential equation and initial conditions, use the formula_48 groups to eliminate formula_49 in favor of formula_50, then eliminate formula_51 in favor of formula_52 while carrying out the chain rule and substituting formula_53, then eliminate formula_52 in favor of formula_42 (no chain rule needed), then eliminate formula_27 in favor of formula_43, then eliminate formula_45 in favor of formula_44. The resulting equations are as follows: formula_54 formula_55 formula_56 For table salt in 0.01 M solution at 25 °C, a typical value of formula_43 is 0.0005636, while a typical value of formula_44 is 7.017, highlighting the fact that, in low concentrations, formula_43 is a target for a zero order of magnitude approximation such as perturbation analysis. Unfortunately, because of the boundary condition at infinity, regular perturbation does not work. The same boundary condition prevents us from finding the exact solution to the equations. Singular perturbation may work, however. Limitations and extensions. This equation for formula_22 gives satisfactory agreement with experimental measurements for low electrolyte concentrations, typically less than 10−3 mol/L. Deviations from the theory occur at higher concentrations and with electrolytes that produce ions of higher charges, particularly unsymmetrical electrolytes. Essentially these deviations occur because the model is oversimplified, so there is little to be gained making small adjustments to the model. The individual assumptions can be challenged in turn. Moreover, ionic radius is assumed to be negligible, but at higher concentrations, the ionic radius becomes comparable to the radius of the ionic atmosphere. Most extensions to Debye–Hückel theory are empirical in nature. 
These extensions usually allow the Debye–Hückel equation to be followed at low concentration and add further terms in some power of the ionic strength to fit experimental observations. The main extensions are the Davies equation, Pitzer equations and specific ion interaction theory. One such extended Debye–Hückel equation is given by: formula_57 where formula_58 is the activity coefficient (its common logarithm appears on the left-hand side), formula_59 is the integer charge of the ion (1 for H+, 2 for Mg2+ etc.), formula_27 is the ionic strength of the aqueous solution, and formula_60 is the size or effective diameter of the ion in angstrom. The effective hydrated radius of the ion, "a", is the radius of the ion and its closely bound water molecules. Large ions and less highly charged ions bind water less tightly and have smaller hydrated radii than smaller, more highly charged ions. Typical values are 3Å for ions such as H+, Cl−, CN−, and HCOO−. The effective diameter for the hydronium ion is 9Å. formula_61 and formula_62 are constants with values of, respectively, 0.5085 and 0.3281 at 25 °C in water. The extended Debye–Hückel equation provides accurate results for μ ≤ 0.1. For solutions of greater ionic strengths, the Pitzer equations should be used. In these solutions the activity coefficient may actually increase with ionic strength. The Debye–Hückel equation cannot be used in solutions of surfactants, where the presence of micelles influences the electrochemical properties of the system (even a rough estimate overestimates γ by ~50%). Electrolyte mixtures. The theory can also be applied to dilute solutions of mixed electrolytes. Freezing point depression measurements have been used for this purpose. Conductivity. The treatment given so far is for a system not subject to an external electric field. When conductivity is measured, the system is subject to an oscillating external field due to the application of an AC voltage to electrodes immersed in the solution. Debye and Hückel modified their theory in 1926, and their theory was further modified by Lars Onsager in 1927. All the postulates of the original theory were retained. In addition, it was assumed that the electric field causes the charge cloud to be distorted away from spherical symmetry. After taking this into account, together with the specific requirements of moving ions, such as viscosity and electrophoretic effects, Onsager was able to derive a theoretical expression to account for the empirical relation known as Kohlrausch's Law, for the molar conductivity, Λm. formula_63 where formula_64 is known as the limiting molar conductivity, "K" is an empirical constant and "c" is the electrolyte concentration. "Limiting" here means "at the limit of infinite dilution". Onsager's expression is formula_65 where "A" and "B" are constants that depend only on known quantities such as temperature, the charges on the ions, and the dielectric constant and viscosity of the solvent. This is known as the Debye–Hückel–Onsager equation. However, this equation only applies to very dilute solutions and has been largely superseded by other equations due to Fuoss and Onsager (1932, 1957, and later). Summary of Debye and Hückel's first article on the theory of dilute electrolytes. The English title of the article is "On the Theory of Electrolytes. I. Freezing Point Depression and Related Phenomena". It was originally published in 1923 in volume 24 of a German-language journal. 
An English translation of the article is included in a book of collected papers presented to Debye by "his pupils, friends, and the publishers on the occasion of his seventieth birthday on March 24, 1954". Another English translation was completed in 2019. The article deals with the calculation of properties of electrolyte solutions that are under the influence of ion-induced electric fields, thus it deals with electrostatics. In the same year they first published this article, Debye and Hückel, hereinafter D&amp;H, also released an article that covered their initial characterization of solutions under the influence of electric fields called "On the Theory of Electrolytes. II. Limiting Law for Electric Conductivity", but that subsequent article is not (yet) covered here. In the following summary (as yet incomplete and unchecked), modern notation and terminology are used, from both chemistry and mathematics, in order to prevent confusion. Also, with a few exceptions to improve clarity, the subsections in this summary are (very) condensed versions of the same subsections of the original article. Introduction. D&amp;H note that the Guldberg–Waage formula for electrolyte species in chemical reaction equilibrium in classical form is formula_66 where formula_67 denotes a product over all species formula_68, formula_69 is the number of species, formula_70 is the mole fraction of species formula_68, formula_71 is the stoichiometric coefficient of species formula_68, and formula_72 is the equilibrium constant. D&amp;H say that, due to the "mutual electrostatic forces between the ions", it is necessary to modify the Guldberg–Waage equation by replacing formula_72 with formula_73, where formula_0 is an overall activity coefficient, not a "special" activity coefficient (a separate activity coefficient associated with each species), which is what is used in modern chemistry as of 2007. The relationship between formula_0 and the special activity coefficients formula_74 is formula_75 Fundamentals. D&amp;H use the Helmholtz and Gibbs free entropies formula_41 and formula_76 to express the effect of electrostatic forces in an electrolyte on its thermodynamic state. Specifically, they split most of the thermodynamic potentials into classical and electrostatic terms: formula_77 where formula_78 is the entropy, formula_79 is the internal energy and formula_80 is the temperature. D&amp;H give the total differential of formula_41 as formula_81 where formula_82 is the pressure and formula_83 is the volume. By the definition of the total differential, this means that formula_84 formula_85 which are useful further on. As stated previously, the internal energy is divided into two parts: formula_86 where the subscript formula_87 denotes the classical part and formula_88 denotes the electrostatic part. Similarly, the Helmholtz free entropy is also divided into two parts: formula_89 D&amp;H state, without giving the logic, that formula_90 It would seem that, without some justification, formula_91 Without mentioning it specifically, D&amp;H later give what might be the required (above) justification while arguing that formula_92, an assumption that the solvent is incompressible. The definition of the Gibbs free entropy formula_76 is formula_93 where formula_94 is the Gibbs free energy. D&amp;H give the total differential of formula_76 as formula_95 At this point D&amp;H note that, for water containing 1 mole per liter of potassium chloride (nominal pressure and temperature aren't given), the electric pressure formula_96 amounts to 20 atmospheres. Furthermore, they note that this level of pressure gives a relative volume change of 0.001. Therefore, they neglect the change in volume of water due to electric pressure, writing formula_97 and put formula_98 D&amp;H say that, according to Planck, the classical part of the Gibbs free entropy is formula_99 where formula_100 is the number of particles of species formula_68, formula_101 is the particle-specific Gibbs free entropy of species formula_68, formula_102 is the Boltzmann constant and formula_70 is the mole fraction of species formula_68. Species zero is the solvent. 
The definition of formula_101 is as follows, where lower-case letters indicate the particle-specific versions of the corresponding extensive properties: formula_103 D&amp;H don't say so, but the functional form for formula_104 may be derived from the functional dependence of the chemical potential of a component of an ideal mixture upon its mole fraction. D&amp;H note that the internal energy formula_79 of a solution is lowered by the electrical interaction of its ions, but that this effect can't be determined by using the crystallographic approximation for distances between dissimilar atoms (the cube root of the ratio of total volume to the number of particles in the volume). This is because there is more thermal motion in a liquid solution than in a crystal. The thermal motion tends to smear out the natural lattice that would otherwise be constructed by the ions. Instead, D&amp;H introduce the concept of an ionic atmosphere or cloud. Like the crystal lattice, each ion still attempts to surround itself with oppositely charged ions, but in a more free-form manner; at small distances away from positive ions, one is more likely to find negative ions and vice versa. The potential energy of an arbitrary ion solution. Electroneutrality of a solution requires that formula_105 where formula_106 is the charge number of species formula_68. To bring an ion of species "i", initially far away, to a point formula_82 within the ion cloud requires interaction energy in the amount of formula_107, where formula_108 is the elementary charge, and formula_109 is the value of the scalar electric potential field at formula_82. If electric forces were the only factor in play, the minimal-energy configuration of all the ions would be achieved in a close-packed lattice configuration. However, the ions are in thermal equilibrium with each other and are relatively free to move. Thus they obey Boltzmann statistics and form a Boltzmann distribution. All species' number densities formula_110 are altered from their bulk (overall average) values formula_111 by the corresponding Boltzmann factor formula_112, where formula_102 is the Boltzmann constant and formula_80 is the temperature. Thus at every point in the cloud formula_113 Note that in the infinite temperature limit, all ions are distributed uniformly, with no regard for their electrostatic interactions. The charge density is related to the number density: formula_114 When combining this result for the charge density with the Poisson equation from electrostatics, a form of the Poisson–Boltzmann equation results: formula_115 This equation is difficult to solve and does not follow the principle of linear superposition for the relationship between the number of charges and the strength of the potential field. It has been solved analytically by the Swedish mathematician Thomas Hakon Gronwall and his collaborators, the physical chemists V. K. La Mer and Karl Sandved, in a 1928 article in Physikalische Zeitschrift dealing with extensions to Debye–Hückel theory. However, for sufficiently low concentrations of ions, a first-order Taylor series expansion approximation for the exponential function may be used (formula_116 for formula_117) to create a linear differential equation. D&amp;H say that this approximation holds at large distances between ions, which is the same as saying that the concentration is low. Lastly, they claim without proof that the addition of more terms in the expansion has little effect on the final solution. 
Thus formula_118 The Poisson–Boltzmann equation is transformed to formula_119 because the first summation is zero due to electroneutrality. Factor out the scalar potential and assign the leftovers, which are constant, to formula_120. Also, let formula_27 be the ionic strength of the solution: formula_121 formula_122 So, the fundamental equation is reduced to a form of the Helmholtz equation: formula_123 Today, formula_124 is called the Debye screening length. D&amp;H recognize the importance of the parameter in their article and characterize it as a measure of the thickness of the ion atmosphere, which is an electrical double layer of the Gouy–Chapman type. The equation may be expressed in spherical coordinates by taking formula_125 at some arbitrary ion: formula_126 The equation has the following general solution (keep in mind that formula_127 is a positive constant): formula_128 where formula_26, formula_11, and formula_12 are undetermined constants The electric potential is zero at infinity by definition, so formula_12 must be zero. In the next step, D&amp;H assume that there is a certain radius formula_129, beyond which no ions in the atmosphere may approach the (charge) center of the singled out ion. This radius may be due to the physical size of the ion itself, the sizes of the ions in the cloud, and any water molecules that surround the ions. Mathematically, they treat the singled out ion as a point charge to which one may not approach within the radius formula_129. The potential of a point charge by itself is formula_130 D&amp;H say that the total potential inside the sphere is formula_131 where formula_132 is a constant that represents the potential added by the ionic atmosphere. No justification for formula_132 being a constant is given. However, one can see that this is the case by considering that any spherical static charge distribution is subject to the mathematics of the shell theorem. The shell theorem says that no force is exerted on charged particles inside a sphere (of arbitrary charge). Since the ion atmosphere is assumed to be (time-averaged) spherically symmetric, with charge varying as a function of radius formula_52, it may be represented as an infinite series of concentric charge shells. Therefore, inside the radius formula_129, the ion atmosphere exerts no force. If the force is zero, then the potential is a constant (by definition). In a combination of the continuously distributed model which gave the Poisson–Boltzmann equation and the model of the point charge, it is assumed that at the radius formula_129, there is a continuity of formula_49 and its first derivative. Thus formula_133 formula_134 formula_135 formula_136 By the definition of electric potential energy, the potential energy associated with the singled out ion in the ion atmosphere is formula_137 Notice that this only requires knowledge of the charge of the singled out ion and the potential of all the other ions. To calculate the potential energy of the entire electrolyte solution, one must use the multiple-charge generalization for electric potential energy: formula_138 Experimental verification of the theory. To verify the validity of the Debye–Hückel theory, many experimental ways have been tried, measuring the activity coefficients: the problem is that we need to go towards very high dilutions. Typical examples are: measurements of vapour pressure, freezing point, osmotic pressure (indirect methods) and measurement of electric potential in cells (direct method). 
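To give a sense of the scale of the quantities involved at such dilutions, the following short numerical sketch (a Python illustration with standard physical constants; the choice of a 0.001 M 1:1 electrolyte in water at 25 °C is an assumption made here for illustration, not a value taken from the article) evaluates the Debye screening length formula_124 and the limiting-law mean activity coefficient:

import math

# Physical constants (SI units)
e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
kB = 1.380649e-23          # Boltzmann constant, J/K
NA = 6.02214076e23         # Avogadro constant, 1/mol

# Assumed conditions: 1:1 electrolyte, 0.001 mol/L, water at 25 degrees C
T = 298.15                 # temperature, K
eps_r = 78.4               # relative permittivity of water (approximate)
I_molar = 0.001            # ionic strength, mol/L (equals c for a 1:1 salt)

# kappa^2 = 2 e^2 N_A I / (eps_r eps0 kB T), with I converted to mol/m^3
kappa2 = 2 * e**2 * NA * (1000 * I_molar) / (eps_r * eps0 * kB * T)
debye_length_nm = 1e9 / math.sqrt(kappa2)

# Debye-Hueckel limiting law: log10(gamma) = -A * |z+ z-| * sqrt(I), A ~ 0.509
A = 0.509
gamma_mean = 10 ** (-A * 1 * math.sqrt(I_molar))

print(round(debye_length_nm, 1))   # about 9.6 nm
print(round(gamma_mean, 3))        # about 0.964

The resulting Debye length of roughly 9.6 nm and mean activity coefficient of about 0.96 show how close to ideality such dilute solutions already are, which is part of why sensitive experimental methods are required.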
Going towards high dilutions, good results have been found using liquid membrane cells, with which it has been possible to investigate aqueous media down to 10−4 M. For 1:1 electrolytes (such as NaCl or KCl) the Debye–Hückel equation has been found to be entirely correct, but for 2:2 or 3:2 electrolytes a negative deviation from the Debye–Hückel limiting law can be found: this behavior is observed only in the very dilute region, while in more concentrated regions the deviation becomes positive. It is possible that the Debye–Hückel equation is unable to predict this behavior because of the linearization of the Poisson–Boltzmann equation, or perhaps not: studies of the question only began in the last years of the 20th century, since before then it was not possible to investigate the 10−4 M region, so new theories may yet emerge. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\gamma" }, { "math_id": 1, "text": "a=\\gamma c/c^0" }, { "math_id": 2, "text": "\\gamma_{\\pm}" }, { "math_id": 3, "text": "\\gamma_{\\pm} = \\left(\\gamma_\\mathrm{Na^+}\\gamma _\\mathrm{Cl^-}\\right )^{1/2}" }, { "math_id": 4, "text": "\\gamma_{\\pm} = \\left({\\gamma_A}^n{\\gamma _B}^m\\right )^{1/(n+m)}" }, { "math_id": 5, "text": "\\text{force} = \\frac {z_1z_2e^2}{4\\pi \\epsilon _0 \\epsilon _r r^2}" }, { "math_id": 6, "text": "\\nabla^2 \\psi_j(r) = -\\frac{1}{\\epsilon _0 \\epsilon _r}\\rho _j(r)" }, { "math_id": 7, "text": "n'_i = n_i \\exp \\left(\\frac{-z_ie\\psi_j(r)}{k_{\\rm B}T}\\right)" }, { "math_id": 8, "text": "\\nabla^2\\psi_j(r)=-\\frac{1}{\\epsilon_0\\epsilon_r} \\sum_i \\left\\{n_i(z_ie) \\exp \\left( \\frac{-z_ie\\psi_j(r)}{k_{\\rm B}T} \\right)\\right\\} " }, { "math_id": 9, "text": "\\nabla^2\\psi_j(r)=\\kappa^2\\psi_j(r) \\qquad \\text{with} \\qquad \\kappa^2 = \\frac{e^2}{\\epsilon_0\\epsilon_r k_{\\rm B}T} \\sum_i n_i z_i^2 " }, { "math_id": 10, "text": "(\\partial_r^2 + \\frac{2}{r} \\partial_r - \\kappa^2) \\psi_j = 0 \\qquad \\text{with solutions} \\qquad \\psi_j(r) = A' \\frac{e^{-\\kappa r}}{r} + A'' \\frac{e^{\\kappa r}}{r}" }, { "math_id": 11, "text": "A'" }, { "math_id": 12, "text": "A''" }, { "math_id": 13, "text": "r \\rightarrow \\infty" }, { "math_id": 14, "text": "\\psi" }, { "math_id": 15, "text": "A'' = 0" }, { "math_id": 16, "text": "r = a_0" }, { "math_id": 17, "text": "\\partial_r \\psi_j(a_0) = -z_j e/(4\\pi \\epsilon_0 \\epsilon_r a_0^2)" }, { "math_id": 18, "text": "\\psi_j(r) = \\frac{z_j e}{4\\pi\\varepsilon_0\\varepsilon_r} \\frac{e^{\\kappa a_0}}{1+\\kappa a_0} \\frac{e^{-\\kappa r}}{r}" }, { "math_id": 19, "text": "u_j" }, { "math_id": 20, "text": "r=0" }, { "math_id": 21, "text": "u_j = z_j e \\Big( \\psi_j(a_0) - \\frac{z_j e}{4\\pi\\varepsilon_0\\varepsilon_r} \\frac{1}{a_0} \\Big)= -\\frac{z_j^2 e^2}{4\\pi\\varepsilon_0\\varepsilon_r} \\frac{\\kappa}{1+\\kappa a_0}" }, { "math_id": 22, "text": "\\log\\gamma_\\pm" }, { "math_id": 23, "text": "\\log_{10}\\gamma_\\pm = -Az_j^2 \\frac{\\sqrt I}{1 +Ba_0\\sqrt I}" }, { "math_id": 24, "text": "A=\\frac{e^2B}{2.303 \\times 8\\pi\\epsilon_0\\epsilon_r k_{\\rm B}T}" }, { "math_id": 25, "text": "B=\\left( \\frac{2e^2 N}{\\epsilon_0\\epsilon_rk_{\\rm B}T} \\right)^{1/2}" }, { "math_id": 26, "text": "A" }, { "math_id": 27, "text": "I" }, { "math_id": 28, "text": "1.172\\text{ mol}^{-1/2}\\text{kg}^{1/2}" }, { "math_id": 29, "text": "\\ln 10" }, { "math_id": 30, "text": "0.509\\text{ mol}^{-1/2}\\text{kg}^{1/2}" }, { "math_id": 31, "text": "10^3" }, { "math_id": 32, "text": "I/2" }, { "math_id": 33, "text": "\\text{mol}/\\text{dm}^3" }, { "math_id": 34, "text": "\\text{mole}/\\text{m}^3" }, { "math_id": 35, "text": "\\ln(\\gamma_i) =\n -\\frac{z_i^2 q^2 \\kappa}{8 \\pi \\varepsilon_r \\varepsilon_0 k_\\text{B} T} =\n -\\frac{z_i^2 q^3 N^{1/2}_\\text{A}}{4 \\pi (\\varepsilon_r \\varepsilon_0 k_\\text{B} T)^{3/2}} \\sqrt{10^3\\frac{I}{2}} =\n -A z_i^2 \\sqrt{I}," }, { "math_id": 36, "text": "P^\\text{ex} =\n -\\frac{k_\\text{B} T \\kappa_\\text{cgs}^3}{24\\pi} =\n -\\frac{k_\\text{B} T \\left(\\frac{4\\pi \\sum_j c_j q_j}{\\varepsilon_0 \\varepsilon_r k_\\text{B} T }\\right)^{3/2}}{24\\pi}." }, { "math_id": 37, "text": "P^\\text{id} = k_\\text{B} T \\sum_i c_i" }, { "math_id": 38, "text": "\\phi = \\frac{P^\\text{id} + P^\\text{ex}}{P^\\text{id}} = 1 + \\frac{P^\\text{ex}}{P^\\text{id}}." 
}, { "math_id": 39, "text": "\\frac{\\partial^2 \\varphi(r) }{\\partial r^2} + \\frac{2}{r} \\frac{\\partial \\varphi(r) }{\\partial r} = \\frac{I q \\varphi(r)}{\\varepsilon_r \\varepsilon_0 k_\\text{B} T} = \\kappa^2 \\varphi(r)." }, { "math_id": 40, "text": "\\begin{align}\n\\pi_1 &= \\frac{q \\varphi(r)}{k_\\text{B} T} = \\Phi(R(r)) \\\\\n\\pi_2 &= \\varepsilon_r \\\\\n\\pi_3 &= \\frac{a k_\\text{B} T \\varepsilon_0}{q^2} \\\\\n\\pi_4 &= a^3 I \\\\\n\\pi_5 &= z_0 \\\\\n\\pi_6 &= \\frac{r}{a} = R(r).\n\\end{align}" }, { "math_id": 41, "text": "\\Phi" }, { "math_id": 42, "text": "R" }, { "math_id": 43, "text": "(\\kappa a)^2" }, { "math_id": 44, "text": "Z_0" }, { "math_id": 45, "text": "z_0" }, { "math_id": 46, "text": "\\frac{\\pi_4}{\\pi_2 \\pi_3} = \\frac{a^2 q^2 I}{\\varepsilon_r \\varepsilon_0 k_\\text{B} T} = (\\kappa a)^2" }, { "math_id": 47, "text": "\\frac{\\pi_5}{\\pi_2 \\pi_3} = \\frac{z_0 q^2}{4 \\pi a \\varepsilon_r \\varepsilon_0 k_\\text{B} T} = Z_0" }, { "math_id": 48, "text": "\\pi" }, { "math_id": 49, "text": "\\varphi(r)" }, { "math_id": 50, "text": "\\Phi(R(r))" }, { "math_id": 51, "text": "R(r)" }, { "math_id": 52, "text": "r" }, { "math_id": 53, "text": "{R^\\prime}(r) = a" }, { "math_id": 54, "text": "\\frac{\\partial \\Phi(R) }{\\partial R}\\bigg|_{R=1} = - Z_0" }, { "math_id": 55, "text": "\\Phi(\\infty) = 0" }, { "math_id": 56, "text": "\\frac{\\partial^2 \\Phi(R) }{\\partial R^2} + \\frac{2}{R} \\frac{\\partial \\Phi(R) }{\\partial R} = (\\kappa a)^2 \\Phi(R)." }, { "math_id": 57, "text": "- \\log_{10}(\\gamma) = \\frac{A|z_+z_-|\\sqrt{I}}{1 + Ba\\sqrt{I}} " }, { "math_id": 58, "text": "\\gamma " }, { "math_id": 59, "text": "z" }, { "math_id": 60, "text": "a " }, { "math_id": 61, "text": " A " }, { "math_id": 62, "text": " B " }, { "math_id": 63, "text": "\\Lambda_m =\\Lambda_m^0-K\\sqrt{c} " }, { "math_id": 64, "text": "\\Lambda_m^0" }, { "math_id": 65, "text": "\\Lambda_m =\\Lambda_m^0-(A+B\\Lambda_m^0 )\\sqrt{c} " }, { "math_id": 66, "text": "\\prod_{i=1}^s x_i^{\\nu_i} = K," }, { "math_id": 67, "text": " \\prod" }, { "math_id": 68, "text": "i" }, { "math_id": 69, "text": "s" }, { "math_id": 70, "text": "x_i" }, { "math_id": 71, "text": "\\nu_i" }, { "math_id": 72, "text": "K" }, { "math_id": 73, "text": "\\gamma K" }, { "math_id": 74, "text": "\\gamma_i" }, { "math_id": 75, "text": "\\log(\\gamma) = \\sum_{i=1}^s \\nu_i \\log(\\gamma_i)." }, { "math_id": 76, "text": "\\Xi" }, { "math_id": 77, "text": "\\Phi = S - \\frac{U}{T} = -\\frac{A}{T}," }, { "math_id": 78, "text": "S" }, { "math_id": 79, "text": "U" }, { "math_id": 80, "text": "T" }, { "math_id": 81, "text": "d \\Phi = \\frac{P}{T} \\,dV + \\frac{U}{T^2} \\,dT," }, { "math_id": 82, "text": "P" }, { "math_id": 83, "text": "V" }, { "math_id": 84, "text": "\\frac{P}{T} = \\frac{\\partial \\Phi}{\\partial V}," }, { "math_id": 85, "text": "\\frac{U}{T^2} = \\frac{\\partial \\Phi}{\\partial T}," }, { "math_id": 86, "text": "U = U_k + U_e" }, { "math_id": 87, "text": "k" }, { "math_id": 88, "text": "e" }, { "math_id": 89, "text": "\\Phi = \\Phi_k + \\Phi_e." }, { "math_id": 90, "text": "\\Phi_e = \\int \\frac{U_e}{T^2} \\,dT." }, { "math_id": 91, "text": "\\Phi_e = \\int \\frac{P_e}{T} \\,dV + \\int \\frac{U_e}{T^2} \\,dT." }, { "math_id": 92, "text": "\\Phi_e = \\Xi_e" }, { "math_id": 93, "text": "\\Xi = S - \\frac{U + PV}{T} = \\Phi - \\frac{PV}{T} = -\\frac{G}{T}," }, { "math_id": 94, "text": "G" }, { "math_id": 95, "text": "d\\Xi = -\\frac{V}{T} \\,dP + \\frac{U + PV}{T^2} \\,dT." 
}, { "math_id": 96, "text": "P_e" }, { "math_id": 97, "text": "\\Xi = \\Xi_k + \\Xi_e," }, { "math_id": 98, "text": "\\Xi_e = \\Phi_e = \\int \\frac{U_e}{T^2} \\,dT." }, { "math_id": 99, "text": "\\Xi_k = \\sum_{i=0}^s N_i (\\xi_i - k_\\text{B} ln(x_i))," }, { "math_id": 100, "text": "N_i" }, { "math_id": 101, "text": "\\xi_i" }, { "math_id": 102, "text": "k_\\text{B}" }, { "math_id": 103, "text": "\\xi_i = s_i - \\frac{u_i + P v_i}{T}." }, { "math_id": 104, "text": "\\Xi_k" }, { "math_id": 105, "text": "\\sum_{i=1}^s N_i z_i = 0," }, { "math_id": 106, "text": "z_i" }, { "math_id": 107, "text": "z_i q \\varphi" }, { "math_id": 108, "text": "q" }, { "math_id": 109, "text": "\\varphi" }, { "math_id": 110, "text": "n_i" }, { "math_id": 111, "text": "n^0_i" }, { "math_id": 112, "text": "e^{-\\frac{z_i q \\varphi}{k_\\text{B} T}}" }, { "math_id": 113, "text": "n_i = \\frac{N_i}{V} e^{-\\frac{z_i q \\varphi}{k_\\text{B} T}} = n^0_i e^{-\\frac{z_i q \\varphi}{k_\\text{B} T}}." }, { "math_id": 114, "text": "\\rho = \\sum_i z_i q n_i = \\sum_i z_i q n^0_i e^{-\\frac{z_i q \\varphi}{k_\\text{B} T}}." }, { "math_id": 115, "text": "\\nabla^2 \\varphi =\n -\\frac{\\rho}{\\varepsilon_r \\varepsilon_0} =\n -\\sum_i \\frac{z_i q n^0_i}{\\varepsilon_r \\varepsilon_0} e^{-\\frac{z_i q \\varphi}{k_\\text{B} T}}." }, { "math_id": 116, "text": "e^x \\approx 1 + x" }, { "math_id": 117, "text": "0 < x \\ll 1" }, { "math_id": 118, "text": "-\\sum_i \\frac{z_i q n^0_i}{\\varepsilon_r \\varepsilon_0} e^{-\\frac{z_i q \\varphi}{k_\\text{B} T}} \\approx\n -\\sum_i \\frac{z_i q n^0_i}{\\varepsilon_r \\varepsilon_0} \\left(1 - \\frac{z_i q \\varphi}{k_\\text{B} T}\\right) =\n -\\left(\\sum_i \\frac{z_i q n^0_i}{\\varepsilon_r \\varepsilon_0} - \\sum_i \\frac{z_i^2 q^2 n^0_i \\varphi}{\\varepsilon_r \\varepsilon_0 k_\\text{B} T}\\right)." }, { "math_id": 119, "text": "\\nabla^2 \\varphi = \\sum_i \\frac{z_i^2 q^2 n^0_i \\varphi}{\\varepsilon_r \\varepsilon_0 k_\\text{B} T}," }, { "math_id": 120, "text": "\\kappa^2" }, { "math_id": 121, "text": "\\kappa^2 = \\sum_i \\frac{z_i^2 q^2 n^0_i}{\\varepsilon_r \\varepsilon_0 k_\\text{B} T} = \\frac{2 I q^2}{\\varepsilon_r \\varepsilon_0 k_\\text{B} T}," }, { "math_id": 122, "text": "I = \\frac{1}{2} \\sum_i z_i^2 n^0_i." }, { "math_id": 123, "text": "\\nabla^2 \\varphi = \\kappa^2 \\varphi." }, { "math_id": 124, "text": "\\kappa^{-1}" }, { "math_id": 125, "text": "r = 0" }, { "math_id": 126, "text": "\\nabla^2 \\varphi = \\frac{1}{r^2} \\frac{\\partial}{\\partial r} \\left( r^2 \\frac{\\partial\\varphi(r)}{\\partial r} \\right) =\n \\frac{\\partial^2\\varphi(r)}{\\partial r^2} + \\frac{2}{r} \\frac{\\partial\\varphi(r)}{\\partial r} =\n \\kappa^2 \\varphi(r)." }, { "math_id": 127, "text": "\\kappa" }, { "math_id": 128, "text": "\\varphi(r) =\n A \\frac{e^{-\\sqrt{\\kappa^2} r}}{r} + A' \\frac{e^{\\sqrt{\\kappa^2} r}}{2 r \\sqrt {\\kappa^2}} =\n A \\frac{e^{-\\kappa r}}{r} + A'' \\frac{e^{\\kappa r}}{r} = A \\frac{e^{-\\kappa r}}{r}," }, { "math_id": 129, "text": "a_i" }, { "math_id": 130, "text": "\\varphi_\\text{pc}(r) = \\frac{1}{4 \\pi \\varepsilon_r \\varepsilon_0} \\frac{z_i q}{r}." 
}, { "math_id": 131, "text": "\\varphi_\\text{sp}(r) = \\varphi_\\text{pc}(r) + B_i = \\frac{1}{4 \\pi \\varepsilon_r \\varepsilon_0} \\frac{z_i q}{r} + B_i," }, { "math_id": 132, "text": "B_i" }, { "math_id": 133, "text": "\\varphi(a_i) = A_i \\frac{e^{-\\kappa a_i}}{a_i} = \\frac{1}{4 \\pi \\varepsilon_r \\varepsilon_0} \\frac{z_i q}{a_i} + B_i = \\varphi_\\text{sp}(a_i)," }, { "math_id": 134, "text": "\\varphi'(a_i) = -\\frac{A_i e^{-\\kappa a_i} (1 + \\kappa a_i)}{a_i^2} = -\\frac{1}{4 \\pi \\varepsilon_r \\varepsilon_0} \\frac{z_i q}{a_i^2} = \\varphi_\\text{sp}'(a_i)," }, { "math_id": 135, "text": "A_i = \\frac{z_i q}{4 \\pi \\varepsilon_r \\varepsilon_0} \\frac{e^{\\kappa a_i}}{1 + \\kappa a_i}," }, { "math_id": 136, "text": "B_i = -\\frac{z_i q \\kappa}{4 \\pi \\varepsilon_r \\varepsilon_0} \\frac {1}{1 + \\kappa a_i}." }, { "math_id": 137, "text": "u_i = z_i q B_i = -\\frac{z_i^2 q^2 \\kappa}{4 \\pi \\varepsilon_r \\varepsilon_0} \\frac {1}{1 + \\kappa a_i}." }, { "math_id": 138, "text": "U_e = \\frac{1}{2} \\sum_{i=1}^s N_i u_i\n= -\\sum_{i=1}^s \\frac {N_i z_i^2}{2} \\frac{q^2 \\kappa}{4 \\pi \\varepsilon_r \\varepsilon_0} \\frac {1}{1 + \\kappa a_i}." } ]
https://en.wikipedia.org/wiki?curid=7729301
77298068
Landau–Peierls instability
Landau–Peierls instability refers to the phenomenon in which the mean square displacements due to thermal fluctuations diverge in the thermodynamic limit, and is named after Lev Landau (1937) and Rudolf Peierls (1934). This instability prevails in one-dimensional ordering of atoms/molecules in 3D space, such as 1D crystals and smectics, and also in two-dimensional ordering in 2D space, such as monomolecular adsorbed films at the interface between two isotropic phases. The divergence is logarithmic, which is rather slow, and therefore it is possible to realize substances (such as the smectics) in practice that are subject to the Landau–Peierls instability. Mathematical description. Consider a one-dimensionally ordered crystal in 3D space. The density function is then given by formula_0. Since this is a 1D system, only the displacement formula_1 along the formula_2-direction due to thermal fluctuations can smooth out the density function; displacements in the other two directions are irrelevant. The net change in the free energy due to the fluctuations is given by formula_3 where formula_4 is the free energy without fluctuations. Note that formula_5 cannot depend on formula_1 or be a linear function of formula_6, because the first case corresponds to a simple uniform translation and the second case is unstable. Thus, formula_5 must be quadratic in the derivatives of formula_1. These are given by formula_7 where formula_8, formula_9 and formula_10 are material constants; in smectics, where the symmetry formula_11 must be obeyed, the second term has to be set to zero, i.e., formula_12. In Fourier space (in a unit volume), the free energy is just formula_13 From the equipartition theorem (each Fourier mode, on average, is allotted an energy equal to formula_14), we can deduce that formula_15 The mean square displacement is then given by formula_16 where the integral is cut off at a large wavenumber that is comparable to the linear dimension of the element undergoing deformation. In the thermodynamic limit, formula_17, the integral diverges logarithmically. This means that an element at a particular point is displaced through very large distances and therefore smooths out the function formula_18, leaving formula_19 constant as the only solution and destroying the 1D ordering. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rho=\\rho(z)" }, { "math_id": 1, "text": "u" }, { "math_id": 2, "text": "z" }, { "math_id": 3, "text": "\\mathcal F = \\int(F-F_0) dV" }, { "math_id": 4, "text": "F_0" }, { "math_id": 5, "text": "\\mathcal F" }, { "math_id": 6, "text": "\\nabla u" }, { "math_id": 7, "text": "\\mathcal F = \\frac{C}{2}\\int dV\\left[\\left(\\frac{\\partial u}{\\partial z}\\right)^2 + \\lambda_1 \\frac{\\partial u}{\\partial z}\\left(\\frac{\\partial^2 u}{\\partial x^2}+ \\frac{\\partial^2 u}{\\partial y^2}\\right) + \\lambda_2 \\left(\\frac{\\partial^2 u}{\\partial x^2}+ \\frac{\\partial^2 u}{\\partial y^2}\\right)^2\\right] " }, { "math_id": 8, "text": "C" }, { "math_id": 9, "text": "\\lambda_1" }, { "math_id": 10, "text": "\\lambda_2" }, { "math_id": 11, "text": "z\\mapsto -z" }, { "math_id": 12, "text": "\\lambda_1=0" }, { "math_id": 13, "text": "\\mathcal F = \\frac{1}{(2\\pi)^3}\\int d^3k \\frac{C}{2}(k_z^2 + \\lambda_1 k_z \\kappa^2 + \\lambda_2 \\kappa^4)|\\hat u(k)|^2, \\quad \\kappa^2 = k_x^2 + k_y^2." }, { "math_id": 14, "text": "k_B T/2" }, { "math_id": 15, "text": "\\langle|\\hat u(k)|^2 \\rangle = \\frac{k_B T}{C(k_z^2 + \\lambda_1 k_z \\kappa^2 + \\lambda_2 \\kappa^4)}." }, { "math_id": 16, "text": "\\langle u^2(r)\\rangle = \\frac{k_B T}{(2\\pi)^3 C} \\int_{1/L}^{k_c} \\frac{d^3k}{k_z^2 + \\lambda_1 k_z \\kappa^2 + \\lambda_2 \\kappa^4}" }, { "math_id": 17, "text": "L\\to \\infty" }, { "math_id": 18, "text": "\\rho(z)" }, { "math_id": 19, "text": "\\rho=" } ]
https://en.wikipedia.org/wiki?curid=77298068
77309499
Equivalent circuit model for Li-ion cells
Model to simulate Li-ion cell electrical dynamics The equivalent circuit model (ECM) is a common lumped-element model for lithium-ion battery cells. The ECM simulates the terminal voltage dynamics of a Li-ion cell through an equivalent electrical network composed of passive elements, such as resistors and capacitors, and a voltage generator. The ECM is widely employed in several application fields, including computerized simulation, because of its simplicity, its low computational demand, its ease of characterization, and its structural flexibility. These features make the ECM suitable for real-time battery management system (BMS) tasks like state of charge (SoC) estimation, state of health (SoH) monitoring and battery thermal management. Model structure. The equivalent circuit model is used to simulate the voltage at the cell terminals when an electric current is applied to discharge or recharge it. The most common circuital representation consists of three elements in series: a variable voltage source, representing the open-circuit voltage (OCV) of the cell, a resistor representing the ohmic internal resistance of the cell, and a set of resistor-capacitor (RC) parallels accounting for the dynamic voltage drops. Open-circuit voltage. The open-circuit voltage of a Li-ion cell (or battery) is its terminal voltage in equilibrium conditions, "i.e." measured when no load current is applied and after a long rest period. The open-circuit voltage is a nonlinear, increasing function of the state of charge, and its shape depends on the chemical composition of the anode (usually made of graphite) and cathode (LFP, NMC, NCA, LCO...) of the cell. The open-circuit voltage, represented in the circuit by a state of charge-driven voltage generator, is the major voltage contribution and is the most informative indicator of the cell's state of charge. Internal resistance. The internal resistance, represented in the circuit by a simple resistor, is used to simulate the instantaneous voltage drops due to ohmic effects such as electrode resistivity, electrolyte conductivity and contact resistance ("e.g." solid-electrolyte interface (SEI) and collector contact resistance). Internal resistance is strongly influenced by several factors, such as temperature, state of charge and cell aging. RC parallels. One or more RC parallels are often added to the model to improve its accuracy in simulating dynamic voltage transients. The number of RC parallels is an arbitrary modeling choice: in general, a large number of RC parallels improves the accuracy of the model but complicates the identification process and increases the computational load, while a small number will result in a computationally light and easy-to-characterize model that is less accurate in predicting the cell voltage during transients. Commonly, one or two RC parallels are considered the optimal choice. Model equations. The ECM can be described by a state-space representation that has the current (formula_0) as input and the voltage at the cell terminals (formula_1) as output. Consider a generic ECM with a number of RC parallels formula_2. The states of the model ("i.e.", the variables that evolve over time via differential equations) are the state of charge (formula_3) and the voltage drops across the RC parallels (formula_4). The state of charge is usually computed by integrating the current drained from/supplied to the battery through the formula known as "Coulomb counting": formula_5 where formula_6 is the cell nominal capacity (expressed in ampere-hours). 
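As an illustration of this state equation, a minimal discrete-time Python sketch of Coulomb counting is given below (the sampling period, the sign convention for the current, and the numerical values used are assumptions chosen for the example, not part of the model definition):

def coulomb_counting(soc0, currents_A, dt_s, capacity_Ah):
    # Discrete-time Coulomb counting: SoC(k+1) = SoC(k) + dt/(3600*Q) * i(k),
    # with positive current taken as charging (an assumed sign convention).
    soc = soc0
    trajectory = [soc]
    for i_k in currents_A:
        soc += dt_s / (3600.0 * capacity_Ah) * i_k
        trajectory.append(soc)
    return trajectory

# Example (illustrative values): a 2.5 Ah cell discharged at 1.25 A (0.5C)
# for one hour, starting from 90% state of charge
soc_traj = coulomb_counting(soc0=0.90, currents_A=[-1.25] * 3600,
                            dt_s=1.0, capacity_Ah=2.5)
print(round(soc_traj[-1], 2))   # about 0.40 after one hour

The RC-parallel and terminal-voltage equations described next can be discretized and added to the same loop in an analogous way.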
The voltage formula_7 across each RC parallel is simulated as: formula_8 where formula_9 and formula_10 are, respectively, the polarization resistance and capacitance. Finally, knowing the open-circuit voltage-state of charge relationship formula_11 and the internal resistance formula_12, the cell terminal voltage can be computed as: formula_13 Introduction to experimental identification. Experimental identification of the ECM involves the estimation of the unknown parameters, especially the capacity formula_6, the open-circuit voltage curve formula_11, and the passive components formula_12 and formula_9, formula_10. Commonly, identification is addressed in sequential steps. Capacity assessment. The cell capacity formula_6 is usually measured by fully discharging the cell at constant current. The capacity test is commonly carried out by discharging the cell completely (from the upper voltage limit formula_14 to the lower voltage limit formula_15) at the rated current of 0.5C/1C (that is, the current required, according to the manufacturer, to fully discharge it in two/one hours) and after a full charge (usually conducted via a CC-CV charging strategy). The capacity can be computed as: formula_16. Open-circuit voltage characterization. There are two main experimental techniques for characterizing the open-circuit voltage: Dynamic response characterization. The parameters that characterize the dynamic response, namely the ohmic resistance formula_12 and the parameters of the RC parallels formula_9, formula_10, are usually identified experimentally in two different ways: Applications. Some of the possible uses of the ECM include: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "i" }, { "math_id": 1, "text": "V" }, { "math_id": 2, "text": "N" }, { "math_id": 3, "text": "SoC" }, { "math_id": 4, "text": "V_{c,1}, V_{c,2} \\dots V_{c,N}" }, { "math_id": 5, "text": "SoC(t)= SoC(t_0) + \\int_{t_0}^t\\dfrac{1}{3600Q}i(t) dt" }, { "math_id": 6, "text": "Q" }, { "math_id": 7, "text": "V_{c,i}" }, { "math_id": 8, "text": "\\dfrac{dV_{c,i}}{dt}(t)=-\\dfrac{1}{R_iC_i}V_{c,i}(t) + \\dfrac{1}{C_i}i(t)" }, { "math_id": 9, "text": "R_i" }, { "math_id": 10, "text": "C_i" }, { "math_id": 11, "text": "V_{OC}(SoC)" }, { "math_id": 12, "text": "R_0" }, { "math_id": 13, "text": "V(t) = V_{OC}(SoC(t)) + R_0i(t) + \\sum_{i=1}^NV_{c,i}(t)" }, { "math_id": 14, "text": "V_{max}" }, { "math_id": 15, "text": "V_{min}" }, { "math_id": 16, "text": "Q= \\int_{t\\mid_{V(t)=V_{max}}}^{t\\mid_{{V(t)=V_{min}}}}\\dfrac{1}{3600}i(t) dt" }, { "math_id": 17, "text": "V_{OC}" }, { "math_id": 18, "text": "V_{OC}=f(SoC)" }, { "math_id": 19, "text": "V = V_{OC}(SoC) + R_0i + \\sum_{i=1}^NV_{c,i} \\; \\underset{i \\rightarrow0}{\\simeq}\\; V_{OC}(SoC)" } ]
https://en.wikipedia.org/wiki?curid=77309499
77310445
Deep backward stochastic differential equation method
Deep backward stochastic differential equation method is a numerical method that combines deep learning with backward stochastic differential equations (BSDEs). This method is particularly useful for solving high-dimensional problems in financial derivatives pricing and risk management. By leveraging the powerful function approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods in high-dimensional settings. History. Backward stochastic differential equations. BSDEs were first introduced by Pardoux and Peng in 1990 and have since become essential tools in stochastic control and financial mathematics. In the 1990s, Étienne Pardoux and Shige Peng established the existence and uniqueness theory for BSDE solutions, applying BSDEs to financial mathematics and control theory. For instance, BSDEs have been widely used in option pricing, risk measurement, and dynamic hedging. Deep learning. Deep learning is a machine learning method based on multilayer neural networks. Its core concept can be traced back to the neural computing models of the 1940s. In the 1980s, the proposal of the backpropagation algorithm made the training of multilayer neural networks possible. In 2006, the deep belief networks proposed by Geoffrey Hinton and others rekindled interest in deep learning. Since then, deep learning has made groundbreaking advancements in image processing, speech recognition, natural language processing, and other fields. Limitations of traditional numerical methods. Traditional numerical methods for solving stochastic differential equations include the Euler–Maruyama method, Milstein method, Runge–Kutta method (SDE) and methods based on different representations of iterated stochastic integrals. But as financial problems become more complex, traditional numerical methods for BSDEs (such as the Monte Carlo method, finite difference method, etc.) have shown limitations such as high computational complexity and the curse of dimensionality. Deep BSDE method. The combination of deep learning with BSDEs, known as deep BSDE, was proposed by Han, Jentzen, and E in 2018 as a solution to the high-dimensional challenges faced by traditional numerical methods. The deep BSDE approach leverages the powerful nonlinear fitting capabilities of deep learning, approximating the solution of BSDEs by constructing neural networks. The specific idea is to represent the solution of a BSDE as the output of a neural network and train the network to approximate the solution. Model. Mathematical method. Backward stochastic differential equations (BSDEs) represent a powerful mathematical tool extensively applied in fields such as stochastic control, financial mathematics, and beyond. Unlike traditional stochastic differential equations (SDEs), which are solved forward in time, BSDEs are solved backward, starting from a future time and moving backwards to the present. This unique characteristic makes BSDEs particularly suitable for problems involving terminal conditions and uncertainties. A backward stochastic differential equation (BSDE) can be formulated as: formula_0 In this equation: formula_1 is the terminal condition specified at the terminal time formula_2, formula_3 is the generator (or driver) function of the BSDE, formula_8 is a standard Brownian motion, and formula_4 denotes the pair of solution processes, where formula_5 and formula_6 are adapted to the filtration formula_7. The goal is to find adapted processes formula_9 and formula_10 that satisfy this equation. Traditional numerical methods struggle with BSDEs due to the curse of dimensionality, which makes computations in high-dimensional spaces extremely challenging. Methodology overview. Source: 1. Semilinear parabolic PDEs. 
We consider a general class of PDEs represented by formula_11 In this equation: formula_13 is the time variable, formula_14 is a formula_15-dimensional space variable, formula_16 is a known matrix-valued function with transpose formula_17, formula_18 denotes the Hessian of formula_19 with respect to formula_14, formula_20 is a known vector-valued function, formula_21 is a known nonlinear function, and the terminal condition is formula_12. 2. Stochastic process representation. Let formula_22 be a formula_15-dimensional Brownian motion and formula_23 be a formula_15-dimensional stochastic process which satisfies formula_24 3. Backward stochastic differential equation (BSDE). Then the solution of the PDE satisfies the following BSDE: formula_25 formula_26 4. Temporal discretization. Discretize the time interval formula_27 into steps formula_28: formula_29 formula_30 formula_31 where formula_32 and formula_33. 5. Neural network approximation. Use a multilayer feedforward neural network to approximate: formula_34 for formula_35, where formula_36 are parameters of the neural network approximating formula_37 at formula_38. 6. Training the neural network. Stack all sub-networks in the approximation step to form a deep neural network. Train the network using paths formula_39 and formula_40 as input data, minimizing the loss function: formula_41 where formula_42 is the approximation of formula_43. Neural network architecture. Source: Deep learning encompasses a class of machine learning techniques that have transformed numerous fields by enabling the modeling and interpretation of intricate data structures. These methods, often referred to as deep learning, are distinguished by their hierarchical architecture comprising multiple layers of interconnected nodes, or neurons. This architecture allows deep neural networks to autonomously learn abstract representations of data, making them particularly effective in tasks such as image recognition, natural language processing, and financial modeling. The core of this method lies in designing an appropriate neural network structure (such as fully connected networks or recurrent neural networks) and selecting effective optimization algorithms. The choice of deep BSDE network architecture, the number of layers, and the number of neurons per layer are crucial hyperparameters that significantly impact the performance of the deep BSDE method. The deep BSDE method constructs neural networks to approximate the solutions for formula_44 and formula_45, and utilizes stochastic gradient descent and other optimization algorithms for training. The figure illustrates the network architecture for the deep BSDE method. Note that formula_46 denotes the variable approximated directly by subnetworks, and formula_47 denotes the variable computed iteratively in the network. There are three types of connections in this network: i) formula_48 is the multilayer feedforward neural network approximating the spatial gradients at time formula_38. The weights formula_36 of this subnetwork are the parameters optimized. ii) formula_49 is the forward iteration providing the final output of the network as an approximation of formula_50, characterized by Eqs. 5 and 6. There are no parameters optimized in this type of connection. iii) formula_51 is the shortcut connecting blocks at different times, characterized by Eqs. 4 and 6. There are also no parameters optimized in this type of connection. Algorithms. Adam optimizer. This function implements the Adam algorithm for minimizing the target function formula_52. 
Function: ADAM(formula_53, formula_54, formula_55, formula_56, formula_52, formula_57) is
    formula_58 "// Initialize the first moment vector"
    formula_59 "// Initialize the second moment vector"
    formula_60 "// Initialize timestep"
    "// Step 1: Initialize parameters"
    formula_61
    "// Step 2: Optimization loop"
    while formula_62 has not converged do
        formula_63
        formula_64 "// Compute gradient of formula_65 at timestep formula_66"
        formula_67 "// Update biased first moment estimate"
        formula_68 "// Update biased second raw moment estimate"
        formula_69 "// Compute bias-corrected first moment estimate"
        formula_70 "// Compute bias-corrected second moment estimate"
        formula_71 "// Update parameters"
    return formula_62
Backpropagation algorithm. This function implements the backpropagation algorithm for training a multi-layer feedforward neural network.
Function: BackPropagation("set" formula_72) is
    "// Step 1: Random initialization"
    "// Step 2: Optimization loop"
    repeat until termination condition is met:
        for each formula_73:
            formula_74 "// Compute output"
            "// Compute gradients"
            for each output neuron formula_75:
                formula_76 "// Gradient of output neuron"
            for each hidden neuron formula_77:
                formula_78 "// Gradient of hidden neuron"
            "// Update weights"
            for each weight formula_79:
                formula_80 "// Update rule for weight"
            for each weight formula_81:
                formula_82 "// Update rule for weight"
            "// Update parameters"
            for each parameter formula_83:
                formula_84 "// Update rule for parameter"
            for each parameter formula_85:
                formula_86 "// Update rule for parameter"
    "// Step 3: Construct the trained multi-layer feedforward neural network"
    return trained neural network
Numerical solution for optimal investment portfolio. Source: This function calculates the optimal investment portfolio using the specified parameters and stochastic processes.
function OptimalInvestment(formula_87, formula_88, formula_89) is
    "// Step 1: Initialization"
    for formula_90 to maxstep do
        formula_91, formula_92 "// Parameter initialization"
        for formula_93 to formula_94 do
            formula_95 "// Update feedforward neural network unit"
            formula_96
            formula_97
        "// Step 2: Compute loss function"
        formula_98
        "// Step 3: Update parameters using ADAM optimization"
        formula_99
        formula_100
    "// Step 4: Return terminal state"
    return formula_101
Application. Deep BSDE is widely used in the fields of financial derivatives pricing, risk management, and asset allocation. It is particularly suitable for: Advantages and disadvantages. Advantages. Sources: Disadvantages. Sources: See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " Y_t = \\xi + \\int_t^T f(s, Y_s, Z_s) \\, ds - \\int_t^T Z_s \\, dW_s, \\quad t \\in [0, T] " }, { "math_id": 1, "text": " \\xi " }, { "math_id": 2, "text": " T " }, { "math_id": 3, "text": "f:[0,T]\\times\\mathbb{R}\\times\\mathbb{R}\\to\\mathbb{R}" }, { "math_id": 4, "text": "(Y_t,Z_t)_{t\\in[0,T]}" }, { "math_id": 5, "text": "(Y_t)_{t\\in[0,T]}" }, { "math_id": 6, "text": "(Z_t)_{t\\in[0,T]}" }, { "math_id": 7, "text": "(\\mathcal{F}_t)_{t\\in [0,T]}" }, { "math_id": 8, "text": " W_s " }, { "math_id": 9, "text": " Y_t " }, { "math_id": 10, "text": " Z_t " }, { "math_id": 11, "text": "\n\\frac{\\partial u}{\\partial t}(t,x) + \\frac{1}{2} \\text{Tr}\\left(\\sigma\\sigma^T(t,x)\\left(\\text{Hess}_x u(t,x)\\right)\\right) + \\nabla u(t,x) \\cdot \\mu(t,x) + f\\left(t,x,u(t,x),\\sigma^T(t,x)\\nabla u(t,x)\\right) = 0\n" }, { "math_id": 12, "text": " u(T,x) = g(x) " }, { "math_id": 13, "text": " t " }, { "math_id": 14, "text": " x " }, { "math_id": 15, "text": " d " }, { "math_id": 16, "text": " \\sigma " }, { "math_id": 17, "text": " \\sigma^T " }, { "math_id": 18, "text": " \\text{Hess}_x u " }, { "math_id": 19, "text": " u " }, { "math_id": 20, "text": " \\mu " }, { "math_id": 21, "text": " f " }, { "math_id": 22, "text": " \\{W_t\\}_{t \\geq 0} " }, { "math_id": 23, "text": " \\{X_t\\}_{t \\geq 0} " }, { "math_id": 24, "text": "\nX_t = \\xi + \\int_0^t \\mu(s, X_s) \\, ds + \\int_0^t \\sigma(s, X_s) \\, dW_s\n" }, { "math_id": 25, "text": "\nu(t, X_t) - u(0, X_0)\n" }, { "math_id": 26, "text": "= - \\int_0^t f\\left(s, X_s, u(s, X_s), \\sigma^T(s, X_s)\\nabla u(s, X_s)\\right) \\, ds + \\int_0^t \\nabla u(s, X_s) \\cdot \\sigma(s, X_s) \\, dW_s\n" }, { "math_id": 27, "text": " [0, T] " }, { "math_id": 28, "text": " 0 = t_0 < t_1 < \\cdots < t_N = T " }, { "math_id": 29, "text": "\nX_{t_{n+1}} - X_{t_n} \\approx \\mu(t_n, X_{t_n}) \\Delta t_n + \\sigma(t_n, X_{t_n}) \\Delta W_n\n" }, { "math_id": 30, "text": "\nu(t_n, X_{t_{n+1}}) - u(t_n, X_{t_n})\n" }, { "math_id": 31, "text": "\\approx - f\\left(t_n, X_{t_n}, u(t_n, X_{t_n}), \\sigma^T(t_n, X_{t_n}) \\nabla u(t_n, X_{t_n})\\right) \\Delta t_n + \\left[\\nabla u(t_n, X_{t_n}) \\sigma(t_n, X_{t_n})\\right] \\Delta W_n\n" }, { "math_id": 32, "text": " \\Delta t_n = t_{n+1} - t_n " }, { "math_id": 33, "text": " \\Delta W_n = W_{t_{n+1}} - W_n " }, { "math_id": 34, "text": "\n\\sigma^T(t_n, X_n) \\nabla u(t_n, X_n) \\approx (\\sigma^T \\nabla u)(t_n, X_n; \\theta_n)\n" }, { "math_id": 35, "text": " n = 1, \\ldots, N " }, { "math_id": 36, "text": " \\theta_n " }, { "math_id": 37, "text": " x \\mapsto \\sigma^T(t, x) \\nabla u(t, x) " }, { "math_id": 38, "text": " t = t_n " }, { "math_id": 39, "text": " \\{X_{t_n}\\}_{0 \\leq n \\leq N} " }, { "math_id": 40, "text": " \\{W_{t_n}\\}_{0 \\leq n \\leq N} " }, { "math_id": 41, "text": "\nl(\\theta) = \\mathbb{E} \\left| g(X_{t_N}) - \\hat{u}\\left(\\{X_{t_n}\\}_{0 \\leq n \\leq N}, \\{W_{t_n}\\}_{0 \\leq n \\leq N}; \\theta \\right) \\right|^2\n" }, { "math_id": 42, "text": " \\hat{u} " }, { "math_id": 43, "text": " u(t, X_t) " }, { "math_id": 44, "text": " Y " }, { "math_id": 45, "text": " Z " }, { "math_id": 46, "text": " \\nabla u(t_n, X_{t_n}) " }, { "math_id": 47, "text": " u(t_n, X_{t_n}) " }, { "math_id": 48, "text": " X_{t_n} \\rightarrow h_1^n \\rightarrow h_2^n \\rightarrow \\ldots \\rightarrow h_H^n \\rightarrow \\nabla u(t_n, X_{t_n}) " }, { "math_id": 49, "text": " (u(t_n, X_{t_n}), \\nabla u(t_n, X_{t_n}), W_{t_n+1} - W_{t_n}) \\rightarrow u(t_{n+1}, 
X_{t_{n+1}}) " }, { "math_id": 50, "text": " u(t_N, X_{t_N}) " }, { "math_id": 51, "text": " (X_{t_n}, W_{t_n+1} - W_{t_n}) \\rightarrow X_{t_{n+1}} " }, { "math_id": 52, "text": "\\mathcal{G}(\\theta)" }, { "math_id": 53, "text": "\\alpha" }, { "math_id": 54, "text": "\\beta_1" }, { "math_id": 55, "text": "\\beta_2" }, { "math_id": 56, "text": "\\epsilon" }, { "math_id": 57, "text": "\\theta_0" }, { "math_id": 58, "text": "m_0 := 0" }, { "math_id": 59, "text": "v_0 := 0" }, { "math_id": 60, "text": "t := 0" }, { "math_id": 61, "text": "\\theta_t := \\theta_0" }, { "math_id": 62, "text": "\\theta_t" }, { "math_id": 63, "text": "t := t + 1" }, { "math_id": 64, "text": "g_t := \\nabla_\\theta \\mathcal{G}_t(\\theta_{t-1})" }, { "math_id": 65, "text": "\\mathcal{G}" }, { "math_id": 66, "text": "t" }, { "math_id": 67, "text": "m_t := \\beta_1 \\cdot m_{t-1} + (1 - \\beta_1) \\cdot g_t" }, { "math_id": 68, "text": "v_t := \\beta_2 \\cdot v_{t-1} + (1 - \\beta_2) \\cdot g_t^2" }, { "math_id": 69, "text": "\\widehat{m}_t := \\frac{m_t}{(1 - \\beta_1^t)}" }, { "math_id": 70, "text": "\\widehat{v}_t := \\frac{v_t}{(1 - \\beta_2^t)}" }, { "math_id": 71, "text": "\\theta_t := \\theta_{t-1} - \\frac{\\alpha \\cdot \\widehat{m}_t}{(\\sqrt{\\widehat{v}_t} + \\epsilon)}" }, { "math_id": 72, "text": "D=\\left\\{(\\mathbf{x}_k,\\mathbf{y}_k)\\right\\}_{k=1}^{m}" }, { "math_id": 73, "text": "(\\mathbf{x}_k,\\mathbf{y}_k) \\in D" }, { "math_id": 74, "text": "\\hat{\\mathbf{y}}_k := f(\\beta_j - \\theta_j)" }, { "math_id": 75, "text": "j" }, { "math_id": 76, "text": "g_j := \\hat{y}_{j}^{k} (1 - \\hat{y}_{j}^{k}) (\\hat{y}_{j}^{k} - y_{j}^{k})" }, { "math_id": 77, "text": "h" }, { "math_id": 78, "text": "e_h := b_h (1 - b_h) \\sum_{j=1}^{\\ell} w_{hj} g_{j}" }, { "math_id": 79, "text": "w_{hj}" }, { "math_id": 80, "text": "\\Delta w_{hj} := \\eta g_j b_h" }, { "math_id": 81, "text": "v_{ih}" }, { "math_id": 82, "text": "\\Delta v_{ih} := \\eta e_h x_i" }, { "math_id": 83, "text": "\\theta_j" }, { "math_id": 84, "text": "\\Delta \\theta_j := -\\eta g_j" }, { "math_id": 85, "text": "\\gamma_{h}" }, { "math_id": 86, "text": "\\Delta \\gamma_{h} := -\\eta e_h" }, { "math_id": 87, "text": "W_{t_{i+1}} - W_{t_i}" }, { "math_id": 88, "text": "x" }, { "math_id": 89, "text": "\\theta=(X_{0}, H_{0}, \\theta_{1}, \\theta_{2}, \\dots, \\theta_{N-1})" }, { "math_id": 90, "text": "k := 0" }, { "math_id": 91, "text": "M_0^{k, m} := 0" }, { "math_id": 92, "text": "X_0^{k, m} := X_0^k" }, { "math_id": 93, "text": "i := 0" }, { "math_id": 94, "text": "N-1" }, { "math_id": 95, "text": "H_{t_i}^{k, m} := \\mathcal{NN}(M_{t_i}^{k, m}; \\theta_i^k)" }, { "math_id": 96, "text": "M_{t_{i+1}}^{k, m} := M_{t_{i}}^{k, m} + \\big((1 - \\phi)(\\mu_{t_{i}} - M_{t_{i}}^{k, m})\\big)(t_{i+1} - t_{i}) + \\sigma_{t_{i}}(W_{t_{i+1}} - W_{t_{i}})" }, { "math_id": 97, "text": "X_{t_{i+1}}^{k, m} := X_{t_{i}}^{k, m} + \\big[H_{t_{i}}^{k, m}(\\phi (M_{t_{i}}^{k, m} - \\mu_{t_{i}}) + \\mu_{t_{i}})\\big](t_{i+1} - t_{i}) + H_{t_{i}}^{k, m} (W_{t_{i+1}} - W_{t_{i}})" }, { "math_id": 98, "text": "\\mathcal{L}(t) := \\frac{1}{M} \\sum_{m=1}^M \\left| X_{t_N}^{k, m} - g(M_{t_N}^{k, m}) \\right|^2" }, { "math_id": 99, "text": "\\theta^{k+1} := \\operatorname{ADAM}(\\theta^k, \\nabla \\mathcal{L}(t))" }, { "math_id": 100, "text": "X_0^{k+1} := \\operatorname{ADAM}(X_0^k, \\nabla \\mathcal{L}(t))" }, { "math_id": 101, "text": "(M_{t_N}, X_{t_N})" } ]
https://en.wikipedia.org/wiki?curid=77310445
77310453
Single-pixel imaging
Computational imaging technique Single-pixel imaging is a computational imaging technique for producing spatially-resolved images using a single detector instead of an array of detectors (as in conventional camera sensors). A device that implements such an imaging scheme is called a "single-pixel camera". Combined with compressed sensing, the single-pixel camera can recover images from fewer measurements than the number of reconstructed pixels. Single-pixel imaging differs from raster scanning in that multiple parts of the scene are imaged at the same time, in a wide-field fashion, by using a sequence of mask patterns either in the illumination or in the detection stage. A spatial light modulator (such as a digital micromirror device) is often used for this purpose. Single-pixel cameras were developed to be simpler, smaller, and cheaper alternatives to conventional, silicon-based digital cameras, with the ability to also image a broader spectral range. Since then, the technique has been adapted and demonstrated to be suitable for numerous applications in microscopy, tomography, holography, ultrafast imaging, FLIM and remote sensing. History. The origins of single-pixel imaging can be traced back to the development of dual photography and compressed sensing in the mid 2000s. The seminal paper written by Duarte et al. in 2008 at Rice University concretised the foundations of the single-pixel imaging technique. It also presented a detailed comparison of different scanning and imaging modalities in existence at that time. These developments were also one of the earliest applications of the digital micromirror device (DMD), developed by Texas Instruments for their DLP projection technology, for structured light detection. Soon, the technique was extended to computational ghost imaging, terahertz imaging, and 3D imaging. Systems based on structured detection were often termed single-pixel cameras, whereas those based on structured illumination were often referred to as computational ghost imaging. By using pulsed lasers as the light source, single-pixel imaging was applied for time-of-flight measurements used in depth-mapping LiDAR applications. Apart from the DMD, different light modulation schemes were also experimented with, using liquid crystals and LED arrays. In the early 2010s, single-pixel imaging was exploited in fluorescence microscopy, for imaging biological samples. Coupled with the technique of time-correlated single photon counting (TCSPC), the use of single-pixel imaging for compressive fluorescence lifetime imaging microscopy (FLIM) has also been explored. Since the late 2010s, machine learning techniques, especially deep learning, have been increasingly used to optimise the illumination, detection, or reconstruction strategies of single-pixel imaging. Principles. Theory. In conventional sampling, digital data acquisition involves uniformly sampling discrete points of an analog signal at or above the Nyquist rate. For example, in a digital camera, the sampling is done with a 2-D array of formula_3 pixelated detectors on a CCD or CMOS sensor (formula_3 is usually millions in consumer digital cameras). Such a sample can be represented using the vector formula_1 with elements formula_4. A vector can be expressed as the coefficients formula_5 of an orthonormal basis expansion: formula_6 where formula_7 are the formula_8 basis vectors. Or, more compactly: formula_9 where formula_0 is the formula_10 basis matrix formed by stacking formula_7. 
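As a concrete illustration of this notation, the following brief NumPy sketch expresses a toy discretized signal in an orthonormal basis and reconstructs it from its coefficients (the use of a normalized Walsh–Hadamard matrix as the basis and the signal size of 64 are assumptions made for the example):

import numpy as np

def hadamard(n):
    # Sylvester construction of an n x n Hadamard matrix (n a power of 2)
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 64                            # number of "pixels" in the toy signal
Psi = hadamard(N) / np.sqrt(N)    # orthonormal basis: Psi.T @ Psi = identity
x = np.random.rand(N)             # a toy discretized signal
s = Psi.T @ x                     # coefficients of x in the basis
x_rec = Psi @ s                   # reconstruction from the coefficients
print(np.allclose(x, x_rec))      # True: the expansion is exact

Any orthonormal basis could be substituted here; a Walsh–Hadamard matrix is chosen because binary ±1 patterns of this kind reappear below as the test functions of the single-pixel camera.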
It is often possible to find a basis in which the coefficient vector formula_11 is "sparse" (with formula_12 non-zero coefficients) or compressible (the sorted coefficients decay as a power law). This is the principle behind compression standards like JPEG and JPEG-2000, which exploit the fact that natural images tend to be compressible in the DCT and wavelet bases. Compressed sensing aims to bypass the conventional "sample-then-compress" framework by directly acquiring a condensed representation with formula_13 linear measurements. Similar to the previous step, this can be represented mathematically as formula_14, where formula_2 is an formula_15 vector and formula_16 is the measurement matrix of rank formula_17. This so-called under-determined measurement makes the inverse problem an ill-posed problem, which in general is unsolvable. However, compressed sensing exploits the fact that, with the proper design of formula_16, the compressible signal formula_1 can be exactly or approximately recovered using computational methods. It has been shown that "incoherence" between formula_16 and formula_0 (along with the existence of sparsity in formula_0) is sufficient for such a scheme to work. Popular choices of formula_16 are random matrices or random subsets of basis vectors from the Fourier, Walsh-Hadamard or Noiselet bases. It has also been shown that the formula_18 optimisation given by formula_19 retrieves the signal formula_1 from the random measurements formula_2 better than other common methods such as least-squares minimisation. An improvement to the formula_18 optimisation algorithm, based on total-variation minimisation, is especially useful for reconstructing images directly in the pixel basis. Single-pixel camera. The single-pixel camera is an optical computer that implements the compressed sensing measurement architecture described above. It works by sequentially measuring the inner products formula_20 between the image formula_1 and the set of 2-D test functions formula_21, to compute the measurement vector formula_2. In a typical setup, it consists of two main components: a spatial light modulator (SLM) and a single-pixel detector. The light from a wide-field source is collimated and projected onto the scene, and the reflected/transmitted light is focussed onto the detector with lenses. The SLM is used to realise the test functions formula_21, often as binary pattern masks, and to introduce them either in the illumination or in the detection path. The detector integrates and converts the light signal into an output voltage, which is then digitised by an A/D converter and analysed by a computer. Rows from a randomly permuted (for incoherence) Walsh-Hadamard matrix, reshaped into square patterns, are commonly used as binary test functions in single-pixel imaging. To obtain both positive and negative values (±1 in this case), the mean light intensity can be subtracted from each measurement, since the SLM can produce only binary patterns with 0 (off) and 1 (on) conditions. An alternative is to split the positive and negative elements into two sets, measure both with the negative set inverted (i.e., -1 replaced with +1), and subtract the measurements in the end. Values between 0 and 1 can be obtained by dithering the DMD micromirrors during the detector's integration time. Examples of commonly used detectors include photomultiplier tubes, avalanche photodiodes, or hybrid photomultipliers (a sandwich of photon amplification stages).
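As a concrete sketch of the measurement scheme just described, the following Python snippet simulates a fully sampled single-pixel acquisition with randomly permuted Hadamard patterns and the differential (positive/negative mask) trick. All sizes and the test scene are arbitrary illustrative choices; a compressed acquisition would instead keep only formula_13 rows and use an formula_18 solver.

```python
import numpy as np

rng = np.random.default_rng(0)

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.kron(H, np.array([[1.0, 1.0], [1.0, -1.0]]))
    return H

side = 16
N = side * side
x = rng.random(N)                      # unknown scene, flattened into a vector

Phi = hadamard(N)[rng.permutation(N)]  # randomly permuted +/-1 test patterns

# The SLM only displays 0/1 masks, so each +/-1 pattern is split into a
# "positive" and a "negative" binary mask whose detector readings are subtracted.
pos = (Phi > 0).astype(float)
neg = (Phi < 0).astype(float)
y = pos @ x - neg @ x                  # differential measurements, equal to Phi @ x

# With all M = N patterns measured, recovery is exact because H H^T = N I.
x_rec = (Phi.T @ y) / N
print("max reconstruction error:", np.max(np.abs(x_rec - x)))
```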
A spectrometer can also be used for multispectral imaging, along with an array of detectors, one for each spectral channel. Another common addition is a time-correlated single photon counting (TCSPC) board to process the detector output, which, coupled with a pulsed laser, enables lifetime measurement and is useful in biomedical imaging. Advantages and drawbacks. The most important advantage of the single-pixel design is the reduced size, complexity, and cost of the photon detector (just a single unit). This enables the use of exotic detectors capable of multi-spectral, time-of-flight, photon counting, and other fast detection schemes. This has made single-pixel imaging suitable for various fields, ranging from microscopy to astronomy. The quantum efficiency of a photodiode is also higher than that of the pixel sensors in a typical CCD or CMOS array. Coupled with the fact that each single-pixel measurement receives about formula_22 times more photons than an average pixel sensor, this helps reduce image distortion from dark noise and read-out noise significantly. Another important advantage is the fill factor of SLMs like a DMD, which can reach around 90% (compared to that of a CCD/CMOS array, which is only around 50%). In addition, single-pixel imaging inherits the theoretical advantages that underpin the compressed sensing framework, such as its universality (the same measurement matrix formula_16 works for many sparsifying bases formula_0) and robustness (measurements have equal priority, and thus the loss of a measurement does not corrupt the entire reconstruction). The main drawback of the single-pixel imaging technique is the tradeoff between speed of acquisition and spatial resolution. Fast acquisition requires projecting fewer patterns (since each of them is measured sequentially), which leads to a lower resolution of the reconstructed image. An innovative method of "fusing" the low-resolution single-pixel image with a high spatial-resolution CCD/CMOS image (dubbed "Data Fusion") has been proposed to mitigate this problem. Deep learning methods that learn the optimal set of patterns for imaging a particular category of samples are also being developed to improve the speed and reliability of the technique. Applications. Some of the research fields that are increasingly employing and developing single-pixel imaging are listed below:
[ { "math_id": 0, "text": "\\Psi" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "y" }, { "math_id": 3, "text": "N" }, { "math_id": 4, "text": "x_i, i = 1,2,...,N" }, { "math_id": 5, "text": "\\{a_i\\}" }, { "math_id": 6, "text": "x = \\sum_{i=1}^{N}{a_i \\psi_i}" }, { "math_id": 7, "text": "\\psi_i" }, { "math_id": 8, "text": "N \\times 1" }, { "math_id": 9, "text": "x = \\Psi a" }, { "math_id": 10, "text": "N \\times N" }, { "math_id": 11, "text": "a" }, { "math_id": 12, "text": "K << N" }, { "math_id": 13, "text": "M < N" }, { "math_id": 14, "text": "y = \\Phi x = \\Phi \\Psi a" }, { "math_id": 15, "text": "M \\times 1" }, { "math_id": 16, "text": "\\Phi" }, { "math_id": 17, "text": "M" }, { "math_id": 18, "text": "\\mathcal{L}_1" }, { "math_id": 19, "text": "\\hat{a} = \\text{arg min} ||\\alpha'||_1 \\quad S.T. ||y - \\Phi \\Psi \\alpha'||_2 < \\epsilon" }, { "math_id": 20, "text": "y_m = \\langle x, \\phi_m \\rangle" }, { "math_id": 21, "text": "\\{\\phi_m\\}" }, { "math_id": 22, "text": "N/2" } ]
https://en.wikipedia.org/wiki?curid=77310453
77310456
Time-interleaved ADC
Time interleaved (TI) ADCs are Analog-to-Digital Converters (ADCs) that involve "M" converters working in parallel. Each of the "M" converters is referred to as a sub-ADC, channel or slice in the literature. The time interleaving technique, akin to multithreading in computing, involves using multiple converters in parallel to sample the input signal at staggered intervals, increasing the overall sampling rate and improving performance without overburdening the individual ADCs. History. Early concept. The concept of time interleaving can be traced back to the 1960s. One of the earliest mentions of using multiple ADCs to increase sampling rates appeared in the work of Bernard M. Oliver and Claude E. Shannon. Their pioneering work on communication theory and sampling laid the groundwork for the theoretical basis of time interleaving. However, practical implementations were limited by the technology of the time. Development. In the 1980s, significant advancements were made: W. C. Black and D. A. Hodges from the University of California, Berkeley successfully implemented the first prototype of a time interleaved ADC. In particular, they designed a 4-way interleaved converter working at 2.5 MSample/s. Each slice of the converter was a 7-stage SAR pipeline ADC running at 625 kSample/s. An ENOB equal to 6.2 was measured for the proposed converter with a probing input signal at 100 kHz. The work was presented at ISSCC 1980, and the paper focused on the practical challenges of implementing TI ADCs, including the synchronization and calibration of multiple channels to reduce mismatches. In 1987, Ken Poulton and other researchers at HP Labs developed the first product based on time interleaved ADCs: the HP 54111D digital oscilloscope. Commercialization. In the 1990s, TI ADC technology saw further advancements driven by the increasing demand for high-speed data conversion in telecommunications and other fields. A notable project during this period was the development of high-speed ADCs for digital oscilloscopes by Tektronix. Engineers at Tektronix implemented TI ADCs to achieve the high sampling rates necessary for capturing fast transient signals in test and measurement equipment. As a result of this work, the Tektronix TDS350, a two-channel, 200 MHz, 1 GSample/s digital storage scope, was commercialized in 1991. Widespread adoption. By the late 1990s, TI ADCs had become commercially viable. One of the key projects that showcased the potential of TI ADCs was the development of the GSM (Global System for Mobile Communications) standard, where high-speed ADCs were essential for digital signal processing in mobile phones. Companies like Analog Devices and Texas Instruments began to offer TI ADCs as standard products, enabling widespread adoption in various applications. Nowadays. The 21st century has seen continued innovation in TI ADC technology. Researchers and engineers have focused on further improving the performance and integration of TI ADCs to meet the growing demands of digital systems. Key figures in this era include Boris Murmann and his colleagues at Stanford University, who have contributed to the development of advanced calibration techniques and low-power design methods for TI ADCs. Future perspectives. Today, TI ADCs are used in a wide range of applications, from 5G telecommunications to high-resolution medical imaging. The future of TI ADCs looks promising, with ongoing research focusing on further improving their performance and expanding their application areas.
Emerging technologies such as autonomous vehicles, advanced radar systems, and artificial intelligence-driven signal processing will continue to drive the demand for high-speed, high-resolution ADCs. Working principle. In a time interleaved system, the conversion time required by each sub-ADC is equal to formula_0. If the outputs of the multiple channels are properly combined, the overall system can be considered as a single converter operating at a sampling period equal to formula_1, where formula_2 represents the number of channels or sub-ADCs in the TI system. To illustrate this concept, consider the conversion process of a TI ADC with reference to the first figure of this paragraph. The figure shows the time diagram of a data converter that employs four interleaved channels. The input signal formula_3 (depicted as a blue waveform) is a sinusoidal wave at frequency formula_4. Here, formula_5 is the clock frequency, which is the reciprocal of formula_6, the overall sampling period of the TI ADC. This relationship aligns with the Shannon-Nyquist sampling theorem, which states that the sampling rate must be at least twice the highest frequency present in the input signal to accurately reconstruct the signal without aliasing. In a TI ADC, every formula_6, one of the channels acquires a sample of the input signal. The conversion operation performed by each sub-ADC takes formula_0 seconds and, after the conversion, a digital multiplexer sequentially selects the output from one of the formula_2 sub-ADCs. This selection occurs in a specific order, typically from the first sub-ADC to the formula_7 sub-ADC, and then the cycle repeats. At any given moment, each channel is engaged in converting a different sample. Consequently, the aggregate data rate of the system is faster than the data rate of a single sub-ADC by a factor of formula_2. This is because the TI system essentially parallelizes the conversion process across multiple sub-ADCs. The factor formula_2, the number of interleaved channels, thus quantifies the increase in the overall sampling rate of the entire system. To conclude, the time-interleaving method effectively multiplies the conversion speed of a single sub-ADC by formula_2. As a result, even though each sub-ADC operates at a relatively slow pace, the combined output of the TI system is characterized by a higher sampling rate. Time interleaving is therefore a powerful technique in the design and implementation of data converters, since it enables the creation of high-speed ADCs using components that individually have much lower performance capabilities in terms of speed.
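The interleaving principle can be reproduced with a few lines of Python (illustrative parameters only: an ideal four-channel TI ADC and a single sine-wave input): each sub-ADC sees every formula_2-th sample, and the multiplexer re-interleaves the slow streams into one stream at the full rate.

```python
import numpy as np

M = 4                       # number of interleaved channels
fs = 1.0e9                  # overall sampling rate of the TI ADC (1 GS/s, illustrative)
Ts = 1.0 / fs
N = 1024                    # number of output samples
fin = 123.0e6               # input tone frequency, below fs/2

t = np.arange(N) * Ts
vin = np.sin(2 * np.pi * fin * t)      # samples an ideal full-rate ADC would produce

# Each sub-ADC m samples at fs/M, delayed by m*Ts with respect to channel 0.
sub_adc_outputs = [vin[m::M] for m in range(M)]

# The digital multiplexer re-interleaves the M slow streams into one fast stream.
out = np.empty(N)
for m in range(M):
    out[m::M] = sub_adc_outputs[m]

print("re-interleaved output equals a single ideal ADC at fs:", np.allclose(out, vin))
```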
Possible architectures. Two architectures are possible to implement a time interleaved ADC. The first architecture is depicted in the first figure of the paragraph and is characterized by the presence of a single Sample and Hold (S&H) circuit for the entire structure. The sampler operates at a frequency formula_8 and acquires the samples for all the channels of the TI ADC. Once a sample is acquired, an analog demultiplexer distributes it to the corresponding sub-ADC. This approach centralizes the sampling process, ensuring uniformity in the acquired samples. However, it places stringent speed requirements on the S&H circuit, since it must operate at the full sampling rate of the ADC system. In contrast, the second architecture, illustrated in the second figure of the paragraph, employs a different S&H circuit for each channel, each operating at a reduced frequency formula_9, where formula_2 is once again the number of interleaved channels. This solution significantly relaxes the speed requirements for each S&H circuit, as they only need to operate at a fraction of the overall sampling rate. This approach mitigates the challenge of high-speed operation of the first architecture. However, this benefit comes with trade-offs, namely increased area occupation and higher power dissipation due to the additional circuitry required to implement multiple S&H circuits. Advantages and trade-offs of the two architectures. The choice between these two architectures depends on the specific requirements and constraints of the application. The single S&H circuit architecture offers a compact and potentially lower-power solution, as it eliminates the redundancy of multiple S&H circuits. The centralized sampling can also reduce mismatches between channels, as all samples are derived from a single source. However, the high-speed requirement of the single S&H circuit can be a significant challenge, particularly at very high sampling rates, where achieving the necessary performance may require more advanced and costly technology. On the other hand, the multiple S&H circuit architecture distributes the sampling load, allowing each S&H circuit to operate at a lower speed. This can be advantageous in applications where high-speed circuits are difficult or expensive to implement. Additionally, this architecture can offer improved flexibility in managing timing and gain mismatches between channels. Each S&H circuit can be independently optimized for its specific operating conditions, potentially leading to better overall performance. The trade-offs include a larger footprint on the integrated circuit and increased power consumption, which may be critical factors in power-sensitive or space-constrained applications. In practical implementations, the choice between these architectures is influenced by several factors, including the required sampling rate, power budget, available silicon area, and the acceptable level of complexity in calibration and error correction. For instance, in high-speed communication systems the single S&H circuit architecture might be preferred despite its stringent speed requirements, due to its compact design and potentially lower power consumption. Conversely, in applications where power is less of a concern but achieving ultra-high speeds is challenging, the multiple S&H circuit architecture might be more suitable. Sources of errors. Ideally, all the sub-ADCs are identical. In practice, however, they end up being slightly different due to process, voltage and temperature (PVT) variations. If not taken care of, sub-ADC mismatches can jeopardize the performance of TI ADCs, since they show up in the output spectrum as spectral tones. Offset mismatches (i.e., different input-referred offsets for each sub-ADC) are superimposed on the converted signal as a sequence of period formula_10, affecting the output spectrum of the ADC with spurious tones located at frequencies formula_11, whose power depends on the magnitude of the offsets, where M represents the number of channels and k is an integer from formula_12 to formula_13.
Gain errors affect the amplitude of the converted signal and are transferred to the output as an amplitude modulation (AM) of the input signal with a sequence of period formula_10. This mechanism introduces spurious harmonics at frequencies formula_14, whose power depends both on the amplitude of the input signal and on the magnitude of the gain error sequence. Finally, skew mismatches are due to the channels being timed by different phases of the same clock signal. If one timing signal is skewed with respect to the others, spurious harmonics will be generated in the output spectrum. It can be demonstrated that these spurs are located at the frequencies formula_14. Moreover, their power depends both on the magnitude of the skew between the control phases and on the value of the input signal frequency. Channel mismatches in a TI ADC can seriously degrade its Spurious-Free Dynamic Range (SFDR) and its Signal-to-Noise-and-Distortion Ratio (SNDR). To recover the spectral purity of the converter, the proven solution consists of compensating these non-idealities with digitally implemented corrections. Despite being able to recover the overall spectral purity by suppressing the mismatch spurs, digital calibrations can significantly increase the overall power consumption of the receiver and may not be as effective when the input signal is broadband. For this reason, methods to provide higher stability and usability in real-world cases should be actively researched.
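The spectral effect of these mismatches can be reproduced in simulation. The sketch below (arbitrary illustrative offsets and gains, not taken from any real device) applies per-channel offset and gain errors to an ideal four-way interleaved converter and reports the levels of the resulting spurs at the predicted locations, formula_11 for offsets and formula_14 for gains.

```python
import numpy as np

M, N = 4, 4096
fs = 1.0                                  # normalised sampling rate
fin = 560 / N * fs                        # input tone, chosen to fall exactly on an FFT bin
n = np.arange(N)
x = np.sin(2 * np.pi * fin * n / fs)

# Small per-channel mismatches (arbitrary illustrative values)
offsets = np.array([0.000, 0.010, -0.020, 0.015])
gains   = np.array([1.000, 1.020,  0.990, 1.010])

y = np.empty(N)
for m in range(M):
    y[m::M] = gains[m] * x[m::M] + offsets[m]     # each sub-ADC adds its own error

spec = np.abs(np.fft.rfft(y)) / (N / 2)

def level_dBc(f):
    """Spur level at frequency f, in dB relative to the input tone."""
    return 20 * np.log10(spec[int(round(f * N / fs))] / spec[int(round(fin * N / fs))])

for k in range(1, M):
    f_off = min(k / M * fs, fs - k / M * fs)                              # offset spur, folded
    f_gain = min((fin + k / M * fs) % fs, fs - (fin + k / M * fs) % fs)   # gain spur, folded
    print(f"k={k}: offset spur at f={f_off:.4f} ({level_dBc(f_off):.1f} dBc), "
          f"gain spur at f={f_gain:.4f} ({level_dBc(f_gain):.1f} dBc)")
```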
Typical applications. Telecommunications. As cellular communications systems evolve, the performance of the receivers becomes more and more demanding. For example, the channel bandwidth offered by the 4G network can be as high as 20 MHz, whereas it can range from 400 MHz up to 1 GHz in the current 5G NR network. On top of that, the complexity of signal modulation also increased from 64-QAM in 4G to 256-QAM in 5G NR. The tighter requirements impose new design challenges on modern receivers, whose performance relies on the analog-to-digital interface provided by ADCs. In 4G receivers, the data conversion is performed by Delta-Sigma Modulators (DSMs), as they are easily reconfigurable: it is sufficient to modify the oversampling ratio (OSR), the loop order or the quantizer resolution to adjust the bandwidth of the data converter as needed. This is a desirable feature of an ADC in receivers supporting multiple wireless communication standards. In 5G receivers, instead, DSMs are not the preferred choice: the bandwidth of the receiver has to be higher than a few hundred MHz, whereas the signal bandwidth formula_15 of a DSM is only a fraction of half of the sampling frequency formula_16. In mathematical terms, formula_17. Thus, in practice, it is hard if not impossible to achieve the required sampling frequency with DSMs. For this reason, 5G receivers typically rely on Nyquist ADCs, in which the signal bandwidth can be as high as formula_18, according to the Shannon-Nyquist sampling theorem. The ADCs employed in 5G receivers require not only a high sampling rate to deal with large channel bandwidths, but also a reasonable number of bits. A high resolution is necessary for the data converter to enable the use of high-order modulation schemes, which are fundamental to achieving high throughputs with efficient bandwidth utilization. The resolution of a data converter is defined as the minimum voltage value that it can resolve, i.e., its Least Significant Bit (LSB). The latter parameter depends on the number of physical bits ("N") of the converter as formula_19 (where FSR is the full scale range of the ADC). Hence, the larger the number of levels, the finer the conversion will be. In practice, however, noise (e.g., jitter and thermal noise) poses a fundamental limit on the achievable resolution, which is lower than the physical number of bits and is typically expressed in terms of ENOB (Equivalent Number of Bits). Usually, for 5G receivers, ADCs with an ENOB of 12 bits and bandwidths up to the GHz range are the favored choice. Time interleaved ADCs are frequently employed for this application since they are capable of meeting the above-mentioned requirements. In fact, TI ADCs utilize multiple ADC channels operating in parallel, and this technique effectively increases the overall sampling rate, allowing the receiver to handle the wide bandwidths required by the 5G network.
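A quick numerical illustration of the resolution figures quoted above (the numbers are illustrative, not tied to a specific converter): the ideal LSB follows formula_19, while noise limits the usable resolution to the ENOB.

```python
FSR = 1.0        # full scale range in volts (illustrative)
N_bits = 14      # physical number of bits of the converter (illustrative)
ENOB = 12        # effective number of bits once noise is accounted for

LSB_ideal = FSR / 2**N_bits      # LSB = FSR / 2^N
LSB_effective = FSR / 2**ENOB    # resolution actually usable, set by the ENOB

print(f"ideal LSB:     {LSB_ideal * 1e6:.1f} uV")
print(f"effective LSB: {LSB_effective * 1e6:.1f} uV for ENOB = {ENOB}")
```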
Direct RF sampling. A receiver is one of the essential components of a communication system. In particular, a receiver is responsible for the conversion of radio signals into digital words, so that the signal can be further processed by electronic devices. Typically, a receiver includes an antenna, a pre-selector filter, a low-noise amplifier (LNA), a mixer, a local oscillator, an intermediate frequency (IF) filter, a demodulator and an analog-to-digital converter. The antenna is the first component in a receiver system; it captures electromagnetic waves from the air and converts these radio waves into electrical signals. These signals are then filtered by the pre-selector, which guarantees that only the desired frequency range from the signals captured by the antenna is passed to the next stages of the receiver. The signal is then amplified by an LNA. The amplification ensures that the signal is strong enough to be processed effectively by the subsequent stages of the system. The amplified signal is then mixed with a stable signal from the local oscillator to produce an intermediate frequency (IF) signal. This process, known as heterodyning, shifts the frequency of the received signal to a lower, more manageable IF. The IF signal undergoes further filtering to remove any remaining unwanted signals and noise. Finally, a demodulator extracts the original information signal from the modulated carrier wave. More precisely, the demodulator converts the IF signal back into the baseband signal, which contains the transmitted information. Different demodulation techniques can be used depending on the type of modulation employed (e.g., amplitude modulation (AM), frequency modulation (FM), or phase modulation (PM)). As a last step, an ADC converts the continuous analog signal into a discrete digital signal, which can be processed by digital signal processors (DSPs) or microcontrollers. This step is crucial for enabling advanced digital signal processing techniques. To further improve the power efficiency and reduce the cost of a receiver, the paradigm of Direct RF Sampling is emerging. According to this technique, the analog signal at radio frequency is simply fed to the ADC, avoiding the downconversion to an intermediate frequency altogether. Direct RF Sampling has significant advantages in terms of system design and performance. By removing the downconversion stage, the design complexity is reduced, leading to lower power consumption and cost. Additionally, the absence of the mixer and local oscillator means there are fewer components that can introduce noise and distortion, potentially improving the signal-to-noise ratio (SNR) and linearity of the receiver. However, directly sampling the radio-frequency signal imposes stringent requirements on the performance of the ADC. The signal bandwidth of the ADC in the receiver must be a few GHz to handle the high-frequency signals directly. Achieving such high values with a single ADC is challenging due to limitations in speed, power consumption and resolution. To meet these demanding requirements, time-interleaved ADC systems are typically adopted. In fact, TI ADCs utilize multiple slower sub-ADCs operating in parallel, each sampling the input signal at different time intervals. By interleaving the sampling process, the effective sampling rate of the overall system is increased, allowing it to handle the high bandwidths required for direct RF sampling.
[ { "math_id": 0, "text": "T_\\mathrm{c}" }, { "math_id": 1, "text": "T_\\mathrm{s} = {T_\\mathrm{c}}/{M}" }, { "math_id": 2, "text": "M" }, { "math_id": 3, "text": "V_\\mathrm{in}" }, { "math_id": 4, "text": "f_\\mathrm{in} = {f_\\mathrm{clk}}/{2}" }, { "math_id": 5, "text": "f_\\mathrm{clk}" }, { "math_id": 6, "text": "T_\\mathrm{s}" }, { "math_id": 7, "text": "M^{\\text{th}}" }, { "math_id": 8, "text": "f_\\mathrm{s} = {T_\\mathrm{s}}^{-1}" }, { "math_id": 9, "text": "({M\\cdot T_\\mathrm{s}})^{-1}" }, { "math_id": 10, "text": "M\\cdot T_\\mathrm{s}" }, { "math_id": 11, "text": "f = ({k}/{M})\\cdot f_\\mathrm{s}" }, { "math_id": 12, "text": "0" }, { "math_id": 13, "text": "M-1" }, { "math_id": 14, "text": "f = \\pm f_\\mathrm{in} + ({k}/{M})\\cdot f_\\mathrm{s}" }, { "math_id": 15, "text": "(f_\\mathrm{b})" }, { "math_id": 16, "text": "(f_\\mathrm{s})" }, { "math_id": 17, "text": "f_\\mathrm{b} \\le {f_\\mathrm{s}}/({2\\cdot \\mathit{OSR}})" }, { "math_id": 18, "text": "{f_\\mathrm{s}}/{2}" }, { "math_id": 19, "text": "\\mathit{LSB} = {\\mathit{FSR}}/{2^N}" } ]
https://en.wikipedia.org/wiki?curid=77310456
77310514
3D Face Morphable Model
Generative model for 3D textured faces In computer vision and computer graphics, the 3D Face Morphable Model (3DFMM) is a generative technique for modeling textured 3D faces. The generation of new faces is based on a pre-existing database of example faces acquired through a 3D scanning procedure. All these faces are in dense point-to-point correspondence, which enables the generation of a new realistic face ("morph") by combining the acquired faces. A new 3D face can be inferred from one or multiple existing images of a face or by arbitrarily combining the example faces. 3DFMM provides a way to represent face shape and texture disentangled from external factors, such as camera parameters and illumination. The 3D Morphable Model (3DMM) is a general framework that has been applied to various objects other than faces, e.g., the whole human body, specific body parts, and animals. 3DMMs were first developed to solve vision tasks by representing objects in terms of the prior knowledge that can be gathered from that object class. The prior knowledge is statistically extracted from a database of 3D examples and used as a basis to represent or generate new plausible objects of that class. Its effectiveness lies in the ability to efficiently encode this prior information, enabling the solution of otherwise ill-posed problems (such as single-view 3D object reconstruction). Historically, face models were the first example of morphable models, and 3DFMM remains a very active field of research today. In fact, 3DFMM has been successfully employed in face recognition, the entertainment industry (gaming and extended reality, virtual try-on, face replacement, face reenactment), digital forensics, and medical applications. Modeling. In general, 3D faces can be modeled by three variational components extracted from the face dataset: shape, expression, and appearance. Shape modeling. The 3DFMM uses statistical analysis to define a "statistical shape space", a vectorial space equipped with a probability distribution, or "prior." To extract the "prior" from the example dataset, all the 3D faces must be in a dense point-to-point correspondence. This means that each point has the same semantic meaning on each face (e.g., nose tip, edge of the eye). In this way, by fixing a point, we can, for example, derive the probability distribution of the texture's red channel values over all the faces. A face shape formula_0 of formula_1 vertices is defined as the vector containing the 3D coordinates of the formula_2 vertices in a specified order, that is formula_3. A shape space is regarded as a formula_4-dimensional space that generates plausible 3D faces by performing a lower-dimensional (formula_5) parametrization of the database. Thus, a shape formula_0 can be represented through a generator function formula_6 by the parameters formula_7, such that formula_8. The most common statistical technique used in 3DFMM to generate the shape space is Principal Component Analysis (PCA), which generates a basis that maximizes the variance of the data. With PCA, the generator function is linear and defined as formula_9, where formula_10 is the mean over the training data and formula_11 is the matrix that contains the formula_4 most dominant eigenvectors. Using a single generator function for the whole face leads to an imperfect representation of finer details. A solution is to use local models of the face by segmenting important parts such as the eyes, mouth, and nose.
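A minimal numerical sketch of the linear shape model formula_9 is given below; random vectors stand in for registered face scans (no real 3DFMM database is used), and the PCA is computed with a plain SVD.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 300                            # m example faces, n vertices (toy sizes)
faces = rng.normal(size=(m, 3 * n))       # each row is a flattened shape vector S

c_bar = faces.mean(axis=0)                # mean shape over the training data
A = faces - c_bar
_, s, Vt = np.linalg.svd(A, full_matrices=False)   # PCA of the centred data

d = 10                                    # dimension of the shape space (d << 3n)
E = Vt[:d].T                              # 3n x d matrix of dominant eigenvectors

def generate(w):
    """Generator function c(w) = c_bar + E w of the linear shape space."""
    return c_bar + E @ w

w = rng.normal(size=d) * s[:d] / np.sqrt(m)   # coefficients drawn with PCA variances
new_face = generate(w)                        # a new plausible (toy) 3D face shape
print(new_face.shape)                         # (3n,)
```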
Expression modeling. The modeling of the expression is performed by explicitly separating the representation of the identity from that of the facial expression. Depending on how identity and expression are combined, these methods can be classified as additive, multiplicative, and nonlinear. The additive model is a linear model in which the expression is an additive offset with respect to the identity: formula_12, where formula_13, formula_14 and formula_15, formula_16 are the basis matrices and the coefficient vectors of the shape and expression spaces, respectively. With this model, given the 3D shape of a subject in a neutral expression formula_17 and in a particular expression formula_18, we can transfer the expression to a different subject by adding the offset formula_19. Two PCAs can be performed to learn two different spaces for shape and expression. In a multiplicative model, shape and expression can be combined in different ways. For example, by exploiting formula_20 operators formula_21 that transform a neutral expression into a target blendshape, we can write formula_22, where formula_23 and formula_24 are correction vectors toward the target expression. The nonlinear model uses nonlinear transformations to represent an expression. Appearance modeling. The color information is often associated with each vertex of a 3D shape. This one-to-one correspondence allows us to represent appearance analogously to the linear shape model: formula_25, where formula_26 is the coefficient vector defined over the basis matrix formula_27. PCA can again be used to learn the appearance space.
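The additive identity-plus-expression model, and the expression-transfer idea based on the offset formula_19, can be sketched as follows; the bases are random placeholders standing in for learned matrices formula_13 and formula_14, so the example only illustrates the algebra.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_s, d_e = 300, 10, 5                  # vertices, identity dims, expression dims
c_bar = rng.normal(size=3 * n)            # mean shape
E_s = rng.normal(size=(3 * n, d_s))       # identity (shape) basis, placeholder
E_e = rng.normal(size=(3 * n, d_e))       # expression basis, placeholder

def face(w_s, w_e):
    """Additive model: c(w_s, w_e) = c_bar + E_s w_s + E_e w_e."""
    return c_bar + E_s @ w_s + E_e @ w_e

# Expression transfer: the offset between an expressive and a neutral shape of
# subject A is added to the neutral shape of subject B.
w_A, w_B = rng.normal(size=d_s), rng.normal(size=d_s)
w_smile = rng.normal(size=d_e)

delta = face(w_A, w_smile) - face(w_A, np.zeros(d_e))   # expression offset of A
B_smiling = face(w_B, np.zeros(d_e)) + delta            # transferred to B
print(np.allclose(B_smiling, face(w_B, w_smile)))       # True: transfer is exact here
```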
History. Facial recognition can be considered the field that originated the concepts that later converged into the formalization of morphable models. The eigenface approach used in face recognition represented faces in a vector space and used principal component analysis to identify the main modes of variation. However, this method had limitations: it was constrained to fixed poses and illumination and lacked an effective representation of shape differences. As a result, changes in the eigenvectors did not accurately represent shifts in facial structures but caused structures to fade in and out. To address these limitations, researchers added an eigendecomposition of 2D shape variations between faces. The original eigenface approach aligned images based on a single point, while new methods established correspondences on many points. Landmark-based face warping was introduced by Craw and Cameron (1991), and the first statistical shape model, the Active Shape Model, was proposed by Cootes et al. (1995). This model used shape alone, but the Active Appearance Model by Cootes et al. (1998) combined shape and appearance. Since these 2D methods were effective only for fixed poses and illumination, they were extended by Vetter and Poggio (1997) to handle more diverse settings. Even though separating shape and texture was effective for face representation, handling pose and illumination variations required many separate models. On the other hand, advances in 3D computer graphics showed that simulating pose and illumination variations was straightforward. The combination of graphics methods with face modeling led to the first formulation of 3DMMs by Blanz and Vetter (1999). The analysis-by-synthesis approach enabled the mapping of the 3D and 2D domains and a new representation of 3D shape and appearance. Their work was the first to introduce a statistical model for faces that enables 3D reconstruction from 2D images and a parametric face space for controlled manipulation. In the original definition of Blanz and Vetter, the shape of a face is represented as the vector formula_28 that contains the 3D coordinates of the formula_2 vertices. Similarly, the texture is represented as a vector formula_29 that contains the three RGB color channels associated with each corresponding vertex. Due to the full correspondence between exemplar 3D faces, new shapes formula_30 and textures formula_31 can be defined as a linear combination of the formula_32 example faces: formula_33 Thus, a new face shape and texture are parametrized by the shape coefficients formula_34 and the texture coefficients formula_35. To extract the statistics from the dataset, they performed PCA to generate the shape space of dimension formula_4 and used a linear model for shape and appearance modeling. In this case, a new model can be generated in the orthogonal basis using the shape and texture eigenvectors formula_36 and formula_37, respectively: formula_38 where formula_39 and formula_40 are the mean shape and texture of the dataset. Publicly available databases. In the following table, we list the publicly available databases of human faces that can be used for the 3DFMM.
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "S \\in \\mathbb{R}^{3n}" }, { "math_id": 4, "text": "d" }, { "math_id": 5, "text": "d \\ll n" }, { "math_id": 6, "text": "\\mathbf{c}: \\mathbb{R}^d \\rightarrow \\mathbb{R}^{3n}" }, { "math_id": 7, "text": "\\mathbf{w} \\in \\mathbb{R}^d" }, { "math_id": 8, "text": "\\mathbf{c}(\\mathbf{w}) = S \\in \\mathbb{R}^{3n}" }, { "math_id": 9, "text": "\\mathbf{c}(\\mathbf{w}) = \\mathbf{\\bar c} + \\mathbf{E}\\mathbf{w}" }, { "math_id": 10, "text": "\\mathbf{\\bar c}" }, { "math_id": 11, "text": "\\mathbf{E} \\in \\mathbb{R}^{3n \\times d}" }, { "math_id": 12, "text": "\\mathbf{c}(\\mathbf{w}^s, \\mathbf{w}^w) = \\mathbf{\\bar c} + \\mathbf{E}^s\\mathbf{w}^s + \\mathbf{E}^e\\mathbf{w}^e" }, { "math_id": 13, "text": "\\mathbf{E}^s" }, { "math_id": 14, "text": "\\mathbf{E}^{e}" }, { "math_id": 15, "text": "\\mathbf{w}^{s}" }, { "math_id": 16, "text": "\\mathbf{w}^e" }, { "math_id": 17, "text": "\\mathbf{c}_{ne}" }, { "math_id": 18, "text": "\\mathbf{c}^{exp}" }, { "math_id": 19, "text": "\\Delta_{\\mathbf{c}} = \\mathbf{c}^{exp} - \\mathbf{c}^{ne}" }, { "math_id": 20, "text": "d_e" }, { "math_id": 21, "text": "\\mathbf{T}_j: \\mathbb{R}^{3n} \\rightarrow \\mathbb{R}^{3n}" }, { "math_id": 22, "text": "\\mathbf{c}(\\mathbf{w}^s, \\mathbf{w}^e) = \\sum_{j=1}^{d_e}w_j^e\\mathbf{T}_j(\\mathbf{c}(\\mathbf{w}^s) + \\mathbf{\\delta}^s) + \\mathbf{\\delta}_j^e" }, { "math_id": 23, "text": "\\mathbf{\\delta}^s" }, { "math_id": 24, "text": "\\mathbf{\\delta}^s_j" }, { "math_id": 25, "text": "\\mathbf{d}(\\mathbf{w}^t) = \\mathbf{\\bar d} + \\mathbf{E}^{t}\\mathbf{w}^{t}" }, { "math_id": 26, "text": "\\mathbf{w}^t" }, { "math_id": 27, "text": "\\mathbf{E}^t" }, { "math_id": 28, "text": "S = (X_1, Y_1, Z_1, ..., X_n, Y_n, Z_n)^T \\in \\mathbb{R}^{3n}" }, { "math_id": 29, "text": "T = (R_1, G_1, B_1, ..., R_n, G_n, B_n)^T \\in \\mathbb{R}^{3n}" }, { "math_id": 30, "text": "\\mathbf{S}_{models}" }, { "math_id": 31, "text": "\\mathbf{T}_{models}" }, { "math_id": 32, "text": "m" }, { "math_id": 33, "text": "\\mathbf{S}_{model} =\\sum_{i=1}^m a_i \\mathbf{S}_i \\qquad \\mathbf{T}_{model} =\\sum_{i=1}^m b_i \\mathbf{T}_i \\qquad \\text{with} \\; \\sum_{i=1}^m a_i = \\sum_{i=1}^m b_i = 1" }, { "math_id": 34, "text": "\\mathbf{a} = (a_1, a_2,..., a_m)^T" }, { "math_id": 35, "text": "\\mathbf{b} = (b_1, b_2,..., b_m)^T" }, { "math_id": 36, "text": "s_i" }, { "math_id": 37, "text": "t_i" }, { "math_id": 38, "text": "\\mathbf{S}_{model} = \\mathbf{\\bar S} + \\sum_{i=1}^m a_i \\mathbf{s}_i \\qquad \\mathbf{T}_{model} = \\mathbf{\\bar T} + \\sum_{i=1}^m b_i \\mathbf{t}_i \\qquad " }, { "math_id": 39, "text": "\\mathbf{\\bar{S}}" }, { "math_id": 40, "text": "\\mathbf{\\bar{T}}" } ]
https://en.wikipedia.org/wiki?curid=77310514
77310528
Ceria based thermochemical cycles
A ceria based thermochemical cycle is a type of two-step thermochemical cycle that uses cerium oxides (formula_0/formula_1) as the oxygen carrier for the production of synthetic fuels such as hydrogen or syngas. These cycles can produce either hydrogen (formula_2), by splitting water molecules (formula_3), or syngas, a mixture of hydrogen (formula_2) and carbon monoxide (formula_4), by splitting carbon dioxide (formula_5) molecules alongside water molecules. These types of thermochemical cycles are mainly studied for concentrated solar applications. Types of cycles. These cycles are based on the two-step redox thermochemical cycle. In the first step, a metal oxide, such as ceria, is reduced by providing heat to the material, liberating oxygen. In the second step, a stream of steam oxidises the reduced oxide back to its starting state, thereby closing the cycle. Depending on the stoichiometry of the reactions, that is, the relation between the reactants and products of the chemical reaction, these cycles can be classified into two categories. Stoichiometric ceria cycle. The stoichiometric ceria cycle uses the cerium(IV) oxide (formula_0)/cerium(III) oxide (formula_1) metal oxide pair as oxygen carrier. This cycle is composed of two steps: a reduction step, which liberates oxygen (formula_6) from the material: formula_7 and an oxidation step, which splits the water molecules (formula_3) to produce hydrogen (formula_2), and/or the carbon dioxide molecules (formula_5) to produce carbon monoxide (formula_4), while re-oxidising the material: formula_8 formula_9 The reduction step is an endothermic reaction that takes place at temperatures around 2,300 K (2,027 °C) in order to ensure a sufficient reduction. In order to enhance the reduction of the material, low partial pressures of oxygen are required. To obtain these low partial pressures, there are two main possibilities: either vacuum pumping the reactor chamber, or using a chemically inert sweep gas, such as nitrogen (formula_10) or argon (formula_11). On the other hand, the oxidation step is an exothermic reaction that can take place over a considerable range of temperatures, from 400 °C up to 1,000 °C. In this case, depending on the fuel to be produced, a stream of steam, carbon dioxide or a mixture of both is introduced into the reaction chamber for hydrogen, carbon monoxide or syngas production, respectively. The temperature difference between the two steps presents a challenge for heat recovery, since the existing solid-to-solid heat exchangers are not highly efficient. The thermal energy required to achieve these high temperatures is provided by concentrated solar radiation. Due to the high concentration ratio required to achieve these high temperatures, the main technologies used are concentrating solar towers (CST) or parabolic dishes. The main disadvantage of the stoichiometric ceria cycle lies in the fact that the reduction reaction temperature of cerium(IV) oxide (formula_0) is in the same range as the melting temperature (1,687–2,230 °C) of cerium(III) oxide (formula_1), which results in some melting and sublimation of the material and can produce reactor failures such as deposition on the window or sintering of the particles.
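A back-of-the-envelope consequence of the two reactions above is that one mole of hydrogen is released for every two moles of ceria cycled, which bounds the fuel yield per unit mass of redox material. The short sketch below computes this upper bound (real yields are lower, since complete reduction is rarely achieved).

```python
# Maximum hydrogen yield of the stoichiometric ceria cycle:
#   reduction:  2 CeO2 -> Ce2O3 + 1/2 O2
#   oxidation:  Ce2O3 + H2O -> 2 CeO2 + H2
# i.e. 1 mol of H2 is produced per 2 mol of CeO2 cycled.

M_CeO2 = 140.12 + 2 * 16.00     # g/mol, molar mass of cerium(IV) oxide
m_ceria = 1000.0                # g of CeO2 considered (1 kg, illustrative)

n_CeO2 = m_ceria / M_CeO2
n_H2 = n_CeO2 / 2.0             # mol of H2 per full redox cycle
print(f"{n_H2:.2f} mol H2 (about {n_H2 * 22.4:.0f} L at standard conditions) per kg of CeO2 per cycle")
```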
Non-stoichiometric ceria cycle. The non-stoichiometric ceria cycle uses only cerium(IV) oxide; instead of reducing it completely to cerium(III) oxide, it performs only a partial reduction. The degree of this reduction is commonly expressed as the reduction extent and is indicated as formula_12. In this way, by partially reducing ceria, oxygen vacancies are created in the material. The two steps are formulated as follows: Reduction reaction: formula_13 Oxidation reaction: formula_14 formula_15 The main advantage of this cycle is that the reduction temperature is lower, around 1,773 K (1,500 °C), which alleviates the high-temperature demands on the materials and avoids certain problems such as sublimation or sintering. Temperatures above this would result in the full reduction of the material to cerium(III) oxide, which should be avoided. In order to reduce the thermal losses of the cycle, the temperature difference between the reduction and oxidation chambers needs to be optimized. This results in partially oxidised states, rather than a full oxidation of the ceria. Due to this, the chemical reaction is commonly expressed considering these two reduction extents: Reduction reaction: formula_16 Oxidation reaction: formula_17 formula_18 The main disadvantage of these cycles is the low reduction extent, due to the low non-stoichiometry, hence leaving fewer vacancies for the oxidation process, which in the end translates to lower fuel production rates. Due to the properties of ceria, other materials are being studied, mainly perovskites based on ceria, to improve the thermodynamic and chemical properties of the metal oxide. Methane driven non-stoichiometric ceria cycle. Since the temperatures needed to achieve the reduction of the material are considerably high, the reduction of the cerium oxide can be enhanced by providing methane to the reaction. This significantly reduces the temperatures required to achieve the reduction of ceria, to between 800 and 1,000 °C, while also producing syngas in the reduction reactor. In this case, the reduction reaction goes as follows: formula_19 The main disadvantages of this cycle are carbon deposition on the material, which eventually deactivates it after several cycles so that it needs to be replaced, and the need to supply the methane feedstock.
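In the non-stoichiometric cycle, the amount of fuel produced per mole of ceria equals the swing in reduction extent between the two steps. The sketch below evaluates this for arbitrary illustrative values of the reduction extents (real values depend strongly on temperature and oxygen partial pressure).

```python
# Fuel yield of the non-stoichiometric cycle per mole of ceria:
#   CeO_(2-delta_red) + (delta_red - delta_ox) H2O -> CeO_(2-delta_ox) + (delta_red - delta_ox) H2
delta_red = 0.04       # reduction extent reached in the reduction step (illustrative)
delta_ox = 0.005       # residual non-stoichiometry after oxidation (illustrative)

M_CeO2 = 172.12        # g/mol, approximate molar mass of CeO2
delta_swing = delta_red - delta_ox
n_H2_per_kg = delta_swing * 1000.0 / M_CeO2
print(f"delta swing = {delta_swing:.3f} -> about {n_H2_per_kg:.2f} mol H2 per kg of ceria per cycle")
```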
Types of reactors. Depending on the type and topology of the reactors, the cycles will function either in continuous production or in batch production. There are two main types of reactors for these specific cycles: Monolithic reactors. These types of reactors consist of a solid piece of material, shaped as a reticulated porous ceramic (RPC) foam in order to increase both the surface area and the solar radiation penetration. These reactors are shaped as cavity receivers, in order to reduce the thermal losses due to reradiation. They usually feature a quartz (fused silica) window to let the solar radiation into the cavity. Since the metal oxide is a solid structure, both reactions must be done in the same reactor, which leads to a discontinuous production process, carrying out one step after the other. To avoid these stops in production, multiple reactors can be arranged to approximate a continuous production process. This is usually referred to as a batch process. The intention is to always have one or multiple reactors operating in the oxidation step at the same time, hence always generating hydrogen. Some new reactor concepts are being studied, in which the RPCs can be moved from one reactor to another, in order to have one single reduction reactor. Solid particle reactors. These types of reactors try to solve the discontinuity problem of the cycle by using solid particles of the metal oxide instead of a solid structure. These particles can be moved from the reduction reactor to the oxidation reactor, which allows continuous fuel production. Many types of reactors work with solid particles, from free-falling receivers to packed beds, fluidized beds and rotary kilns. The main disadvantage of this approach is that, due to the high temperatures achieved, the solid particles are susceptible to sintering, a process in which small particles partially melt and stick to one another, creating bigger particles, which reduces their surface area and hinders their transport.
[ { "math_id": 0, "text": "CeO_2" }, { "math_id": 1, "text": "Ce_2O_3" }, { "math_id": 2, "text": "H_2" }, { "math_id": 3, "text": "H_2O" }, { "math_id": 4, "text": "CO" }, { "math_id": 5, "text": "CO_2" }, { "math_id": 6, "text": "O_2" }, { "math_id": 7, "text": "2CeO_2 \\longrightarrow Ce_2O_3+\\frac{1}{2}O_2" }, { "math_id": 8, "text": "Ce_2O_3 + H_2O \\rightarrow 2CeO_2+H_2" }, { "math_id": 9, "text": "Ce_2O_3 + CO_2 \\rightarrow 2CeO_2+CO" }, { "math_id": 10, "text": "N_2" }, { "math_id": 11, "text": "Ar" }, { "math_id": 12, "text": "\\delta" }, { "math_id": 13, "text": "CeO_2 \\rightarrow CeO_{2-\\delta} + \\frac{\\delta}{2}O_2" }, { "math_id": 14, "text": "CeO_{2-\\delta}+\\delta H_2O\\rightarrow CeO_2+\\delta H_2" }, { "math_id": 15, "text": "CeO_{2-\\delta}+\\frac{\\delta}{2}CO_2 \\rightarrow CeO_2+\\frac{\\delta}{2} CO" }, { "math_id": 16, "text": "CeO_{2-\\delta_{ox}} \\rightarrow CeO_{2-\\delta_{red}} + \\frac{\\Delta\\delta}{2}O_2" }, { "math_id": 17, "text": "CeO_{2-\\delta_{red}}+\\Delta\\delta H_2O\\rightarrow CeO_{2-\\delta_{ox}}+\\Delta\\delta H_2" }, { "math_id": 18, "text": "CeO_{2-\\delta_{red}}+\\frac{\\Delta\\delta}{2}CO_2 \\rightarrow CeO_{2-\\delta_{ox}}+\\frac{\\Delta\\delta}{2} CO" }, { "math_id": 19, "text": "CeO_2+\\delta CH_4 \\rightarrow CeO_{2-\\delta} + \\delta CO+2\\delta H_2" } ]
https://en.wikipedia.org/wiki?curid=77310528
77310532
Quantum Cascade Detector
Photodetector sensitive to infrared radiation A Quantum Cascade Detector (QCD) is a photodetector sensitive to infrared radiation. The absorption of incident light is mediated by intersubband transitions in a semiconductor multiple-quantum-well structure. The term "cascade" refers to the characteristic path of the electrons inside the material bandstructure, induced by absorption of incident light. QCDs are realized by stacking thin layers of semiconductors on a lattice-matched substrate by means of suitable epitaxial deposition processes, including molecular-beam epitaxy and metal organic vapor-phase epitaxy. The design of the quantum wells can be engineered to tune the absorption in a wide range of wavelengths in the infrared spectrum and to achieve broadband operation: QCDs have been demonstrated to operate from the short-wave to the long-wave infrared region and beyond. QCDs operate in photovoltaic mode, meaning that no bias is required to generate a photoresponse. For this reason, QCDs are also referred to as the photovoltaic counterpart of the photoconductive quantum well infrared photodetectors (QWIPs). Since the vibrational modes of organic molecules are found in the mid-infrared region of the spectrum, QCDs are investigated for sensing applications and integration in dual-comb spectroscopic systems. Moreover, QCDs have been shown to be promising for high-speed operation in free-space communication applications. History. In 2002, Daniel Hofstetter, Mattias Beck and Jérôme Faist reported the first use of an InGaAs/InAlAs quantum-cascade-laser structure for photodetection at room temperature. The specific detectivity of the device was shown to be comparable to the detectivity of more established detectors at the time, such as QWIPs or HgCdTe detectors. This pioneering work stimulated the search for bi-functional optoelectronic devices embedding both lasing and detection within the same photonic architecture. The term "quantum cascade detector" was coined in 2004, when L. Gendron and V. Berger demonstrated the first operating cascade device fully devoted to photodetection purposes, employing a GaAs/AlGaAs heterostructure. This work was motivated by the need to find an intersubband infrared photodetector alternative to QWIPs. Indeed, while manifesting high responsivity enhanced by photoconductive gain, QWIPs suffer from large dark-current noise, which is detrimental to room-temperature photodetection. In the subsequent years, researchers explored a variety of solutions leading to enhanced device performance and functionality. New material platforms have been studied, such as II-VI ZnCdSe/ZnCdMgSe semiconductor systems. These compounds are characterized by a large conduction band offset, allowing for broadband and room-temperature photodetection. Moreover, QCDs based on GaN/AlGaN and ZnO/MgZnO material platforms have also been reported, with the aim of investigating photodetection operation at the very edges of the infrared spectrum. Innovative architectures have been designed and fabricated. Diagonal-transition quantum cascade detectors have been proposed to improve the mechanism of electronic extraction from the optical well. While in conventional QCDs the transition is hosted in a single well (vertical transition), in diagonal-transition QCDs the photoexcitation takes place in two adjacent wells, in a bound-to-bound or bound-to-miniband transition scheme.
The motivation behind the realization of this architecture lies in the opportunity to improve the extraction efficiency towards the cascade, albeit at the expense of the absorption strength of the transition. Since the early 2000s, QCDs embedded in optical cavities operating in the strong light-matter interaction regime have been investigated, aiming at a further improvement of device performance. Working principle. QCDs are unipolar devices, meaning that only a single type of charge carrier, either electrons or holes, contributes to the photocurrent. The structure of a QCD consists of a periodic multiple-quantum-well heterostructure, realized by stacking very thin layers of semiconductors characterized by different energy band-gaps. In each period, the first quantum well (also called the optical well) is devoted to the resonant absorption of incident radiation. Upon absorption of a photon, an electron is excited from a lower state to an upper state. Since these states are confined within the same band, intersubband transitions occur and QCDs are also referred to as intersubband devices. The transition energy can be tuned by adjusting the thickness of the well: indeed, the energy of an electronic state confined in a quantum well can be written as formula_0 within the approximation of infinite potential barriers. It can be derived by solving the Schrödinger equation for an electron confined in a one-dimensional infinite barrier potential. In the formula, formula_1 is the reduced Planck constant, formula_2 and formula_3 represent the wavevector and the effective mass of the electron, respectively, while formula_4 is the thickness of the quantum well and formula_5 identifies the formula_5th confined state. The well thickness can be tuned in order to engineer the bandstructure of the QCD. The photoexcited electron is then transferred to a cascade of confined states called the "extraction region". The transfer mechanism between adjacent wells consists of a double-step process: quantum tunneling transfers the electron through the barrier, and scattering with longitudinal optical (LO) phonons relaxes the electron to the ground state. This mechanism is very efficient if the energy difference between adjacent confined states matches the typical LO phonon energy, a condition that is easily achievable by tuning the thickness of the wells. It also sets the cut-off frequency of the detector, being the process that determines the transit time of the electron through the cascade. Since typical time-scales for LO phonon scattering are in the formula_6 range, the QCD cut-off frequency lies in the 100 formula_7 range. When the electron reaches the bottom of the cascade, it is confined in the optical well of the next period, where it is once again photoexcited. A displacement current is then generated, and it can be easily measured by a read-out circuit. Notice that the generation of a photocurrent does not require the application of an external bias and, consistently, the energy bands are flat.
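The infinite-barrier expression formula_0 gives a quick, order-of-magnitude estimate of the detection wavelength. The sketch below evaluates the n = 1 to n = 2 transition for an arbitrary well width and a GaAs-like effective mass (illustrative values only; a real design requires a full bandstructure calculation with finite barriers).

```python
import numpy as np

hbar = 1.054571817e-34    # J s, reduced Planck constant
m_e = 9.1093837015e-31    # kg, electron rest mass
q = 1.602176634e-19       # C, elementary charge
c = 2.99792458e8          # m/s

m_eff = 0.067 * m_e       # GaAs-like effective mass (illustrative)
W = 10e-9                 # quantum-well thickness, 10 nm (illustrative)

def E_n(n):
    """Confined-state energy at k = 0 for an infinite square well of width W."""
    return (np.pi**2 * hbar**2) / (2.0 * m_eff * W**2) * n**2

dE = E_n(2) - E_n(1)                    # intersubband transition energy
wavelength = 2 * np.pi * hbar * c / dE  # photon wavelength h c / dE
print(f"E2 - E1 = {dE / q * 1e3:.0f} meV, corresponding to about {wavelength * 1e6:.1f} um")
```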
Figure of merit. The responsivity formula_8 of any quantum photodetector can be calculated using the following formula: formula_9, where the constant formula_10 is the electronic charge, formula_11 represents the radiation wavelength, formula_12 is the Planck constant, formula_13 refers to the speed of light in vacuum and formula_14 is the external quantum efficiency. This last term takes into account both the absorption efficiency formula_15, i.e. the probability of photoexciting an electron, and the photodetector gain formula_16, which measures the number of electrons contributing to the photocurrent per absorbed photon, according to formula_17. The photodetector gain depends on the working principle of the photodetector; in a QCD, it is proportional to the extraction probability formula_18: formula_19, where formula_20 is the number of active periods. The responsivity reads: formula_21. To a first approximation, in weakly-absorbing systems, the absorption efficiency formula_15 is a linear function of formula_20 and the responsivity is independent of the number of periods. In other systems an optimal trade-off between absorption efficiency and gain must be found to maximize the responsivity. At the state of the art, QCDs have been demonstrated to have a responsivity on the order of hundreds of formula_22. Another figure of merit for photodetectors is the specific detectivity formula_23, since it facilitates the comparison between devices with different area formula_24 and bandwidth formula_25. At sufficiently high temperature, where detectivity is dominated by Johnson noise, it can be calculated as formula_26, where formula_27 is the peak responsivity, formula_28 is the resistance at zero bias, formula_29 is the Boltzmann constant and formula_30 is the temperature. Enhancement of the detectivity is accomplished by high resistance, strong absorption and large extraction probability.
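The two figures of merit, the responsivity formula_9 and the Johnson-noise-limited detectivity formula_26, can be evaluated directly from their definitions; the values plugged in below are arbitrary illustrative numbers, not measurements of any reported device.

```python
import numpy as np

q = 1.602176634e-19   # C, electronic charge
h = 6.62607015e-34    # J s, Planck constant
c = 2.99792458e8      # m/s, speed of light
kB = 1.380649e-23     # J/K, Boltzmann constant

# Illustrative device parameters
wavelength = 8e-6     # detection wavelength, 8 um
eta_abs = 0.10        # absorption efficiency
p_e = 0.8             # extraction probability
N_periods = 20        # number of cascade periods

eta = eta_abs * p_e / N_periods          # external quantum efficiency of the QCD
R = eta * wavelength * q / (h * c)       # responsivity in A/W
print(f"responsivity ~ {R * 1e3:.1f} mA/W")

# Johnson-noise-limited specific detectivity D* = R * sqrt(R0 A / (4 kB T))
R0A = 0.1             # resistance-area product in ohm cm^2 (illustrative)
T = 300.0             # K
D_star = R * np.sqrt(R0A * 1e-4 / (4 * kB * T))   # R0A converted to ohm m^2
print(f"D* ~ {D_star * 100:.2e} cm Hz^0.5 / W (Jones)")
```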
Optical coupling. Like any intersubband detector, QCDs can absorb only TM-polarized light, while they are blind to vertically-incident radiation. This behavior is predicted by the intersubband transition selection rules, which show that a non-zero matrix element is obtained only for light polarized perpendicularly to the quantum well planes. Consequently, alternative approaches to couple light into the active region of QCDs have been developed, including a variety of geometrical coupling configurations, diffraction gratings and mode confinement solutions. 45°-wedge-multipass configuration. Incident light impinges vertically on a 45° polished facet of a wedge-like QCD. In this coupling configuration, the radiation contains both TM and TE polarizations. While this configuration is easily realized, 50% of the power is not coupled to the device, and the amount of absorbed light is strongly reduced. However, it is regarded as a standard configuration to characterize intersubband photodetectors. Brewster angle configuration. At the air-semiconductor interface, p-polarized light is fully transmitted if the radiation impinges at the Brewster angle formula_31, which is a function of the semiconductor refractive index formula_32, since formula_33. This is the simplest configuration, since no tilted facets are required. However, due to the high refractive index difference at the interface, only a small fraction of the total optical input power couples to the detector. Diffraction grating couplers. A metallic diffraction grating is patterned on top of the device to couple the impinging light to surface plasmon polaritons, a type of surface wave that propagates along the metal-semiconductor interface. Being TM-polarized, surface plasmon polaritons are compatible with intersubband device operation, but they typically propagate over only about 10 periods of the structure. Waveguide end-fire coupling. Planar or ridge waveguides are employed to confine the optical mode in the active region of the QCD, provided that the semiconductor heterostructure is grown on a substrate exhibiting a lower refractive index. The optical mode, indeed, is guided towards the region of highest refractive index. This is the case for InP-matched InGaAs/InAlAs heterostructures. The absorption efficiency is limited by waveguide losses, approximately on the order of 1 formula_34.
[ { "math_id": 0, "text": "E_{n}(\\bold{k})=\\frac{\\hbar^{2}(k_{x}^{2}+k_{y}^{2})}{2m^{*}_{e}}+\\frac{\\pi^{2}\\hbar^{2}}{2m^{*}_{e}W^{2}}n^{2}," }, { "math_id": 1, "text": "\\hbar" }, { "math_id": 2, "text": "\\bold{k}" }, { "math_id": 3, "text": "m_{e}^{*}" }, { "math_id": 4, "text": "W" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "ps" }, { "math_id": 7, "text": "GHz" }, { "math_id": 8, "text": "R(\\lambda)" }, { "math_id": 9, "text": "R(\\lambda)=\\frac{\\eta\\lambda e}{hc}" }, { "math_id": 10, "text": "e" }, { "math_id": 11, "text": "\\lambda" }, { "math_id": 12, "text": "h" }, { "math_id": 13, "text": "c" }, { "math_id": 14, "text": "\\eta" }, { "math_id": 15, "text": "\\eta_{abs}" }, { "math_id": 16, "text": "g" }, { "math_id": 17, "text": "\\eta=\\eta_{abs}g" }, { "math_id": 18, "text": "p_{e}" }, { "math_id": 19, "text": "g=\\frac{p_{e}}{N}" }, { "math_id": 20, "text": "N" }, { "math_id": 21, "text": "R(\\lambda)=\\eta_{abs}\\frac{p_{e}}{N}\\frac{\\lambda e}{hc}" }, { "math_id": 22, "text": "\\frac{mA}{W}" }, { "math_id": 23, "text": "D^{*}" }, { "math_id": 24, "text": "A" }, { "math_id": 25, "text": "\\Delta f" }, { "math_id": 26, "text": "D^{*}=R_{p}\\sqrt{\\frac{R_{0}A}{4K_{b}T}}" }, { "math_id": 27, "text": "R_{p}" }, { "math_id": 28, "text": "R_{0}" }, { "math_id": 29, "text": "K_{b}" }, { "math_id": 30, "text": "T" }, { "math_id": 31, "text": "\\theta_{B}" }, { "math_id": 32, "text": "n_{s}" }, { "math_id": 33, "text": "\\theta_{B}=arctan({\\frac{1}{n_{s}}})" }, { "math_id": 34, "text": "\\frac{dB}{cm}" } ]
https://en.wikipedia.org/wiki?curid=77310532