71870715
Job 32
Job 32 is the 32nd chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Elihu, which belongs to the "Verdicts" section of the book, comprising Job 32:1–. Text. The original text is written in the Hebrew language. This chapter is divided into 22 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which include the Aleppo Codex (10th century) and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls, including 4Q99 (4QJoba; 175–60 BCE) with extant verses 3–4. There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 32 is grouped into the Verdict section with the following outline: The section containing Elihu's speeches serves as a bridge between the Dialogue (chapters 3–31) and the speeches of YHWH (chapters 38–41). There is an introduction in prose form (Job 32:1–5), describing Elihu's identity and the circumstances that cause him to speak (starting in Job 32:6). The whole speech section can be formally divided into four monologues, each starting with a similar formula (Job 32:6; 34:1; 35:1; 36:1). Elihu's first monologue is preceded by an apologia (justification) for speaking (Job 32:6–22) and a transitional part which introduces Elihu's main arguments (Job 33:1–7) before the speech formally commences (Job 33:8–33). In the first three speeches Elihu cites and then disputes specific charges made by Job in the preceding dialogue: In chapters 36–37 Elihu stops refuting Job's charges, but states his conclusions and verdict: Prose introduction to Elihu (32:1–5). The section starts by stating the breakdown of the Dialogue, that Job's three friends (Eliphaz, Bildad and Zophar) cease to answer Job (verse 1), and this gives an opportunity for another person, Elihu, to come forward to speak (verse 2). Elihu is described as 'angry' (repeated four times in verses 2 (twice), 3 and 5), first at Job, because Job justified himself rather than God (verse 2), then at the three friends for not providing a (legal) "answer" to Job yet condemning Job (verse 3); while waiting for his turn to speak, Elihu is compelled by this great anger to respond to Job (verses 4–5). "So these three men ceased to answer Job, because he was righteous in his own eyes." "Then was kindled the wrath of Elihu the son of Barachel the Buzite, of the kindred of Ram: against Job was his wrath kindled, because he justified himself rather than God." Elihu's apology (32:6–22). This section records Elihu's speech in the form of an apologia, or justification, for his boldness to speak out. At first, Elihu refrains from speaking in the presence of his elders, due to his timidity (verse 6) and his initial belief that wisdom is learned over time (verse 7). However, he is now compelled to speak after realizing that the source of wisdom is not old age but God alone ("the breath of the Almighty") and that this gift can be given by God to anyone, including Elihu, who is younger than Job and the three friends (verses 8–10, 18). 
Because of the "spirit" or "wind" (presumably from God) in him, Elihu 'needs' to speak (instead of 'ought' to speak) to find relief (verse 19–20), but he will be impartial (not 'giving any preferential treatment', literally "lift up the face of a person" in verse 21) as he believes that he is accountable before God (verse 22). [Elihu said:] "But there is a spirit in man," "and the breath of the Almighty gives him understanding." References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71870715
71873757
Job 33
Job 33 is the 33rd chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Elihu, which belongs to the "Verdicts" section of the book, comprising –. Text. The original text is written in the Hebrew language. This chapter is divided into 33 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which include the Aleppo Codex (10th century) and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls, including 2Q15 (2QJob; 30 BCE–30 CE) with extant verses 28–30 and 4Q99 (4QJoba; 175–60 BCE) with extant verses 10–11, 24–30. There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 33 is grouped into the Verdict section with the following outline: The section containing Elihu's speeches serves as a bridge between the Dialogue (chapters 3–31) and the speeches of YHWH (chapters 38–41). There is an introduction in prose form (Job 32:1–5), describing Elihu's identity and the circumstances that cause him to speak (starting in Job 32:6). The whole speech section can be formally divided into four monologues, each starting with a similar formula (Job 32:6; 34:1; 35:1; 36:1). Elihu's first monologue is preceded by an apologia (justification) for speaking (Job 32:6–22) and a transitional part which introduces Elihu's main arguments (Job 33:1–7) before the speech formally commences (Job 33:8–33). In the first three speeches Elihu cites and then disputes specific charges made by Job in the preceding dialogue: The first speech in chapter 33 opens with a citation of Job's charges (Job 33:8–11), followed by a rejection of Job's argument about God's silence (Job 33:12–13): according to Elihu, God speaks in a variety of ways (Job 33:14), through dreams (Job 33:15–18), through suffering (Job 33:19–22) and through messengers (angels; Job 33:23–28), so now Elihu challenges Job to listen to him (Job 33:29–33). In chapters 36–37 Elihu stops refuting Job's charges, but states his conclusions and verdict: Preamble to Elihu's speech (33:1–7). The section starts with Elihu addressing Job by name (Job's friends never mentioned Job's name), stating that as a fellow human being he can speak to Job as an equal (verse 4), because they both are created by their maker (verse 6b; cf. Isaiah 45:9). Elihu claims to be speaking from "uprightness of heart" and "pure, sincere lips" (verse 3; cf. "the breath of the Almighty" in verse 4), so his words will lead to better understanding, as he challenges but at the same time provides reassurance to Job (verses 6–7). [Elihu said:] "The Spirit of God has made me," "and the breath of the Almighty has given me life." Elihu's first speech (33:8–33). Elihu quotes what he heard of Job's charges (verses 8–11), in which Job maintains his innocence (cited 4 times in verse 9) while feeling that God has wronged him (cited 4 times in verses 10–11; cf. Job 10:6–7; 13:23 in Job 33:10a; Job 13:24b in Job 33:10b; Job 13:27 in Job 33:11). 
Elihu focuses his response only on the words spoken among Job and his friends, not on Job's prior actions, but on how to speak rightly about God, so Elihu is basically delivering a verdict on the debate. In addressing Job's legal approach, Elihu rightly understands that Job's pursuit of litigation was primarily to get God to answer Job's questions, so Elihu gives three examples of how God does speak to people, although humans often cannot perceive the response (verse 14; cf. Job 34:23). The speech concludes with an appeal to Job (called by name, cf. Job 33:1) to speak, although the tone of the statements suggests that Job should rather "pay attention" and "listen" (verses 31–33). [Elihu said:] 23 "If there is a messenger for him," "an interpreter, one among a thousand," "to show to man what is right for him," 24 "then He is gracious to him, and says," "‘Deliver him from going down to the pit;" "I have found a ransom.’" 25 "His flesh will be fresher than a child’s;" "he will return to the days of his youth;"
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71873757
71879967
Negative log predictive density
In statistics, the negative log predictive density (NLPD) is a measure of error between a model's predictions and the associated true values; a smaller value is better. Importantly, the NLPD assesses the quality of the model's uncertainty quantification. It is used for both regression and classification. To compute it: (1) find the probabilities that the model assigns to the true labels; (2) take the negative log of the product of these probabilities (in practice the negative of the sum of the logs is computed instead, for numerical reasons). Definition. formula_0 where formula_1 is the model, formula_2 are the inputs (independent variables) and formula_3 are the observed outputs (dependent variable). Often the mean rather than the sum is used (by dividing by N), formula_4 Example. Calculating the NLPD for a simple classification example. We have a method that classifies images as dogs or cats. Importantly, it assigns probabilities to the two classes. We show it pictures of three dogs and three cats. It predicts the probabilities of the first three being dogs as 0.9, 0.4 and 0.7, and of the last three being cats as 0.8, 0.4 and 0.3. The NLPD is: formula_5. Comparison with a classifier that has better accuracy but is overconfident. We compare this to another classifier, which predicts the probabilities of the first three being dogs as 0.95, 0.98, 0.02, and of the last three being cats as 0.99, 0.96, 0.96. The NLPD for this classifier is 4.08. The first classifier only guessed half (3/6) correctly, so it did worse on a traditional measure of accuracy (compared to 5/6 for the second classifier). However, it performs better on the NLPD metric: the second classifier is effectively 'confidently wrong' on one image, which is penalised heavily by this metric. Comparison with a very under-confident classifier. A third classifier that just predicts 0.5 for every image will have an NLPD in this case of about 4.16: worse than either of the others. Usage. It is used extensively in probabilistic modelling research. Examples include: - Candela, Joaquin Quinonero, et al. "Propagation of uncertainty in Bayesian kernel models – application to multiple-step ahead forecasting." 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'03). Vol. 2. IEEE, 2003. - Kersting, Kristian, et al. "Most likely heteroscedastic Gaussian process regression." Proceedings of the 24th International Conference on Machine Learning. 2007. - See also https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/coin.12411 for a background of other approaches (confusingly, the definition in that reference uses "NLPD" for what most others refer to as the *average* NLPD, i.e. formula_6). - Heinonen, Markus, et al. "Non-stationary Gaussian process regression with Hamiltonian Monte Carlo." Artificial Intelligence and Statistics. PMLR, 2016.
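The calculation can be sketched directly from the definition. The following short Python snippet is a minimal illustration (the function name nlpd and the variable names are ours, introduced only for this sketch); it reproduces the three NLPD values from the classification example above:

import math

def nlpd(probs_of_true_labels, mean=False):
    """Negative log predictive density: minus the sum (or mean) of the
    log-probabilities assigned by the model to the true labels."""
    total = -sum(math.log(p) for p in probs_of_true_labels)
    return total / len(probs_of_true_labels) if mean else total

# Probability each classifier assigns to the correct class of the six images.
classifier_1 = [0.9, 0.4, 0.7, 0.8, 0.4, 0.3]        # reasonably calibrated
classifier_2 = [0.95, 0.98, 0.02, 0.99, 0.96, 0.96]  # more accurate but overconfident
classifier_3 = [0.5] * 6                             # very under-confident

print(round(nlpd(classifier_1), 2))  # 3.72
print(round(nlpd(classifier_2), 2))  # 4.08
print(round(nlpd(classifier_3), 2))  # 4.16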
[ { "math_id": 0, "text": "\\text{NLPD} = - \\sum_{i=1}^N \\log p(y_i = t_i | \\mathbf{x_i})" }, { "math_id": 1, "text": "p(y|\\mathbf{x})" }, { "math_id": 2, "text": "\\mathbf{x_i}" }, { "math_id": 3, "text": "t_i" }, { "math_id": 4, "text": "\\text{NLPD} = - \\frac{1}{N} \\sum_{i=1}^N \\log p(y_i = t_i | \\mathbf{x_i})" }, { "math_id": 5, "text": "-(\\log 0.9 + \\log 0.4 + \\log 0.7 + \\log 0.8 + \\log 0.4 + \\log 0.3) = 3.72" }, { "math_id": 6, "text": "\\frac{1}{N}\\text{NLPD}" } ]
https://en.wikipedia.org/wiki?curid=71879967
71882037
Job 34
Job 34 is the 34th chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Elihu, which belongs to the "Verdicts" section of the book, comprising –. Text. The original text is written in the Hebrew language. This chapter is divided into 37 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which include the Aleppo Codex (10th century) and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 34 is grouped into the Verdict section with the following outline: The section containing Elihu's speeches serves as a bridge between the Dialogue (chapters 3–31) and the speeches of YHWH (chapters 38–41). There is an introduction in prose form (Job 32:1–5), describing Elihu's identity and the circumstances that cause him to speak (starting in Job 32:6). The whole speech section can be formally divided into four monologues, each starting with a similar formula (Job 32:6; 34:1; 35:1; 36:1). Elihu's first monologue is preceded by an apologia (justification) for speaking (Job 32:6–22) and a transitional part which introduces Elihu's main arguments (Job 33:1–7) before the speech formally commences (Job 33:8–33). In the first three speeches Elihu cites and then disputes specific charges made by Job in the preceding dialogue: The second speech of Elihu in chapter 34 opens with a summons to the sages (presumably gathered around Job and his friends) to confirm his view (verses 2–4; cf. Job 34:10, 34) before citing Job's charges (Job 34:5–9), providing a correction to Job's view (34:10–33), and then again inviting the sages to consider the correction. The focus of the speech is God's justice. In chapters 36–37 Elihu stops refuting Job's charges, but states his conclusions and verdict: Elihu calls the wise men to listen to his speech (34:1–4). The section starts with Elihu calling on the sages to examine Job's intention for litigation (verse 3, quoting Job 12:11). This indicates that Elihu has listened well and now skillfully uses Job's own words back on him. [Elihu said:] "For the ear tests words," "as the mouth tastes food." Elihu's second speech (34:5–37). Elihu quotes Job's words from different parts of his speeches (verse 5a citing Job 9:21; 13:18; 27:2–6; verse 5b citing Job 27:22; also 14:3; 19:7; verse 9 citing Job 9:22–24 and 21:5–13), which claim that the innocent Job has been wrongly denied justice by God. Then Elihu comprehensively refutes Job with the strong insistence that God is fundamentally just and committed to justice. God the creator has every right to actively rule over his creation, so he can never be charged with acting unjustly, as God's sovereign power extends to life and death, and God does not need further information before acting justly (verses 24–25). Closing his speech, Elihu urges the gathered wise men to adopt his analysis of Job (verses 34–37). [Elihu said:] "Yea, surely God will not do wickedly, neither will the Almighty pervert judgment." Verse 12. 
This goes one step further than emphasizing that God cannot do what is wrong or wicked.
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71882037
71884010
Job 35
Job 35 is the 35th chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Elihu, which belongs to the "Verdicts" section of the book, comprising –. Text. The original text is written in the Hebrew language. This chapter is divided into 16 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which include the Aleppo Codex (10th century) and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls, including 4Q99 (4QJoba; 175–60 BCE) with extant verse 16. There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 35 is grouped into the Verdict section with the following outline: The section containing Elihu's speeches serves as a bridge between the Dialogue (chapters 3–31) and the speeches of YHWH (chapters 38–41). There is an introduction in prose form (Job 32:1–5), describing Elihu's identity and the circumstances that cause him to speak (starting in Job 32:6). The whole speech section can be formally divided into four monologues, each starting with a similar formula (Job 32:6; 34:1; 35:1; 36:1). Elihu's first monologue is preceded by an apologia (justification) for speaking (Job 32:6–22) and a transitional part which introduces Elihu's main arguments (Job 33:1–7) before the speech formally commences (Job 33:8–33). In the first three speeches Elihu cites and then disputes specific charges made by Job in the preceding dialogue: The third speech of Elihu in chapter 35 opens by citing other charges made by Job (Job 35:2–3) and disputing them (verses 4–13) before giving the conclusion (verses 14–16). The focus of the speech is that the transcendent God is not dependent on human beings (verses 4–8), so there is no obligation for God to answer cries for help (verses 9–13). In chapters 36–37 Elihu stops refuting Job's charges, but states his conclusions and verdict: Elihu challenges Job's motivation (35:1–3). In this section Elihu compiles various words of Job (Job 7:19–20; 9:22–31 and 21:7–13) which claim that there is no advantage (or "blessing") in choosing righteousness over sin. None of these words seem to draw any response from God, according to Job (verses 2–3). [Elihu said:] "For you said, ‘What advantage will it be to me?" "What profit will I have if I am cleansed from my sin?‘" Elihu's third speech (35:4–16). Elihu directs his speech to Job and his friends, so the words serve as corrections for all of them. The main point of Elihu is that neither human righteousness nor wickedness will affect or benefit God, so the divine silence demonstrates the sovereign transcendence of God as the Creator. A person may be crying for help but not be seeking God (verse 10a), so they may want help from God, but not God himself. However, God is also the one "who gives songs in the night", that is, God is present even in the midst of difficult times (Psalm 23:4). [Elihu said:] "Although you say you do not see Him," "yet judgment is before Him," "and you must trust in Him."
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71884010
71884234
Job 36
Job 36 is the 36th chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Elihu, which belongs to the "Verdicts" section of the book, comprising –. Text. The original text is written in the Hebrew language. This chapter is divided into 33 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which include the Aleppo Codex (10th century) and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls, including 4Q99 (4QJoba; 175–60 BCE) with extant verses 7–11, 13–27, 32–33. There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 36 is grouped into the Verdict section with the following outline: The section containing Elihu's speeches serves as a bridge between the Dialogue (chapters 3–31) and the speeches of YHWH (chapters 38–41). There is an introduction in prose form (Job 32:1–5), describing Elihu's identity and the circumstances that cause him to speak (starting in Job 32:6). The whole speech section can be formally divided into four monologues, each starting with a similar formula (Job 32:6; 34:1; 35:1; 36:1). Elihu's first monologue is preceded by an apologia (justification) for speaking (Job 32:6–22) and a transitional part which introduces Elihu's main arguments (Job 33:1–7) before the speech formally commences (Job 33:8–33). In the first three speeches Elihu cites and then disputes specific charges made by Job in the preceding dialogue: The fourth (and final) speech of Elihu comprises chapters 36–37, in which Elihu stops refuting Job's charges, but states his conclusions and verdict: Elihu asks for Job's attention (36:1–4). After speaking without interruption for a long time, Elihu likely senses that Job (and his friends) may be impatient for him to finish, so he calls for Job's attention. Elihu claims that what he is saying is right because he voices God's perfect knowledge (verse 4: Elihu affirms that God is perfect in knowledge). [Elihu said:] "For truly my words will not be false;" "He who is perfect in knowledge is with you." Elihu points to the corrective benefit of suffering (36:5–33). Elihu's last speech is more compassionate and constructive than his previous three discourses. He focuses on the consequences of suffering rather than its cause: suffering is God's discipline, by which a person can be built up and become better. In the second part of this speech, Elihu voices a hymn of praise to God as Creator (Job 36:22–25; 26–29, 30–33; 37:1–5, 6–13). His words actually prepare for the divine appearance in chapter 38. [Elihu said:] "For by these He judges the people;" "He gives food in abundance." Verse 31. Elihu draws a parallel between God's arrangement of the natural world and God's government of the human world; in both worlds, God is 'transcendent and in control'.
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71884234
71886336
Nucleon magnetic moment
The nucleon magnetic moments are the intrinsic magnetic dipole moments of the proton and neutron, symbols "μ"p and "μ"n. The nucleus of an atom comprises protons and neutrons, both nucleons that behave as small magnets. Their magnetic strengths are measured by their magnetic moments. The nucleons interact with normal matter through either the nuclear force or their magnetic moments, with the charged proton also interacting by the Coulomb force. The proton's magnetic moment was directly measured in 1933 by Otto Stern's team at the University of Hamburg. While the neutron was determined to have a magnetic moment by indirect methods in the mid-1930s, Luis Alvarez and Felix Bloch made the first accurate, direct measurement of the neutron's magnetic moment in 1940. The proton's magnetic moment is exploited to make measurements of molecules by proton nuclear magnetic resonance. The neutron's magnetic moment is exploited to probe the atomic structure of materials using scattering methods and to manipulate the properties of neutron beams in particle accelerators. The existence of the neutron's magnetic moment and the large value of the proton's magnetic moment indicate that nucleons are not elementary particles. For an elementary particle to have an intrinsic magnetic moment, it must have both spin and electric charge. The nucleons have spin "ħ"/2, but the neutron has no net charge. Their magnetic moments were puzzling and defied a valid explanation until the quark model for hadron particles was developed in the 1960s. The nucleons are composed of three quarks, and the magnetic moments of these elementary particles combine to give the nucleons their magnetic moments. Description. The CODATA recommended value for the magnetic moment of the proton is "μ"p =  "μ"N =  "μ"B. The best available measurement for the value of the magnetic moment of the neutron is "μ"n =  "μ"N. Here, "μ"N is the nuclear magneton, a standard unit for the magnetic moments of nuclear components, and "μ"B is the Bohr magneton, both being physical constants. In SI units, these values are "μ"p =  and "μ"n = . A magnetic moment is a vector quantity, and the direction of the nucleon's magnetic moment is determined by its spin. The torque on the neutron that results from an external magnetic field acts to align the neutron's spin vector opposite to the magnetic field vector. The nuclear magneton is the spin magnetic moment of a Dirac particle, a charged, spin-1/2 elementary particle with a proton's mass mp, in which anomalous corrections are ignored. The nuclear magneton is formula_0 where e is the elementary charge and ħ is the reduced Planck constant. The magnetic moment of such a particle is parallel to its spin. Since the neutron has no charge, it should have no magnetic moment by the analogous expression. The non-zero magnetic moment of the neutron thus indicates that it is not an elementary particle. The sign of the neutron's magnetic moment is that of a negatively charged particle. Similarly, the fact that the magnetic moment of the proton, "μ"p/"μ"N ≈ , is not close to 1 indicates that it too is not an elementary particle. Protons and neutrons are composed of quarks, and the magnetic moments of the quarks can be used to compute the magnetic moments of the nucleons. Although the nucleons interact with normal matter through magnetic forces, the magnetic interactions are many orders of magnitude weaker than the nuclear interactions. 
The influence of the neutron's magnetic moment is therefore only apparent for low energy, or slow, neutrons. Because the value of the magnetic moment is inversely proportional to particle mass, the nuclear magneton is about 1/2000 as large as the Bohr magneton. The magnetic moment of the electron is therefore about 1000 times larger than those of the nucleons. The magnetic moments of the antiproton and antineutron have the same magnitudes as those of their antiparticles, the proton and neutron, but with opposite sign. Measurement. Proton. The magnetic moment of the proton was discovered in 1933 by Otto Stern, Otto Robert Frisch and Immanuel Estermann at the University of Hamburg. The proton's magnetic moment was determined by measuring the deflection of a beam of molecular hydrogen by a magnetic field. Stern won the Nobel Prize in Physics in 1943 for this discovery. Neutron. The neutron was discovered in 1932, and since it had no charge, it was assumed to have no magnetic moment. Indirect evidence suggested that the neutron had a non-zero value for its magnetic moment, however, until direct measurements of the neutron's magnetic moment in 1940 resolved the issue. Values for the magnetic moment of the neutron were independently determined by R. Bacher at the University of Michigan at Ann Arbor (1933) and I. Y. Tamm and S. A. Altshuler in the Soviet Union (1934) from studies of the hyperfine structure of atomic spectra. Although Tamm and Altshuler's estimate had the correct sign and order of magnitude, the result was met with skepticism. By 1934 groups led by Stern, now at the Carnegie Institute of Technology in Pittsburgh, and I. I. Rabi at Columbia University in New York had independently measured the magnetic moments of the proton and deuteron. The measured values for these particles were only in rough agreement between the groups, but the Rabi group confirmed the earlier Stern measurements that the magnetic moment of the proton was unexpectedly large. Since a deuteron is composed of a proton and a neutron with aligned spins, the neutron's magnetic moment could be inferred by subtracting the proton magnetic moment from the deuteron magnetic moment. The resulting value was not zero and had a sign opposite to that of the proton. By the late 1930s, accurate values for the magnetic moment of the neutron had been deduced by the Rabi group using measurements employing newly developed nuclear magnetic resonance techniques. The value of the neutron's magnetic moment was first directly measured by L. Alvarez and F. Bloch at the University of California at Berkeley in 1940. Using an extension of the magnetic resonance methods developed by Rabi, Alvarez and Bloch determined the magnetic moment of the neutron to be . By directly measuring the magnetic moment of free neutrons, or individual neutrons free of the nucleus, Alvarez and Bloch resolved all doubts and ambiguities about this anomalous property of neutrons. Unexpected consequences. The large value of the proton's magnetic moment and the inferred negative value of the neutron's magnetic moment were unexpected and could not be explained. The unexpected values for the magnetic moments of the nucleons would remain a puzzle until the quark model was developed in the 1960s. The refinement and evolution of the Rabi measurements led to the discovery in 1939 that the deuteron also possessed an electric quadrupole moment. This electrical property of the deuteron had been interfering with the measurements by the Rabi group. 
The discovery meant that the physical shape of the deuteron was not symmetric, which provided valuable insight into the nature of the nuclear force binding nucleons. Rabi was awarded the Nobel Prize in 1944 for his resonance method for recording the magnetic properties of atomic nuclei. Nucleon gyromagnetic ratios. The magnetic moment of a nucleon is sometimes expressed in terms of its g-factor, a dimensionless scalar. The convention defining the g-factor for composite particles, such as the neutron or proton, is formula_1 where μ is the intrinsic magnetic moment, I is the spin angular momentum, and g is the effective g-factor. While the g-factor is dimensionless, for composite particles it is defined relative to the nuclear magneton. For the neutron, I is , so the neutron's g-factor is "g"n = , while the proton's g-factor is "g"p = . The gyromagnetic ratio, symbol γ, of a particle or system is the ratio of its magnetic moment to its spin angular momentum, or formula_2 For nucleons, the ratio is conventionally written in terms of the proton mass and charge, by the formula formula_3 The neutron's gyromagnetic ratio is "γ"n = . The proton's gyromagnetic ratio is "γ"p = . The gyromagnetic ratio is also the ratio between the observed angular frequency of Larmor precession and the strength of the magnetic field in nuclear magnetic resonance applications, such as in MRI. For this reason, the quantity "γ"/2"π", called "gamma bar" and expressed in the unit MHz/T, is often given. The quantities "γ"n/2"π" =  and "γ"p/2"π" =  are therefore convenient. Physical significance. Larmor precession. When a nucleon is put into a magnetic field produced by an external source, it is subject to a torque tending to orient its magnetic moment parallel to the field (in the case of the neutron, its spin is antiparallel to the field). As with any magnet, this torque is proportional to the product of the magnetic moment and the external magnetic field strength. Since the nucleons have spin angular momentum, this torque will cause them to precess with a well-defined frequency, called the Larmor frequency. It is this phenomenon that enables the measurement of nuclear properties through nuclear magnetic resonance. The Larmor frequency can be determined from the product of the gyromagnetic ratio with the magnetic field strength. Since for the neutron the sign of "γ"n is negative, the neutron's spin angular momentum precesses counterclockwise about the direction of the external magnetic field. Proton nuclear magnetic resonance. Nuclear magnetic resonance employing the magnetic moments of protons is used for nuclear magnetic resonance (NMR) spectroscopy. Since hydrogen-1 nuclei are within the molecules of many substances, NMR can determine the structure of those molecules. Determination of neutron spin. The interaction of the neutron's magnetic moment with an external magnetic field was exploited to determine the spin of the neutron. In 1949, D. Hughes and M. Burgy measured neutrons reflected from a ferromagnetic mirror and found that the angular distribution of the reflections was consistent with spin . In 1954, J. Sherwood, T. Stephenson, and S. Bernstein employed neutrons in a Stern–Gerlach experiment that used a magnetic field to separate the neutron spin states. They recorded the two such spin states, consistent with a spin  particle. Until these measurements, the possibility that the neutron was a spin  particle could not have been ruled out. Neutrons used to probe material properties. 
Since neutrons are neutral particles, they do not have to overcome Coulomb repulsion as they approach charged targets, unlike protons and alpha particles. Neutrons can deeply penetrate matter. The magnetic moment of the neutron has therefore been exploited to probe the properties of matter using scattering or diffraction techniques. These methods provide information that is complementary to X-ray spectroscopy. In particular, the magnetic moment of the neutron is used to determine magnetic properties of materials at length scales of 1–100 Å using cold or thermal neutrons. B. Brockhouse and C. Shull won the Nobel Prize in physics in 1994 for developing these scattering techniques. Control of neutron beams by magnetism. As neutrons carry no electric charge, neutron beams cannot be controlled by the conventional electromagnetic methods employed in particle accelerators. The magnetic moment of the neutron allows some control of neutrons using magnetic fields, however, including the formation of polarized neutron beams. One technique employs the fact that cold neutrons will reflect from some magnetic materials at great efficiency when scattered at small grazing angles. The reflection preferentially selects particular spin states, thus polarizing the neutrons. Neutron magnetic mirrors and guides use this total internal reflection phenomenon to control beams of slow neutrons. Nuclear magnetic moments. Since an atomic nucleus consists of a bound state of protons and neutrons, the magnetic moments of the nucleons contribute to the nuclear magnetic moment, or the magnetic moment for the nucleus as a whole. The nuclear magnetic moment also includes contributions from the orbital motion of the charged protons. The deuteron, consisting of a proton and a neutron, has the simplest example of a nuclear magnetic moment. The sum of the proton and neutron magnetic moments gives 0.879 "μ"N, which is within 3% of the measured value 0.857 "μ"N. In this calculation, the spins of the nucleons are aligned, but their magnetic moments offset because of the neutron's negative magnetic moment. Nature of the nucleon magnetic moments. A magnetic dipole moment can be generated by two possible mechanisms. One way is by a small loop of electric current, called an "Ampèrian" magnetic dipole. Another way is by a pair of magnetic monopoles of opposite magnetic charge, bound together in some way, called a "Gilbertian" magnetic dipole. Elementary magnetic monopoles remain hypothetical and unobserved, however. Throughout the 1930s and 1940s it was not readily apparent which of these two mechanisms caused the nucleon intrinsic magnetic moments. In 1930, Enrico Fermi showed that the magnetic moments of nuclei (including the proton) are Ampèrian. The two kinds of magnetic moments experience different forces in a magnetic field. Based on Fermi's arguments, the intrinsic magnetic moments of elementary particles, including the nucleons, have been shown to be Ampèrian. The arguments are based on basic electromagnetism, elementary quantum mechanics, and the hyperfine structure of atomic s-state energy levels. In the case of the neutron, the theoretical possibilities were resolved by laboratory measurements of the scattering of slow neutrons from ferromagnetic materials in 1951. Anomalous magnetic moments and meson physics. The anomalous values for the magnetic moments of the nucleons presented a theoretical quandary for the 30 years from the time of their discovery in the early 1930s to the development of the quark model in the 1960s. 
Considerable theoretical efforts were expended in trying to understand the origins of these magnetic moments, but the failures of these theories were glaring. Much of the theoretical focus was on developing a nuclear-force equivalent of the remarkably successful theory explaining the small anomalous magnetic moment of the electron. The problem of the origins of the magnetic moments of nucleons was recognized as early as 1935. G. C. Wick suggested that the magnetic moments could be caused by the quantum-mechanical fluctuations of these particles in accordance with Fermi's 1934 theory of beta decay. By this theory, a neutron is partly, regularly and briefly, dissociated into a proton, an electron, and a neutrino as a natural consequence of beta decay. By this idea, the magnetic moment of the neutron was caused by the fleeting existence of the large magnetic moment of the electron in the course of these quantum-mechanical fluctuations, with the value of the magnetic moment determined by the length of time the virtual electron was in existence. The theory proved to be untenable, however, when H. Bethe and R. Bacher showed that it predicted values for the magnetic moment that were either much too small or much too large, depending on speculative assumptions. Similar considerations for the electron proved to be much more successful. In quantum electrodynamics (QED), the anomalous magnetic moment of a particle stems from the small contributions of quantum mechanical fluctuations to the magnetic moment of that particle. The g-factor for a "Dirac" magnetic moment is predicted to be "g" = −2 for a negatively charged, spin-1/2 particle. For particles such as the electron, this "classical" result differs from the observed value by around 0.1%; the difference compared to the classical value is the anomalous magnetic moment. The "g"-factor for the electron is measured to be . QED is the theory of the mediation of the electromagnetic force by photons. The physical picture is that the "effective" magnetic moment of the electron results from the contributions of the "bare" electron, which is the Dirac particle, and the cloud of "virtual", short-lived electron–positron pairs and photons that surround this particle as a consequence of QED. The effects of these quantum mechanical fluctuations can be computed theoretically using Feynman diagrams with loops. The one-loop contribution to the anomalous magnetic moment of the electron, corresponding to the first-order and largest correction in QED, is found by calculating the one-loop vertex function; the calculation was first carried out by J. Schwinger in 1948. Computed to fourth order, the QED prediction for the electron's anomalous magnetic moment agrees with the experimentally measured value to more than 10 significant figures, making the magnetic moment of the electron one of the most accurately verified predictions in the history of physics. Compared to the electron, the anomalous magnetic moments of the nucleons are enormous. The g-factor for the proton is 5.6, and the chargeless neutron, which should have no magnetic moment at all, has a g-factor of −3.8. Note, however, that the anomalous magnetic moments of the nucleons, that is, their magnetic moments with the expected Dirac-particle magnetic moments subtracted, are roughly equal in size but of opposite sign: "μ"p − = +, but "μ"n − =. The Yukawa interaction for nucleons was proposed in the mid-1930s, and this nuclear force is mediated by pion mesons. 
In parallel with the theory for the electron, the hypothesis was that higher-order loops involving nucleons and pions might generate the anomalous magnetic moments of the nucleons. The physical picture was that the "effective" magnetic moment of the neutron arose from the combined contributions of the "bare" neutron, which is zero, and the cloud of "virtual" pions and photons that surround this particle as a consequence of the nuclear and electromagnetic forces. The corresponding Feynman diagram is roughly the same as the first-order diagram for the electron, with the role of the virtual particles played by pions. As noted by A. Pais, "between late 1948 and the middle of 1949 at least six papers appeared reporting on second-order calculations of nucleon moments". These theories were also, as noted by Pais, "a flop" – they gave results that grossly disagreed with observation. Nevertheless, serious efforts continued along these lines for the next couple of decades, to little success. These theoretical approaches were incorrect because the nucleons are composite particles, with their magnetic moments arising from their elementary components, quarks. Quark model of nucleon magnetic moments. In the quark model for hadrons, the neutron is composed of one up quark (charge +2/3 e) and two down quarks (charge −1/3 e), while the proton is composed of one down quark (charge −1/3 e) and two up quarks (charge +2/3 e). The magnetic moment of the nucleons can be modeled as a sum of the magnetic moments of the constituent quarks, although this simple model belies the complexities of the Standard Model of particle physics. The calculation assumes that the quarks behave like pointlike Dirac particles, each having their own magnetic moment, as computed using an expression similar to the one above for the nuclear magneton: formula_4 where the q-subscripted variables refer to quark magnetic moment, charge, or mass. Simplistically, the magnetic moment of a nucleon can be viewed as resulting from the vector sum of the three quark magnetic moments, plus the orbital magnetic moments caused by the movement of the three charged quarks within it. In one of the early successes of the Standard Model (SU(6) theory), in 1964 M. Beg, B. Lee, and A. Pais theoretically calculated the ratio of proton-to-neutron magnetic moments to be −3/2, which agrees with the experimental value to within 3%. The measured value for this ratio is −1.46. A contradiction of the quantum mechanical basis of this calculation with the Pauli exclusion principle led to the discovery of the color charge for quarks by O. Greenberg in 1964. From the nonrelativistic quantum-mechanical wave function for baryons composed of three quarks, a straightforward calculation gives fairly accurate estimates for the magnetic moments of neutrons, protons, and other baryons. For a neutron, the magnetic moment is given by μn = (4μd − μu)/3, where μd and μu are the magnetic moments of the down and up quarks respectively. This result combines the intrinsic magnetic moments of the quarks with their orbital magnetic moments and assumes that the three quarks are in a particular, dominant quantum state. The results of this calculation are encouraging, but the masses of the up and down quarks were assumed to be about one-third the mass of a nucleon. The masses of the quarks are actually only about 1% that of a nucleon. The discrepancy stems from the complexity of the Standard Model for nucleons, where most of their mass originates in the gluon fields, virtual particles, and their associated energy that are essential aspects of the strong force. 
Furthermore, the complex system of quarks and gluons that constitute a nucleon requires a relativistic treatment. Nucleon magnetic moments have been successfully computed from first principles, requiring significant computing resources.
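As a rough numerical illustration of the quark-model estimate described above, the following Python sketch combines the quark moments using the standard SU(6) combinations μp = (4μu − μd)/3 and μn = (4μd − μu)/3. The constituent quark mass of about 336 MeV/c² is an assumption introduced here purely for illustration (it is not quoted in the article), and the variable names are ours:

M_PROTON = 938.272  # proton mass in MeV/c^2
M_U = M_D = 336.0   # assumed constituent quark masses in MeV/c^2 (illustrative only)

# Dirac moment of a pointlike quark, expressed in nuclear magnetons:
# mu_q / mu_N = (e_q / e) * (m_p / m_q)
mu_u = (+2.0 / 3.0) * (M_PROTON / M_U)   # ~ +1.86 nuclear magnetons
mu_d = (-1.0 / 3.0) * (M_PROTON / M_D)   # ~ -0.93 nuclear magnetons

# SU(6) quark-model combinations for the nucleon moments
mu_p = (4.0 * mu_u - mu_d) / 3.0         # ~ +2.79 (measured ~ +2.793)
mu_n = (4.0 * mu_d - mu_u) / 3.0         # ~ -1.86 (measured ~ -1.913)

print(f"mu_p ~ {mu_p:+.2f} mu_N, mu_n ~ {mu_n:+.2f} mu_N, ratio ~ {mu_p / mu_n:.2f}")
# The ratio mu_p/mu_n comes out to exactly -3/2 whenever m_u = m_d,
# independent of the particular quark mass assumed.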
[ { "math_id": 0, "text": "\\mu_\\text{N} = \\frac{e \\hbar}{2 m_\\text{p}}," }, { "math_id": 1, "text": "\\boldsymbol{\\mu} = \\frac{g \\mu_\\text{N}}{\\hbar} \\boldsymbol{I}," }, { "math_id": 2, "text": "\\boldsymbol{\\mu} = \\gamma \\boldsymbol{I}." }, { "math_id": 3, "text": "\\gamma = \\frac{g \\mu_\\text{N}}{\\hbar} = g \\frac{e}{2m_\\text{p}}." }, { "math_id": 4, "text": "\\ \\mu_\\text{q} = \\frac{\\ e_\\text{q} \\hbar\\ }{2 m_\\text{q}}\\ ," } ]
https://en.wikipedia.org/wiki?curid=71886336
71887532
Job 37
Job 37 is the 37th chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Elihu, which belongs to the "Verdicts" section of the book, comprising –. Text. The original text is written in the Hebrew language. This chapter is divided into 24 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which include the Aleppo Codex (10th century) and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls, including 4Q99 (4QJoba; 175–60 BCE) with extant verses 1–5, 14–15. There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 37 is grouped into the Verdict section with the following outline: The section containing Elihu's speeches serves as a bridge between the Dialogue (chapters 3–31) and the speeches of YHWH (chapters 38–41). There is an introduction in prose form (Job 32:1–5), describing Elihu's identity and the circumstances that cause him to speak (starting in Job 32:6). The whole speech section can be formally divided into four monologues, each starting with a similar formula (Job 32:6; 34:1; 35:1; 36:1). Elihu's first monologue is preceded by an apologia (justification) for speaking (Job 32:6–22) and a transitional part which introduces Elihu's main arguments (Job 33:1–7) before the speech formally commences (Job 33:8–33). In the first three speeches Elihu cites and then disputes specific charges made by Job in the preceding dialogue: The fourth (and final) speech of Elihu comprises chapters 36–37, in which Elihu stops refuting Job's charges, but states his conclusions and verdict: Elihu asks for Job's attention (37:1–13). This section contains the continuation of Elihu's hymn of praise to God as Creator (Job 36:22–25; 26–29, 30–33; 37:1–5, 6–13). The storm imagery, used to vividly describe God's work in nature, anticipates God's appearance in the whirlwind (Job 38). [Elihu said:] "God thunders marvelously with His voice;" "He does great things that we cannot comprehend." Verse 5. This affirms that God is perfect in knowledge (cf. 37:16). Elihu points to the corrective benefit of suffering (37:14–24). In the last part of his last speech, Elihu calls Job to perceive God's great works (verses 14–20), closing with a more general summary of God's greatness (verses 21–24). The introduction of the coming of God (verses 21–22) anticipates the appearance of God (the image of light following the storm), and the correlation between God's power and justice (verse 23) prepares for God's speeches. [Elihu said:] "Do you know the balancings of the clouds," "the wondrous works of him who is perfect in knowledge,"
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71887532
7189385
Tacit collusion
Tacit collusion is collusion between competitors who do not explicitly exchange information but nevertheless achieve an agreement about coordination of conduct. There are two types of tacit collusion: concerted action and conscious parallelism. In a concerted action, also known as concerted activity, competitors exchange some information without reaching any explicit agreement, while conscious parallelism implies no communication at all. In both types of tacit collusion, competitors agree to play a certain strategy "without explicitly saying so". It is also called oligopolistic price coordination or tacit parallelism. A dataset of gasoline prices of BP, Caltex, Woolworths, Coles, and Gull from Perth, gathered in the years 2001 to 2015, was used to show by statistical analysis the tacit collusion between these retailers. BP emerged as a price leader and influenced the behavior of the competitors. As a result, the timing of price jumps became coordinated and the margins started to grow in 2010. Conscious parallelism. In competition law, some sources use conscious parallelism as a synonym for tacit collusion to describe pricing strategies among competitors in an oligopoly that occur without an actual agreement, or at least without any evidence of an actual agreement, between the players. As a result, one competitor will take the lead in raising or lowering prices. The others will then follow suit, raising or lowering their prices by the same amount, with the understanding that greater profits result. This practice can be harmful to consumers who, if the market power of the firm is used, can be forced to pay monopoly prices for goods that should be selling for only a little more than the cost of production. Nevertheless, it is very hard to prosecute because it may occur without any collusion between the competitors. Courts have held that no violation of the antitrust laws occurs where firms independently raise or lower prices, but that a violation can be shown when "plus factors" occur, such as firms being motivated to collude and taking actions against their own economic self-interests. This procedure of the courts is sometimes described as setting out a conspiracy theory. Price leadership. Oligopolists usually try not to engage in price cutting, excessive advertising or other forms of competition. Thus, there may be unwritten rules of collusive behavior such as price leadership. Price leadership is a form of tacit collusion, whereby firms orient themselves to the price set by a leader. A price leader will then emerge and set the general industry price, with other firms following suit. For example, see the case of British Salt Limited and New Cheshire Salt Works Limited. Classical economic theory holds that Pareto efficiency is attained at a price equal to the incremental cost of producing additional units. Monopolies are able to extract optimum revenue by offering fewer units at a higher price. An oligopoly where each firm acts independently tends toward equilibrium at the ideal, but such covert cooperation as price leadership tends toward higher profitability for all, though it is an unstable arrangement. There exist two types of price leadership. In dominant firm price leadership, the price leader is the biggest firm. In barometric firm price leadership, the most reliable firm emerges as the best barometer of market conditions, or the firm could be the one with the lowest costs of production, leading other firms to follow suit. 
Although this firm might not be dominating the industry, its prices are believed to reflect market conditions which are the most satisfactory, as the firm would most likely be a good forecaster of economic changes. Auctions. In repeated auctions, bidders might participate in tacit collusion to keep bids low. A profitable collusion is possible if the number of bidders is finite and the identity of the winner is publicly observable. It can be very difficult or even impossible for the seller to detect such collusion from the distribution of bids alone. In the case of spectrum auctions, some sources claim that tacit collusion is easily upset: "It requires that all the bidders reach an implicit agreement about who should get what. With thirty diverse bidders unable to communicate about strategy except through their bids, forming such unanimous agreement is difficult at best." Nevertheless, the Federal Communications Commission (FCC) experimented with precautions for spectrum auctions such as restricting the visibility of bids, limiting the number of bids and anonymous bidding. So-called click-box bidding, used by governmental agencies in spectrum auctions, restricts the number of valid bids and offers them as a list to a bidder to choose from. Click-box bidding was invented in 1997 by the FCC to prevent bidders from signalling bidding information by embedding it into the digits of the bids. Economic theory predicts a higher difficulty for tacit collusion due to those precautions. In general, transparency in auctions always increases the risk of tacit collusion. Algorithms. Once competitors are able to use algorithms to determine prices, tacit collusion between them poses a much greater danger. E-commerce is one of the major premises for algorithmic tacit collusion. Complex pricing algorithms are essential for the development of e-commerce. European Commissioner Margrethe Vestager mentioned an early example of algorithmic tacit collusion in her speech on "Algorithms and Collusion" on March 16, 2017, described as follows: "A few years ago, two companies were selling a textbook called The Making of a Fly. One of those sellers used an algorithm which essentially matched its rival’s price. That rival had an algorithm which always set a price 27% higher than the first. The result was that prices kept spiralling upwards, until finally someone noticed what was going on, and adjusted the price manually. By that time, the book was selling – or rather, not selling – for 23 million dollars a copy." The book "The Making of a Fly" by Peter Anthony Lawrence, written in 1992, briefly achieved a price of $23,698,655.93 on Amazon in 2011. An OECD Competition Committee Roundtable "Algorithms and Collusion" took place in June 2017 in order to address the risk of possible anti-competitive behaviour by algorithms. It is important to distinguish between simple algorithms intentionally programmed to raise prices according to the competitors' prices and more sophisticated self-learning AI algorithms with more general goals. Self-learning AI algorithms might form a tacit collusion without the knowledge of their human programmers, as a result of their task of determining optimal prices in any market situation. Duopoly example. Tacit collusion is best understood in the context of a duopoly and the concept of game theory (namely, Nash equilibrium). Take the example of two firms, A and B, which both play an advertising game over an indefinite number of periods (effectively 'infinitely many'). 
Both of the firms' payoffs are contingent upon their own action, but more importantly on the action of their competitor. They can choose to stay at the current level of advertising or choose a more aggressive advertising strategy. If either firm chooses low advertising while the other chooses high, then the low-advertising firm will suffer a great loss in market share while the other experiences a boost. If they both choose high advertising, then neither firm's market share will increase but their advertising costs will increase, thus lowering their profits. If they both choose to stay at the normal level of advertising, then sales will remain constant without the added advertising expense. Thus, both firms will experience a greater payoff if they both choose normal advertising (this set of actions is unstable, however, as both are tempted to defect to higher advertising to increase payoffs). Notice that the Nash equilibrium is for both firms to choose an aggressive advertising strategy. This is to protect themselves against lost sales. This game is an example of a prisoner's dilemma. In general, if the payoffs for colluding (normal, normal) are greater than the payoffs for cheating (aggressive, aggressive), then the two firms will want to collude (tacitly). Although this collusive arrangement is not an equilibrium in the one-shot game above, repeating the game allows the firms to sustain collusion over long time periods. This can be achieved, for example, if each firm's strategy is to undertake normal advertising so long as its rival does likewise, and to pursue aggressive advertising forever as soon as its rival has used an aggressive advertising campaign at least once (see: grim trigger) (this threat is credible since symmetric use of aggressive advertising is a Nash equilibrium of each stage of the game). Each firm must then weigh the short-term gain of $30 from 'cheating' against the long-term loss of $35 in all future periods that comes as part of its punishment. Provided that firms care enough about the future, collusion is an equilibrium of this repeated game. To be more precise, suppose that firms have a discount factor formula_0. The discounted value of the cost of cheating and being punished indefinitely is formula_1. The firms therefore prefer not to cheat (so that collusion is an equilibrium) if formula_2. See also. Accumulation by dispossession.
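The sustainability condition formula_2 can be checked with a few lines of Python. This is a minimal sketch of the grim-trigger calculation above (the function and variable names are ours; the $30 gain and $35 per-period loss are the figures from the example):

GAIN_FROM_CHEATING = 30.0            # one-off gain from deviating to aggressive advertising
LOSS_PER_PUNISHMENT_PERIOD = 35.0    # per-period loss once the rival triggers punishment

def collusion_is_stable(delta):
    """True if normal advertising is sustainable under the grim-trigger strategy,
    i.e. if 30 < 35 * delta / (1 - delta)."""
    discounted_future_loss = delta / (1.0 - delta) * LOSS_PER_PUNISHMENT_PERIOD
    return GAIN_FROM_CHEATING < discounted_future_loss

critical_delta = GAIN_FROM_CHEATING / (GAIN_FROM_CHEATING + LOSS_PER_PUNISHMENT_PERIOD)
print(critical_delta)                # 6/13, about 0.4615
print(collusion_is_stable(0.40))     # False: firms too impatient, both advertise aggressively
print(collusion_is_stable(0.60))     # True: tacit collusion is an equilibrium of the repeated game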
[ { "math_id": 0, "text": "\\delta" }, { "math_id": 1, "text": "\\sum_{t=1}^{\\infty}\\delta^t 35=\\frac{\\delta}{1-\\delta}35" }, { "math_id": 2, "text": "30<\\frac{\\delta}{1-\\delta}35\\Leftrightarrow\\delta>\\frac{6}{13}" } ]
https://en.wikipedia.org/wiki?curid=7189385
7189886
Optimal stopping
Class of mathematical problems In mathematics, the theory of optimal stopping or early stopping is concerned with the problem of choosing a time to take a particular action, in order to maximise an expected reward or minimise an expected cost. Optimal stopping problems can be found in areas of statistics, economics, and mathematical finance (related to the pricing of American options). A key example of an optimal stopping problem is the secretary problem. Optimal stopping problems can often be written in the form of a Bellman equation, and are therefore often solved using dynamic programming. Definition. Discrete time case. Stopping rule problems are associated with two objects: a sequence of random variables formula_0, whose joint distribution is assumed to be known, and a sequence of 'reward' functions formula_1 which depend on the observed values of the random variables: formula_2. Given those objects, the problem is as follows: you observe the sequence of random variables, and at each step formula_3 you can choose either to stop observing or to continue; if you stop observing at step formula_3, you will receive the reward formula_4; you wish to choose a stopping rule that maximises your expected reward (or, equivalently, minimises your expected loss). Continuous time case. Consider a gain process formula_5 defined on a filtered probability space formula_6 and assume that formula_7 is adapted to the filtration. The optimal stopping problem is to find the stopping time formula_8 which maximizes the expected gain formula_9 where formula_10 is called the value function. Here formula_11 can take value formula_12. A more specific formulation is as follows. We consider an adapted strong Markov process formula_13 defined on a filtered probability space formula_14 where formula_15 denotes the probability measure where the stochastic process starts at formula_16. Given continuous functions formula_17, and formula_18, the optimal stopping problem is formula_19 This is sometimes called the MLS (which stands for Mayer, Lagrange, and supremum, respectively) formulation. Solution methods. There are generally two approaches to solving optimal stopping problems. When the underlying process (or the gain process) is described by its unconditional finite-dimensional distributions, the appropriate solution technique is the martingale approach, so called because it uses martingale theory, the most important concept being the Snell envelope. In the discrete time case, if the planning horizon formula_11 is finite, the problem can also be easily solved by dynamic programming. When the underlying process is determined by a family of (conditional) transition functions leading to a Markov family of transition probabilities, powerful analytical tools provided by the theory of Markov processes can often be utilized and this approach is referred to as the Markov method. The solution is usually obtained by solving the associated free-boundary problems (Stefan problems). A jump diffusion result. Let formula_20 be a Lévy diffusion in formula_21 given by the SDE formula_22 where formula_23 is an formula_24-dimensional Brownian motion, formula_25 is an formula_26-dimensional compensated Poisson random measure, formula_27, formula_28, and formula_29 are given functions such that a unique solution formula_30 exists. Let formula_31 be an open set (the solvency region) and formula_32 be the bankruptcy time. The optimal stopping problem is: formula_33 It turns out that under some regularity conditions, the following verification theorem holds: if a function formula_34 satisfies formula_35, where formula_36 is the continuation region, together with formula_37 on formula_38 and formula_39 on formula_40, where formula_41 is the characteristic operator (infinitesimal generator) of formula_20, then formula_42 for all formula_43. Moreover, if formula_44 on formula_45, then formula_46 for all formula_43 and formula_47 is an optimal stopping time. These conditions can also be written in a more compact form (the integro-variational inequality): formula_48 on formula_49 Examples. Coin tossing. You have a fair coin and are repeatedly tossing it. Each time, before it is tossed, you can choose to stop tossing it and get paid (in dollars, say) the average number of heads observed.
You wish to maximise the amount you get paid by choosing a stopping rule. If "X""i" (for "i" ≥ 1) forms a sequence of independent, identically distributed random variables with Bernoulli distribution formula_51 and if formula_52 then the sequences formula_53, and formula_54 are the objects associated with this problem. House selling. You have a house and wish to sell it. Each day you are offered formula_55 for your house, and pay formula_56 to continue advertising it. If you sell your house on day formula_57, you will earn formula_58, where formula_59. You wish to maximise the amount you earn by choosing a stopping rule. In this example, the sequence (formula_60) is the sequence of offers for your house, and the sequence of reward functions is how much you will earn. Secretary problem. You are observing a sequence of objects which can be ranked from best to worst. You wish to choose a stopping rule which maximises your chance of picking the best object. Here, if formula_62 ("n" is some large number) are the ranks of the objects, and formula_4 is the chance you pick the best object if you stop intentionally rejecting objects at step i, then formula_63 and formula_64 are the sequences associated with this problem. This problem was solved in the early 1960s by several people. An elegant solution to the secretary problem and several modifications of this problem is provided by the more recent odds algorithm of optimal stopping (Bruss algorithm). Search theory. Economists have studied a number of optimal stopping problems similar to the 'secretary problem', and typically call this type of analysis 'search theory'. Search theory has especially focused on a worker's search for a high-wage job, or a consumer's search for a low-priced good. Parking problem. A special example of an application of search theory is the task of optimal selection of a parking space by a driver going to the opera (theater, shopping, etc.). Approaching the destination, the driver goes down the street along which there are parking spaces – usually, only some places in the parking lot are free. The goal is clearly visible, so the distance from the target is easily assessed. The driver's task is to choose a free parking space as close to the destination as possible without turning around, so that the distance from this place to the destination is the shortest. Option trading. In the trading of options on financial markets, the holder of an American option is allowed to exercise the right to buy (or sell) the underlying asset at a predetermined price at any time before or at the expiry date. Therefore, the valuation of American options is essentially an optimal stopping problem. Consider a classical Black–Scholes set-up and let formula_65 be the risk-free interest rate and formula_66 and formula_67 be the dividend rate and volatility of the stock. The stock price formula_68 follows geometric Brownian motion formula_69 under the risk-neutral measure. When the option is perpetual, the optimal stopping problem is formula_70 where the payoff function is formula_71 for a call option and formula_72 for a put option. The variational inequality is formula_73 for all formula_74 where formula_75 is the exercise boundary. The solution is known to be formula_76 where formula_77 and formula_78 for a call option, and formula_79 where formula_80 and formula_81 for a put option. On the other hand, when the expiry date is finite, the problem is associated with a 2-dimensional free-boundary problem with no known closed-form solution. Various numerical methods can, however, be used.
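The closed-form perpetual-option solution quoted above can be evaluated directly. A minimal Python sketch follows (the parameter values in the example call are arbitrary illustrations, not taken from the text):

import math

def perpetual_american(x, K, r, delta, sigma, kind="put"):
    """Value of a perpetual American option under the Black-Scholes dynamics above."""
    nu = (r - delta) / sigma - sigma / 2
    if kind == "call":
        gamma = (math.sqrt(nu**2 + 2*r) - nu) / sigma
        b = gamma * K / (gamma - 1)      # exercise boundary: exercise when x >= b
        # (the call formula needs gamma > 1, i.e. a positive dividend rate)
        return x - K if x >= b else (b - K) * (x / b) ** gamma
    else:
        gamma_t = -(math.sqrt(nu**2 + 2*r) + nu) / sigma
        c = gamma_t * K / (gamma_t - 1)  # exercise boundary: exercise when x <= c
        return K - x if x <= c else (K - c) * (x / c) ** gamma_t

# Example: perpetual put on a stock at 90 with strike 100, r = 5%, no dividends, sigma = 20%.
print(perpetual_american(x=90.0, K=100.0, r=0.05, delta=0.0, sigma=0.2))  # about 16.0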
See Black–Scholes model#American options for various valuation methods here, as well as Fugit for a discrete, tree based, calculation of the optimal time to exercise. References. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
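As a complement to the secretary problem discussed above, a small Monte Carlo sketch (illustrative, not from the article) checks the classical result that rejecting roughly the first n/e candidates and then taking the first one better than all of them picks the overall best with probability close to 1/e:

import math
import random

def best_is_chosen(n, rng):
    ranks = list(range(n))                      # 0 denotes the best candidate
    rng.shuffle(ranks)
    cutoff = int(n / math.e)
    best_seen = min(ranks[:cutoff], default=n)  # best rank among the rejected prefix
    for r in ranks[cutoff:]:
        if r < best_seen:                       # first candidate beating the prefix
            return r == 0                       # success only if it is the overall best
    return False                                # the overall best was in the prefix

rng = random.Random(0)
n, trials = 100, 20_000
wins = sum(best_is_chosen(n, rng) for _ in range(trials))
print(f"empirical success rate: {wins / trials:.3f}  (1/e = {1/math.e:.3f})")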
[ { "math_id": 0, "text": "X_1, X_2, \\ldots" }, { "math_id": 1, "text": "(y_i)_{i\\ge 1}" }, { "math_id": 2, "text": "y_i=y_i (x_1, \\ldots ,x_i)" }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "y_i" }, { "math_id": 5, "text": "G=(G_t)_{t\\ge 0}" }, { "math_id": 6, "text": "(\\Omega,\\mathcal{F},(\\mathcal{F}_t)_{t\\ge 0},\\mathbb{P})" }, { "math_id": 7, "text": "G" }, { "math_id": 8, "text": "\\tau^*" }, { "math_id": 9, "text": " V_t^T = \\mathbb{E} G_{\\tau^*} = \\sup_{t\\le \\tau \\le T} \\mathbb{E} G_\\tau " }, { "math_id": 10, "text": "V_t^T" }, { "math_id": 11, "text": "T" }, { "math_id": 12, "text": "\\infty" }, { "math_id": 13, "text": "X = (X_t)_{t\\ge 0}" }, { "math_id": 14, "text": "(\\Omega,\\mathcal{F},(\\mathcal{F}_t)_{t\\ge 0},\\mathbb{P}_x)" }, { "math_id": 15, "text": "\\mathbb{P}_x" }, { "math_id": 16, "text": "x" }, { "math_id": 17, "text": "M,L" }, { "math_id": 18, "text": "K" }, { "math_id": 19, "text": " V(x) = \\sup_{0\\le \\tau \\le T} \\mathbb{E}_x \\left( M(X_\\tau) + \\int_0^\\tau L(X_t) dt + \\sup_{0\\le t\\le\\tau} K(X_t) \\right). " }, { "math_id": 20, "text": "Y_t" }, { "math_id": 21, "text": "\\mathbb{R}^k" }, { "math_id": 22, "text": " dY_t = b(Y_t) dt + \\sigma (Y_t) dB_t + \\int_{\\mathbb{R}^k} \\gamma (Y_{t-},z)\\bar{N}(dt,dz),\\quad Y_0 = y " }, { "math_id": 23, "text": " B " }, { "math_id": 24, "text": " m" }, { "math_id": 25, "text": " \\bar{N} " }, { "math_id": 26, "text": " l " }, { "math_id": 27, "text": " b:\\mathbb{R}^k \\to \\mathbb{R}^k " }, { "math_id": 28, "text": " \\sigma:\\mathbb{R}^k \\to \\mathbb{R}^{k\\times m} " }, { "math_id": 29, "text": " \\gamma:\\mathbb{R}^k \\times \\mathbb{R}^k \\to \\mathbb{R}^{k\\times l} " }, { "math_id": 30, "text": " (Y_t) " }, { "math_id": 31, "text": " \\mathcal{S}\\subset \\mathbb{R}^k " }, { "math_id": 32, "text": " \\tau_\\mathcal{S} = \\inf\\{ t>0: Y_t \\notin \\mathcal{S} \\} " }, { "math_id": 33, "text": "V(y) = \\sup_{\\tau \\le \\tau_\\mathcal{S}} J^\\tau (y) = \\sup_{\\tau \\le \\tau_\\mathcal{S}} \\mathbb{E}_y \\left[ M(Y_\\tau) + \\int_0^\\tau L(Y_t) dt \\right]. " }, { "math_id": 34, "text": "\\phi:\\bar{\\mathcal{S}}\\to \\mathbb{R}" }, { "math_id": 35, "text": " \\phi \\in C(\\bar{\\mathcal{S}}) \\cap C^1(\\mathcal{S}) \\cap C^2(\\mathcal{S}\\setminus \\partial D) " }, { "math_id": 36, "text": " D = \\{y\\in\\mathcal{S}: \\phi(y) > M(y) \\} " }, { "math_id": 37, "text": " \\phi \\ge M " }, { "math_id": 38, "text": " \\mathcal{S} " }, { "math_id": 39, "text": " \\mathcal{A}\\phi + L \\le 0 " }, { "math_id": 40, "text": " \\mathcal{S} \\setminus \\partial D " }, { "math_id": 41, "text": " \\mathcal{A} " }, { "math_id": 42, "text": " \\phi(y) \\ge V(y) " }, { "math_id": 43, "text": " y\\in \\bar{\\mathcal{S}} " }, { "math_id": 44, "text": " \\mathcal{A}\\phi + L = 0 " }, { "math_id": 45, "text": " D " }, { "math_id": 46, "text": " \\phi(y) = V(y) " }, { "math_id": 47, "text": " \\tau^* = \\inf\\{ t>0: Y_t\\notin D\\} " }, { "math_id": 48, "text": " \\max\\left\\{ \\mathcal{A}\\phi + L, M-\\phi \\right\\} = 0 " }, { "math_id": 49, "text": " \\mathcal{S} \\setminus \\partial D. 
" }, { "math_id": 50, "text": "\\mathbb{E}(y_i)" }, { "math_id": 51, "text": "\\text{Bern}\\left(\\frac{1}{2}\\right)," }, { "math_id": 52, "text": "y_i = \\frac 1 i \\sum_{k=1}^{i} X_k" }, { "math_id": 53, "text": "(X_i)_{i\\geq 1}" }, { "math_id": 54, "text": "(y_i)_{i\\geq 1}" }, { "math_id": 55, "text": "X_n" }, { "math_id": 56, "text": "k" }, { "math_id": 57, "text": "n" }, { "math_id": 58, "text": "y_n" }, { "math_id": 59, "text": "y_n = (X_n - nk)" }, { "math_id": 60, "text": "X_i" }, { "math_id": 61, "text": "(X_i)" }, { "math_id": 62, "text": "R_1, \\ldots, R_n" }, { "math_id": 63, "text": "(R_i)" }, { "math_id": 64, "text": "(y_i)" }, { "math_id": 65, "text": " r " }, { "math_id": 66, "text": " \\delta " }, { "math_id": 67, "text": " \\sigma " }, { "math_id": 68, "text": " S " }, { "math_id": 69, "text": " S_t = S_0 \\exp\\left\\{ \\left(r - \\delta - \\frac{\\sigma^2}{2}\\right) t + \\sigma B_t \\right\\} " }, { "math_id": 70, "text": " V(x) = \\sup_{\\tau} \\mathbb{E}_x \\left[ e^{-r\\tau} g(S_\\tau) \\right] " }, { "math_id": 71, "text": " g(x) = (x-K)^+ " }, { "math_id": 72, "text": " g(x) = (K-x)^+ " }, { "math_id": 73, "text": " \\max\\left\\{ \\frac{1}{2} \\sigma^2 x^2 V''(x) + (r-\\delta) x V'(x) - rV(x), g(x) - V(x) \\right\\} = 0" }, { "math_id": 74, "text": "x \\in (0,\\infty)\\setminus \\{b\\}" }, { "math_id": 75, "text": " b " }, { "math_id": 76, "text": " V(x) = \\begin{cases} (b-K)(x/b)^\\gamma & x\\in(0,b) \\\\ x-K & x\\in[b,\\infty) \\end{cases} " }, { "math_id": 77, "text": " \\gamma = (\\sqrt{\\nu^2 + 2r} - \\nu) / \\sigma" }, { "math_id": 78, "text": " \\nu = (r-\\delta)/\\sigma - \\sigma / 2, \\quad b = \\gamma K / (\\gamma - 1). " }, { "math_id": 79, "text": " V(x) = \\begin{cases} K - x & x\\in(0,c] \\\\(K-c)(x/c)^\\tilde{\\gamma} & x\\in(c,\\infty) \\end{cases} " }, { "math_id": 80, "text": " \\tilde{\\gamma} = -(\\sqrt{\\nu^2 + 2r} + \\nu) / \\sigma " }, { "math_id": 81, "text": " \\nu = (r-\\delta)/\\sigma - \\sigma / 2, \\quad c = \\tilde{\\gamma} K / (\\tilde{\\gamma} - 1). " } ]
https://en.wikipedia.org/wiki?curid=7189886
7190735
Verdier duality
In mathematics, Verdier duality is a cohomological duality in algebraic topology that generalizes Poincaré duality for manifolds. Verdier duality was introduced in 1965 by Jean-Louis Verdier (1965) as an analog for locally compact topological spaces of Alexander Grothendieck's theory of Poincaré duality in étale cohomology for schemes in algebraic geometry. It is thus (together with the said étale theory and for example Grothendieck's coherent duality) one instance of Grothendieck's six operations formalism. Verdier duality generalises the classical Poincaré duality of manifolds in two directions: it applies to continuous maps from one space to another (reducing to the classical case for the unique map from a manifold to a one-point space), and it applies to spaces that fail to be manifolds due to the presence of singularities. It is commonly encountered when studying constructible or perverse sheaves. Verdier duality. Verdier duality states that (subject to suitable finiteness conditions discussed below) certain derived image functors for sheaves are actually adjoint functors. There are two versions. Global Verdier duality states that for a continuous map formula_0 of locally compact Hausdorff spaces, the derived functor of the direct image with compact (or proper) supports formula_1 has a right adjoint formula_2 in the derived category of sheaves, in other words, for (complexes of) sheaves (of abelian groups) formula_3 on formula_4 and formula_5 on formula_6 we have formula_7 Local Verdier duality states that formula_8 in the derived category of sheaves on "Y". It is important to note that the distinction between the global and local versions is that the former relates morphisms between complexes of sheaves in the derived categories, whereas the latter relates internal Hom-complexes and so can be evaluated locally. Taking global sections of both sides in the local statement gives the global Verdier duality. These results hold subject to the compactly supported direct image functor formula_9 having finite cohomological dimension. This is the case if there is a bound formula_10 such that the compactly supported cohomology formula_11 vanishes for all fibres formula_12 (where formula_13) and formula_14. This holds if all the fibres formula_15 are at most formula_16-dimensional manifolds or more generally at most formula_16-dimensional CW-complexes. The discussion above is about derived categories of sheaves of abelian groups. It is instead possible to consider a ring formula_17 and (derived categories of) sheaves of formula_17-modules; the case above corresponds to formula_18. The dualizing complex formula_19 on formula_4 is defined to be formula_20 where "p" is the map from formula_4 to a point. Part of what makes Verdier duality interesting in the singular setting is that when formula_4 is not a manifold (a graph or singular algebraic variety for example) then the dualizing complex is not quasi-isomorphic to a sheaf concentrated in a single degree. From this perspective the derived category is necessary in the study of singular spaces. If formula_4 is a finite-dimensional locally compact space, and formula_21 the bounded derived category of sheaves of abelian groups over formula_4, then the Verdier dual is a contravariant functor formula_22 defined by formula_23 It has the following properties: Relation to classical Poincaré duality. Poincaré duality can be derived as a special case of Verdier duality. 
Here one explicitly calculates cohomology of a space using the machinery of sheaf cohomology. Suppose "X" is a compact orientable "n"-dimensional manifold, "k" is a field and formula_24 is the constant sheaf on "X" with coefficients in "k". Let formula_25 be the constant map to a point. Global Verdier duality then states formula_26 To understand how Poincaré duality is obtained from this statement, it is perhaps easiest to understand both sides piece by piece. Let formula_27 be an injective resolution of the constant sheaf. Then by standard facts on right derived functors formula_28 is a complex whose cohomology is the compactly supported cohomology of "X". Since morphisms between complexes of sheaves (or vector spaces) themselves form a complex we find that formula_29 where the last non-zero term is in degree 0 and the ones to the left are in negative degree. Morphisms in the derived category are obtained from the homotopy category of chain complexes of sheaves by taking the zeroth cohomology of the complex, i.e. formula_30 For the other side of the Verdier duality statement above, we have to take for granted the fact that when "X" is a compact orientable "n"-dimensional manifold formula_31 which is the dualizing complex for a manifold. Now we can re-express the right hand side as formula_32 We finally have obtained the statement that formula_33 By repeating this argument with the sheaf "k"X replaced with the same sheaf placed in degree "i" we get the classical Poincaré duality formula_34
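As a quick sanity check of the final statement, one can write out both sides for a closed orientable surface of genus "g" (a standard example, not taken from the source):

% X = \Sigma_g is a closed orientable surface, so n = 2 and H^i_c = H^i (X is compact):
\[
H^0(\Sigma_g;k)\cong k,\qquad H^1(\Sigma_g;k)\cong k^{2g},\qquad H^2(\Sigma_g;k)\cong k.
\]
% Dualizing over the field k and comparing degrees i and 2-i:
\[
H^0_c(\Sigma_g;k)^{\vee}\cong k\cong H^{2}(\Sigma_g;k),\qquad
H^1_c(\Sigma_g;k)^{\vee}\cong k^{2g}\cong H^{1}(\Sigma_g;k),\qquad
H^2_c(\Sigma_g;k)^{\vee}\cong k\cong H^{0}(\Sigma_g;k),
\]
% in agreement with H^i_c(X;k_X)^\vee \cong H^{n-i}(X;k_X).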
[ { "math_id": 0, "text": " f\\colon X \\to Y " }, { "math_id": 1, "text": "Rf_!" }, { "math_id": 2, "text": "f^!" }, { "math_id": 3, "text": "\\mathcal F" }, { "math_id": 4, "text": "X" }, { "math_id": 5, "text": "\\mathcal G" }, { "math_id": 6, "text": "Y" }, { "math_id": 7, "text": "RHom(Rf_!\\mathcal{F},\\mathcal{G}) \\cong RHom(\\mathcal{F},f^!\\mathcal{G}). " }, { "math_id": 8, "text": "R\\,\\mathcal{H}om(Rf_!\\mathcal{F},\\mathcal{G}) \\cong Rf_{\\ast}R\\,\\mathcal{H}om(\\mathcal{F},f^!\\mathcal{G})" }, { "math_id": 9, "text": "f_{!}" }, { "math_id": 10, "text": "d\\in\\mathbf{N}" }, { "math_id": 11, "text": "H_c^{r}(X_y,\\mathbf{Z})" }, { "math_id": 12, "text": "X_y = f^{-1}(y)" }, { "math_id": 13, "text": "y\\in Y" }, { "math_id": 14, "text": "r>d" }, { "math_id": 15, "text": "X_y" }, { "math_id": 16, "text": "d" }, { "math_id": 17, "text": "A" }, { "math_id": 18, "text": "A=\\mathbf{Z}" }, { "math_id": 19, "text": "D_X" }, { "math_id": 20, "text": "\\omega_X = p^!(k) , " }, { "math_id": 21, "text": "D^b(X)" }, { "math_id": 22, "text": "D \\colon D^b(X)\\to D^b(X) " }, { "math_id": 23, "text": "D(\\mathcal{F}) = R\\,\\mathcal{H}om(\\mathcal{F}, \\omega_X) . " }, { "math_id": 24, "text": "k_X" }, { "math_id": 25, "text": "f=p" }, { "math_id": 26, "text": "[Rp_!k_X,k] \\cong [k_X,p^!k] . " }, { "math_id": 27, "text": "k_X\\to (I^{\\bullet}_X = I^0_X \\to I^1_X \\to \\cdots) " }, { "math_id": 28, "text": "Rp_!k_X=p_!I^{\\bullet}_X=\\Gamma_c(X;I^{\\bullet}_X)" }, { "math_id": 29, "text": "\\mathrm{Hom}^{\\bullet}(\\Gamma_c(X;I^{\\bullet}_X),k)= \\cdots \\to \\Gamma_c(X;I^2_X)^{\\vee}\\to \\Gamma_c(X;I^1_X)^{\\vee}\\to \\Gamma_c(X;I^0_X)^{\\vee}\\to 0" }, { "math_id": 30, "text": "[Rp_!k_X,k]\\cong H^0(\\mathrm{Hom}^{\\bullet}(\\Gamma_c(X;I^{\\bullet}_X),k))=H^0_c(X;k_X)^{\\vee}." }, { "math_id": 31, "text": "p^!k=k_X[n]," }, { "math_id": 32, "text": "[k_X,k_X[n]]\\cong H^n(\\mathrm{Hom}^{\\bullet}(k_X,k_X))=H^n(X;k_X)." }, { "math_id": 33, "text": "H^0_c(X;k_X)^{\\vee}\\cong H^n(X;k_X)." }, { "math_id": 34, "text": "H^i_c(X;k_X)^{\\vee}\\cong H^{n-i}(X;k_X)." } ]
https://en.wikipedia.org/wiki?curid=7190735
71907438
Spin chain
Type of model in quantum statistical physics A spin chain is a type of model in statistical physics. Spin chains were originally formulated to model magnetic systems, which typically consist of particles with magnetic spin located at fixed sites on a lattice. A prototypical example is the quantum Heisenberg model. Interactions between the sites are modelled by operators which act on two different sites, often neighboring sites. They can be seen as a quantum version of statistical lattice models, such as the Ising model, in the sense that the parameter describing the spin at each site is promoted from a variable taking values in a discrete set (typically formula_0, representing 'spin up' and 'spin down') to a variable taking values in a vector space (typically the spin-1/2 or two-dimensional representation of formula_1). History. The prototypical example of a spin chain is the Heisenberg model, described by Werner Heisenberg in 1928. This models a one-dimensional lattice of fixed particles with spin 1/2. A simple version (the antiferromagnetic XXX model) was solved, that is, the spectrum of the Hamiltonian of the Heisenberg model was determined, by Hans Bethe using the Bethe ansatz. Now the term Bethe ansatz is used generally to refer to many ansatzes used to solve exactly solvable problems in spin chain theory such as for the other variations of the Heisenberg model (XXZ, XYZ), and even in statistical lattice theory, such as for the six-vertex model. Another spin chain with physical applications is the Hubbard model, introduced by John Hubbard in 1963. This model was shown to be exactly solvable by Elliott Lieb and Fa-Yueh Wu in 1968. Another example of (a class of) spin chains is the Gaudin model, described and solved by Michel Gaudin in 1976 Mathematical description. The lattice is described by a graph formula_2 with vertex set formula_3 and edge set formula_4. The model has an associated Lie algebra formula_5. More generally, this Lie algebra can be taken to be any complex, finite-dimensional semi-simple Lie algebra formula_6. More generally still it can be taken to be an arbitrary Lie algebra. Each vertex formula_7 has an associated representation of the Lie algebra formula_6, labelled formula_8. This is a quantum generalization of statistical lattice models, where each vertex has an associated 'spin variable'. The Hilbert space formula_9 for the whole system, which could be called the configuration space, is the tensor product of the representation spaces at each vertex: formula_10 A Hamiltonian is then an operator on the Hilbert space. In the theory of spin chains, there are possibly many Hamiltonians which mutually commute. This allows the operators to be simultaneously diagonalized. There is a notion of exact solvability for spin chains, often stated as determining the spectrum of the model. In precise terms, this means determining the simultaneous eigenvectors of the Hilbert space for the Hamiltonians of the system as well as the eigenvalues of each eigenvector with respect to each Hamiltonian. Examples. Spin 1/2 XXX model in detail. The prototypical example, and a particular example of the Heisenberg spin chain, is known as the spin 1/2 Heisenberg XXX model. The graph formula_2 is the periodic 1-dimensional lattice with formula_11-sites. Explicitly, this is given by formula_12, and the elements of formula_4 being formula_13 with formula_14 identified with formula_15. The associated Lie algebra is formula_16. 
At site formula_17 there is an associated Hilbert space formula_18 which is isomorphic to the two dimensional representation of formula_16 (and therefore further isomorphic to formula_19). The Hilbert space of system configurations is formula_20, of dimension formula_21. Given an operator formula_22 on the two-dimensional representation formula_23 of formula_16, denote by formula_24 the operator on formula_9 which acts as formula_22 on formula_18 and as identity on the other formula_25 with formula_26. Explicitly, it can be written formula_27 where the 1 denotes identity. The Hamiltonian is essentially, up to an affine transformation, formula_28 with implied summation over index formula_29, and where formula_30 are the Pauli matrices. The Hamiltonian has formula_16 symmetry under the action of the three total spin operators formula_31. The central problem is then to determine the spectrum (eigenvalues and eigenvectors in formula_9) of the Hamiltonian. This is solved by the method of an Algebraic Bethe ansatz, discovered by Hans Bethe and further explored by Ludwig Faddeev. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
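To make the example above concrete, the following minimal numerical sketch (not from the article) builds the spin-1/2 XXX Hamiltonian for a small periodic chain as an explicit matrix and checks the stated sl_2 symmetry:

import numpy as np

N = 6  # number of lattice sites (kept small: the matrix has size 2^N x 2^N)

sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_x
    np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_y
    np.array([[1, 0], [0, -1]], dtype=complex),    # sigma_z
]
identity = np.eye(2, dtype=complex)

def site_operator(op, n):
    """Embed a single-site operator: identity everywhere except op at site n."""
    out = np.array([[1.0 + 0j]])
    for m in range(N):
        out = np.kron(out, op if m == n else identity)
    return out

# H = sum over sites n and directions i of sigma_i^(n) sigma_i^(n+1), periodic boundary.
H = np.zeros((2**N, 2**N), dtype=complex)
for n in range(N):
    for s in sigma:
        H += site_operator(s, n) @ site_operator(s, (n + 1) % N)

# The total-spin operators commute with H (the symmetry mentioned above).
total_sz = sum(site_operator(sigma[2], n) for n in range(N))
print("[H, S_z] = 0:", np.allclose(H @ total_sz - total_sz @ H, 0))
print("ground-state energy:", np.linalg.eigvalsh(H).min())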
[ { "math_id": 0, "text": "\\{+1, -1\\}" }, { "math_id": 1, "text": "\\mathfrak{su}(2)" }, { "math_id": 2, "text": "G" }, { "math_id": 3, "text": "V" }, { "math_id": 4, "text": "E" }, { "math_id": 5, "text": "\\mathfrak{sl}_2 := \\mathfrak{sl}(2, \\mathbb{C})" }, { "math_id": 6, "text": "\\mathfrak{g}" }, { "math_id": 7, "text": "v \\in V" }, { "math_id": 8, "text": "V_v" }, { "math_id": 9, "text": "\\mathcal{H}" }, { "math_id": 10, "text": " \\mathcal{H} = \\bigotimes_{v\\in V} V_v." }, { "math_id": 11, "text": "N" }, { "math_id": 12, "text": "V = \\{1, \\cdots, N\\}" }, { "math_id": 13, "text": "\\{n, n+1\\}" }, { "math_id": 14, "text": "N+1" }, { "math_id": 15, "text": "1" }, { "math_id": 16, "text": "\\mathfrak{sl}_2" }, { "math_id": 17, "text": "n" }, { "math_id": 18, "text": "h_n" }, { "math_id": 19, "text": "\\mathbb{C}^2" }, { "math_id": 20, "text": "\\mathcal{H} = \\bigotimes_{n = 1}^N h_n" }, { "math_id": 21, "text": "2^N" }, { "math_id": 22, "text": "A" }, { "math_id": 23, "text": "h" }, { "math_id": 24, "text": "A^{(n)}" }, { "math_id": 25, "text": "h_m" }, { "math_id": 26, "text": "m \\neq n" }, { "math_id": 27, "text": "A^{(n)} = 1\\otimes \\cdots \\otimes \\underbrace{A}_{n} \\otimes \\cdots \\otimes 1," }, { "math_id": 28, "text": " H = \\sum_{n = 1}^N \\sigma^{(n)}_i \\sigma^{(n+1)}_i" }, { "math_id": 29, "text": "i" }, { "math_id": 30, "text": "\\sigma_i" }, { "math_id": 31, "text": "\\sigma_i = \\sum_{n = 1}^{N} \\sigma_i^{(n)}" } ]
https://en.wikipedia.org/wiki?curid=71907438
719095
FIFA Men's World Ranking
World ranking list The FIFA Men's World Ranking is a ranking system for men's national teams in association football, led by Argentina as of July 2024. The men's teams of the member nations of FIFA, football's world governing body, are ranked based on their game results, with the most successful teams being ranked highest. The rankings were introduced in December 1992, and eight teams (Argentina, Belgium, Brazil, France, Germany, Italy, the Netherlands and Spain) have held the top position, of which Brazil have spent the longest time ranked first. A points system is used, with points being awarded based on the results of all FIFA-recognised full international matches. The ranking system has been revamped on several occasions, generally responding to criticism that the preceding calculation method did not effectively reflect the relative strengths of the national teams. Since 16 August 2018, the ranking system has adopted the Elo rating system used in chess and Go. The ranking is sponsored by Coca-Cola; as such, the FIFA/Coca-Cola World Ranking name is also used. Coca-Cola also sponsors the women's counterpart. History. In December 1992, FIFA first published a listing in rank order of its men's member associations to provide a basis for comparison of the relative strengths of these teams. From the following August, this list was updated more frequently, being published most months. Significant changes were implemented in January 1999 and again in July 2006, as a reaction to criticisms of the system. Historical records of the rankings, such as those listed at FIFA.com, reflect the method of calculation in use at the time, as the current method has not been applied retrospectively to rankings before July 2006. Membership of FIFA has expanded from 167 to 211 since the rankings began; 211 members are currently included in the rankings. The Cook Islands were temporarily removed from the ranking in the period from September 2019 until February 2022, after not having played any matches between 4 September 2015 and 17 March 2022. 1992–1998 calculation method. The ranking formula used from December 1992 until December 1998 was devised by two Swiss lecturers from the University of Zurich (Markus Lamprecht and Dr. Hanspeter Stam). The first formula was the simplest one compared to later revisions, but it still required complex calculation. The main concept was to award points for matches played between all FIFA-affiliated national A teams, based on their results over the past eight years in FIFA-recognised matches (friendly matches, qualifying and finals matches for the World Cup, and qualifying and final matches for a Continental championship). Results were not included from matches played by an association's B teams, C teams, league XI teams, or its women's, U17, U20, U23 and futsal teams. The calculation formula was adjusted by a number of weighting factors. For example, despite being undefeated in all matches played in 1994, England still dropped seven places in the world ranking from December 1993 to December 1994, because the team only scored points from six lower-factored friendly games (as per rule 4). England did not play a single competitive match in 1994, because they had failed to qualify for the 1994 FIFA World Cup and, as an automatically qualified host, did not play qualifiers for UEFA Euro 1996.
The calculated results for the rankings published throughout 1992–1998 were at some point also rounded to the nearest integer by the official FIFA website, although other websites opted to publish the unrounded points of the ranking. 1999–2006 calculation method. In January 1999, FIFA introduced a revised system of ranking calculation, incorporating many changes in response to criticism of inappropriate rankings. For the ranking, all matches, their scores and their importance were recorded and used in the calculation procedure. Only matches for the senior men's national team were included. Separate ranking systems were used for other representative national sides such as women's and junior teams, for example the FIFA Women's World Rankings. The women's rankings were, and still are, based on a procedure which is a simplified version of the Football Elo Ratings. A contemporary website summarised the major changes, describing the 1999 revision of the calculation formula as something that "slightly modified and more finely tuned the tried and tested method of calculation", with the most impactful revisions being that only the seven best matches each year were now taken into account (removing the previous advantage of playing additional matches), along with an adjustment of the regional strength factors for confederations and the match importance factors for various competitions. The new ranking system continued the practice of the previous one of annually granting the Team of the Year and Best Mover of the Year awards. The changes made the ranking system more complex, but helped improve its accuracy by making it more comprehensive. 2006–2018 calculation method. FIFA announced that the ranking system would be updated following the 2006 World Cup. The evaluation period was cut from eight to four years, and a simpler method of calculation was used to determine rankings. Goals scored and home or away advantage were no longer taken into account, and other aspects of the calculations, including the importance attributed to different types of match, were revised. The first set of revised rankings and the calculation methodology were announced on 12 July 2006. This change was rooted at least in part in widespread criticism of the previous ranking system. Many football enthusiasts felt it was inaccurate, especially when compared to other ranking systems, and that it was not sufficiently responsive to changes in the performance of individual teams. 2018 ranking system update. In September 2017, FIFA announced they were reviewing the ranking system and would decide after the end of the 2018 FIFA World Cup qualification whether any changes would be made to improve the ranking. FIFA announced on 10 June 2018 that the ranking system would be updated following the 2018 World Cup finals. The calculation method adopted closely modelled the Elo rating system (see Calculation Method). The weighting designated for each confederation for ranking purposes was abolished. Unlike several popular unofficial Elo-based ranking systems, the new methodology does not account for home or away games or the margin of victory. FIFA had intended to introduce the new ranking system in July 2018, but, with no matches scheduled between the July and August ranking dates, delayed it until August 2018.
There was speculation from football journalists such as ESPN's Dale Johnson that this was because projections of the new rankings had seen relatively little change in positions, with Germany – who had been eliminated in the first round of the World Cup – remaining as the top ranked team. FIFA had originally planned to use existing world ranking points from June 2018 as the start value, but when the August rankings appeared, the starting points had been changed to an equal distribution of points between 1600 (Germany, as the previously top ranked team) and 868 (Anguilla, Bahamas, Eritrea, Somalia, Tonga and Turks and Caicos Islands, which had 0 points in June), according to the formula: formula_0, where R is the rank in June 2018. When two or more teams had equal ranks, the following team received the next immediate rank possible, e.g. if two teams had R=11, the following team had R=12, not 13. Then the rating changes according to the games played after previous release were calculated. This produced a more dramatically altered ranking table, with Germany falling to 15th and 2018 World Cup champions France moving to the top of the ranking. 2021 alteration. Starting with the April 2021 rankings, the teams' points are now rounded to two decimal points, instead of being rounded to the nearest integer. Leaders. FIFA World Men's Ranking Leaders When the system was introduced, Germany debuted as the top-ranked team following their extended period of dominance in which they had reached the three previous FIFA World Cup finals, winning one of them. Brazil took the lead in the run up to the 1994 FIFA World Cup after winning eight and losing only one of nine qualification matches, while on the way scoring twenty goals and conceding just four. Italy then led for a short time on the back of their own equally successful World Cup qualifying campaign, after which the top place was re-claimed by Germany. Brazil's success in their lengthy qualifying campaign returned them to the lead for a brief period. Germany led again during the 1994 World Cup, until Brazil's victory in that competition gave them a large lead that would stand up for nearly seven years, until they were surpassed by a strong France team that captured both the 1998 FIFA World Cup and the 2000 European Football Championship. Success at the 2002 FIFA World Cup restored Brazil to the top position, where they remained until February 2007, when Italy returned to the top for the first time since 1993 following their 2006 FIFA World Cup win in Germany. Just one month later, Argentina replaced them, reaching the top for the first time, but Italy regained its place in April. After winning the Copa América 2007 in July, Brazil returned to the top, but were replaced by Italy in September and then Argentina in October. In July 2008, Spain took over the lead for the first time, having won UEFA Euro 2008. Brazil began a sixth stint at the top of the rankings in July 2009 after winning the 2009 Confederations Cup, and Spain regained the title in November 2009 after winning every match in qualification for the 2010 FIFA World Cup. In April 2010, Brazil returned to the top of the table. After winning the 2010 World Cup, Spain regained the top position and held it until August 2011, when the Netherlands reached the top spot for the first time, only to relinquish it the following month. In July 2014, Germany took over the lead once again, having won the 2014 FIFA World Cup. 
In July 2015, Argentina reached the top spot for the first time since 2008, after reaching both the 2014 FIFA World Cup Final, as well as the 2015 Copa America Final. In November 2015, Belgium became the leader in the FIFA rankings for the first time, after topping their Euro 2016 qualifying group. Belgium led the rankings until April 2016, when Argentina returned to the top. On 6 April 2017, Brazil returned to the No. 1 spot for the first time since just before the 2010 World Cup, but Germany regained the top spot in July after winning the Confederations Cup. In the summer of 2018, FIFA updated their rating system by adopting the Elo rating system. The first ranking list with this system, in August 2018, saw France retake the top spot for the first time after nearly 16 years, having won the 2018 FIFA World Cup. One month later, for the first time, two teams were joint leaders as Belgium reached the same ranking as France. This lasted only one month as Belgium regained sole possession of the top spot in September 2018 and kept it for nearly four years until the end of March 2022, with only Brazil and Spain ever holding it longer uninterrupted. In March 2022, Brazil returned to the top of the list for about a year before being overtaken by 2022 FIFA World Cup winners Argentina on 6 April 2023. Ranking schedule. Rankings are generally published four times a year. The next update is scheduled for 19 September 2024. Uses of the rankings. The rankings are used by FIFA to rank the progression and ability of the national football teams of its member nations, and claims that they create "a reliable measure for comparing national A-teams". They are used as part of the calculation, or the entire grounds to seed competitions. In the 2010 FIFA World Cup qualification tournament, the rankings were used to seed the groups in the competitions involving CONCACAF members (using the May rankings), CAF (with the July set of data), and UEFA, using the specially postponed November 2007 ranking positions. The October 2009 ranking was used to determine the seeds for the 2010 FIFA World Cup final draw. The March 2011 ranking was used to seed the draw for the 2012 CAF Men's Pre-Olympic Tournament second qualifying round. The rankings are also used to determine the winners of the two annual awards national teams receive on the basis of their performance in the rankings. The (English) Football Association uses the average of the last 24 months of rankings as one of the criteria for player work permits. Special releases. To determine the seeding of teams in certain instances like FIFA World Cup qualification, FIFA occasionally releases a list of special rankings for a particular confederation to determine the seeding of the teams. For instance, the seeding for the third round draw for AFC qualifiers was based on a special release of the FIFA World Rankings for Asian teams on 18 June 2021. Criticism of pre-2018 methods. Since their introduction in 1992, the FIFA World Rankings have been the subject of much debate, particularly regarding the calculation procedure and the resulting disparity between generally perceived quality and world ranking of some teams. The perceived flaws in the FIFA system have led to the creation of a number of alternative rankings from football statisticians. The initial system was very simple, with no weighting for the quality of opponent or importance of a match. This saw Norway reach second in October 1993 and July–August 1995, a ranking that was criticised at the time. 
The rankings were adapted in 1999 to include weightings based on the importance of the match and the strength of the opponent. A win over a weak opponent resulted in fewer points being awarded than a win over a much stronger one. Further adaptations in 2006 were made to reduce the number of years' results considered from 8 to 4, with greater reliance on matches from within the previous 12 months. Still, criticisms of the rankings remained, with particular anomalies being noted including: the United States rise to fourth in 2006, to the surprise of even their own players; Israel's climb to 15th in November 2008, which surprised the Israeli press; and Belgium's rank of world number 1 in November 2015, even though Belgium had only played in one tournament final stage in the past 13 years. Further criticisms of the 2006–2018 formula included the inability of hosts of major tournaments to retain a high place in the rankings, as the team participated in only lower-value friendly matches due to their automatic qualification for the tournament. For example, 2014 FIFA World Cup hosts Brazil fell to a record low ranking of 22nd in the world before that tournament, at which they then finished fourth. 2018 FIFA World Cup hosts Russia had the lowest ranking (70th) at the tournament, where they reached the quarter-finals before bowing out to eventual finalists Croatia on penalties. In the 2010s, teams realised the ranking system could be 'gamed', specifically by avoiding playing non-competitive matches, particularly against weaker opponents. This was because the low weighting of friendlies meant that even victories could reduce a team's "average" score: in other words, a team could win a match and lose points. Before the seeding of the 2018 World Cup preliminary draw, Romania even appointed a ranking consultant, playing only one friendly in the year before the draw. Similar accusations had been made against Switzerland, who were a seeded team at the 2014 FIFA World Cup having played only three friendly matches in the previous year, and Poland before the 2018 FIFA World Cup. The use of regional strength multiplier in the ranking formula before 2018 was also accused of further reinforcing and perpetuating the bias for and against certain regions. Calculation method. On 10 June 2018, the new ranking system was approved by the FIFA Council. It is based on the Elo rating system and after each game points are added to or subtracted from a team's rating according to the formula: formula_1 where: If a game ends with a winner, but still requires a penalty shoot-out (PSO) (i.e. in the second game of a two-legged tie), it is considered as a regular game and the PSO is disregarded. formula_2 where formula_3 is the difference between two teams' ratings before the game, and formula_4 is a scale. Negative points in knockout stages of final competitions do not affect teams' ratings. Analysis of model properties. The 2018 ranking is a clear improvement over the previous FIFA rankings and it can be evaluated against well-defined numerical and statistical benchmarks. Inflation. The pure Elo rating is described as a zero-sum game, where the total number of ranking points stays constant: the points gained by the winner are taken away from the losing team. This happens when the results of the game formula_5 are defined in a symmetric manner, i.e., formula_6. 
However, the above condition is not satisfied in the FIFA ranking in two situations: in the knockout stage of a final competition, where the defeated team does not have the negative change formula_10 deducted, and in games decided by a penalty shoot-out, where the winner receives formula_7 and the loser formula_8, so that formula_9 (if formula_11, the shoot-out winner would even lose points, unless the knockout rule applies). As a consequence, the total number of points slowly grows over time, which is known as the ranking "inflation". As an example, between June 4, 2018 and March 31, 2022, there were 3,444 FIFA-recognized games, and the initial total value of points, equal to 254,680, was increased by 2,099 points (approximately 0.8%), which was due to 24 games with the shootout points applied (but not the knockout rule), 90 games with the knockout rule (but not the shootout), and 30 games where both rules were applied. Probabilistic model. Since the FIFA ranking is inspired by the Elo rating, it inherits its properties. In particular, the explicit probabilistic model of the game outcomes is not defined; that is, even if the expected result formula_12 is calculated from the difference between the rankings formula_3, the probability of observing the win, the draw or the loss is not specified. This makes it difficult to assess how well the ranking predicts future games. However, as explained in the Elo rating article, it is possible to infer the implicit probabilistic model used by the algorithm. The obvious advantage of using such a model is that we can calculate the probability of a particular outcome of the game and, more importantly, adjust the parameters to take into account the frequency of draws. In the context of international FIFA games, this improves the prediction capacity by 2-3%. Home advantage. Using the inferred probabilistic model, statistical analysis indicates that introducing a home advantage parameter improves the prediction capacity of the ranking by approximately 3%. In fact, this should be uncontroversial because the FIFA Women's World Ranking already uses the home advantage parameter H=100, which is added to the ranking of the home team when calculating the ranking difference formula_3. The corresponding coefficient in the men's ranking should be around H=300. Game importance coefficients. From the statistical perspective, the importance coefficients formula_13, which de facto change the ranking adaptation step, are counterproductive; that is, they yield game predictions which are worse than those obtained using the same step in all games. Awards. Each year FIFA hands out two awards to its member nations, based on their rankings: the Team of the Year and the Best Mover of the Year. Team of the Year. "Team of the Year" is awarded each year to the first-ranked team in the December edition of the FIFA World Ranking. The exceptions were 2000 and 2001, when a different calculation method determined that the award should instead be given to the national team with the highest average points score in its seven best matches of the year ending on 31 December. Argentina are the most recent Team of the Year, holding the award for the third time in the 30-year history of the rankings. Brazil hold the records for most consecutive wins (seven, between 1994 and 2000) and most wins overall (thirteen). The table shows the three best teams of each year: Best Mover of the Year. The Best Mover of the Year is awarded to the team that made the best progress up the rankings over the course of the year. In the FIFA rankings, this is not simply the team that has risen the most places; rather, a calculation is performed in order to account for the fact that it becomes progressively harder to earn more points the higher up the rankings a team is.
The calculation done for the years 1993–2006, the era before the major FIFA World ranking calculation revision in July 2006, is the number of points the team has at the end of the year ("z") multiplied by the number of points it earned during the year ("y"). The team with the highest index on this calculation received the award. The 1993–2006 table shows the top three best movers for each year. In the years from 1993 until 2006, an official "Best Mover Award" was handed over to the coach of the winning national football team at the annual FIFA World Player Gala. For example, the coach of the Slovenia national football team (Srečko Katanec) received this official award at the "FIFA World Player 1999 Awards Gala", with the award granted a few days after the year had ended on 24 January 2000. The award has not been an official part of the annual FIFA awards gala show, The Best FIFA Football Awards, since the show for 2006. While an official award has not been made for national team movements since 2006, FIFA has continued each year to release a list of the 'Best Movers' in the rankings. An example of the informal on-going "Mover of the Year" award is the recognition made by FIFA to Colombia in 2012 in an official press release. After implementation of the major FIFA World ranking calculation revision in July 2006, the calculation methodology to decide the "Mover of the Year" ranking also changed in 2007, to simply the difference between the FIFA world ranking points by the end of the year compared to 12 months earlier. The Best Mover results for all subsequent years are based on the same methodology. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
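As an illustration of the post-2018 update rule given in the Calculation method section, the following minimal Python sketch applies P = P_before + I(W - W_e) with W_e = 1/(10^(-delta/c) + 1) and c = 600; the importance coefficient of 25 used in the example is only an illustrative assumption:

def expected_result(rating, opponent_rating, c=600):
    delta = rating - opponent_rating
    return 1 / (10 ** (-delta / c) + 1)

def updated_rating(rating, opponent_rating, result, importance):
    """result: 1 for a win, 0.5 for a draw, 0 for a loss (0.75 / 0.5 after a shoot-out)."""
    return rating + importance * (result - expected_result(rating, opponent_rating))

# Example: a 1500-point team beats a 1600-point team in a match of importance 25.
before = 1500.0
after = updated_rating(before, 1600.0, result=1.0, importance=25)
print(f"expected result: {expected_result(before, 1600.0):.3f}")  # about 0.405
print(f"rating change: {after - before:+.2f}")                    # about +14.87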
[ { "math_id": 0, "text": "P_\\text{seeding}=1600-(R-1)\\times 4" }, { "math_id": 1, "text": "P = P_\\text{before}+I(W-W_e)" }, { "math_id": 2, "text": "W_e = \\frac{1}{10^{-\\frac{\\Delta}{c}} + 1}" }, { "math_id": 3, "text": "\\Delta" }, { "math_id": 4, "text": "c=600" }, { "math_id": 5, "text": "W" }, { "math_id": 6, "text": "W_\\text{winner} + W_\\text{loser} = 1" }, { "math_id": 7, "text": "W_\\text{winner}= 0.75" }, { "math_id": 8, "text": "W_\\text{loser} = 0.5" }, { "math_id": 9, "text": "W_\\text{winner} + W_\\text{loser} = 1.25" }, { "math_id": 10, "text": "W-W_e" }, { "math_id": 11, "text": "W_e>0.75" }, { "math_id": 12, "text": "W_{e}" }, { "math_id": 13, "text": "I" } ]
https://en.wikipedia.org/wiki?curid=719095
71912239
Diffusion model
Deep learning algorithm &lt;templatestyles src="Machine learning/styles.css"/&gt; In machine learning, diffusion models, also known as diffusion probabilistic models or score-based generative models, are a class of latent variable generative models. A diffusion model consists of three major components: the forward process, the reverse process, and the sampling procedure. The goal of diffusion models is to learn a diffusion process for a given dataset, such that the process can generate new elements that are distributed similarly to the original dataset. A diffusion model models data as generated by a diffusion process, whereby a new datum performs a random walk with drift through the space of all possible data. A trained diffusion model can be sampled in many ways, with different efficiency and quality. There are various equivalent formalisms, including Markov chains, denoising diffusion probabilistic models, noise conditioned score networks, and stochastic differential equations. They are typically trained using variational inference. The model responsible for denoising is typically called its "backbone". The backbone may be of any kind, but it is typically a U-net or a transformer. As of 2024, diffusion models are mainly used for computer vision tasks, including image denoising, inpainting, super-resolution, image generation, and video generation. These typically involve training a neural network to sequentially denoise images blurred with Gaussian noise. The model is trained to reverse the process of adding noise to an image. After training to convergence, it can be used for image generation by starting with an image composed of random noise, and applying the network iteratively to denoise the image. Diffusion-based image generators, such as Stable Diffusion and DALL-E, have seen widespread commercial interest. These models typically combine diffusion models with other models, such as text encoders and cross-attention modules, to allow text-conditioned generation. Other than computer vision, diffusion models have also found applications in natural language processing such as text generation and summarization, sound generation, and reinforcement learning. Denoising diffusion model. Non-equilibrium thermodynamics. Diffusion models were introduced in 2015 as a method to learn a model that can sample from a highly complex probability distribution. They used techniques from non-equilibrium thermodynamics, especially diffusion. Consider, for example, how one might model the distribution of all naturally-occurring photos. Each image is a point in the space of all images, and the distribution of naturally-occurring photos is a "cloud" in space, which, by repeatedly adding noise to the images, diffuses out to the rest of the image space, until the cloud becomes all but indistinguishable from a Gaussian distribution formula_0. A model that can approximately undo the diffusion can then be used to sample from the original distribution. This is studied in "non-equilibrium" thermodynamics, as the starting distribution is not in equilibrium, unlike the final distribution. The equilibrium distribution is the Gaussian distribution formula_0, with pdf formula_1. This is just the Maxwell–Boltzmann distribution of particles in a potential well formula_2 at temperature 1.
The initial distribution, being very much out of equilibrium, would diffuse towards the equilibrium distribution, making biased random steps that are a sum of pure randomness (like a Brownian walker) and gradient descent down the potential well. The randomness is necessary: if the particles were to undergo only gradient descent, then they would all fall to the origin, collapsing the distribution. Denoising Diffusion Probabilistic Model (DDPM). The 2020 paper proposed the Denoising Diffusion Probabilistic Model (DDPM), which improves upon the previous method by variational inference. Forward diffusion. To present the model, we need some notation. A forward diffusion process starts at some starting point formula_13, where formula_14 is the probability distribution to be learned, then repeatedly adds noise to it by formula_15 where formula_16 are IID samples from formula_0. This is designed so that for any starting distribution of formula_17, we have formula_18 converging to formula_0. The entire diffusion process then satisfies formula_19 or formula_20 where formula_21 is a normalization constant and often omitted. In particular, we note that formula_22 is a gaussian process, which affords us considerable freedom in reparameterization. For example, by standard manipulation with the gaussian process, formula_23 formula_24 In particular, notice that for large formula_25, the variable formula_26 converges to formula_0. That is, after a long enough diffusion process, we end up with some formula_27 that is very close to formula_0, with all traces of the original formula_13 gone. For example, since formula_23 we can sample formula_28 directly "in one step", instead of going through all the intermediate steps formula_29. &lt;templatestyles src="Math_proof/styles.css" /&gt;Derivation by reparameterization We know formula_30 is a gaussian, and formula_31 is another gaussian. We also know that these are independent. Thus we can perform a reparameterization: formula_32 formula_33 where formula_34 are IID gaussians. There are 5 variables formula_35 and two linear equations. The two sources of randomness are formula_34, which can be reparameterized by rotation, since the IID gaussian distribution is rotationally symmetric. By plugging in the equations, we can solve for the first reparameterization: formula_36 where formula_37 is a gaussian with mean zero and variance one. To find the second one, we complete the rotational matrix: formula_38 Since rotational matrices are all of the form formula_39, we know the matrix must be formula_40 and since the inverse of a rotational matrix is its transpose, formula_41 Plugging back, and simplifying, we have formula_42 formula_43 Backward diffusion. The key idea of DDPM is to use a neural network parametrized by formula_44. The network takes in two arguments formula_45, and outputs a vector formula_46 and a matrix formula_47, such that each step in the forward diffusion process can be approximately undone by formula_48. This then gives us a backward diffusion process formula_49 defined by formula_50 formula_51 The goal now is to learn the parameters such that formula_52 is as close to formula_53 as possible. To do that, we use maximum likelihood estimation with variational inference. Variational inference. The ELBO inequality states that formula_54, and taking one more expectation, we get formula_55 We see that maximizing the quantity on the right would give us a lower bound on the likelihood of observed data. This allows us to perform variational inference.
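The "one step" sampling property of the forward process noted above can be illustrated numerically. The sketch below uses standard DDPM notation (alpha_t = 1 - beta_t and alphabar_t the running product) with an assumed linear beta schedule; the schedule and values are illustrative, not taken from the text:

import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # assumed linear noise schedule beta_1 .. beta_T
alphas = 1.0 - betas
alphabars = np.cumprod(alphas)       # alphabar_t = product of alpha_s for s <= t

rng = np.random.default_rng(0)
x0 = rng.standard_normal(8)          # a toy "data point"

def q_sample(x0, t):
    """Sample x_t ~ N(sqrt(alphabar_t) x_0, (1 - alphabar_t) I) in a single step."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphabars[t]) * x0 + np.sqrt(1.0 - alphabars[t]) * eps

# Early in the process x_t still resembles x_0; near t = T it is essentially a
# standard gaussian, with all traces of x_0 gone.
for t in (0, 99, 999):
    xt = q_sample(x0, t)
    print(f"t = {t + 1:4d}  sqrt(alphabar) = {np.sqrt(alphabars[t]):.3f}  "
          f"||x_t|| = {np.linalg.norm(xt):.2f}")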
Define the loss function formula_56 and now the goal is to minimize the loss by stochastic gradient descent. The expression may be simplified to formula_57 where formula_21 does not depend on the parameter, and thus can be ignored. Since formula_58 also does not depend on the parameter, the term formula_59 can also be ignored. This leaves just formula_60 with formula_61 to be minimized. Noise prediction network. Since formula_62, this suggests that we should use formula_63; however, the network does not have access to formula_17, and so it has to estimate it instead. Now, since formula_26, we may write formula_64, where formula_65 is some unknown gaussian noise. Now we see that estimating formula_17 is equivalent to estimating formula_65. Therefore, let the network output a noise vector formula_66, and let it predict formula_67 It remains to design formula_47. The DDPM paper suggested not learning it (since it resulted in "unstable training and poorer sample quality"), but fixing it at some value formula_68, where either formula_69 yielded similar performance. With this, the loss simplifies to formula_70 which may be minimized by stochastic gradient descent. The paper noted empirically that an even simpler loss function formula_71 resulted in better models. Score-based generative model. Score-based generative models are another formulation of diffusion modelling. They are also called noise conditional score networks (NCSN) or score-matching with Langevin dynamics (SMLD). Score matching. The idea of score functions. Consider the problem of image generation. Let formula_12 represent an image, and let formula_72 be the probability distribution over all possible images. If we have formula_72 itself, then we can say for certain how likely a certain image is. However, this is intractable in general. Most often, we are uninterested in knowing the absolute probability of a certain image. Instead, we are usually only interested in knowing how likely a certain image is compared to its immediate neighbors — e.g. how much more likely is an image of a cat compared to some small variants of it? Is it more likely if the image contains two whiskers, or three, or with some Gaussian noise added? Consequently, we are actually quite uninterested in formula_72 itself, but rather, formula_73. This has two major effects: first, the normalization constant of formula_72 becomes irrelevant, since it does not affect ratios or the gradient of the logarithm; second, only local comparisons between nearby images are needed, so no global knowledge of formula_72 is required. Let the score function be formula_78; then consider what we can do with formula_79. As it turns out, formula_79 allows us to sample from formula_72 using thermodynamics. Specifically, if we have a potential energy function formula_80, and a lot of particles in the potential well, then the distribution at thermodynamic equilibrium is the Boltzmann distribution formula_81. At temperature formula_82, the Boltzmann distribution is exactly formula_72. Therefore, to model formula_72, we may start with a particle sampled at any convenient distribution (such as the standard gaussian distribution), then simulate the motion of the particle forwards according to the Langevin equation formula_83 and the Boltzmann distribution is, by the Fokker-Planck equation, the unique thermodynamic equilibrium. So no matter what distribution formula_17 has, the distribution of formula_84 converges in distribution to formula_14 as formula_85. Learning the score function. Given a density formula_14, we wish to learn a score function approximation formula_86. This is score matching. Typically, score matching is formalized as minimizing the Fisher divergence function formula_87.
By expanding the integral, and performing an integration by parts, formula_88giving us a loss function, also known as the Hyvärinen scoring rule, that can be minimized by stochastic gradient descent. Annealing the score function. Suppose we need to model the distribution of images, and we want formula_89, a white-noise image. Now, most white-noise images do not look like real images, so formula_90 for large swaths of formula_89. This presents a problem for learning the score function, because if there are no samples around a certain point, then we can't learn the score function at that point. If we do not know the score function formula_91 at that point, then we cannot impose the time-evolution equation on a particle:formula_92To deal with this problem, we perform annealing. If formula_14 is too different from a white-noise distribution, then progressively add noise until it is indistinguishable from one. That is, we perform a forward diffusion, then learn the score function, then use the score function to perform a backward diffusion. Continuous diffusion processes. Forward diffusion process. Consider again the forward diffusion process, but this time in continuous time:formula_15By taking the formula_93 limit, we obtain a continuous diffusion process, in the form of a stochastic differential equation:formula_94where formula_95 is a Wiener process (multidimensional Brownian motion). Now, the equation is exactly a special case of the overdamped Langevin equationformula_96where formula_97 is diffusion tensor, formula_98 is temperature, and formula_99 is potential energy field. If we substitute in formula_100, we recover the above equation. This explains why the phrase "Langevin dynamics" is sometimes used in diffusion models. Now the above equation is for the stochastic motion of a single particle. Suppose we have a cloud of particles distributed according to formula_14 at time formula_101, then after a long time, the cloud of particles would settle into the stable distribution of formula_0. Let formula_102 be the density of the cloud of particles at time formula_25, then we haveformula_103and the goal is to somehow reverse the process, so that we can start at the end and diffuse back to the beginning. By Fokker-Planck equation, the density of the cloud evolves according toformula_104where formula_105 is the dimension of space, and formula_106 is the Laplace operator. Backward diffusion process. If we have solved formula_102 for time formula_107, then we can exactly reverse the evolution of the cloud. Suppose we start with another cloud of particles with density formula_108, and let the particles in the cloud evolve according toformula_109then by plugging into the Fokker-Planck equation, we find that formula_110. Thus this cloud of points is the original cloud, evolving backwards. Noise conditional score network (NCSN). At the continuous limit, formula_111 and so formula_112 In particular, we see that we can directly sample from any point in the continuous diffusion process without going through the intermediate steps, by first sampling formula_113, then get formula_114. That is, we can quickly sample formula_115 for any formula_116. 
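This "sample at any time directly" shortcut is what makes it cheap to generate training data at arbitrary noise levels. A minimal hedged sketch follows; the constant beta(t) and the stand-in data point are assumptions made only to keep the example short.

import numpy as np

rng = np.random.default_rng(0)
beta_const = 0.5          # assumed constant beta(t); any positive schedule would do

def sample_xt_continuous(x0, t, rng):
    """Sample x_t | x_0 for the continuous forward SDE dx = -0.5*beta*x dt + sqrt(beta) dW.
    With constant beta, the integral of beta from 0 to t is simply beta*t."""
    integral = beta_const * t
    mean_scale = np.exp(-0.5 * integral)           # e^{-(1/2) * int beta}
    var = 1.0 - np.exp(-integral)                  # variance of x_t | x_0
    z = rng.standard_normal(x0.shape)
    return mean_scale * x0 + np.sqrt(var) * z

# Draw training pairs (t, x_t) at arbitrary noise levels without simulating the chain.
x0 = rng.standard_normal(8)        # stand-in data point
t = rng.exponential(1.0)           # t drawn from some distribution over [0, inf)
xt = sample_xt_continuous(x0, t, rng)

The distribution used to draw t here plays the role of the weighting distribution introduced next.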
Now, define a certain probability distribution formula_117 over formula_118, then the score-matching loss function is defined as the expected Fisher divergence: formula_119 After training, formula_120, so we can perform the backwards diffusion process by first sampling formula_121, then integrating the SDE from formula_122 to formula_101: formula_123 This may be done by any SDE integration method, such as the Euler–Maruyama method. The name "noise conditional score network" is explained thus: the model formula_124 is a neural network ("network") trained to approximate the score function formula_125 ("score"), and it is conditioned on the noise level formula_25, so that a single network covers every noised version of the original distribution formula_126 ("noise conditional"). Their equivalence. DDPM and score-based generative models are equivalent. This means that a network trained using DDPM can be used as an NCSN, and vice versa. We know that formula_127, so by Tweedie's formula, we have formula_128 As described previously, the DDPM loss function is formula_129 with formula_71 where formula_130. By a change of variables, formula_131 and the term inside becomes a least squares regression, so if the network actually reaches the global minimum of loss, then we have formula_132. Now, the continuous limit formula_133 of the backward equation formula_134 gives us precisely the same equation as score-based diffusion: formula_135 Main variants. Denoising Diffusion Implicit Model (DDIM). The original DDPM method for generating images is slow, since the forward diffusion process usually takes formula_136 steps to make the distribution of formula_27 appear close to gaussian. However, this means the backward diffusion process also takes 1000 steps. Unlike the forward diffusion process, which can skip steps because formula_137 is gaussian for all formula_138, the backward diffusion process does not allow skipping steps. For example, sampling formula_139 requires the model to first sample formula_140. Attempting to directly sample formula_141 would require us to marginalize out formula_140, which is generally intractable. DDIM is a method to take any model trained on the DDPM loss and use it to sample with some steps skipped, sacrificing an adjustable amount of quality. If we generalize the Markovian chain in DDPM to a non-Markovian chain, DDIM corresponds to the case in which the reverse process has variance equal to 0. In other words, the reverse process (and also the forward process) is deterministic. When using fewer sampling steps, DDIM outperforms DDPM. Latent diffusion model (LDM). Since the diffusion model is a general method for modelling probability distributions, if one wants to model a distribution over images, one can first encode the images into a lower-dimensional space by an encoder, then use a diffusion model to model the distribution over encoded images. Then, to generate an image, one can sample from the diffusion model and use a decoder to decode the sample into an image. The encoder-decoder pair is most often a variational autoencoder (VAE). Classifier guidance. Suppose we wish to sample not from the entire distribution of images, but conditional on the image description. We don't want to sample a generic image, but an image that fits the description "black cat with red eyes". Generally, we want to sample from the distribution formula_142, where formula_12 ranges over images, and formula_143 ranges over classes of images (a description "black cat with red eyes" is just a very detailed class, and a class "cat" is just a very vague description).
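Before turning to how guidance modifies the sampler, here is a hedged sketch of the plain backwards integration described above, using Euler–Maruyama on the reverse SDE. The name score_net is a placeholder for a trained approximation of the score, and the constant beta is an illustrative assumption.

import numpy as np

def reverse_sde_sample(score_net, dim, T=1.0, n_steps=1000, beta=0.5, rng=None):
    """Euler-Maruyama integration of
       x_{t-dt} = x_t + 0.5*beta*x_t*dt + beta*score(x_t, t)*dt + sqrt(beta) dW,
    run from t = T down to t = 0, starting from x_T ~ N(0, I)."""
    rng = rng or np.random.default_rng(0)
    dt = T / n_steps
    x = rng.standard_normal(dim)                    # x_T ~ N(0, I)
    t = T
    for _ in range(n_steps):
        drift = 0.5 * beta * x + beta * score_net(x, t)
        noise = np.sqrt(beta) * np.sqrt(dt) * rng.standard_normal(dim)
        x = x + drift * dt + noise
        t -= dt
    return x

# Toy case: if the data distribution is N(0, I), the score at every noise level is -x,
# and the sampler simply keeps producing standard gaussian samples.
sample = reverse_sde_sample(lambda x, t: -x, dim=8)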
Taking the perspective of the noisy channel model, we can understand the process as follows: To generate an image formula_12 conditional on description formula_143, we imagine that the requester really had in mind an image formula_12, but the image is passed through a noisy channel and came out garbled, as formula_143. Image generation is then nothing but inferring which formula_12 the requester had in mind. In other words, conditional image generation is simply "translating from a textual language into a pictorial language". Then, as in noisy-channel model, we use Bayes theorem to get formula_144 in other words, if we have a good model of the space of all images, and a good image-to-class translator, we get a class-to-image translator "for free". In the equation for backward diffusion, the score formula_145 can be replaced by formula_146 where formula_147 is the score function, trained as previously described, and formula_148 is found by using a differentiable image classifier. With temperature. The classifier-guided diffusion model samples from formula_142, which is concentrated around the maximum a posteriori estimate formula_149. If we want to force the model to move towards the maximum likelihood estimate formula_150, we can use formula_151 where formula_152 is interpretable as "inverse temperature". In the context of diffusion models, it is usually called the guidance scale. A high formula_153 would force the model to sample from a distribution concentrated around formula_150. This often improves quality of generated images. This can be done simply by SGLD with formula_154 Classifier-free guidance (CFG). If we do not have a classifier formula_155, we could still extract one out of the image model itself: formula_156 Such a model is usually trained by presenting it with both formula_157 and formula_158, allowing it to model both formula_159 and formula_160. Samplers. Given a diffusion model, one may regard it either as a continuous process, and sample from it by integrating a SDE, or one can regard it as a discrete process, and sample from it by iterating the discrete steps. The choice of the "noise schedule" formula_161 can also affect the quality of samples. In the DDPM perspective, one can use the DDPM itself (with noise), or DDIM (with adjustable amount of noise). The case where one adds noise is sometimes called ancestral sampling. One can interpolate between noise and no noise. The amount of noise is denoted formula_162 ("eta value") in the DDIM paper, with formula_163 denoting no noise (as in "deterministic" DDIM), and formula_164 denoting full noise (as in DDPM). In the perspective of SDE, one can use any of the numerical integration methods, such as Euler–Maruyama method, Heun's method, linear multistep methods, etc. Just as in the discrete case, one can add an adjustable amount of noise during the integration. A survey and comparison of samplers in the context of image generation is in. Other examples. Notable variants include Poisson flow generative model, consistency model, critically-damped Langevin diffusion, GenPhys, cold diffusion, etc. Flow-based diffusion model. Abstractly speaking, the idea of diffusion model is to take an unknown probability distribution (the distribution of natural-looking images), then progressively convert it to a known probability distribution (standard gaussian distribution), by building an absolutely continuous probability path connecting them. The probability path is in fact defined implicitly by the score function formula_165. 
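Returning briefly to guidance: in practice, classifier-free guidance as described above reduces to one extra line per denoising step, combining two estimates from the same network. A hedged sketch, where score_uncond and score_cond stand for the network queried without and with the conditioning; the names and the example scale are assumptions.

def guided_score(score_uncond, score_cond, guidance_scale):
    """Classifier-free guidance: (1 - beta) * grad log p(x) + beta * grad log p(x|y),
    with beta the guidance scale. beta = 1 recovers plain conditional sampling;
    beta > 1 pushes samples towards regions of high p(y|x)."""
    return (1.0 - guidance_scale) * score_uncond + guidance_scale * score_cond

# Usage inside a sampler step. Noise-prediction networks work the same way, since the
# predicted noise is proportional to the negative score:
#   eps = (1 - w) * eps_theta(x_t, t, cond=None) + w * eps_theta(x_t, t, cond=y)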
In denoising diffusion models, the forward process adds noise, and the backward process removes noise. Both the forward and backward processes are SDEs, though the forward process is integrable in closed form, so it can be done at no computational cost. The backward process is not integrable in closed form, so it must be integrated step-by-step by standard SDE solvers, which can be very expensive. The probability path in diffusion models is defined through an Itô process, and one can retrieve the deterministic process by using the probability flow ODE formulation. In flow-based diffusion models, the forward process is a deterministic flow along a time-dependent vector field, and the backward process is the same vector field, but going backwards. Both processes are solutions to ODEs. If the vector field is well-behaved, the ODE will also be well-behaved. Given two distributions formula_166 and formula_167, a flow-based model is a time-dependent velocity field formula_168 in formula_169, such that if we start by sampling a point formula_170 and let it move according to the velocity field: formula_171 we end up with a point formula_172. The solution formula_173 of the above ODE defines a probability path formula_174 by the pushforward measure operator. In particular, one has formula_175. The probability path and the velocity field also satisfy the continuity equation, in the sense of probability distribution: formula_176 To construct a probability path, we start by constructing a conditional probability path formula_177 and the corresponding conditional velocity field formula_178 on some conditional distribution formula_179. A natural choice is the Gaussian conditional probability path: formula_180 The conditional velocity field corresponding to the geodesic path between conditional Gaussian distributions is formula_181 The probability path and velocity field are then computed by marginalizing formula_182 Optimal Transport Flow. The idea of optimal transport flow is to construct a probability path minimizing the Wasserstein metric. The distribution on which we condition is the optimal transport plan between formula_183 and formula_184: formula_185 and formula_186, where formula_187 is the optimal transport plan, which can be approximated by mini-batch optimal transport. Rectified flow. The idea of rectified flow is to learn a flow model such that the velocity is nearly constant along each flow path. This is beneficial, because we can integrate along such a vector field with very few steps. For example, if an ODE formula_188 follows perfectly straight paths, it simplifies to formula_189, allowing for exact solutions in one step. In practice, we cannot reach such perfection, but when the flow field is nearly so, we can take a few large steps instead of many little steps. The general idea is to start with two distributions formula_166 and formula_167, construct a flow field formula_190 from them, and then repeatedly apply a "reflow" operation to obtain successive flow fields formula_191, each straighter than the previous one. When the flow field is straight enough for the application, we stop. Generally, for any time-differentiable process formula_173, formula_192 can be estimated by solving: formula_193 By injecting the strong prior that intermediate trajectories are straight, rectified flow achieves both theoretical relevance for optimal transport and computational efficiency, as ODEs with straight paths can be simulated precisely without time discretization.
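Sampling from such a flow-based model is just numerical integration of the ODE above. A hedged fixed-step Euler sketch follows; velocity_net is a placeholder for a learned field, and the toy field at the end is purely illustrative.

import numpy as np

def flow_sample(velocity_net, x0, n_steps=100):
    """Integrate dx/dt = v_t(x) from t = 0 to t = 1 with fixed-step Euler,
    starting from a sample x0 ~ pi_0."""
    x = np.array(x0, dtype=float)
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        x = x + dt * velocity_net(x, t)
    return x

# Toy velocity field that pulls any point towards the constant 1-vector;
# a field trained by flow matching would be used in practice.
toy_v = lambda x, t: np.ones_like(x) - x
x1 = flow_sample(toy_v, x0=np.zeros(4))

When the learned paths are nearly straight, which is exactly what rectified flow aims for, a handful of Euler steps, or even a single one, already gives accurate samples.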
Specifically, rectified flow seeks to match an ODE with the marginal distributions of the linear interpolation between points from distributions formula_166 and formula_167. Given observations formula_194 and formula_172, the canonical linear interpolation formula_195 yields a trivial case formula_196, which cannot be causally simulated without formula_197. To address this, formula_84 is "projected" into a space of causally simulatable ODEs, by minimizing the least squares loss with respect to the direction formula_198: formula_199 The data pair formula_200 can be any coupling of formula_166 and formula_167, typically independent (i.e., formula_201) obtained by randomly combining observations from formula_166 and formula_167. This process ensures that the trajectories closely mirror the density map of formula_84 trajectories but "reroute" at intersections to ensure causality. This rectifying process is also known as Flow Matching, Stochastic Interpolation, and Alpha-Blending. A distinctive aspect of rectified flow is its capability for "reflow", which straightens the trajectory of ODE paths. Denote the rectified flow formula_190 induced from formula_202 as formula_203. Recursively applying this formula_204 operator generates a series of rectified flows formula_205. This "reflow" process not only reduces transport costs but also straightens the paths of rectified flows, making formula_206 paths straighter with increasing formula_207. Rectified flow includes a nonlinear extension where linear interpolation formula_84 is replaced with any time-differentiable curve that connects formula_17 and formula_197, given by formula_208. This framework encompasses DDIM and probability flow ODEs as special cases, with particular choices of formula_209 and formula_161. However, in the case where the path of formula_84 is not straight, the reflow process no longer ensures a reduction in convex transport costs, and also no longer straighten the paths of formula_173. See for a tutorial on flow matching, with animations. Choice of architecture. Diffusion model. For generating images by DDPM, we need a neural network that takes a time formula_25 and a noisy image formula_84, and predicts a noise formula_66 from it. Since predicting the noise is the same as predicting the denoised image, then subtracting it from formula_84, denoising architectures tend to work well. For example, the U-Net, which was found to be good for denoising images, is often used for denoising diffusion models that generate images. For DDPM, the underlying architecture ("backbone") does not have to be a U-Net. It just has to predict the noise somehow. For example, the diffusion transformer (DiT) uses a Transformer to predict the mean and diagonal covariance of the noise, given the textual conditioning and the partially denoised image. It is the same as standard U-Net-based denoising diffusion model, with a Transformer replacing the U-Net. Mixture of experts-Transformer can also be applied. DDPM can be used to model general data distributions, not just natural-looking images. For example, Human Motion Diffusion models human motion trajectory by DDPM. Each human motion trajectory is a sequence of poses, represented by either joint rotations or positions. It uses a Transformer network to generate a less noisy trajectory out of a noisy one. Conditioning. The base diffusion model can only generate unconditionally from the whole distribution. 
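To make the "it just has to predict the noise somehow" point concrete, here is a hedged sketch of a toy unconditional backbone and one training step on the simplified noise-prediction objective from earlier (mean-squared error between predicted and added noise). The two-layer network, its sizes, the crude time conditioning and the schedule are placeholders, not a description of any published architecture.

import torch
import torch.nn as nn

class ToyEpsNet(nn.Module):
    """Toy stand-in for a U-Net/DiT: predicts the noise added to a flattened sample."""
    def __init__(self, dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, hidden), nn.SiLU(),
                                 nn.Linear(hidden, dim))
    def forward(self, x_t, t):
        # Crude time conditioning: append the normalized time as an extra feature.
        return self.net(torch.cat([x_t, t.unsqueeze(-1)], dim=-1))

T = 1000
betas = torch.linspace(1e-4, 0.02, T)               # assumed linear schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

model = ToyEpsNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(x0):                              # x0: batch of data, shape (B, 64)
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    ab = alpha_bars[t].unsqueeze(-1)
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps           # forward-noised sample
    loss = ((model(x_t, t.float() / T) - eps) ** 2).mean()   # simplified DDPM loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()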
For example, a diffusion model learned on ImageNet would generate images that look like a random image from ImageNet. To generate images from just one category, one would need to impose the condition. Whatever condition one wants to impose, one needs to first convert the conditioning into a vector of floating point numbers, then feed it into the underlying diffusion model neural network. However, one has freedom in choosing how to convert the conditioning into a vector. Stable Diffusion, for example, imposes conditioning in the form of a cross-attention mechanism, where the query is an intermediate representation of the image in the U-Net, and both key and value are the conditioning vectors. The conditioning can be selectively applied to only parts of an image, and new kinds of conditionings can be finetuned upon the base model, as used in ControlNet. As a particularly simple example, consider image inpainting. The conditions are formula_210, the reference image, and formula_211, the inpainting mask. The conditioning is imposed at each step of the backward diffusion process, by first sampling formula_212, a noisy version of formula_210, then replacing formula_84 with formula_213, where formula_214 means elementwise multiplication. Another application of the cross-attention mechanism is prompt-to-prompt image editing. Conditioning is not limited to generating images from a specific category, or according to a specific caption (as in text-to-image). For example, human motion generation has been demonstrated conditioned on an audio clip of a person walking (allowing motion to be synced to a soundtrack), on a video of a person running, or on a text description of human motion. Upscaling. As generating an image takes a long time, one can try to generate a small image by a base diffusion model, then upscale it by other models. Upscaling can be done by GAN, Transformer, or signal processing methods like Lanczos resampling. Diffusion models themselves can be used to perform upscaling. A cascading diffusion model stacks multiple diffusion models one after another, in the style of Progressive GAN. The lowest level is a standard diffusion model that generates a 32×32 image; the image is then upscaled by a diffusion model specifically trained for upscaling, and the process repeats. In more detail, the diffusion upscaler is trained on samples formula_215, where formula_216 is a low-resolution version of the high-resolution image and formula_217 is a conditioning signal such as a caption. During training, noises formula_218 and time steps formula_219 are sampled, both images are noised according to formula_220, and the network is trained to predict formula_221 given formula_222, i.e. to minimize formula_223. Examples. This section collects some notable diffusion models, and briefly describes their architecture. OpenAI. The DALL-E series by OpenAI are text-conditional diffusion models of images. The first version of DALL-E (2021) is not actually a diffusion model. Instead, it uses a Transformer architecture that generates a sequence of tokens, which is then converted to an image by the decoder of a discrete VAE. Released with DALL-E was the CLIP classifier, which was used by DALL-E to rank generated images according to how closely the image fits the text. GLIDE (2022-03) is a 3.5-billion-parameter diffusion model, and a small version was released publicly. Soon after, DALL-E 2 was released (2022-04). DALL-E 2 is a 3.5-billion-parameter cascaded diffusion model that generates images from text by "inverting the CLIP image encoder", a technique which they termed "unCLIP". Sora (2024-02) is a diffusion Transformer model (DiT). Stability AI. Stable Diffusion, released by Stability AI, consists of a denoising latent diffusion model (860 million parameters), a VAE, and a text encoder. The denoising network is a U-Net, with cross-attention blocks to allow for conditional image generation.
Stable Diffusion 3 changed the latent diffusion model from the UNet to a Transformer model, and so it is a DiT. It uses rectified flow. Stable Video 4D is a latent diffusion model for videos of 3D objects. Google. Imagen uses a T5 language model to encode the input text into an embedding vector. It is a cascaded diffusion model with three steps. The first step denoises a white noise to a 64×64 image, conditional on the embedding vector of the text. The second step upscales the image by 64×64→256×256, conditional on embedding. The third step is similar, upscaling by 256×256→1024×1024. The three denoising networks are all U-Nets. Imagen 2 is also diffusion-based. It can generate images based on a prompt that mixes images and text. No further information available. Veo generates videos by latent diffusion. The diffusion is conditioned on a vector that encodes both a text prompt and an image prompt. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "N(0, I)" }, { "math_id": 1, "text": "\\rho(x) \\propto e^{-\\frac 12 \\|x\\|^2}" }, { "math_id": 2, "text": "V(x) = \\frac 12 \\|x\\|^2" }, { "math_id": 3, "text": "\\beta_1, ..., \\beta_T \\in (0, 1)" }, { "math_id": 4, "text": "\\alpha_t := 1-\\beta_t" }, { "math_id": 5, "text": "\\bar \\alpha_t := \\alpha_1 \\cdots \\alpha_t" }, { "math_id": 6, "text": "\\tilde \\beta_t := \\frac{1-\\bar \\alpha_{t-1}}{1-\\bar \\alpha_{t}}\\beta_t" }, { "math_id": 7, "text": "\\tilde\\mu_t(x_t, x_0) :=\\frac{\\sqrt{\\alpha_{t}}(1-\\bar \\alpha_{t-1})x_t +\\sqrt{\\bar\\alpha_{t-1}}(1-\\alpha_{t})x_0}{1-\\bar\\alpha_{t}}" }, { "math_id": 8, "text": "N(\\mu, \\Sigma)" }, { "math_id": 9, "text": "\\mu" }, { "math_id": 10, "text": "\\Sigma" }, { "math_id": 11, "text": "N(x | \\mu, \\Sigma)" }, { "math_id": 12, "text": "x" }, { "math_id": 13, "text": "x_0 \\sim q" }, { "math_id": 14, "text": "q" }, { "math_id": 15, "text": "x_t = \\sqrt{1-\\beta_t} x_{t-1} + \\sqrt{\\beta_t} z_t" }, { "math_id": 16, "text": "z_1, ..., z_T" }, { "math_id": 17, "text": "x_0" }, { "math_id": 18, "text": "\\lim_t x_t|x_0" }, { "math_id": 19, "text": "q(x_{0:T}) = q(x_0)q(x_1|x_0) \\cdots q(x_T|x_{T-1}) = q(x_0) N(x_1 | \\sqrt{\\alpha_1} x_0, \\beta_1 I) \\cdots N(x_T | \\sqrt{\\alpha_T} x_{T-1}, \\beta_T I)" }, { "math_id": 20, "text": "\\ln q(x_{0:T}) = \\ln q(x_0) - \\sum_{t=1}^T \\frac{1}{2\\beta_t} \\| x_t - \\sqrt{1-\\beta_t}x_{t-1}\\|^2 + C" }, { "math_id": 21, "text": "C" }, { "math_id": 22, "text": "x_{1:T}|x_0" }, { "math_id": 23, "text": "x_{t}|x_0 \\sim N\\left(\\sqrt{\\bar\\alpha_t} x_{0}, (1-\\bar\\alpha_t) I \\right)" }, { "math_id": 24, "text": "x_{t-1} | x_t, x_0 \\sim N(\\tilde\\mu_t(x_t, x_0), \\tilde \\beta_t I)" }, { "math_id": 25, "text": "t" }, { "math_id": 26, "text": "x_{t}|x_0 \\sim N\\left(\\sqrt{\\bar\\alpha_t} x_{0}, (1-\\bar\\alpha_t) I \\right)" }, { "math_id": 27, "text": "x_T" }, { "math_id": 28, "text": "x_{t}|x_0" }, { "math_id": 29, "text": "x_1, x_2, ..., x_{t-1}" }, { "math_id": 30, "text": "x_{t-1}|x_0" }, { "math_id": 31, "text": "x_t|x_{t-1}" }, { "math_id": 32, "text": "x_{t-1} = \\sqrt{\\bar\\alpha_{t-1}} x_{0} + \\sqrt{1-\\bar\\alpha_{t-1}} z" }, { "math_id": 33, "text": "x_t = \\sqrt{\\alpha_t} x_{t-1} + \\sqrt{1-\\alpha_t} z'" }, { "math_id": 34, "text": "z, z'" }, { "math_id": 35, "text": "x_0, x_{t-1}, x_t, z, z'" }, { "math_id": 36, "text": "x_t = \\sqrt{\\bar \\alpha_t}x_0 + \\underbrace{\\sqrt{\\alpha_t - \\bar\\alpha_t}z + \\sqrt{1-\\alpha_t}z'}_{= \\sqrt{1-\\bar\\alpha_t} z''}" }, { "math_id": 37, "text": "z''" }, { "math_id": 38, "text": "\\begin{bmatrix}z'' \\\\z'''\\end{bmatrix} =\n \\begin{bmatrix} \\frac{\\sqrt{\\alpha_t - \\bar\\alpha_t}}{\\sqrt{1-\\bar\\alpha_t}} & \\frac{\\sqrt{\\beta_t}}{\\sqrt{1-\\bar\\alpha_t}} \\\\?&?\\end{bmatrix}\n \\begin{bmatrix} z\\\\z'\\end{bmatrix}" }, { "math_id": 39, "text": "\\begin{bmatrix} \\cos\\theta & \\sin\\theta\\\\ -\\sin\\theta & \\cos\\theta \\end{bmatrix}" }, { "math_id": 40, "text": "\\begin{bmatrix}z'' \\\\z'''\\end{bmatrix} =\n \\begin{bmatrix} \\frac{\\sqrt{\\alpha_t - \\bar\\alpha_t}}{\\sqrt{1-\\bar\\alpha_t}} & \\frac{\\sqrt{\\beta_t}}{\\sqrt{1-\\bar\\alpha_t}} \\\\- \\frac{\\sqrt{\\beta_t}}{\\sqrt{1-\\bar\\alpha_t}} & \\frac{\\sqrt{\\alpha_t - \\bar\\alpha_t}}{\\sqrt{1-\\bar\\alpha_t}} \n \\end{bmatrix}\n \\begin{bmatrix} z\\\\z'\\end{bmatrix}" }, { "math_id": 41, "text": "\\begin{bmatrix}z \\\\z'\\end{bmatrix} =\n \\begin{bmatrix} \\frac{\\sqrt{\\alpha_t - 
\\bar\\alpha_t}}{\\sqrt{1-\\bar\\alpha_t}} & -\\frac{\\sqrt{\\beta_t}}{\\sqrt{1-\\bar\\alpha_t}} \\\\ \\frac{\\sqrt{\\beta_t}}{\\sqrt{1-\\bar\\alpha_t}} & \\frac{\\sqrt{\\alpha_t - \\bar\\alpha_t}}{\\sqrt{1-\\bar\\alpha_t}} \n \\end{bmatrix}\n \\begin{bmatrix} z''\\\\z'''\\end{bmatrix}" }, { "math_id": 42, "text": "x_t = \\sqrt{\\bar\\alpha_t}x_0 + \\sqrt{1-\\bar\\alpha_t}z''" }, { "math_id": 43, "text": "x_{t-1} = \\tilde\\mu_t(x_t, x_0) - \\sqrt{\\tilde \\beta_t} z'''" }, { "math_id": 44, "text": "\\theta" }, { "math_id": 45, "text": "x_t, t" }, { "math_id": 46, "text": "\\mu_\\theta(x_t, t)" }, { "math_id": 47, "text": "\\Sigma_\\theta(x_t, t)" }, { "math_id": 48, "text": "x_{t-1} \\sim N(\\mu_\\theta(x_t, t), \\Sigma_\\theta(x_t, t))" }, { "math_id": 49, "text": "p_\\theta" }, { "math_id": 50, "text": "p_\\theta(x_T) = N(x_T | 0, I)" }, { "math_id": 51, "text": "p_\\theta(x_{t-1} | x_t) = N(x_{t-1} | \\mu_\\theta(x_t, t), \\Sigma_\\theta(x_t, t))" }, { "math_id": 52, "text": "p_\\theta(x_0)" }, { "math_id": 53, "text": "q(x_0)" }, { "math_id": 54, "text": "\\ln p_\\theta(x_0) \\geq E_{x_{1:T}\\sim q(\\cdot | x_0)}[ \\ln p_\\theta(x_{0:T}) - \\ln q(x_{1:T}|x_0)] " }, { "math_id": 55, "text": "E_{x_0 \\sim q}[\\ln p_\\theta(x_0)] \\geq E_{x_{0:T}\\sim q}[ \\ln p_\\theta(x_{0:T}) - \\ln q(x_{1:T}|x_0)] " }, { "math_id": 56, "text": "L(\\theta) := -E_{x_{0:T}\\sim q}[ \\ln p_\\theta(x_{0:T}) - \\ln q(x_{1:T}|x_0)]" }, { "math_id": 57, "text": "L(\\theta) = \\sum_{t=1}^T E_{x_{t-1}, x_t\\sim q}[-\\ln p_\\theta(x_{t-1} | x_t)] + E_{x_0 \\sim q}[D_{KL}(q(x_T|x_0) \\| p_\\theta(x_T))] + C" }, { "math_id": 58, "text": "p_\\theta(x_T) = N(x_T | 0, I)" }, { "math_id": 59, "text": "E_{x_0 \\sim q}[D_{KL}(q(x_T|x_0) \\| p_\\theta(x_T))]" }, { "math_id": 60, "text": "L(\\theta ) = \\sum_{t=1}^T L_t" }, { "math_id": 61, "text": "L_t = E_{x_{t-1}, x_t\\sim q}[-\\ln p_\\theta(x_{t-1} | x_t)]" }, { "math_id": 62, "text": "x_{t-1} | x_t, x_0 \\sim N(\\tilde\\mu_t(x_t, x_0), \\tilde \\beta_t I)" }, { "math_id": 63, "text": "\\mu_\\theta(x_t, t) = \\tilde \\mu_t(x_t, x_0)" }, { "math_id": 64, "text": "x_t = \\sqrt{\\bar\\alpha_t} x_{0} + \\sqrt{1-\\bar\\alpha_t} z" }, { "math_id": 65, "text": "z" }, { "math_id": 66, "text": "\\epsilon_\\theta(x_t, t)" }, { "math_id": 67, "text": "\\mu_\\theta(x_t, t) =\\tilde\\mu_t\\left(x_t, \\frac{x_t - \\sqrt{1-\\bar\\alpha_t} \\epsilon_\\theta(x_t, t)}{\\sqrt{\\bar\\alpha_t}}\\right) = \\frac{x_t - \\epsilon_\\theta(x_t, t) \\beta_t/\\sqrt{1-\\bar\\alpha_t}}{\\sqrt{\\alpha_t}}" }, { "math_id": 68, "text": "\\Sigma_\\theta(x_t, t) = \\sigma_t^2 I" }, { "math_id": 69, "text": "\\sigma_t^2 = \\beta_t \\text{ or } \\tilde \\beta_t" }, { "math_id": 70, "text": "L_t = \\frac{\\beta_t^2}{2\\alpha_t(1-\\bar\\alpha_t)\\sigma_t^2} E_{x_0\\sim q; z \\sim N(0, I)}\\left[ \\left\\| \\epsilon_\\theta(x_t, t) - z \\right\\|^2\\right] + C" }, { "math_id": 71, "text": "L_{simple, t} = E_{x_0\\sim q; z \\sim N(0, I)}\\left[ \\left\\| \\epsilon_\\theta(x_t, t) - z \\right\\|^2\\right]" }, { "math_id": 72, "text": "q(x)" }, { "math_id": 73, "text": "\\nabla_x \\ln q(x)" }, { "math_id": 74, "text": "\\tilde q(x) = Cq(x)" }, { "math_id": 75, "text": "C = \\int \\tilde q(x) dx > 0" }, { "math_id": 76, "text": "q(x + dx)" }, { "math_id": 77, "text": "\\frac{q(x)}{q(x+dx)} =e^{-\\langle \\nabla_x \\ln q, dx \\rangle}" }, { "math_id": 78, "text": "s(x) := \\nabla_x \\ln q(x)" }, { "math_id": 79, "text": "s(x)" }, { "math_id": 80, "text": "U(x) = -\\ln q(x)" }, { "math_id": 81, "text": "q_U(x) 
\\propto e^{-U(x)/k_B T} = q(x)^{1/k_BT}" }, { "math_id": 82, "text": "k_BT=1" }, { "math_id": 83, "text": "dx_{t}= -\\nabla_{x_t}U(x_t) d t+d W_t" }, { "math_id": 84, "text": "x_t" }, { "math_id": 85, "text": "t\\to \\infty" }, { "math_id": 86, "text": "f_\\theta \\approx \\nabla \\ln q" }, { "math_id": 87, "text": "E_q[\\|f_\\theta(x) - \\nabla \\ln q(x)\\|^2]" }, { "math_id": 88, "text": "E_q[\\|f_\\theta(x) - \\nabla \\ln q(x)\\|^2] = E_q[\\|f_\\theta\\|^2 + 2\\nabla^2\\cdot f_\\theta] + C" }, { "math_id": 89, "text": "x_0 \\sim N(0, I)" }, { "math_id": 90, "text": "q(x_0) \\approx 0" }, { "math_id": 91, "text": "\\nabla_{x_t}\\ln q(x_t)" }, { "math_id": 92, "text": "dx_{t}= \\nabla_{x_t}\\ln q(x_t) d t+d W_t" }, { "math_id": 93, "text": "\\beta_t \\to \\beta(t)dt, \\sqrt{dt}z_t \\to dW_t" }, { "math_id": 94, "text": "dx_t = -\\frac 12 \\beta(t) x_t dt + \\sqrt{\\beta(t)} dW_t" }, { "math_id": 95, "text": "W_t" }, { "math_id": 96, "text": "dx_t = -\\frac{D}{k_BT} (\\nabla_x U)dt + \\sqrt{2D}dW_t" }, { "math_id": 97, "text": "D" }, { "math_id": 98, "text": "T" }, { "math_id": 99, "text": "U" }, { "math_id": 100, "text": "D= \\frac 12 \\beta(t)I, k_BT = 1, U = \\frac 12 \\|x\\|^2" }, { "math_id": 101, "text": "t=0" }, { "math_id": 102, "text": "\\rho_t" }, { "math_id": 103, "text": "\\rho_0 = q; \\quad \\rho_T \\approx N(0, I)" }, { "math_id": 104, "text": "\\partial_t \\ln \\rho_t = \\frac 12 \\beta(t) \\left(\nn + (x+ \\nabla\\ln\\rho_t) \\cdot \\nabla \\ln\\rho_t + \\Delta\\ln\\rho_t\n\\right)" }, { "math_id": 105, "text": "n" }, { "math_id": 106, "text": "\\Delta" }, { "math_id": 107, "text": "t\\in [0, T]" }, { "math_id": 108, "text": "\\nu_0 = \\rho_T" }, { "math_id": 109, "text": "dy_t = \\frac{1}{2} \\beta(T-t) y_{t} d t + \\beta(T-t) \\underbrace{\\nabla_{y_{t}} \\ln \\rho_{T-t}\\left(y_{t}\\right)}_{\\text {score function }} d t+\\sqrt{\\beta(T-t)} d W_t" }, { "math_id": 110, "text": "\\partial_t \\rho_{T-t} = \\partial_t \\nu_t" }, { "math_id": 111, "text": "\\bar \\alpha_t = (1-\\beta_1) \\cdots (1-\\beta_t) = e^{\\sum_i \\ln(1-\\beta_i)} \\to e^{-\\int_0^t \\beta(t)dt} " }, { "math_id": 112, "text": "x_{t}|x_0 \\sim N\\left(e^{-\\frac 12\\int_0^t \\beta(t)dt} x_{0}, \\left(1- e^{-\\int_0^t \\beta(t)dt}\\right) I \\right)" }, { "math_id": 113, "text": "x_0 \\sim q, z \\sim N(0, I)" }, { "math_id": 114, "text": "x_t = e^{-\\frac 12\\int_0^t \\beta(t)dt} x_{0} + \\left(1- e^{-\\int_0^t \\beta(t)dt}\\right) z" }, { "math_id": 115, "text": "x_t \\sim \\rho_t" }, { "math_id": 116, "text": "t \\geq 0" }, { "math_id": 117, "text": "\\gamma" }, { "math_id": 118, "text": "[0, \\infty)" }, { "math_id": 119, "text": "L(\\theta) = E_{t\\sim \\gamma, x_t \\sim \\rho_t}[\\|f_\\theta(x_t, t)\\|^2 + 2\\nabla\\cdot f_\\theta(x_t, t)]" }, { "math_id": 120, "text": "f_\\theta(x_t, t) \\approx \\nabla \\ln\\rho_t" }, { "math_id": 121, "text": "x_T \\sim N(0, I)" }, { "math_id": 122, "text": "t=T" }, { "math_id": 123, "text": "x_{t-dt}=x_t + \\frac{1}{2} \\beta(t) x_{t} d t + \\beta(t) f_\\theta(x_t, t) d t+\\sqrt{\\beta(t)} d W_t" }, { "math_id": 124, "text": "f_\\theta" }, { "math_id": 125, "text": "\\nabla\\ln\\rho_t" }, { "math_id": 126, "text": "\\rho_0" }, { "math_id": 127, "text": "x_{t}|x_0 \\sim N\\left(\\sqrt{\\bar\\alpha_t} x_{0}, (1-\\bar\\alpha_t) I\\right)" }, { "math_id": 128, "text": "\\nabla_{x_t}\\ln q(x_t) = \\frac{1}{1-\\bar\\alpha_t}(-x_t + \\sqrt{\\bar\\alpha_t} E_q[x_0|x_t])" }, { "math_id": 129, "text": "\\sum_t L_{simple, t}" }, { "math_id": 130, "text": "x_t 
=\\sqrt{\\bar\\alpha_t} x_{0} + \\sqrt{1-\\bar\\alpha_t}z\n " }, { "math_id": 131, "text": "L_{simple, t} = E_{x_0, x_t\\sim q}\\left[ \\left\\| \\epsilon_\\theta(x_t, t) - \n\\frac{x_t -\\sqrt{\\bar\\alpha_t} x_{0}}{\\sqrt{1-\\bar\\alpha_t}} \\right\\|^2\\right] = E_{x_t\\sim q, x_0\\sim q(\\cdot | x_t)}\\left[ \\left\\| \\epsilon_\\theta(x_t, t) - \n\\frac{x_t -\\sqrt{\\bar\\alpha_t} x_{0}}{\\sqrt{1-\\bar\\alpha_t}} \\right\\|^2\\right]" }, { "math_id": 132, "text": "\\epsilon_\\theta(x_t, t) = \\frac{x_t -\\sqrt{\\bar\\alpha_t} E_q[x_0|x_t]}{\\sqrt{1-\\bar\\alpha_t}} = -\\sqrt{1-\\bar\\alpha_t}\\nabla_{x_t}\\ln q(x_t)" }, { "math_id": 133, "text": "x_{t-1} = x_{t-dt}, \\beta_t = \\beta(t) dt, z_t\\sqrt{dt} = dW_t" }, { "math_id": 134, "text": "x_{t-1} = \\frac{x_t}{\\sqrt{\\alpha_t}}- \\frac{ \\beta_t}{\\sqrt{\\alpha_t (1-\\bar\\alpha_t)}} \\epsilon_\\theta(x_t, t) + \\sqrt{\\beta_t} z_t; \\quad z_t \\sim N(0, I)" }, { "math_id": 135, "text": "x_{t-dt} = x_t(1+\\beta(t)dt / 2) + \\beta(t) \\nabla_{x_t}\\ln q(x_t) dt + \\sqrt{\\beta(t)}dW_t" }, { "math_id": 136, "text": "T \\sim 1000" }, { "math_id": 137, "text": "x_t | x_0" }, { "math_id": 138, "text": "t \\geq 1" }, { "math_id": 139, "text": "x_{t-2}|x_{t-1} \\sim N(\\mu_\\theta(x_{t-1}, t-1), \\Sigma_\\theta(x_{t-1}, t-1))" }, { "math_id": 140, "text": "x_{t-1}" }, { "math_id": 141, "text": "x_{t-2}|x_t" }, { "math_id": 142, "text": "p(x|y)" }, { "math_id": 143, "text": "y" }, { "math_id": 144, "text": "p(x|y) \\propto p(y|x)p(x) " }, { "math_id": 145, "text": "\\nabla \\ln p(x) " }, { "math_id": 146, "text": "\\nabla_x \\ln p(x|y) = \\nabla_x \\ln p(y|x) + \\nabla_x \\ln p(x) " }, { "math_id": 147, "text": "\\nabla_x \\ln p(x)" }, { "math_id": 148, "text": "\\nabla_x \\ln p(y|x)" }, { "math_id": 149, "text": "\\arg\\max_x p(x|y)" }, { "math_id": 150, "text": "\\arg\\max_x p(y|x)" }, { "math_id": 151, "text": "p_\\beta(x|y) \\propto p(y|x)^\\beta p(x) " }, { "math_id": 152, "text": "\\beta > 0 " }, { "math_id": 153, "text": "\\beta " }, { "math_id": 154, "text": "\\nabla_x \\ln p_\\beta(x|y) = \\beta\\nabla_x \\ln p(y|x) + \\nabla_x \\ln p(x) " }, { "math_id": 155, "text": "p(y|x)" }, { "math_id": 156, "text": "\\nabla_x \\ln p_\\beta(x|y) = (1-\\beta) \\nabla_x \\ln p(x) + \\beta \\nabla_x \\ln p(x|y) " }, { "math_id": 157, "text": "(x, y) " }, { "math_id": 158, "text": "(x, {\\rm None}) " }, { "math_id": 159, "text": "\\nabla_x\\ln p(x|y) " }, { "math_id": 160, "text": "\\nabla_x\\ln p(x) " }, { "math_id": 161, "text": "\\beta_t" }, { "math_id": 162, "text": "\\eta" }, { "math_id": 163, "text": "\\eta = 0" }, { "math_id": 164, "text": "\\eta = 1" }, { "math_id": 165, "text": "\\nabla \\ln p_t " }, { "math_id": 166, "text": "\\pi_0" }, { "math_id": 167, "text": "\\pi_1" }, { "math_id": 168, "text": "v_t(x)" }, { "math_id": 169, "text": "[0,1] \\times \\mathbb R^d " }, { "math_id": 170, "text": "x \\sim \\pi_0" }, { "math_id": 171, "text": "\\frac{d}{dt} \\phi_t(x) = v_t(\\phi_t(x)) \\quad t \\in [0,1], \\quad \\text{starting from }\\phi_0(x) = x" }, { "math_id": 172, "text": "x_1 \\sim \\pi_1" }, { "math_id": 173, "text": "\\phi_t" }, { "math_id": 174, "text": "p_t = [\\phi_t]_{\\#} \\pi_0 " }, { "math_id": 175, "text": "[\\phi_1]_{\\#} \\pi_0 = \\pi_1" }, { "math_id": 176, "text": "\\partial_t p_t + \\mathrm{div}(v_t \\cdot p_t) = 0" }, { "math_id": 177, "text": "p_t(x \\vert z)" }, { "math_id": 178, "text": "v_t(x \\vert z)" }, { "math_id": 179, "text": "q(z)" }, { "math_id": 180, "text": "p_t(x \\vert z) = \\mathcal{N} \\left( 
m_t(z), \\sigma_t^2 I \\right) " }, { "math_id": 181, "text": "v_t(x \\vert z) = \\frac{\\sigma_t'}{\\sigma_t} (x - m_t(z)) + m_t'(z)" }, { "math_id": 182, "text": "p_t(x) = \\int p_t(x \\vert z) q(z) dz \\qquad \\text{ and } \\qquad v_t(x) = \\mathbb{E}_{q(z)} \\left[\\frac{v_t(x \\vert z) p_t(x \\vert z)}{p_t(x)} \\right]" }, { "math_id": 183, "text": "\\pi_0 " }, { "math_id": 184, "text": "\\pi_1\n " }, { "math_id": 185, "text": "z = (x_0, x_1) " }, { "math_id": 186, "text": "q(z) = \\Gamma(\\pi_0, \\pi_1) " }, { "math_id": 187, "text": "\\Gamma" }, { "math_id": 188, "text": "\\dot{\\phi_t}(x) = v_t(\\phi_t(x))" }, { "math_id": 189, "text": "\\phi_t(x) = x_0 + t \\cdot v_0(x_0)" }, { "math_id": 190, "text": "\\phi^0 = \\{\\phi_t: t\\in[0,1]\\}" }, { "math_id": 191, "text": "\\phi^1, \\phi^2, \\dots" }, { "math_id": 192, "text": "v_t" }, { "math_id": 193, "text": "\\min_{\\theta} \\int_0^1 \\mathbb{E}_{x \\sim p_t(x}\\left [\\lVert{v_t(x, \\theta) - v_t(x)}\\rVert^2\\right] \\,\\mathrm{d}t." }, { "math_id": 194, "text": "x_0 \\sim \\pi_0" }, { "math_id": 195, "text": "x_t= t x_1 + (1-t)x_0, t\\in [0,1]" }, { "math_id": 196, "text": "\\dot{x}_t = x_1 - x_0" }, { "math_id": 197, "text": "x_1" }, { "math_id": 198, "text": "x_1 - x_0" }, { "math_id": 199, "text": "\\min_{\\theta} \\int_0^1 \\mathbb{E}_{\\pi_0, \\pi_1, p_t}\\left [\\lVert{(x_1-x_0) - v_t(x_t)}\\rVert^2\\right] \\,\\mathrm{d}t." }, { "math_id": 200, "text": "(x_0, x_1)" }, { "math_id": 201, "text": "(x_0,x_1) \\sim \\pi_0 \\times \\pi_1" }, { "math_id": 202, "text": "(x_0,x_1)" }, { "math_id": 203, "text": "\\phi^0 = \\mathsf{Rectflow}((x_0,x_1))" }, { "math_id": 204, "text": "\\mathsf{Rectflow}(\\cdot)" }, { "math_id": 205, "text": "\\phi^{k+1} = \\mathsf{Rectflow}((\\phi_0^k(x_0), \\phi_1^k(x_1)))" }, { "math_id": 206, "text": "\\phi^k" }, { "math_id": 207, "text": "k" }, { "math_id": 208, "text": "x_t = \\alpha_t x_1 + \\beta_t x_0" }, { "math_id": 209, "text": "\\alpha_t" }, { "math_id": 210, "text": "\\tilde x" }, { "math_id": 211, "text": "m" }, { "math_id": 212, "text": "\\tilde x_t \\sim N\\left(\\sqrt{\\bar\\alpha_t} \\tilde x, (1-\\bar\\alpha_t) I \\right)" }, { "math_id": 213, "text": "(1-m) \\odot x_t + m \\odot \\tilde x_t" }, { "math_id": 214, "text": "\\odot" }, { "math_id": 215, "text": "(x_0, z_0, c)" }, { "math_id": 216, "text": "z_0" }, { "math_id": 217, "text": "c" }, { "math_id": 218, "text": "\\epsilon_x, \\epsilon_z" }, { "math_id": 219, "text": "t_x, t_z" }, { "math_id": 220, "text": "\\begin{cases}\nx_{t_x} &= \\sqrt{\\bar\\alpha_{t_x}} x_0 + \\sqrt{1-\\bar\\alpha_{t_x}} \\epsilon_x\\\\\nz_{t_z} &= \\sqrt{\\bar\\alpha_{t_z}} z_0 + \\sqrt{1-\\bar\\alpha_{t_z}} \\epsilon_z\n\\end{cases}" }, { "math_id": 221, "text": "\\epsilon_x" }, { "math_id": 222, "text": "x_{t_x}, z_{t_z}, t_x, t_z, c" }, { "math_id": 223, "text": "\\| \\epsilon_\\theta(x_{t_x}, z_{t_z}, t_x, t_z, c) - \\epsilon_x \\|_2^2" } ]
https://en.wikipedia.org/wiki?curid=71912239
71916539
Functional Decision Theory
Decision theory Functional Decision Theory (FDT) is a school of thought within decision theory which states that, when a rational agent is confronted with a set of possible actions, one should select the decision procedure (a "fixed mathematical decision function", as opposed to a singular act) that leads to the best output. It aims to provide a more reliable method to maximize utility — the measure of how much an outcome satisfies an agent's preference — than the more prominent decision theories, Causal Decision Theory (CDT) and Evidential Decision Theory (EDT). In general, CDT states that the agent should consider the causal effects of their actions to maximize the utility; in other words, it prescribes to act in the way that will produce the best consequences given the situation at hand. EDT states that the agent should look at how likely certain outcomes are "given" their actions and observations (regardless of causality); in other words, it advises an agent to ‘do what you most want to learn that you will do.’ Many proponents of FDT argue that, since there are some scenarios in which either CDT, EDT or both do not prescribe the "most rational" choice, both theories are incorrect. Background. FDT was first proposed by Eliezer Yudkowsky and Nate Soares in a 2017 research paper supported by the Machine Intelligence Research Institute (MIRI). Prior to this publication, Yudkowsky had proposed another, albeit similar, decision theory, which he named Timeless Decision Theory (TDT). Roughly speaking, Timeless Decision Theory states that, rather than acting like you are determining an individual decision, you should act as if you are determining the output of an abstract computation. The original paper and the idea behind TDT was seen as a work in progress, which found much criticism due to its vagueness. Broadly, FDT can be seen as a replacement of TDT, and a generalization of Wei Dai's Updateless Decision Theory (UDT). Informal description. Informally, Functional Decision Theory recommends the agent to select her decision procedure that produces the best outcome. It claims that the agent possesses a model of her decision procedures (which a reliable predictor must also know to high certainty), and which she can alter accordingly. As a visual example, when you type "2 + 2" on a calculator, and receive the answer of "4", you conclude that 2 + 2 = 4 because the calculator runs the same function. Similarly, a predictor, such as that in Newcomblike problems, also runs the same decision function of the agent in order to predict her actions. Philosophical Thought Experiments. FDT outperforms both CDT and EDT. Parfit's Hitchhiker. This problem shows a scenario in which FDT outperforms both CDT and EDT simultaneously. It states that: “An agent is dying in the desert. A driver comes along who offers to give the agent a ride into the city, but only if the agent will agree to visit an ATM once they arrive and give the driver $1,000. The driver will have no way to enforce this after they arrive, but he does have an extraordinary ability to detect lies with 99% accuracy. Being left to die causes the agent to lose the equivalent of $1,000,000. In the case where the agent gets to the city, should she proceed to visit the ATM and pay the driver?” The CDT agent says no. Given that she has safely arrived in the city, she sees nothing further to gain by paying the driver. The EDT agent agrees: on the assumption that she is already in the city, it would be bad news for her to learn that she was out $1,000. 
Assuming that the CDT and EDT agents are smart enough to know what they would do upon arriving in the city, this means that neither can honestly claim that they would pay. The driver, detecting the lie, leaves them in the desert to die. The prescriptions of CDT and EDT here run contrary to many people’s intuitions, which say that the most “rational” course of action is to pay upon reaching the city. Certainly if these agents had the opportunity to make binding pre-commitments to pay upon arriving, they would achieve better outcomes. The FDT agent reasons the driver models her reasoning in order to detect her lies. Therefore, she does pay up, even though she knows she is out of the desert already. While it might seem irrational to pay even though one is already outside of the desert, it is convenient to be the kind of agent that pays up in these kind of scenarios — because it means you, while still in the desert, can honestly claim to pay up once you’re in the city, and therefore it means the driver will take you. Blackmail. The following dilemma, as stated by Yudkowsky:A blackmailer has a nasty piece of information which incriminates both the blackmailer and the agent. She has written a computer program which, if run, will publish it on the internet, costing $1,000,000 in damages to both of them. If the program is run, the only way it can be stopped is for the agent to wire the blackmailer $1,000 within 24 hours—the blackmailer will not be able to stop the program once it is running. The blackmailer would like the $1,000, but doesn’t want to risk incriminating herself, so she only runs the program if she is quite sure that the agent will pay up. She is also a perfect predictor of the agent, and she runs the program (which, when run, automatically notifies her via a blackmail letter) if she predicts that she would pay upon receiving the blackmail. Imagine that the agent receives the blackmail letter. Should she wire $1,000 to the blackmailer?While CDT and EDT would both pay the blackmailer, the FDT agent reasons, “Paying corresponds to a world where I lose $1,000; refusing corresponds to a world where I never get blackmailed (as the blackmailer would have predicted this). The latter looks better, so I refuse.” As such, she never gets blackmailed — her counterfactual reasoning is proven correct, according to Yudkowsky. FDT and EDT outperform both CDT. Newcomb's Paradox. In Newcomb's Paradox, an agent finds herself standing in front of a transparent box labeled “A” that contains $1,000, and an opaque box labeled “B” that contains either $1,000,000 or $0. A reliable predictor, who has made similar predictions in the past and has been correct 99% of the time, claims to have placed $1,000,000 in box B if she predicted that the agent would leave box A behind. The predictor has already made her prediction and left. Box B is now empty or full. Should the agent take both boxes (“two-boxing”), or only box B, leaving the transparent box containing $1,000 behind (“one-boxing”)? An agent using CDT argues that at the moment she is making the decision to one-box or two-box, the predictor has already either put a million dollars or nothing in box B. Her own decision now can't change the predictor's earlier decision; she can't cause the past to be different. Furthermore, no matter what the content of box B actually is, two-boxing gives an extra thousand dollars. The CDT agent therefore two-boxes. 
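The arithmetic driving the disagreement is easy to spell out. The sketch below computes expected payoffs under the stated 99% predictor accuracy, conditioning on the agent's choice in the way the evidential reasoning in the next paragraph does; it is an illustration, not part of the original formulation.

ACCURACY = 0.99          # predictor's historical accuracy, as stated in the problem
SMALL, BIG = 1_000, 1_000_000

def expected_payoff(choice):
    """Expected payoff if the predictor predicts the agent's actual choice
    with probability ACCURACY (the evidential, conditional reading)."""
    if choice == "one-box":
        # Box B is full exactly when the predictor foresaw one-boxing.
        return ACCURACY * BIG + (1 - ACCURACY) * 0
    else:  # "two-box"
        # Box B is empty exactly when the predictor foresaw two-boxing.
        return ACCURACY * SMALL + (1 - ACCURACY) * (BIG + SMALL)

print(expected_payoff("one-box"))   # 990000.0
print(expected_payoff("two-box"))   # 11000.0

CDT rejects this conditioning because the prediction is already fixed, while FDT reaches the one-boxing answer by a different route, as described below.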
In contrast, an EDT agent argues as follows: "If I two-box, the predictor will almost certainly have predicted this. Future-me two-boxing would therefore be strong evidence that box B is empty. If I one-box, the predictor will almost certainly have predicted that too — which is why future-me one-boxing would be strong evidence of box B containing a million dollars." Following this line of reasoning, the EDT agent, in contrast to the CDT agent, one-boxes. In the case of an FDT agent, she reasons that the predictor must have a model of her decision process. Therefore, it would be best if the FDT agent's decision procedure led to her one-boxing, because then the predictor's model of that decision procedure would also output one-boxing, leading him to predict that the FDT agent will one-box and to put a million dollars in box B. Then, since FDT and EDT agents both one-box, they receive a million dollars, outperforming the CDT agent, who obtains only $1,000 by two-boxing. In general, a Newcomb problem illustrates choice situations in which an agent's decision is strongly correlated with a prediction of that decision, even though the decision cannot causally influence the prediction, which has already been made. Psychological Twin Prisoner's Dilemma. In this variant of the Prisoner's Dilemma, an agent and her twin must both choose to either "cooperate" or "defect." If both cooperate, they each receive $1,000,000. If both defect, they each receive $1,000. If one cooperates and the other defects, the defector gets $1,001,000 and the cooperator gets nothing. The agent and the twin know that they reason the same way, using the same considerations to come to their conclusions. However, their decisions are causally independent, made in separate rooms without communication. Should the agent cooperate with her twin? A CDT agent would defect, as she would argue that no matter what action her twin takes, she wins an extra thousand dollars by defecting. She and her twin both reason in this way, and thus they both walk away with $1,000. EDT would prescribe cooperation, on the grounds that it would be good news to learn that one had cooperated, as it would provide evidence that the twin also cooperated. An FDT agent would cooperate, reasoning that since she and her twin follow the same course of reasoning, if that reasoning concludes that cooperation is better, then both cooperate and obtain $1,000,000, and if it concludes that defection is better, then both defect and obtain a mere $1,000. Since the former is preferable, and since both twins have the same decision procedure, the course of reasoning concludes with cooperation. Death in Damascus. In this problem, an agent encounters Death in Damascus and is told that Death is coming for her tomorrow, as it is written in his appointment book — including the location of the event. The agent knows that deciding to flee to Aleppo (at a cost of $1,000) means that Death will be in Aleppo tomorrow, whereas staying in Damascus means that Death will be in Damascus tomorrow. Should she stay, or flee? FDT suggests staying: no matter what the agent decides, Death will be waiting for her, since the appointment book already reflects her decision procedure, so it is better to save the extra $1,000. CDT, by contrast, is rendered unstable, because it bases the agent's decision on the hypothesis that Death's location is independent of her own action (as it was already written in the book). FDT and CDT outperform both EDT. Smoking Lesion Problem. While in Newcomb's Paradox, FDT and EDT outperform CDT, in the Smoking Lesion Problem it is claimed that FDT and CDT outperform EDT.
Consider a hypothetical world where smoking is strongly correlated with lung cancer, but only because there is a common cause – a condition that tends to cause both smoking and cancer. Once we fix the presence or absence of this condition, there is no additional correlation between smoking and cancer. If Susan prefers smoking without cancer to not smoking without cancer, and prefers smoking with cancer to not smoking with cancer, should Susan smoke? EDT tells Susan not to smoke, because it treats the fact that her smoking is evidence that she has the lesion, and therefore is evidence that she is likely to get cancer, as a reason not to smoke. Causal decision theory tells her to smoke, as it does not treat the connection between an action and a bad outcome as a reason not to perform the action, rather it considers that smoking has no causal effect on whether or not one gets cancer. In the case of FDT, whether or not the cancer metastasizes does not depend upon the output of the FDT procedure since there exists no dependence of smoking and cancer, therefore FDT recommends smoking. Since smoking provides more utility to Susan regardless of whether she has cancer or not - something she cannot control - it is viewed that the "correct" answer is to smoke. Nonetheless, there has been discussion whether this truly is the correct approach. Formal Description. Yudkowsky formalizes Functional Decision Theory per the following formula: formula_0 Criticism. Yudkowsky and Soares assume that an FDT agent is certain that she follows FDT, and this knowledge is held fixed under all counterfactual suppositions. Moreover, decision theorists do not agree on a "correct" or "rational" solution to all of the problems that Yudkowsky and Soares claim that FDT solves. In fact, many would suggest that FDT provides insane recommendations in certain cases, as detailed by Wolfgang Schwarz:Suppose you have committed an indiscretion that would ruin you if it should become public. You can escape the ruin by paying $1 once to a blackmailer. Of course you should pay! FDT says you should not pay because, if you were the kind of person who doesn't pay, you likely wouldn't have been blackmailed. How is that even relevant? You "are" being blackmailed. Not being blackmailed isn't on the table. It's not something you can choose.Similarly, in the variant of Newcomb's Problem where you already know the contents of the million dollar box:Suppose the you see $1000 in the left box and a million in the right box. If you were to take both boxes, you would get a million and a thousand. If you were to take just the right box, you would get a million. So Causal Decision Theory says you should take [both] boxes. However, you follow FDT, and you are certain that you do. If FDT recommended two-boxing, then any FDT agent throughout history would two-box. And, crucially, the predictor would (probably) have foreseen that you would two-box, so she would have put nothing into the box on the right. As a result, if FDT recommended two-boxing, you would probably end up with $1000. To be sure, you know that there's a million in the box on the right. You can see it. But according to FDT, this is irrelevant. What matters is what "would" be in the box relative to different assumptions about what FDT recommends. Therefore, FDT recommends to one-box despite the fact that you gain $1000 less.In general, criticism of Functional Decision Theory can be summarized in the following points of argument. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\operatorname{FDT}(P, G, x) := \\operatorname{arg\\,max} \\operatorname{E}[V | \\operatorname{do}(\\operatorname{FDT}(P, G, x) = a)]" } ]
https://en.wikipedia.org/wiki?curid=71916539
71917
Natural Color System
Proprietary perceptual color model The Natural Color System (NCS) is a proprietary perceptual color model. It is based on the color opponency hypothesis of color vision, first proposed by German physiologist Ewald Hering. The current version of the NCS was developed by the Swedish Colour Centre Foundation, from 1964 onwards. The research team consisted of Anders Hård, Lars Sivik and Gunnar Tonnquist, who in 1997 received the AIC Judd award for their work. The system is based entirely on the phenomenology of human perception and not on color mixing. It is illustrated by a color atlas, marketed by NCS Colour AB in Stockholm. Definition. The NCS states that there are six elementary color percepts of human vision—which might coincide with the psychological primaries—as proposed by the hypothesis of color opponency: white, black, red, yellow, green, and blue. The last four are also called unique hues. In the NCS all six are defined as elementary colors, irreducible qualia, each of which would be impossible to define in terms of the other elementary colors. All other experienced colors are considered composite perceptions, i.e. experiences that can be defined in terms of similarity to the six elementary colors. E.g. a saturated pink would be fully defined by its visual similarity to red, blue, black and white. Colors in the NCS are defined by three values, expressed in percentages, specifying the degree of blackness ("s", = relative visual similarity to the black elementary color), chromaticness ("c", = relative visual similarity to the "strongest", most saturated, color in that hue triangle), and hue ("Φ", = relative similarity to one or two of the chromatic elementary colors red, yellow, green and blue, expressed in at most two percentages). This means that a color can be expressed as either Y (yellow), YR (yellow with a red component), R (red), RB (red with a blue component), B (blue), etc. No hue is considered to have visual similarity to both hues of an opponent pair; i.e. there is no "redgreen" or "yellowblue". The blackness and the chromaticness together add up to less than or equal to 100%. The remainder from 100%, if any, gives the amount of whiteness ("w"). Achromatic colors, i.e. colors that lack chromatic contents (ranging from black, to grey and finally white), have their hue component replaced with a capital "N", for example   "NCS S 9000-N" (a more or less complete black). NCS color notations are sometimes prepended by a capital "S", which denotes that the current version of the NCS color standard was used to specify the color. In summary, the NCS color notation for   S 2030-Y90R (light, pinkish red) is described as follows. formula_0 with formula_1 Saturation and lightness. In addition to the above values "s" (blackness), "w" (whiteness), "c" (chromaticness) and "Φ" (hue), the NCS system can also describe the two perceptual quantities saturation and lightness. NCS saturation ("m") refers to a color's relation between its chromaticness and whiteness (regardless of hue), defined as the ratio between the chromaticness and the sum of its whiteness and chromaticness formula_2. The NCS saturation ranges between 0 and 1. For the example color of S 2030-Y90R, the saturation is calculated as formula_3 NCS lightness ("v") is a color's perceptual characteristic to contain more of the achromatic elementary colors black or white than another color. NCS lightness values varies from 0 for the elementary color black (S) to 1 for the elementary color white (W). 
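Before lightness is described in more detail, the nuance arithmetic above is simple enough to make concrete. A minimal hedged sketch follows; the string parsing is an illustrative assumption, while the relations w = 100 - s - c and m = c/(100 - s) are those given above.

def ncs_nuance(notation):
    """Compute whiteness and saturation from an NCS nuance such as "2030-Y90R".
    The first two digits of the nuance are blackness s, the next two chromaticness c."""
    nuance = notation.split("-")[0]
    s, c = int(nuance[:2]), int(nuance[2:4])
    w = 100 - s - c                 # whiteness
    m = c / (100 - s)               # NCS saturation m = c / (w + c)
    return {"blackness": s, "chromaticness": c, "whiteness": w, "saturation": m}

print(ncs_nuance("2030-Y90R"))
# {'blackness': 20, 'chromaticness': 30, 'whiteness': 50, 'saturation': 0.375}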
For achromatic colors, that is any black, gray or white with no chromatic component ("c" = 0), lightness is defined as formula_4 For chromatic colors, the NCS lightness is determined by comparing the chromatic color to a reference scale of achromatic colors ("c" = 0), and is determined to have the same lightness value "v" as the sample on the reference scale to which it has the least noticeable edge-to-edge difference. Examples. Two examples of NCS color notation—the yellow and blue shades of the Swedish flag: The NCS is represented in nineteen countries and is the reference norm for color designation in Sweden (since 1979), Norway (since 1984), Spain (since 1994) and South Africa (since 2004). It is also one of the standards used by the International Colour Authority, a leading publisher of color trend forecasts for the interior design and textile markets. NCS 1950 Standard Colors. In order to be able to manufacture physical representations of the NCS color space (such as color atlases), a reduced set of colors had to be selected that would illustrate the system well. Originally developed in 1979 as part of becoming the Swedish national color standard by the SIS (Swedish Standards Institute), the Natural Color System was described in an atlas containing 1412 colors. In 1984, an additional 118 colors were added for a total of 1530 colors. Eleven years later, in 1995, a second edition of the NCS Color Samples was released containing 1750 standard colors. In 2004, 200 more colors (184 light colors and 16 in the blue-green space) were added, resulting in the NCS 1950 standard colors. Colors that have a representation in the NCS 1950 samples are denoted with a leading capital "S", for example   NCS S 1070-Y10R (a chromatic, slightly reddish yellow). Comparisons to other color systems. The most important difference between NCS and most other color systems resides in their starting points. The aim of NCS is to define colors from their visual appearance, as they are experienced by human consciousness. Other color models, such as CMYK and RGB, are based on an understanding of physical processes, how colors can be achieved or "made" in different media. The underlying physiological mechanisms involved in color opponency include the bipolar and ganglion cells in the retina, which process the signal originated by the retinal cones before it is sent to the brain. Models like RGB are based on what happens at the lower, retinal cone level, and thus are fitted for presenting self-illuminated, dynamic images as done by TV sets and computer displays; see additive color. The NCS model, for its part, describes the organization of the color sensations as perceived at the upper, brain level, and thus is much better fitted than RGB to deal with how humans experience and describe their color sensations (hence the "natural" part of its name). More problematic is the relation with the CMYK-model which is generally seen as a correct prediction of the behavior of mixing pigments, as a system of subtractive color. The NCS coincides with the CMYK as regards the green-yellow-red segment of the color circle, but differs from it in seeing the saturated subtractive primary colors magenta and cyan as complex sensations of a "redblue" and a "greenblue" respectively and in seeing green, not as a secondary color mix of yellow and cyan, but as a unique hue. The NCS explains this by assuming that the behavior of paint is partly counterintuitive to human phenomenology. 
Observing that the mix of yellow and cyan paint results in a green color would thus be at odds with the intuition of pure human perception, which would be unable to account for such a "yellowblue". Hering argued that yellow is not a "redgreen" but a unique hue. Colorimetrist Jan Koenderink, in a critique of Hering's system, considered it inconsistent not to apply the same argument to the other two subtractive primaries, cyan and magenta, and see them as unique hues as well, not a "greenblue" or a "redblue". He also pointed out the difficulty within a four color theory that the primaries would not be equally spaced in the color circle, and the problem that Hering does not account for the fact that cyan and magenta are brighter than green, blue and red, whereas this is, in his view, elegantly explained within the CMYK-model. He concluded that Hering's scheme fitted common language better than color experience. Overview of the six base colors in the Natural Color System with their equivalents in hex triplet, RGB and HSV coordinate systems. However, note that these codes are only approximate, as the definition of the NCS elementaries is based on perception and not on the production of color. References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. NCS Colour – Universal Language for Colour Communication – official site
[ { "math_id": 0, "text": "\n\\underset{\\begin{matrix} \\vphantom{|^|}\\text{NCS 1950} \\\\[-4mu] \\text{Standard} \\end{matrix}} { \\text{S} } \\quad \n\\underbrace{\\underset{\\vphantom{|^|}s}{20}\\ \\underset{\\vphantom{|^|}c}{30}}_{\\text{nuance}}\n\\quad\\frac{\\phantom{i}}{}\\quad\n\\underbrace{\\text{Y}\\ \\underset{\\vphantom{|^|}\\phi}{90}\\ \\text{R}}_{\\text{hue}}\n" }, { "math_id": 1, "text": "w = 100 - c - s = 100 - 30 - 20 = 50" }, { "math_id": 2, "text": "m = c / (w + c) = c / (100 - s)" }, { "math_id": 3, "text": "m = c / (100 - s) = 30 / (100 - 20) = 30 / 80 = 0.375." }, { "math_id": 4, "text": "v = \\frac{100 - s}{100}." } ]
https://en.wikipedia.org/wiki?curid=71917
7192444
Hirzebruch signature theorem
Gives the signature of a smooth compact oriented manifold in terms of Pontryagin numbers In differential topology, an area of mathematics, the Hirzebruch signature theorem (sometimes called the Hirzebruch index theorem) is Friedrich Hirzebruch's 1954 result expressing the signature of a smooth closed oriented manifold by a linear combination of Pontryagin numbers called the L-genus. It was used in the proof of the Hirzebruch–Riemann–Roch theorem. Statement of the theorem. The L-genus is the genus for the multiplicative sequence of polynomials associated to the characteristic power series formula_0 The first two of the resulting L-polynomials are formula_1 and formula_2 (for further "L"-polynomials see or OEIS: ). By taking for the formula_3 the Pontryagin classes formula_4 of the tangent bundle of a 4"n"-dimensional smooth closed oriented manifold M one obtains the L-classes of M. Hirzebruch showed that the n-th L-class of M evaluated on the fundamental class of M, formula_5, is equal to formula_6, the signature of M (i.e. the signature of the intersection form on the 2"n"th cohomology group of M): formula_7 Sketch of proof of the signature theorem. René Thom had earlier proved that the signature was given by some linear combination of Pontryagin numbers, and Hirzebruch found the exact formula for this linear combination by introducing the notion of the genus of a multiplicative sequence. Since the rational oriented cobordism ring formula_8 is equal to formula_9 the polynomial algebra generated by the oriented cobordism classes formula_10 of the even-dimensional complex projective spaces, it is enough to verify that formula_11 for all i. Generalizations. The signature theorem is a special case of the Atiyah–Singer index theorem for the signature operator. The analytic index of the signature operator equals the signature of the manifold, and its topological index is the L-genus of the manifold. By the Atiyah–Singer index theorem these are equal. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
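The first two L-polynomials quoted in the statement can be checked mechanically from the characteristic power series formula_0. The following sketch (assuming SymPy is available; it is an illustration, not drawn from the sources) expands the product of the series in two formal variables and compares the terms of total degree 2 and 4 with formula_1 and formula_2 rewritten in the elementary symmetric polynomials of the squared variables:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def Q(x, order=6):
    # characteristic power series x / tanh(x) = 1 + x^2/3 - x^4/45 + ...
    return sp.series(x / sp.tanh(x), x, 0, order).removeO()

product = sp.expand(Q(x1) * Q(x2))

# Pontryagin classes play the role of the elementary symmetric polynomials in x_i^2
p1 = x1**2 + x2**2
p2 = x1**2 * x2**2
L1 = p1 / 3                     # claimed first L-polynomial
L2 = (7*p2 - p1**2) / 45        # claimed second L-polynomial

poly = sp.Poly(product, x1, x2)
deg2 = sum(c * x1**a * x2**b for (a, b), c in poly.terms() if a + b == 2)
deg4 = sum(c * x1**a * x2**b for (a, b), c in poly.terms() if a + b == 4)

print(sp.simplify(deg2 - L1))   # 0
print(sp.simplify(deg4 - L2))   # 0
```

The same comparison with three variables and the terms of degree 6 would produce the next polynomial in the sequence.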
[ { "math_id": 0, "text": "{x\\over \\tanh(x)} = \\sum_{k\\ge 0} {{2^{2k}B_{2k}\\over (2k)!}x^{2k}}\n = 1 + {x^2 \\over 3} - {x^4 \\over 45} +\\cdots ." }, { "math_id": 1, "text": "L_1 = \\tfrac13 p_1" }, { "math_id": 2, "text": "L_2 = \\tfrac1{45}(7p_2 - p_1^2)" }, { "math_id": 3, "text": "p_i" }, { "math_id": 4, "text": "p_i(M)" }, { "math_id": 5, "text": "[M]" }, { "math_id": 6, "text": "\\sigma(M)" }, { "math_id": 7, "text": " \\sigma(M) = \\langle L_n(p_1(M), \\dots, p_n(M)), [M]\\rangle. " }, { "math_id": 8, "text": "\\Omega_*^{\\text{SO}}\\otimes \\Q" }, { "math_id": 9, "text": "\\Omega_*^{\\text{SO}}\\otimes \\Q =\\Q [\\mathbb{P}^{2}(\\Complex), \\mathbb{P}^{4}(\\Complex), \\ldots ]," }, { "math_id": 10, "text": "[\\mathbb{P}^{2i}(\\Complex)]" }, { "math_id": 11, "text": " \\sigma(\\mathbb{P}^{2i})= 1 = \\langle L_i(p_1(\\mathbb{P}^{2i}), \\ldots, p_n(\\mathbb{P}^{2i})), [\\mathbb{P}^{2i}]\\rangle " } ]
https://en.wikipedia.org/wiki?curid=7192444
71927016
Eyeball theorem
Statement in elementary geometry The eyeball theorem is a statement in elementary geometry about a property of a pair of disjoint circles. More precisely, it states the following: For two nonintersecting circles formula_0 and formula_1 centered at formula_2 and formula_3, the tangents from P onto formula_1 intersect formula_0 at formula_4 and formula_5, and the tangents from Q onto formula_0 intersect formula_1 at formula_6 and formula_7. Then formula_8. The eyeball theorem was discovered in 1960 by the Peruvian mathematician Antonio Gutierrez. However, without the use of its current name, it was already posed and solved as a problem in an article by G. W. Evans in 1938. Furthermore, Evans stated that the problem had been given in an earlier examination paper. A variant of this theorem states that if one draws the line formula_9 so that it intersects formula_0 a second time at formula_10 and formula_1 at formula_11, then formula_12. There are several proofs of the eyeball theorem, one of which shows that the theorem is a consequence of the Japanese theorem for cyclic quadrilaterals.
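A quick numerical check of the statement (an illustrative sketch; the trigonometric construction and the chosen radii are assumptions made here, not taken from the sources): with the two centres placed on the x-axis, the tangents from P to the circle around Q make an angle with the centre line whose sine is rQ divided by the distance between the centres, which pins down the chord they cut on the circle around P, and symmetrically for the other chord.

```python
import math

def eyeball_chords(d, rP, rQ):
    """|AB| and |CD| for disjoint circles of radii rP, rQ whose centres are d apart."""
    assert d > rP + rQ, "the circles must be nonintersecting"
    # tangents from P to the circle around Q make angle alpha with the centre line,
    # sin(alpha) = rQ/d; they meet the circle around P at (rP*cos(alpha), +/- rP*sin(alpha))
    alpha = math.asin(rQ / d)
    C = (rP * math.cos(alpha),  rP * math.sin(alpha))
    D = (rP * math.cos(alpha), -rP * math.sin(alpha))
    # symmetrically, the tangents from Q meet the circle around Q at A and B
    beta = math.asin(rP / d)
    A = (d - rQ * math.cos(beta),  rQ * math.sin(beta))
    B = (d - rQ * math.cos(beta), -rQ * math.sin(beta))
    return math.dist(A, B), math.dist(C, D)

for d, rP, rQ in [(10, 3, 2), (7.5, 1.2, 2.9), (20, 5, 4)]:
    ab, cd = eyeball_chords(d, rP, rQ)
    print(f"d={d}, rP={rP}, rQ={rQ}: |AB|={ab:.6f}, |CD|={cd:.6f}")
```

In each case the two chord lengths agree; in this construction both come out as 2·rP·rQ divided by the distance between the centres.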
[ { "math_id": 0, "text": "c_P" }, { "math_id": 1, "text": "c_Q" }, { "math_id": 2, "text": "P" }, { "math_id": 3, "text": "Q" }, { "math_id": 4, "text": "C " }, { "math_id": 5, "text": "D" }, { "math_id": 6, "text": "A " }, { "math_id": 7, "text": "B" }, { "math_id": 8, "text": "|AB| = |CD|" }, { "math_id": 9, "text": "FJ" }, { "math_id": 10, "text": "F'" }, { "math_id": 11, "text": "J'" }, { "math_id": 12, "text": "|FF'|=|JJ'|" } ]
https://en.wikipedia.org/wiki?curid=71927016
71929189
Anisotropic terahertz microspectroscopy
Spectroscopic technique Anisotropic terahertz microspectroscopy (ATM) is a spectroscopic technique in which molecular vibrations in an anisotropic material are probed with short pulses of terahertz radiation whose electric field is linearly polarized parallel to the surface of the material. The technique has been demonstrated in studies involving single crystal sucrose, fructose, oxalic acid, and molecular protein crystals in which the spatial orientation of molecular vibrations are of interest. Explanation. When the electric field of a propagating beam of light oscillates in a direction perpendicular to its direction of propagation, it is said to be a polarized transverse wave. Light with an electric field constrained to a particular angle in the transverse plane is said to be linearly polarized. When linearly polarized light is transmitted through an isotropic material — a material that exhibits the same physical properties in all spatial directions — the amount of light absorbed by the material is the same when measured for all angles of the polarized light. The resulting absorbance spectrum is featureless as a function of the polarization angle. A material said to be anisotropic exhibits different physical properties, like absorbance, refractive index, conductivity and so on, along different spatial directions. Thus, when a linearly polarized beam of light is passed through an anisotropic material and measured for different angles of polarization, the absorption of the light is different for different polarization angles. The resulting absorbance spectrum exhibits varying degrees of absorbance that correspond to the materials degree of anisotropy. When a polarized THz beam of light is transmitted through an anisotropic material, the resulting absorbance spectrum exhibits varying degrees of absorbance that correspond to the anisotropy of the material. If measurements are made at different frequencies across the THz spectrum (between about 0.3 to 3 THz) at a particular THz polarization angle, the resulting absorbance spectrum may also vary with frequency. This occurs because the vibrational modes of the molecules in the material absorb light at different frequencies. In protein molecules, for example, many of these vibrational modes oscillate within the range of terahertz frequencies. When the molecules in a material are arranged in the same orientation, the internal vibrational properties of the molecules may be identified using anisotropic terahertz microspectroscopy (ATM). This molecular alignment is found in single crystals of sucrose, fructose, oxalic acid, and other molecular crystals like protein crystals. Techniques. To date, ATM techniques have utilized THz time-domain spectroscopy (THz-TDS) because of historical scarcity of strong THz sources and highly sensitive THz detectors that operate at room temperature. Many samples of interest contain large amounts of water that strongly absorb THz radiation, thus requiring a very strong THz source. This requirement is exacerbated when attempting to use highly sensitive THz detectors that conventionally require supercooling to liquid helium temperatures. Worse, the need for supercooling these detectors has made THz detection unavailable to many researchers around the world due to recent sharp rises in the price of liquid helium due to its scarcity. 
To circumvent THz detection hurdles, THz-TDS is utilized as it requires commonly available infrared detectors sensitive in the near infrared region of the electromagnetic spectrum — most commonly around a wavelength of 800 nm. In this case, an electro-optic (EO) crystal, such as gallium nitride (GaN), zinc telluride (ZnTe), is commonly used to detect changes in the THz light after it has passed through a sample. The polarization properties of a synchronized infrared beam of light passing through the EO crystal are changed. This polarization change is detected by an infrared detector, called a balanced detector, that compares the magnitude of two perpendicular polarization components of the infrared beam. Until more powerful THz sources that provide a wide frequency range and more sensitive room temperature THz detectors are realized, THz-TDS remains a reliable technique for ATM. The THz-TDS techniques used in ATM may be divided into two categories: rotated sample and stationary sample. Historically, the former technique involved rotation of the sample at the focus of a THz beam while the detector is placed far from the sample in the far-field. For many mechanical reasons, however, a stationary sample technique is preferred. In stationary sample ATM, a polarized THz beam is rotated through 360° in a plane perpendicular to the propagation direction of the beam and typically utilizes a near-field detection scheme in which the sample is mounted in direct contact with an EO crystal that is subsequently analyzed by the infrared beam in a THz-TDS configuration. Rotated Sample ATM. Original ATM techniques involve rotating the sample at the focal point of a linearly polarized THz beam using a mechanically rotated sample mount. For this reason, the configuration is typically a far-field instrument in which a balanced detector (sensitive to infrared light) is placed a considerable distance from the sample. In the terahertz time-domain spectroscopy configuration, both the infrared and THz beams are transmitted through an electro-optic (EO) crystal like ZnTe or GaP. Here, the infrared beam detects the change in birefringence of the EO crystal due to the THz beam. When a sample is placed in the THz beam, the polarized THz beam is perturbed and the resulting degree of birefringence in the EO crystal is changed. The resulting perturbation of the infrared beam is sensed at the balanced detector. Rotated sample ATM is very useful for large samples (0.1 to 1 cm). However, when measuring samples such as protein crystals that must be isolated inside a hydration chamber, for example, the sample cannot be easily rotated. Additionally, it is challenging to maintain the same location of a rotated sample at the precise focal point of a THz beam. Instrument Design. An ATM designed with a rotated sample is typically a far-field measurement configuration using a time-domain spectroscopy strategy. A high power infrared laser is typically used. Its beam is split by a beamsplitter into two optical paths: a probe beam and a THz generation beam. The THz generation beam typically receives the greater fraction of NIR power in order to maximize the power of the THz light commonly generated by a voltage-pulsed photoconductive antenna. The generated THz light is collected through a hyper-hemispherical silicon lens and passed to an off-axis parabolic mirror that collimates the THz beam for polarization by a THz polarizer that is often made of a simple wire-grid. 
The linearly polarized THz beam is then focused by a second off-axis parabolic mirror onto the sample. The THz beam transmitted through the sample is again collected by a third off-axis parabolic mirror, collimated onto a fourth parabolic mirror that then focuses the beam onto an electro-optic (EO) crystal whose birefringence is perturbed by the strength of the THz beam. The NIR probe beam is passed through the EO crystal to probe the induced degree of birefringence caused by the THz beam and passed to a detection module that often consists of an NIR quarter wave plate, a Wollaston prism that spatially separates orthogonal polarization states of the probe beam into two optical paths that are individually detected at a balanced detector. The resulting signal reported by the balanced detector is a measure of the difference in magnitude of these two orthogonal components of the NIR probe beam and therefore a direct correlation of the degree of birefringence induced in the EO crystal by the THz beam passed through the sample. Stationary Sample ATM. Previously called "ideal ATM" and "polarization-varying ATM," stationary sample ATM (SSATM) involves rotation of the linearly polarized state of the THz beam in a time-domain spectroscopy (TDS) configuration parallel to the interrogated material sample. In a SSATM configuration, the THz beam polarization is rotated through 360° in a plane perpendicular to the propagation direction of the beam. Measurements of the sample's anisotropy is measured at several THz polarization angles. At least two methods to achieve THz polarization rotation for SSATM have been demonstrated: 1) by using a THz quarter waveplate (THz-QWP) together with an infrared polarizer and 2) by rotating the photoconductive antenna. In the case of employing a THz-QWP and an infrared polarizer, the magnitude of the measured signal, formula_0, where formula_1 is a time delay between THz generation and the detected pulses in a THz-TDS system is dependent on the relative polarization angle of the THz light, formula_2 and the polarization angle of the ultrafast near-infrared (NIR) probe beam, formula_3, at the sample by the relationship formula_4 The objective is to maintain equal magnitude of the THz electric field at the sample for all measurement angles, formula_2. This requires adjustment of formula_3 for every formula_2. Instrument Design. A SSATM instrument is typically designed in a time-domain spectroscopy configuration in which a high power infrared laser beam is divided into two optical paths by a beamsplitter. The first optical path often receives a greater fraction of the optical power of the laser to maximize the output power of generated THz light. THz light is often generated with a voltage-pulsed photoconductive antenna, collected with a hyper-hemispherical silicon lens, collimated using an off-axis parabolic mirror that is then passed through a THz polarizer, made circular by a THz quarter waveplate constructed of two planar mirrors and a right-angled high-resistivity silicon prism to form circularly polarized light. A second THz polarizer selects from the circularly polarized THz light the angle at which each measurement is made once the light reaches a sample located at a focal point of the beam and mounted in direct contact with an electro-optic crystal often made of either ZnTe or GaP. 
The second optical path includes a retroreflector mirror mounted on a delay stage that adjusts the time-of-flight of the NIR beam to match the delay time, formula_1, of the THz light at the sample. The NIR beam is linearly polarized and "chopped" at a frequency suitable for detection, directed to the EO crystal to measure the change in its birefringence due to the degree of THz absorption by the sample. The NIR beam is reflected by the sample/EO crystal interface and directed to the detection module that often consists of an NIR quarter waveplate, a Wollaston prism that spatially selects perpendicular polarization states of the light toward two detectors in a balanced detector. The detected signal is a measure of the difference of the magnitude of the two perpendicular polarization states and corresponds to the degree of birefringence induced in the EO crystal by the THz light as-perturbed by the sample. THz Quarter Waveplate. One strategy to provide full 360° rotation of THz polarization of equal electric field magnitude at the sample is to generate a circular state of polarization, then select particular linear polarization states from the circularly polarized beam with a THz polarizer. A circular polarization state may be generated by a quarter waveplate, however, common optical waveplates are typically designed for visible, near- and mid-infrared regions of the electromagnetic spectrum. A quarter waveplate designed for use in the THz frequency range consists of a right-angle silicon prism together with metal-coated planar mirrors as input/output. In particular, the silicon prism acts analogously to a Fresnel rhomb with a single total internal reflection on the longer face of the prism and is a passive broadband component that permits a wide frequency sweep during measurements. Advantages. A few advantages of ATM over other related microspectroscopy techniques include the orientation of the THz electric field at the sample and the ability to readily measure materials that are sensitive to environmental conditions like hydration, cryo-cooling, and evacuation. THz polarization orientation at the sample. A key characteristic of ATM is the orientation of the polarized electric field of THz light at the sample. In particular, unlike other microspectroscopy techniques like scattering scanning near-field optical microscopy (s-SNOM), the electric field of the interrogating THz field is parallel to the surface of the sample. In s-SNOM, the shape of the oscillating metallic probe tip directs the THz polarization into a direction predominantly perpendicular to the sample surface. Environmentally sensitive sample materials. Living organisms typically consist of large quantities of water. Many anisotropic materials of interest are biological in nature and as such require hydration during spectroscopic measurements. While some limited novel techniques to measure properties of materials inside a hydrated sample chamber have been recently reports, the primary design requirement of ATM is that the material is accessible through a window that is transparent to THz light such as quartz. Similarly, samples requiring cryo-cooling or low pressure vacuum environment are readily interrogated in ATM using THz-transparent window materials. Applications. Anisotropic terahertz microspectrosopy (ATM) has found applications in structural biology and molecular fingerprinting of DNA and proteins. The technique is also suitable for drug discovery and studying THz frequency properties of thin film solid state materials. 
Special attention is given to molecular motions in proteins, where many structural changes occur at frequencies in the terahertz range of the spectrum (0.3 THz to 3 THz). These structural changes include hinge motions, in which two regions of a molecule are connected by a flexible molecular structure that bends like a mechanical hinge or elbow. ATM is uniquely capable of measuring the spatial direction in which hinge motions occur because of its use of linearly polarized electric fields. Protein dynamics. ATM is uniquely suited to measuring resonant molecular vibrations in proteins. Molecular motions in proteins occur with frequencies in the terahertz range of the spectrum (0.3 THz to 3 THz). These structural changes include hinge motions, in which two regions of a molecule are connected in a flexible way that bends like a mechanical hinge or joint, as well as other conformational changes that occur within systems of protein molecules. Protein molecules are typically surrounded by water molecules and are arranged in random orientations. For this reason, it is common to arrange protein molecules in crystal form so that their orientations are all the same. In particular, in a protein crystal the dipoles of all protein molecules are naturally aligned. This makes it possible to perform microspectroscopy with polarized THz light and ascertain the spatial orientation of vibrations within the molecules. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
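As a worked illustration of the polarization-compensation relation formula_4 given in the Stationary Sample ATM section above (a sketch only; the reference signal level and the closed-form inversion are choices made here, not published instrument-control code), one can compute, for each THz polarization angle formula_2, a probe polarization angle formula_3 that keeps the detected signal magnitude constant:

```python
import numpy as np

def compensating_phi(alpha, target=1.0):
    """NIR polarizer angle phi (rad) keeping |cos(a)sin(2phi) + 2 sin(a)cos(2phi)| = target."""
    # write the bracket as R*sin(2*phi + psi) with R = sqrt(cos^2 a + 4 sin^2 a) >= 1,
    # so any target <= 1 can be reached for every alpha
    R = np.hypot(np.cos(alpha), 2 * np.sin(alpha))
    psi = np.arctan2(2 * np.sin(alpha), np.cos(alpha))
    two_phi = np.arcsin(target / R) - psi
    return (two_phi / 2) % np.pi            # fold the angle into [0, pi)

for alpha_deg in range(0, 360, 45):
    alpha = np.radians(alpha_deg)
    phi = compensating_phi(alpha)
    check = abs(np.cos(alpha) * np.sin(2 * phi) + 2 * np.sin(alpha) * np.cos(2 * phi))
    print(f"alpha = {alpha_deg:3d} deg -> phi_NIR = {np.degrees(phi):7.2f} deg, |signal| = {check:.3f}")
```

The printed signal magnitude stays at the reference value for every THz angle, which is the adjustment of formula_3 for every formula_2 described in that section.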
[ { "math_id": 0, "text": "{\\rm Sig}(\\tau,\\alpha_{{\\rm THz}},\\phi_{{\\rm NIR}})" }, { "math_id": 1, "text": "\\tau" }, { "math_id": 2, "text": "\\alpha_{{\\rm THz}}" }, { "math_id": 3, "text": "\\phi_{{\\rm NIR}}" }, { "math_id": 4, "text": "{\\rm Sig}(\\tau,\\alpha_{{\\rm THz}},\\phi_{{\\rm NIR}})\\propto \\left| \\left[ \\cos(\\alpha_{{\\rm THz}})\\sin(2\\phi_{{\\rm NIR}})+2\\sin(\\alpha_{{\\rm THz}})\\cos(2\\phi_{{\\rm NIR}}) \\right]\\right|." } ]
https://en.wikipedia.org/wiki?curid=71929189
7193
Commutator
Operation measuring the failure of two entities to commute In mathematics, the commutator gives an indication of the extent to which a certain binary operation fails to be commutative. There are different definitions used in group theory and ring theory. Group theory. The commutator of two elements, g and h, of a group G, is the element ["g", "h"] = "g"−1"h"−1"gh". This element is equal to the group's identity if and only if g and h commute (that is, if and only if "gh" = "hg"). The set of all commutators of a group is not in general closed under the group operation, but the subgroup of "G" generated by all commutators is closed and is called the "derived group" or the "commutator subgroup" of "G". Commutators are used to define nilpotent and solvable groups and the largest abelian quotient group. The definition of the commutator above is used throughout this article, but many other group theorists define the commutator as ["g", "h"] = "ghg"−1"h"−1. Using the first definition, this can be expressed as ["g"−1, "h"−1]. Identities (group theory). Commutator identities are an important tool in group theory. The expression "ax" denotes the conjugate of a by x, defined as "x"−1"ax". Identity (5) is also known as the "Hall–Witt identity", after Philip Hall and Ernst Witt. It is a group-theoretic analogue of the Jacobi identity for the ring-theoretic commutator (see next section). N.B., the above definition of the conjugate of a by x is used by some group theorists. Many other group theorists define the conjugate of a by x as "xax"−1. This is often written formula_8. Similar identities hold for these conventions. Many identities that are true modulo certain subgroups are also used. These can be particularly useful in the study of solvable groups and nilpotent groups. For instance, in any group, second powers behave well: formula_9 If the derived subgroup is central, then formula_10 Ring theory. Rings often do not support division. Thus, the commutator of two elements "a" and "b" of a ring (or any associative algebra) is defined differently by formula_11 The commutator is zero if and only if "a" and "b" commute. In linear algebra, if two endomorphisms of a space are represented by commuting matrices in terms of one basis, then they are so represented in terms of every basis. By using the commutator as a Lie bracket, every associative algebra can be turned into a Lie algebra. The anticommutator of two elements a and b of a ring or associative algebra is defined by formula_12 Sometimes formula_13 is used to denote anticommutator, while formula_14 is then used for commutator. The anticommutator is used less often, but can be used to define Clifford algebras and Jordan algebras and in the derivation of the Dirac equation in particle physics. The commutator of two operators acting on a Hilbert space is a central concept in quantum mechanics, since it quantifies how well the two observables described by these operators can be measured simultaneously. The uncertainty principle is ultimately a theorem about such commutators, by virtue of the Robertson–Schrödinger relation. In phase space, equivalent commutators of function star-products are called Moyal brackets and are completely isomorphic to the Hilbert space commutator structures mentioned. Identities (ring theory). The commutator has the following properties: Lie-algebra identities. Relation (3) is called anticommutativity, while (4) is the Jacobi identity. Additional identities. 
If A is a fixed element of a ring "R", identity (1) can be interpreted as a Leibniz rule for the map formula_29 given by formula_30. In other words, the map ad"A" defines a derivation on the ring "R". Identities (2), (3) represent Leibniz rules for more than two factors, and are valid for any derivation. Identities (4)–(6) can also be interpreted as Leibniz rules. Identities (7), (8) express Z-bilinearity. From identity (9), one finds that the commutator of integer powers of ring elements is: formula_31 Some of the above identities can be extended to the anticommutator using the above ± subscript notation. For example: Exponential identities. Consider a ring or algebra in which the exponential formula_38 can be meaningfully defined, such as a Banach algebra or a ring of formal power series. In such a ring, Hadamard's lemma applied to nested commutators gives: formula_39 (For the last expression, see "Adjoint derivation" below.) This formula underlies the Baker–Campbell–Hausdorff expansion of log(exp("A") exp("B")). A similar expansion expresses the group commutator of expressions formula_40 (analogous to elements of a Lie group) in terms of a series of nested commutators (Lie brackets), formula_41 Graded rings and algebras. When dealing with graded algebras, the commutator is usually replaced by the graded commutator, defined in homogeneous components as formula_42 Adjoint derivation. Especially if one deals with multiple commutators in a ring "R", another notation turns out to be useful. For an element formula_43, we define the adjoint mapping formula_44 by: formula_45 This mapping is a derivation on the ring "R": formula_46 By the Jacobi identity, it is also a derivation over the commutation operation: formula_47 Composing such mappings, we get for example formula_48 and formula_49 We may consider formula_50 itself as a mapping, formula_51, where formula_52 is the ring of mappings from "R" to itself with composition as the multiplication operation. Then formula_50 is a Lie algebra homomorphism, preserving the commutator: formula_53 By contrast, it is not always a ring homomorphism: usually formula_54. General Leibniz rule. The general Leibniz rule, expanding repeated derivatives of a product, can be written abstractly using the adjoint representation: formula_55 Replacing formula_56 by the differentiation operator formula_57, and formula_58 by the multiplication operator formula_59, we get formula_60, and applying both sides to a function "g", the identity becomes the usual Leibniz rule for the "n"th derivative formula_61. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
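Since the ring-theoretic identities above are stated abstractly, a numerical spot check can be helpful. The sketch below (assuming NumPy and SciPy are available; it is an illustration, not drawn from the sources) verifies the Jacobi identity, one of the Leibniz rules, and a truncated form of the Hadamard expansion formula_39 for random 4 × 4 matrices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))

def comm(X, Y):
    """Ring commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

# Jacobi identity: [A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0
print(np.allclose(comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B)), 0))

# Leibniz rule: [A, BC] = [A, B]C + B[A, C]
print(np.allclose(comm(A, B @ C), comm(A, B) @ C + B @ comm(A, C)))

# Hadamard expansion, truncated after a dozen nested commutators
t = 0.05                                  # small factor so the truncated series converges quickly
lhs = expm(t * A) @ B @ expm(-t * A)
rhs, term = B.copy(), B.copy()
for n in range(1, 13):
    term = comm(t * A, term) / n          # builds ad_{tA}^n(B) / n!
    rhs = rhs + term
print(np.allclose(lhs, rhs))
```

All three checks print True; the last one illustrates why the nested-commutator series is useful in practice when the exponent is small.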
[ { "math_id": 0, "text": "x^y = x[x, y]." }, { "math_id": 1, "text": "[y, x] = [x,y]^{-1}." }, { "math_id": 2, "text": "[x, zy] = [x, y]\\cdot [x, z]^y" }, { "math_id": 3, "text": "[x z, y] = [x, y]^z \\cdot [z, y]." }, { "math_id": 4, "text": "\\left[x, y^{-1}\\right] = [y, x]^{y^{-1}}" }, { "math_id": 5, "text": "\\left[x^{-1}, y\\right] = [y, x]^{x^{-1}}." }, { "math_id": 6, "text": "\\left[\\left[x, y^{-1}\\right], z\\right]^y \\cdot \\left[\\left[y, z^{-1}\\right], x\\right]^z \\cdot \\left[\\left[z, x^{-1}\\right], y\\right]^x = 1" }, { "math_id": 7, "text": "\\left[\\left[x, y\\right], z^x\\right] \\cdot \\left[[z ,x], y^z\\right] \\cdot \\left[[y, z], x^y\\right] = 1." }, { "math_id": 8, "text": "{}^x a" }, { "math_id": 9, "text": "(xy)^2 = x^2 y^2 [y, x][[y, x], y]." }, { "math_id": 10, "text": "(xy)^n = x^n y^n [y, x]^\\binom{n}{2}." }, { "math_id": 11, "text": "[a, b] = ab - ba." }, { "math_id": 12, "text": "\\{a, b\\} = ab + ba." }, { "math_id": 13, "text": "[a,b]_+" }, { "math_id": 14, "text": "[a,b]_-" }, { "math_id": 15, "text": "[A + B, C] = [A, C] + [B, C]" }, { "math_id": 16, "text": "[A, A] = 0" }, { "math_id": 17, "text": "[A, B] = -[B, A]" }, { "math_id": 18, "text": "[A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0" }, { "math_id": 19, "text": "[A, BC] = [A, B]C + B[A, C]" }, { "math_id": 20, "text": "[A, BCD] = [A, B]CD + B[A, C]D + BC[A, D]" }, { "math_id": 21, "text": "[A, BCDE] = [A, B]CDE + B[A, C]DE + BC[A, D]E + BCD[A, E]" }, { "math_id": 22, "text": "[AB, C] = A[B, C] + [A, C]B" }, { "math_id": 23, "text": "[ABC, D] = AB[C, D] + A[B, D]C + [A, D]BC" }, { "math_id": 24, "text": "[ABCD, E] = ABC[D, E] + AB[C, E]D + A[B, E]CD + [A, E]BCD" }, { "math_id": 25, "text": "[A, B + C] = [A, B] + [A, C]" }, { "math_id": 26, "text": "[A + B, C + D] = [A, C] + [A, D] + [B, C] + [B, D]" }, { "math_id": 27, "text": "[AB, CD] = A[B, C]D + [A, C]BD + CA[B, D] + C[A, D]B =A[B, C]D + AC[B,D] + [A,C]DB + C[A, D]B" }, { "math_id": 28, "text": "[[A, C], [B, D]] = [[[A, B], C], D] + [[[B, C], D], A] + [[[C, D], A], B] + [[[D, A], B], C]" }, { "math_id": 29, "text": "\\operatorname{ad}_A: R \\rightarrow R" }, { "math_id": 30, "text": "\\operatorname{ad}_A(B) = [A, B]" }, { "math_id": 31, "text": "[A^N, B^M] = \\sum_{n=0}^{N-1}\\sum_{m=0}^{M-1} A^{n}B^{m} [A,B] B^{N-n-1}A^{M-m-1} = \\sum_{n=0}^{N-1}\\sum_{m=0}^{M-1} B^{n}A^{m} [A,B] A^{N-n-1}B^{M-m-1}" }, { "math_id": 32, "text": "[AB, C]_\\pm = A[B, C]_- + [A, C]_\\pm B" }, { "math_id": 33, "text": "[AB, CD]_\\pm = A[B, C]_- D + AC[B, D]_- + [A, C]_- DB + C[A, D]_\\pm B" }, { "math_id": 34, "text": "[[A,B],[C,D]]=[[[B,C]_+,A]_+,D]-[[[B,D]_+,A]_+,C]+[[[A,D]_+,B]_+,C]-[[[A,C]_+,B]_+,D]" }, { "math_id": 35, "text": "\\left[A, [B, C]_\\pm\\right] + \\left[B, [C, A]_\\pm\\right] + \\left[C, [A, B]_\\pm\\right] = 0" }, { "math_id": 36, "text": "[A,BC]_\\pm = [A,B]_- C + B[A,C]_\\pm = [A,B]_\\pm C \\mp B[A,C]_-" }, { "math_id": 37, "text": "[A,BC] = [A,B]_\\pm C \\mp B[A,C]_\\pm" }, { "math_id": 38, "text": "e^A = \\exp(A) = 1 + A + \\tfrac{1}{2!}A^2 + \\cdots" }, { "math_id": 39, "text": "e^A Be^{-A}\n \\ =\\ B + [A, B] + \\frac{1}{2!}[A, [A, B]] + \\frac{1}{3!}[A, [A, [A, B]]] + \\cdots\n \\ =\\ e^{\\operatorname{ad}_A}(B).\n" }, { "math_id": 40, "text": "e^A" }, { "math_id": 41, "text": "e^A e^B e^{-A} e^{-B} =\n\\exp\\!\\left( [A, B] + \\frac{1}{2!}[A{+}B, [A, B]] + \\frac{1}{3!} \\left(\\frac{1}{2} [A, [B, [B, A]]] + [A{+}B, [A{+}B, [A, B]]]\\right) + \\cdots\\right). 
" }, { "math_id": 42, "text": "[\\omega, \\eta]_{gr} := \\omega\\eta - (-1)^{\\deg \\omega \\deg \\eta} \\eta\\omega." }, { "math_id": 43, "text": "x\\in R" }, { "math_id": 44, "text": "\\mathrm{ad}_x:R\\to R" }, { "math_id": 45, "text": "\\operatorname{ad}_x(y) = [x, y] = xy-yx." }, { "math_id": 46, "text": "\\mathrm{ad}_x\\!(yz) \\ =\\ \\mathrm{ad}_x\\!(y) \\,z \\,+\\, y\\,\\mathrm{ad}_x\\!(z)." }, { "math_id": 47, "text": "\\mathrm{ad}_x[y,z] \\ =\\ [\\mathrm{ad}_x\\!(y),z] \\,+\\, [y,\\mathrm{ad}_x\\!(z)] ." }, { "math_id": 48, "text": "\\operatorname{ad}_x\\operatorname{ad}_y(z) = [x, [y, z]\\,] " }, { "math_id": 49, "text": "\\operatorname{ad}_x^2\\!(z) \\ =\\ \n\\operatorname{ad}_x\\!(\\operatorname{ad}_x\\!(z)) \\ =\\ \n[x, [x, z]\\,]." }, { "math_id": 50, "text": "\\mathrm{ad}" }, { "math_id": 51, "text": "\\mathrm{ad}: R \\to \\mathrm{End}(R) " }, { "math_id": 52, "text": "\\mathrm{End}(R)" }, { "math_id": 53, "text": "\\operatorname{ad}_{[x, y]} = \\left[ \\operatorname{ad}_x, \\operatorname{ad}_y \\right]." }, { "math_id": 54, "text": "\\operatorname{ad}_{xy} \\,\\neq\\, \\operatorname{ad}_x\\operatorname{ad}_y " }, { "math_id": 55, "text": "x^n y = \\sum_{k = 0}^n \\binom{n}{k} \\operatorname{ad}_x^k\\!(y)\\, x^{n - k}." }, { "math_id": 56, "text": "x" }, { "math_id": 57, "text": "\\partial" }, { "math_id": 58, "text": "y" }, { "math_id": 59, "text": "m_f : g \\mapsto fg" }, { "math_id": 60, "text": "\\operatorname{ad}(\\partial)(m_f) = m_{\\partial(f)}" }, { "math_id": 61, "text": "\\partial^{n}\\!(fg)" } ]
https://en.wikipedia.org/wiki?curid=7193
7193470
Theta characteristic
In mathematics, a theta characteristic of a non-singular algebraic curve "C" is a divisor class Θ such that 2Θ is the canonical class. In terms of holomorphic line bundles "L" on a connected compact Riemann surface, it is therefore "L" such that "L"2 is the canonical bundle, here also equivalently the holomorphic cotangent bundle. In terms of algebraic geometry, the equivalent definition is as an invertible sheaf, which squares to the sheaf of differentials of the first kind. Theta characteristics were introduced by Rosenhain (1851). History and genus 1. The importance of this concept was realised first in the analytic theory of theta functions, and geometrically in the theory of bitangents. In the analytic theory, there are four fundamental theta functions in the theory of Jacobian elliptic functions. Their labels are in effect the theta characteristics of an elliptic curve. For that case, the canonical class is trivial (zero in the divisor class group) and so the theta characteristics of an elliptic curve "E" over the complex numbers are seen to be in 1-1 correspondence with the four points "P" on "E" with 2"P" = 0; this counting of the solutions is clear from the group structure, a product of two circle groups, when "E" is treated as a complex torus. Higher genus. For "C" of genus 0 there is one such divisor class, namely the class of "-P", where "P" is any point on the curve. In the case of higher genus "g", assuming the field over which "C" is defined does not have characteristic 2, the theta characteristics can be counted as 2^(2"g") in number if the base field is algebraically closed. This comes about because the solutions of the equation on the divisor class level will form a single coset of the solutions of 2"D" = 0. In other words, with "K" the canonical class and Θ any given solution of 2Θ = "K", any other solution will be of the form Θ + "D". This reduces counting the theta characteristics to finding the 2-rank of the Jacobian variety "J"("C") of "C". In the complex case, again, the result follows since "J"("C") is a complex torus of dimension 2"g". Over a general field, see the theory explained at Hasse–Witt matrix for the counting of the p-rank of an abelian variety. The answer is the same, provided the characteristic of the field is not 2. A theta characteristic Θ will be called "even" or "odd" depending on the dimension of its space of global sections formula_0. It turns out that on "C" there are formula_1 even and formula_2 odd theta characteristics. Classical theory. Classically the theta characteristics were divided into these two kinds, odd and even, according to the value of the Arf invariant of a certain quadratic form "Q" with values mod 2. Thus in the case of "g" = 3 and a plane quartic curve, there were 28 of one type, and the remaining 36 of the other; this is basic in the question of counting bitangents, as it corresponds to the 28 bitangents of a quartic. The geometric construction of "Q" as an intersection form is, with modern tools, possible algebraically. In fact the Weil pairing applies, in its abelian variety form. Triples (θ1, θ2, θ3) of theta characteristics are called "syzygetic" and "asyzygetic" depending on whether Arf(θ1)+Arf(θ2)+Arf(θ3)+Arf(θ1+θ2+θ3) is 0 or 1. Spin structures. showed that, for a compact complex manifold, choices of theta characteristics correspond bijectively to spin structures.
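The counts formula_1 and formula_2 can be reproduced by brute force. The sketch below relies on the classical identification described above of theta characteristics with quadratic forms "Q" mod 2: such a form is determined by its values on a symplectic basis, and its Arf invariant is the sum of the products of those values over the basis pairs (the enumeration itself is an illustrative sketch, not drawn from the sources):

```python
from itertools import product

def theta_counts(g):
    """Count quadratic refinements of the standard mod-2 symplectic form by parity."""
    even = odd = 0
    # a refinement is fixed by its values on the symplectic basis a_1, b_1, ..., a_g, b_g
    for values in product((0, 1), repeat=2 * g):
        arf = sum(values[2 * i] * values[2 * i + 1] for i in range(g)) % 2
        if arf == 0:
            even += 1
        else:
            odd += 1
    return even, odd

for g in (1, 2, 3):
    even, odd = theta_counts(g)
    assert even == 2**(g - 1) * (2**g + 1) and odd == 2**(g - 1) * (2**g - 1)
    print(f"g = {g}: {even} even, {odd} odd theta characteristics")
# g = 3 gives 36 even and 28 odd, matching the 28 bitangents of a plane quartic.
```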
[ { "math_id": 0, "text": "H^0(C, \\Theta)" }, { "math_id": 1, "text": "2^{g - 1} (2^g + 1)" }, { "math_id": 2, "text": "2^{g-1}(2^g - 1)" } ]
https://en.wikipedia.org/wiki?curid=7193470
71935692
Semiabelian group
Semiabelian groups are a class of groups first introduced by and named by . The class appears in Galois theory, in the study of the inverse Galois problem and of the embedding problem, which is a generalization of the former. Definition. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Definition: A finite group "G" is called semiabelian if and only if there exists a sequence formula_0 such that formula_1 is a homomorphic image of a semidirect product formula_2 with a finite abelian group formula_3 (formula_4). The family formula_5 of finite semiabelian groups is the minimal family which contains the trivial group and is closed under the following operations: * If formula_6 acts on a finite abelian group formula_7, then formula_8; * If formula_9 and formula_10 is a normal subgroup, then formula_11. The class of finite groups "G" with a regular realization over formula_12 is closed under taking semidirect products with abelian kernels, and it is also closed under quotients. The class formula_5 is the smallest class of finite groups that has both of these closure properties mentioned above. Therefore "G" is an epimorphic image of a split group extension with abelian kernel. References. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "G_0 = \\{1\\}, G_1, \\dots , G_n = G" }, { "math_id": 1, "text": "G_i" }, { "math_id": 2, "text": "A_i\\rtimes G_{i-1}" }, { "math_id": 3, "text": "A_{i}" }, { "math_id": 4, "text": "i = 1, \\dots , n" }, { "math_id": 5, "text": "\\mathcal{S}" }, { "math_id": 6, "text": "G \\in \\mathcal{S}" }, { "math_id": 7, "text": "A" }, { "math_id": 8, "text": "A\\rtimes G\\in \\mathcal{S}" }, { "math_id": 9, "text": "G\\in \\mathcal{S}" }, { "math_id": 10, "text": "N\\triangleleft G" }, { "math_id": 11, "text": "G/N\\in \\mathcal{S}" }, { "math_id": 12, "text": "\\mathbb{Q}" }, { "math_id": 13, "text": "64" }, { "math_id": 14, "text": "A\\triangleleft G" }, { "math_id": 15, "text": "G = AU" }, { "math_id": 16, "text": "k(t)" }, { "math_id": 17, "text": "k" } ]
https://en.wikipedia.org/wiki?curid=71935692
719388
Weak key
In cryptography, a weak key is a key, which, used with a specific cipher, makes the cipher behave in some undesirable way. Weak keys usually represent a very small fraction of the overall keyspace, which usually means that, a cipher key made by random number generation is very unlikely to give rise to a security problem. Nevertheless, it is considered desirable for a cipher to have no weak keys. A cipher with no weak keys is said to have a "flat", or "linear", key space. Historical origins. Virtually all rotor-based cipher machines (from 1925 onwards) have implementation flaws that lead to a substantial number of weak keys being created. Some rotor machines have more problems with weak keys than others, as modern block and stream ciphers do. The first stream cipher machines were also rotor machines and had some of the same problems of weak keys as the more traditional rotor machines. The T52 was one such stream cipher machine that had weak key problems. The British first detected T52 traffic in Summer and Autumn of 1942. One link was between Sicily and Libya, codenamed "Sturgeon", and another from the Aegean to Sicily, codenamed "Mackerel". Operators of both links were in the habit of enciphering several messages with the same machine settings, producing large numbers of depths. There were several (mostly incompatible) versions of the T52: the T52a and T52b (which differed only in their electrical noise suppression), T52c, T52d and T52e. While the T52a/b and T52c were cryptologically weak, the last two were more advanced devices; the movement of the wheels was intermittent, the decision on whether or not to advance them being controlled by logic circuits which took as input data from the wheels themselves. In addition, a number of conceptual flaws (including very subtle ones) had been eliminated. One such flaw was the ability to reset the keystream to a fixed point, which led to key reuse by undisciplined machine operators. Weak keys in DES. The block cipher DES has a few specific keys termed "weak keys" and "semi-weak keys". These are keys that cause the encryption mode of DES to act identically to the decryption mode of DES (albeit potentially that of a different key). In operation, the secret 56-bit key is broken up into 16 subkeys according to the DES key schedule; one subkey is used in each of the sixteen DES rounds. DES "weak keys" produce sixteen identical subkeys. This occurs when the key (expressed in hexadecimal) is: If an implementation does not consider the parity bits, the corresponding keys with the inverted parity bits may also work as weak keys: Using weak keys, the outcome of the Permuted Choice 1 (PC-1) in the DES key schedule leads to round keys being either all zeros, all ones or alternating zero-one patterns. Since all the subkeys are identical, and DES is a Feistel network, the encryption function is self-inverting; that is, despite encrypting once giving a secure-looking cipher text, encrypting twice produces the original plaintext. DES also has "semi-weak keys", which only produce two different subkeys, each used eight times in the algorithm: This means they come in pairs "K"1 and "K"2, and they have the property that: formula_0 where E"K"(M) is the encryption algorithm encrypting message "M "with key "K". There are six semi-weak key pairs: There are also 48 possibly weak keys that produce only four distinct subkeys (instead of 16). They can be found in a NIST publication. These weak and semi-weak keys are not considered "fatal flaws" of DES. 
There are 2^56 (about 7.21 × 10^16, or 72 quadrillion) possible keys for DES, of which four are weak and twelve are semi-weak. This is such a tiny fraction of the possible keyspace that users do not need to worry. If they so desire, they can check for weak or semi-weak keys when the keys are generated. They are very few, and easy to recognize. Note, however, that DES is no longer recommended for general use, since "all" DES keys can be brute-forced: the Deep Crack machine was already recovering them in a matter of days decades ago, and more recent hardware is vastly cheaper and faster on that time scale. Examples of this progress are given in the article on Deep Crack. No weak keys as a design goal. Having a 'flat' keyspace (i.e., all keys equally strong) is always a cipher design goal. As in the case of DES, sometimes a small number of weak keys is acceptable, provided that they are all identified or identifiable. An algorithm that has unknown weak keys does not inspire much trust. The two main countermeasures against inadvertently using a weak key are checking generated keys against a list of known weak keys, and designing the cipher so that it has no weak keys in the first place. A large number of weak keys is a serious flaw in any cipher design, since there will then be a (perhaps too) large chance that a randomly generated key will be a weak one, compromising the security of messages encrypted under it. It will also take longer to check randomly generated keys for weakness in such cases, which will tempt shortcuts in the interest of 'efficiency'. However, weak keys are much more often a problem where the adversary has some control over what keys are used, such as when a block cipher is used in a mode of operation intended to construct a secure cryptographic hash function (e.g. Davies–Meyer). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
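The key-generation check mentioned above amounts to a blacklist lookup on the 56 effective key bits. The sketch below illustrates the idea for the four weak keys, whose values are well known; the twelve semi-weak keys from the NIST list would be added to the same set. It is an illustration of the countermeasure, not an endorsement of single DES:

```python
import os

# The four DES weak keys, written with their standard odd-parity bytes.
DES_WEAK_KEYS = [
    "0101010101010101",
    "FEFEFEFEFEFEFEFE",
    "E0E0E0E0F1F1F1F1",
    "1F1F1F1F0E0E0E0E",
]
# The twelve semi-weak keys would be appended here from the NIST list.

def effective_bits(key: bytes) -> bytes:
    # DES ignores the least significant (parity) bit of each byte, so weakness
    # is a property of the remaining 56 bits only.
    return bytes(b & 0xFE for b in key)

BLACKLIST = {effective_bits(bytes.fromhex(k)) for k in DES_WEAK_KEYS}

def generate_des_key() -> bytes:
    """Draw random 8-byte DES keys, rejecting any whose 56 effective bits are blacklisted."""
    while True:
        key = os.urandom(8)
        if effective_bits(key) not in BLACKLIST:
            return key

print(generate_des_key().hex())
```

Masking out the parity bits before the comparison means the check also catches the parity-inverted variants of the weak keys mentioned above.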
[ { "math_id": 0, "text": "E_{K_1}(E_{K_2}(M))=M" } ]
https://en.wikipedia.org/wiki?curid=719388
71939967
Fermi–Dirac prime
Prime power with exponent 2^k In number theory, a Fermi–Dirac prime is a prime power whose exponent is a power of two. These numbers are named from an analogy to Fermi–Dirac statistics in physics based on the fact that each integer has a unique representation as a product of Fermi–Dirac primes without repetition. Each element of the sequence of Fermi–Dirac primes is the smallest number that does not divide the product of all previous elements. Srinivasa Ramanujan used the Fermi–Dirac primes to find the smallest number whose number of divisors is a given power of two. Definition. The Fermi–Dirac primes are a sequence of numbers obtained by raising a prime number to an exponent that is a power of two. That is, these are the numbers of the form formula_0 where formula_1 is a prime number and formula_2 is a non-negative integer. These numbers form the sequence: &lt;templatestyles src="Block indent/styles.css"/&gt;2, 3, 4, 5, 7, 9, 11, 13, 16, 17, 19, 23, 25, 29, 31, 37, ... They can be obtained from the prime numbers by repeated squaring, and form the smallest set of numbers that includes all of the prime numbers and is closed under squaring. Another way of defining this sequence is that each element is the smallest positive integer that does not divide the product of all of the previous elements of the sequence. Factorization. Analogously to the way that every positive integer has a unique factorization, its representation as a product of prime numbers (with some of these numbers repeated), every positive integer also has a unique factorization as a product of Fermi–Dirac primes, with no repetitions allowed. For example, formula_3 The Fermi–Dirac primes are named from an analogy to particle physics. In physics, bosons are particles that obey Bose–Einstein statistics, in which it is allowed for multiple particles to be in the same state at the same time. Fermions are particles that obey Fermi–Dirac statistics, which only allow a single particle in each state. Similarly, for the usual prime numbers, multiple copies of the same prime number can appear in the same prime factorization, but factorizations into a product of Fermi–Dirac primes only allow each Fermi–Dirac prime to appear once within the product. Other properties. The Fermi–Dirac primes can be used to find the smallest number that has exactly formula_4 divisors, in the case that formula_4 is a power of two, formula_5. In this case, as Srinivasa Ramanujan proved, the smallest number with formula_5 divisors is the product of the formula_2 smallest Fermi–Dirac primes. Its divisors are the numbers obtained by multiplying together any subset of these formula_2 Fermi–Dirac primes. For instance, the smallest number with 1024 divisors is obtained by multiplying together the first ten Fermi–Dirac primes: formula_6 In the theory of infinitary divisors of Cohen, the Fermi–Dirac primes are exactly the numbers whose only infinitary divisors are 1 and the number itself. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
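Both characterizations above are easy to check by machine. The following plain-Python sketch (an illustration, not taken from the references) generates the sequence by the smallest-non-divisor rule, recovers the factorization formula_3, and reproduces the product formula_6 of the first ten Fermi–Dirac primes:

```python
import math

def fermi_dirac_primes(count):
    """First `count` terms: each is the smallest integer not dividing the product so far."""
    seq, n, prod = [], 2, 1
    while len(seq) < count:
        if prod % n:
            seq.append(n)
            prod *= n
        n += 1
    return seq

def fermi_dirac_factorization(n):
    """Unique factorization of n into distinct Fermi-Dirac primes p**(2**k)."""
    parts, d = [], 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        k = 0
        while e:                              # write the prime exponent e in binary:
            if e & 1:
                parts.append(d ** (1 << k))   # each set bit contributes d**(2**k)
            e >>= 1
            k += 1
        d += 1
    if n > 1:
        parts.append(n)                       # leftover prime factor with exponent 1
    return sorted(parts)

print(fermi_dirac_primes(10))                 # [2, 3, 4, 5, 7, 9, 11, 13, 16, 17]
print(fermi_dirac_factorization(2400))        # [2, 3, 16, 25], as in the example above
print(math.prod(fermi_dirac_primes(10)))      # 294053760, smallest number with 1024 divisors
```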
[ { "math_id": 0, "text": "p^{2^k}" }, { "math_id": 1, "text": "p" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "2400 = 2\\cdot 3 \\cdot 16 \\cdot 25." }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "n=2^k" }, { "math_id": 6, "text": "294053760 = 2\\cdot 3\\cdot 4\\cdot 5\\cdot 7\\cdot 9\\cdot 11\\cdot 13\\cdot 16\\cdot 17." } ]
https://en.wikipedia.org/wiki?curid=71939967
719460
Laplace–Runge–Lenz vector
Vector used in astronomy In classical mechanics, the Laplace–Runge–Lenz (LRL) vector is a vector used chiefly to describe the shape and orientation of the orbit of one astronomical body around another, such as a binary star or a planet revolving around a star. For two bodies interacting by Newtonian gravity, the LRL vector is a constant of motion, meaning that it is the same no matter where it is calculated on the orbit; equivalently, the LRL vector is said to be "conserved". More generally, the LRL vector is conserved in all problems in which two bodies interact by a central force that varies as the inverse square of the distance between them; such problems are called Kepler problems. The hydrogen atom is a Kepler problem, since it comprises two charged particles interacting by Coulomb's law of electrostatics, another inverse-square central force. The LRL vector was essential in the first quantum mechanical derivation of the spectrum of the hydrogen atom, before the development of the Schrödinger equation. However, this approach is rarely used today. In classical and quantum mechanics, conserved quantities generally correspond to a symmetry of the system. The conservation of the LRL vector corresponds to an unusual symmetry; the Kepler problem is mathematically equivalent to a particle moving freely on the surface of a four-dimensional (hyper-)sphere, so that the whole problem is symmetric under certain rotations of the four-dimensional space. This higher symmetry results from two properties of the Kepler problem: the velocity vector always moves in a perfect circle and, for a given total energy, all such velocity circles intersect each other in the same two points. The Laplace–Runge–Lenz vector is named after Pierre-Simon de Laplace, Carl Runge and Wilhelm Lenz. It is also known as the Laplace vector, the Runge–Lenz vector and the Lenz vector. Ironically, none of those scientists discovered it. The LRL vector has been re-discovered and re-formulated several times; for example, it is equivalent to the dimensionless eccentricity vector of celestial mechanics. Various generalizations of the LRL vector have been defined, which incorporate the effects of special relativity, electromagnetic fields and even different types of central forces. Context. A single particle moving under any conservative central force has at least four constants of motion: the total energy E and the three Cartesian components of the angular momentum vector L with respect to the center of force. The particle's orbit is confined to the plane defined by the particle's initial momentum p (or, equivalently, its velocity v) and the vector r between the particle and the center of force (see Figure 1). This plane of motion is perpendicular to the constant angular momentum vector L = r × p; this may be expressed mathematically by the vector dot product equation r ⋅ L = 0. Given its mathematical definition below, the Laplace–Runge–Lenz vector (LRL vector) A is always perpendicular to the constant angular momentum vector L for all central forces (A ⋅ L = 0). Therefore, A always lies in the plane of motion. As shown below, A points from the center of force to the periapsis of the motion, the point of closest approach, and its length is proportional to the eccentricity of the orbit. The LRL vector A is constant in length and direction, but only for an inverse-square central force. For other central forces, the vector A is not constant, but changes in both length and direction. 
If the central force is "approximately" an inverse-square law, the vector A is approximately constant in length, but slowly rotates its direction. A "generalized" conserved LRL vector formula_0 can be defined for all central forces, but this generalized vector is a complicated function of position, and usually not expressible in closed form. The LRL vector differs from other conserved quantities in the following property. Whereas for typical conserved quantities, there is a corresponding cyclic coordinate in the three-dimensional Lagrangian of the system, there does "not" exist such a coordinate for the LRL vector. Thus, the conservation of the LRL vector must be derived directly, e.g., by the method of Poisson brackets, as described below. Conserved quantities of this kind are called "dynamic", in contrast to the usual "geometric" conservation laws, e.g., that of the angular momentum. History of rediscovery. The LRL vector A is a constant of motion of the Kepler problem, and is useful in describing astronomical orbits, such as the motion of planets and binary stars. Nevertheless, it has never been well known among physicists, possibly because it is less intuitive than momentum and angular momentum. Consequently, it has been rediscovered independently several times over the last three centuries. Jakob Hermann was the first to show that A is conserved for a special case of the inverse-square central force, and worked out its connection to the eccentricity of the orbital ellipse. Hermann's work was generalized to its modern form by Johann Bernoulli in 1710. At the end of the century, Pierre-Simon de Laplace rediscovered the conservation of A, deriving it analytically, rather than geometrically. In the middle of the nineteenth century, William Rowan Hamilton derived the equivalent eccentricity vector defined below, using it to show that the momentum vector p moves on a circle for motion under an inverse-square central force (Figure 3). At the beginning of the twentieth century, Josiah Willard Gibbs derived the same vector by vector analysis. Gibbs' derivation was used as an example by Carl Runge in a popular German textbook on vectors, which was referenced by Wilhelm Lenz in his paper on the (old) quantum mechanical treatment of the hydrogen atom. In 1926, Wolfgang Pauli used the LRL vector to derive the energy levels of the hydrogen atom using the matrix mechanics formulation of quantum mechanics, after which it became known mainly as the "Runge–Lenz vector". Mathematical definition. An inverse-square central force acting on a single particle is described by the equation formula_1 The corresponding potential energy is given by formula_2. The constant parameter k describes the strength of the central force; it is equal to "G"⋅"M"⋅"m" for gravitational and −"k"e⋅"Q"⋅"q" for electrostatic forces. The force is attractive if "k" &gt; 0 and repulsive if "k" &lt; 0. The LRL vector A is defined mathematically by the formula formula_3 where The SI units of the LRL vector are joule-kilogram-meter (J⋅kg⋅m). This follows because the units of p and L are kg⋅m/s and J⋅s, respectively. This agrees with the units of m (kg) and of k (N⋅m2). This definition of the LRL vector A pertains to a single point particle of mass m moving under the action of a fixed force. However, the same definition may be extended to two-body problems such as the Kepler problem, by taking m as the reduced mass of the two bodies and r as the vector between the two bodies. 
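The conservation of A can also be illustrated numerically. The sketch below (arbitrary units with m = k = 1; it assumes NumPy and SciPy, and the particular initial conditions are an arbitrary choice) integrates a bound Kepler orbit and evaluates the defining expression formula_3 at several times:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 1.0                                   # mass and force constant, arbitrary units

def kepler(t, y):
    r, v = y[:3], y[3:]
    a = -k * r / (m * np.linalg.norm(r)**3)       # inverse-square attractive force
    return np.concatenate([v, a])

def lrl_vector(r, v):
    p = m * v
    L = np.cross(r, p)
    return np.cross(p, L) - m * k * r / np.linalg.norm(r)

y0 = np.array([1.0, 0.0, 0.0, 0.0, 1.2, 0.0])     # an eccentric bound orbit (E < 0)
sol = solve_ivp(kepler, (0.0, 20.0), y0, rtol=1e-10, atol=1e-12, dense_output=True)

for t in (0.0, 5.0, 10.0, 20.0):
    r, v = sol.sol(t)[:3], sol.sol(t)[3:]
    print(t, np.round(lrl_vector(r, v), 8))       # the same vector, up to integration error
```

For these initial conditions the printed vector stays fixed and points along the x-axis, toward the periapsis, as described below.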
Since the assumed force is conservative, the total energy E is a constant of motion, formula_6 The assumed force is also a central force. Hence, the angular momentum vector L is also conserved and defines the plane in which the particle travels. The LRL vector A is perpendicular to the angular momentum vector L because both p × L and r are perpendicular to L. It follows that A lies in the plane of motion. Alternative formulations for the same constant of motion may be defined, typically by scaling the vector with constants, such as the mass "m", the force parameter "k" or the angular momentum "L". The most common variant is to divide A by "mk", which yields the eccentricity vector, a dimensionless vector along the semi-major axis whose modulus equals the eccentricity of the conic: formula_7 An equivalent formulation multiplies this eccentricity vector by the major semiaxis a, giving the resulting vector the units of length. Yet another formulation divides A by formula_8, yielding an equivalent conserved quantity with units of inverse length, a quantity that appears in the solution of the Kepler problem formula_9 where formula_10 is the angle between A and the position vector r. Further alternative formulations are given below. Derivation of the Kepler orbits. The "shape" and "orientation" of the orbits can be determined from the LRL vector as follows. Taking the dot product of A with the position vector r gives the equation formula_11 where θ is the angle between r and A (Figure 2). Permuting the scalar triple product yields formula_12 Rearranging yields the solution for the Kepler equation formula_13 This corresponds to the formula for a conic section of eccentricity "e" formula_14 where the eccentricity formula_15 and C is a constant. Taking the dot product of A with itself yields an equation involving the total energy E, formula_16 which may be rewritten in terms of the eccentricity, formula_17 Thus, if the energy E is negative (bound orbits), the eccentricity is less than one and the orbit is an ellipse. Conversely, if the energy is positive (unbound orbits, also called "scattered orbits"), the eccentricity is greater than one and the orbit is a hyperbola. Finally, if the energy is exactly zero, the eccentricity is one and the orbit is a parabola. In all cases, the direction of A lies along the symmetry axis of the conic section and points from the center of force toward the periapsis, the point of closest approach. Circular momentum hodographs. The conservation of the LRL vector A and angular momentum vector L is useful in showing that the momentum vector p moves on a circle under an inverse-square central force. Taking the dot product of formula_18 with itself yields formula_19 Further choosing L along the z-axis, and the major semiaxis as the x-axis, yields the locus equation for p, formula_20 In other words, the momentum vector p is confined to a circle of radius "mk"/"L" = "L"/"ℓ" centered on (0, "A"/"L"). For bounded orbits, the eccentricity e corresponds to the cosine of the angle η shown in Figure 3. For unbounded orbits, we have formula_21 and so the circle does not intersect the formula_22-axis. In the degenerate limit of circular orbits, and thus vanishing A, the circle centers at the origin (0,0). For brevity, it is also useful to introduce the variable formula_23. This circular hodograph is useful in illustrating the symmetry of the Kepler problem. Constants of motion and superintegrability. 
The seven scalar quantities E, A and L (being vectors, the latter two contribute three conserved quantities each) are related by two equations, A ⋅ L = 0 and "A"2 = "m"2"k"2 + 2 "mEL"2, giving five independent constants of motion. (Since the magnitude of A, hence the eccentricity e of the orbit, can be determined from the total angular momentum L and the energy E, only the "direction" of A is conserved independently; moreover, since A must be perpendicular to L, it contributes "only one" additional conserved quantity.) This is consistent with the six initial conditions (the particle's initial position and velocity vectors, each with three components) that specify the orbit of the particle, since the initial time is not determined by a constant of motion. The resulting 1-dimensional orbit in 6-dimensional phase space is thus completely specified. A mechanical system with d degrees of freedom can have at most 2"d" − 1 constants of motion, since there are 2"d" initial conditions and the initial time cannot be determined by a constant of motion. A system with more than d constants of motion is called "superintegrable" and a system with 2"d" − 1 constants is called maximally superintegrable. Since the solution of the Hamilton–Jacobi equation in one coordinate system can yield only d constants of motion, superintegrable systems must be separable in more than one coordinate system. The Kepler problem is maximally superintegrable, since it has three degrees of freedom ("d" = 3) and five independent constant of motion; its Hamilton–Jacobi equation is separable in both spherical coordinates and parabolic coordinates, as described below. Maximally superintegrable systems follow closed, one-dimensional orbits in phase space, since the orbit is the intersection of the phase-space isosurfaces of their constants of motion. Consequently, the orbits are perpendicular to all gradients of all these independent isosurfaces, five in this specific problem, and hence are determined by the generalized cross products of all of these gradients. As a result, all superintegrable systems are automatically describable by Nambu mechanics, alternatively, and equivalently, to Hamiltonian mechanics. Maximally superintegrable systems can be quantized using commutation relations, as illustrated below. Nevertheless, equivalently, they are also quantized in the Nambu framework, such as this classical Kepler problem into the quantum hydrogen atom. Evolution under perturbed potentials. The Laplace–Runge–Lenz vector A is conserved only for a perfect inverse-square central force. In most practical problems such as planetary motion, however, the interaction potential energy between two bodies is not exactly an inverse square law, but may include an additional central force, a so-called "perturbation" described by a potential energy "h"("r"). In such cases, the LRL vector rotates slowly in the plane of the orbit, corresponding to a slow apsidal precession of the orbit. By assumption, the perturbing potential "h"("r") is a conservative central force, which implies that the total energy E and angular momentum vector L are conserved. Thus, the motion still lies in a plane perpendicular to L and the magnitude A is conserved, from the equation "A"2 = "m"2"k"2 + 2"mEL"2. The perturbation potential "h"("r") may be any sort of function, but should be significantly weaker than the main inverse-square force between the two bodies. The "rate" at which the LRL vector rotates provides information about the perturbing potential "h"("r"). 
Using canonical perturbation theory and action-angle coordinates, it is straightforward to show that A rotates at a rate of, formula_24 where T is the orbital period, and the identity "L" "dt" = "m" "r"2 "dθ" was used to convert the time integral into an angular integral (Figure 5). The expression in angular brackets, ⟨"h"("r")⟩, represents the perturbing potential, but "averaged" over one full period; that is, averaged over one full passage of the body around its orbit. Mathematically, this time average corresponds to the following quantity in curly braces. This averaging helps to suppress fluctuations in the rate of rotation. This approach was used to help verify Einstein's theory of general relativity, which adds a small effective inverse-cubic perturbation to the normal Newtonian gravitational potential, formula_25 Inserting this function into the integral and using the equation formula_26 to express r in terms of θ, the precession rate of the periapsis caused by this non-Newtonian perturbation is calculated to be formula_27 which closely matches the observed anomalous precession of Mercury and binary pulsars. This agreement with experiment is strong evidence for general relativity. Poisson brackets. The unscaled functions. The algebraic structure of the problem is, as explained in later sections, SO(4)/Z2 ~ SO(3) × SO(3). The three components "Li" of the angular momentum vector L have the Poisson brackets formula_28 where i=1,2,3 and "εijs" is the fully antisymmetric tensor, i.e., the Levi-Civita symbol; the summation index s is used here to avoid confusion with the force parameter k defined above. Then since the LRL vector A transforms like a vector, we have the following Poisson bracket relations between A and L: formula_29 Finally, the Poisson bracket relations between the different components of A are as follows: formula_30 where formula_31 is the Hamiltonian. Note that the span of the components of A and the components of L is not closed under Poisson brackets, because of the factor of formula_31 on the right-hand side of this last relation. Finally, since both L and A are constants of motion, we have formula_32 The Poisson brackets will be extended to quantum mechanical commutation relations in the next section and to Lie brackets in a following section. The scaled functions. As noted below, a scaled Laplace–Runge–Lenz vector D may be defined with the same units as angular momentum by dividing A by formula_33. Since D still transforms like a vector, the Poisson brackets of D with the angular momentum vector L can then be written in a similar form formula_34 The Poisson brackets of D with "itself" depend on the sign of H, i.e., on whether the energy is negative (producing closed, elliptical orbits under an inverse-square central force) or positive (producing open, hyperbolic orbits under an inverse-square central force). For "negative" energies—i.e., for bound systems—the Poisson brackets are formula_35 We may now appreciate the motivation for the chosen scaling of D: With this scaling, the Hamiltonian no longer appears on the right-hand side of the preceding relation. Thus, the span of the three components of L and the three components of D forms a six-dimensional Lie algebra under the Poisson bracket. This Lie algebra is isomorphic to so(4), the Lie algebra of the 4-dimensional rotation group SO(4). By contrast, for "positive" energy, the Poisson brackets have the opposite sign, formula_36 In this case, the Lie algebra is isomorphic to so(3,1). 
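Returning to the perturbation result above, the precession rate formula_27 can be checked numerically for Mercury. The sketch below is a rough check only: the orbital elements and physical constants are assumed standard values rather than quantities taken from this article, and the reduction of the rate to a per-orbit periapsis shift uses k = GMm together with the standard Kepler expression for the angular momentum of an ellipse.

import math

# Assumed standard values (SI units)
GM_sun = 1.32712440018e20   # gravitational parameter of the Sun, m^3/s^2
c      = 2.99792458e8       # speed of light, m/s
a      = 5.7909e10          # Mercury's semi-major axis, m
e      = 0.2056             # Mercury's orbital eccentricity
T_days = 87.969             # Mercury's orbital period, days

# Per-orbit shift: T times the rate 6*pi*k**2/(T*L**2*c**2); with k = G*M*m and
# L**2 = m**2*G*M*a*(1 - e**2) for a Kepler ellipse this equals 6*pi*G*M/(c**2*a*(1 - e**2)).
dphi_per_orbit = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))   # radians per orbit

orbits_per_century = 100 * 365.25 / T_days
arcsec_per_century = math.degrees(dphi_per_orbit * orbits_per_century) * 3600
print(arcsec_per_century)   # ~43 arcseconds per century

The result, about 43 arcseconds per century, is the well-known anomalous precession of Mercury's perihelion.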
The distinction between positive and negative energies arises because the desired scaling—the one that eliminates the Hamiltonian from the right-hand side of the Poisson bracket relations between the components of the scaled LRL vector—involves the "square root" of the Hamiltonian. To obtain real-valued functions, we must then take the absolute value of the Hamiltonian, which distinguishes between positive values (where formula_37) and negative values (where formula_38). Laplace-Runge-Lenz operator for the hydrogen atom in momentum space. Scaled Laplace-Runge-Lenz operator in the momentum space was found in 2022 . The formula for the operator is simpler than in position space: formula_39 where the "degree operator" formula_40 multiplies a homogeneous polynomial by its degree. Casimir invariants and the energy levels. The Casimir invariants for negative energies are formula_41 and have vanishing Poisson brackets with all components of D and L, formula_42 "C"2 is trivially zero, since the two vectors are always perpendicular. However, the other invariant, "C"1, is non-trivial and depends only on m, k and E. Upon canonical quantization, this invariant allows the energy levels of hydrogen-like atoms to be derived using only quantum mechanical canonical commutation relations, instead of the conventional solution of the Schrödinger equation. This derivation is discussed in detail in the next section. Quantum mechanics of the hydrogen atom. Poisson brackets provide a simple guide for quantizing most classical systems: the commutation relation of two quantum mechanical operators is specified by the Poisson bracket of the corresponding classical variables, multiplied by "iħ". By carrying out this quantization and calculating the eigenvalues of the C1 Casimir operator for the Kepler problem, Wolfgang Pauli was able to derive the energy levels of hydrogen-like atoms (Figure 6) and, thus, their atomic emission spectrum. This elegant 1926 derivation was obtained "before the development of the Schrödinger equation". A subtlety of the quantum mechanical operator for the LRL vector A is that the momentum and angular momentum operators do not commute; hence, the quantum operator cross product of p and L must be defined carefully. Typically, the operators for the Cartesian components "As" are defined using a symmetrized (Hermitian) product, formula_43 Once this is done, one can show that the quantum LRL operators satisfy commutations relations exactly analogous to the Poisson bracket relations in the previous section—just replacing the Poisson bracket with formula_44 times the commutator. From these operators, additional ladder operators for L can be defined, formula_45 These further connect "different" eigenstates of L2, so different spin multiplets, among themselves. A normalized first Casimir invariant operator, quantum analog of the above, can likewise be defined, formula_46 where "H"−1 is the inverse of the Hamiltonian energy operator, and I is the identity operator. Applying these ladder operators to the eigenstates |"ℓ""mn"〉 of the total angular momentum, azimuthal angular momentum and energy operators, the eigenvalues of the first Casimir operator, C1, are seen to be quantized, "n"2 − 1. Importantly, by dint of the vanishing of "C"2, they are independent of the ℓ and m quantum numbers, making the energy levels degenerate. Hence, the energy levels are given by formula_47 which coincides with the Rydberg formula for hydrogen-like atoms (Figure 6). 
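As a quick numerical check of this last result, the sketch below (Python; the CODATA constant values are assumptions of the example, with k taken as the Coulomb force parameter e2/(4πε0) for the electron–proton system) evaluates formula_47 for the first few levels and counts the n2 degenerate states at each one.

import math

# Assumed CODATA values (SI units)
hbar = 1.054571817e-34    # J s
m_e  = 9.1093837015e-31   # kg
e    = 1.602176634e-19    # C
eps0 = 8.8541878128e-12   # F/m

k = e**2 / (4 * math.pi * eps0)   # Coulomb force parameter for hydrogen

for n in (1, 2, 3):
    E_n = -m_e * k**2 / (2 * hbar**2 * n**2)        # energy level, J
    degeneracy = sum(2 * l + 1 for l in range(n))   # n^2 states, independent of l and m
    print(n, E_n / e, degeneracy)   # -13.6 eV, -3.40 eV, -1.51 eV; 1, 4, 9 states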
The additional symmetry operators A have connected the different ℓ multiplets among themselves, for a given energy (and "C"1), dictating "n"2 states at each level. In effect, they have enlarged the angular momentum group SO(3) to SO(4)/Z2 ~ SO(3) × SO(3). Conservation and symmetry. The conservation of the LRL vector corresponds to a subtle symmetry of the system. In classical mechanics, symmetries are continuous operations that map one orbit onto another without changing the energy of the system; in quantum mechanics, symmetries are continuous operations that "mix" electronic orbitals of the same energy, i.e., degenerate energy levels. A conserved quantity is usually associated with such symmetries. For example, every central force is symmetric under the rotation group SO(3), leading to the conservation of the angular momentum L. Classically, an overall rotation of the system does not affect the energy of an orbit; quantum mechanically, rotations mix the spherical harmonics of the same quantum number ℓ without changing the energy. The symmetry for the inverse-square central force is higher and more subtle. The peculiar symmetry of the Kepler problem results in the conservation of both the angular momentum vector L and the LRL vector A (as defined above) and, quantum mechanically, ensures that the energy levels of hydrogen do not depend on the angular momentum quantum numbers ℓ and m. The symmetry is more subtle, however, because the symmetry operation must take place in a higher-dimensional space; such symmetries are often called "hidden symmetries". Classically, the higher symmetry of the Kepler problem allows for continuous alterations of the orbits that preserve energy but not angular momentum; expressed another way, orbits of the same energy but different angular momentum (eccentricity) can be transformed continuously into one another. Quantum mechanically, this corresponds to mixing orbitals that differ in the ℓ and m quantum numbers, such as the s("ℓ" = 0) and p("ℓ" = 1) atomic orbitals. Such mixing cannot be done with ordinary three-dimensional translations or rotations, but is equivalent to a rotation in a higher dimension. For "negative" energies – i.e., for bound systems – the higher symmetry group is SO(4), which preserves the length of four-dimensional vectors formula_48 In 1935, Vladimir Fock showed that the quantum mechanical bound Kepler problem is equivalent to the problem of a free particle confined to a three-dimensional unit sphere in four-dimensional space. Specifically, Fock showed that the Schrödinger wavefunction in the momentum space for the Kepler problem was the stereographic projection of the spherical harmonics on the sphere. Rotation of the sphere and re-projection results in a continuous mapping of the elliptical orbits without changing the energy, an SO(4) symmetry sometimes known as Fock symmetry; quantum mechanically, this corresponds to a mixing of all orbitals of the same energy quantum number n. Valentine Bargmann noted subsequently that the Poisson brackets for the angular momentum vector L and the scaled LRL vector A formed the Lie algebra for SO(4). Simply put, the six quantities A and L correspond to the six conserved angular momenta in four dimensions, associated with the six possible simple rotations in that space (there are six ways of choosing two axes from four). 
This conclusion does not imply that our universe is a three-dimensional sphere; it merely means that this particular physics problem (the two-body problem for inverse-square central forces) is "mathematically equivalent" to a free particle on a three-dimensional sphere. For "positive" energies – i.e., for unbound, "scattered" systems – the higher symmetry group is SO(3,1), which preserves the Minkowski length of 4-vectors formula_49 Both the negative- and positive-energy cases were considered by Fock and Bargmann and have been reviewed encyclopedically by Bander and Itzykson. The orbits of central-force systems – and those of the Kepler problem in particular – are also symmetric under reflection. Therefore, the SO(3), SO(4) and SO(3,1) groups cited above are not the full symmetry groups of their orbits; the full groups are O(3), O(4), and O(3,1), respectively. Nevertheless, only the connected subgroups, SO(3), SO(4), and SO+(3,1), are needed to demonstrate the conservation of the angular momentum and LRL vectors; the reflection symmetry is irrelevant for conservation, which may be derived from the Lie algebra of the group. Rotational symmetry in four dimensions. The connection between the Kepler problem and four-dimensional rotational symmetry SO(4) can be readily visualized. Let the four-dimensional Cartesian coordinates be denoted ("w", "x", "y", "z") where ("x", "y", "z") represent the Cartesian coordinates of the normal position vector r. The three-dimensional momentum vector p is associated with a four-dimensional vector formula_50 on a three-dimensional unit sphere formula_51 where formula_52 is the unit vector along the new w axis. The transformation mapping p to η can be uniquely inverted; for example, the x component of the momentum equals formula_53 and similarly for "py" and "pz". In other words, the three-dimensional vector p is a stereographic projection of the four-dimensional formula_50 vector, scaled by "p"0 (Figure 8). Without loss of generality, we may eliminate the normal rotational symmetry by choosing the Cartesian coordinates such that the z axis is aligned with the angular momentum vector L and the momentum hodographs are aligned as they are in Figure 7, with the centers of the circles on the y axis. Since the motion is planar, and p and L are perpendicular, "pz" = "η""z" = 0 and attention may be restricted to the three-dimensional vector formula_54. The family of Apollonian circles of momentum hodographs (Figure 7) correspond to a family of great circles on the three-dimensional formula_50 sphere, all of which intersect the "η""x" axis at the two foci "η""x" = ±1, corresponding to the momentum hodograph foci at "px" = ±"p"0. These great circles are related by a simple rotation about the "η""x"-axis (Figure 8). This rotational symmetry transforms all the orbits of the same energy into one another; however, such a rotation is orthogonal to the usual three-dimensional rotations, since it transforms the fourth dimension "η""w". This higher symmetry is characteristic of the Kepler problem and corresponds to the conservation of the LRL vector. An elegant action-angle variables solution for the Kepler problem can be obtained by eliminating the redundant four-dimensional coordinates formula_50 in favor of elliptic cylindrical coordinates ("χ", "ψ", "φ") formula_55 where sn, cn and dn are Jacobi's elliptic functions. Generalizations to other potentials and relativity. 
The Laplace–Runge–Lenz vector can also be generalized to identify conserved quantities that apply to other situations. In the presence of a uniform electric field E, the generalized Laplace–Runge–Lenz vector formula_0 is formula_56 where q is the charge of the orbiting particle. Although formula_0 is not conserved, it gives rise to a conserved quantity, namely formula_57. Further generalizing the Laplace–Runge–Lenz vector to other potentials and special relativity, the most general form can be written as formula_58 where "u" = 1/"r" and "ξ" = cos "θ", with the angle θ defined by formula_59 and γ is the Lorentz factor. As before, we may obtain a conserved binormal vector B by taking the cross product with the conserved angular momentum vector formula_60 These two vectors may likewise be combined into a conserved dyadic tensor W, formula_61 In illustration, the LRL vector for a non-relativistic, isotropic harmonic oscillator can be calculated. Since the force is central, formula_62 the angular momentum vector is conserved and the motion lies in a plane. The conserved dyadic tensor can be written in a simple form formula_63 although p and r are not necessarily perpendicular. The corresponding Runge–Lenz vector is more complicated, formula_64 where formula_65 is the natural oscillation frequency, and formula_66 Proofs that the Laplace–Runge–Lenz vector is conserved in Kepler problems. The following are arguments showing that the LRL vector is conserved under central forces that obey an inverse-square law. Direct proof of conservation. A central force formula_67 acting on the particle is formula_68 for some function formula_69 of the radius formula_70. Since the angular momentum formula_71 is conserved under central forces, formula_72 and formula_73 where the momentum formula_74 and where the triple cross product has been simplified using Lagrange's formula formula_75 The identity formula_76 yields the equation formula_77 For the special case of an inverse-square central force formula_78, this equals formula_79 Therefore, A is conserved for inverse-square central forces formula_80 A shorter proof is obtained by using the relation of angular momentum to angular velocity, formula_81, which holds for a particle traveling in a plane perpendicular to formula_82. Specifying to inverse-square central forces, the time derivative of formula_83 is formula_84 where the last equality holds because a unit vector can only change by rotation, and formula_85 is the orbital velocity of the rotating vector. Thus, A is seen to be a difference of two vectors with equal time derivatives. As described elsewhere in this article, this LRL vector A is a special case of a general conserved vector formula_0 that can be defined for all central forces. However, since most central forces do not produce closed orbits (see Bertrand's theorem), the analogous vector formula_0 rarely has a simple definition and is generally a multivalued function of the angle θ between r and formula_0. Hamilton–Jacobi equation in parabolic coordinates. The constancy of the LRL vector can also be derived from the Hamilton–Jacobi equation in parabolic coordinates ("ξ", "η"), which are defined by the equations formula_86 where r represents the radius in the plane of the orbit formula_87 The inversion of these coordinates is formula_88 Separation of the Hamilton–Jacobi equation in these coordinates yields the two equivalent equations formula_89 where Γ is a constant of motion. 
Subtraction and re-expression in terms of the Cartesian momenta "p""x" and "p""y" shows that Γ is equivalent to the LRL vector formula_90 Noether's theorem. The connection between the rotational symmetry described above and the conservation of the LRL vector can be made quantitative by way of Noether's theorem. This theorem, which is used for finding constants of motion, states that any infinitesimal variation of the generalized coordinates of a physical system formula_91 that causes the Lagrangian to vary to first order by a total time derivative formula_92 corresponds to a conserved quantity Γ formula_93 In particular, the conserved LRL vector component "As" corresponds to the variation in the coordinates formula_94 where i equals 1, 2 and 3, with "xi" and "pi" being the i-th components of the position and momentum vectors r and p, respectively; as usual, "δis" represents the Kronecker delta. The resulting first-order change in the Lagrangian is formula_95 Substitution into the general formula for the conserved quantity Γ yields the conserved component "As" of the LRL vector, formula_96 Lie transformation. Noether's theorem derivation of the conservation of the LRL vector A is elegant, but has one drawback: the coordinate variation "δx""i" involves not only the "position" r, but also the "momentum" p or, equivalently, the "velocity" v. This drawback may be eliminated by instead deriving the conservation of A using an approach pioneered by Sophus Lie. Specifically, one may define a Lie transformation in which the coordinates r and the time t are scaled by different powers of a parameter λ (Figure 9), formula_97 This transformation changes the total angular momentum L and energy E, formula_98 but preserves their product "EL"2. Therefore, the eccentricity e and the magnitude A are preserved, as may be seen from the equation for "A"2 formula_99 The direction of A is preserved as well, since the semiaxes are not altered by a global scaling. This transformation also preserves Kepler's third law, namely, that the semiaxis a and the period T form a constant "T"2/"a"3. Alternative scalings, symbols and formulations. Unlike the momentum and angular momentum vectors p and L, there is no universally accepted definition of the Laplace–Runge–Lenz vector; several different scaling factors and symbols are used in the scientific literature. The most common definition is given above, but another common alternative is to divide by the quantity "mk" to obtain a dimensionless conserved eccentricity vector formula_100 where v is the velocity vector. This scaled vector e has the same direction as A and its magnitude equals the eccentricity of the orbit, and thus vanishes for circular orbits. Other scaled versions are also possible, e.g., by dividing A by m alone formula_101 or by "p"0 formula_102 which has the same units as the angular momentum vector L. In rare cases, the sign of the LRL vector may be reversed, i.e., scaled by −1. Other common symbols for the LRL vector include a, R, F, J and V. However, the choice of scaling and symbol for the LRL vector do not affect its conservation. An alternative conserved vector is the binormal vector B studied by William Rowan Hamilton, formula_103 which is conserved and points along the "minor" semiaxis of the ellipse. (It is not defined for vanishing eccentricity.) The LRL vector A = B × L is the cross product of B and L (Figure 4). 
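The relation A = B × L can be verified numerically. The sketch below (Python with NumPy; the state and the choice m = k = 1 are arbitrary assumptions) builds B from formula_103 and compares B × L with the LRL vector computed directly from its definition.

import numpy as np

m, k = 1.0, 1.0
r = np.array([1.0, 0.0, 0.0])
p = np.array([0.0, 0.8, 0.0])

L = np.cross(r, p)
L2 = L @ L
r_norm = np.linalg.norm(r)

A = np.cross(p, L) - m * k * r / r_norm            # definition of the LRL vector
B = p - (m * k / (L2 * r_norm)) * np.cross(L, r)   # Hamilton's binormal vector

print(A)                             # [-0.36, 0, 0]
print(np.cross(B, L))                # the same vector, confirming A = B x L
print(np.dot(B, A), np.dot(B, L))    # both 0: B is perpendicular to A and L
print(np.linalg.norm(B), np.linalg.norm(A) / np.linalg.norm(L))   # both 0.45: |B| = A/L, as noted below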
On the momentum hodograph in the relevant section above, B is readily seen to connect the origin of momenta with the center of the circular hodograph, and to possess magnitude "A"/"L". At perihelion, it points in the direction of the momentum. The vector B is denoted as "binormal" since it is perpendicular to both A and L. Similar to the LRL vector itself, the binormal vector can be defined with different scalings and symbols. The two conserved vectors A and B can be combined to form a conserved dyadic tensor W, formula_104 where α and β are arbitrary scaling constants and formula_105 represents the tensor product (which is not related to the vector cross product, despite their similar symbol). Written in explicit components, this equation reads formula_106 Being perpendicular to each other, the vectors A and B can be viewed as the principal axes of the conserved tensor W, i.e., its scaled eigenvectors. W is perpendicular to L, formula_107 since A and B are both perpendicular to L as well, L ⋅ A = L ⋅ B = 0. More directly, this equation reads, in explicit components, formula_108 References.
[ { "math_id": 0, "text": "\\mathcal{A}" }, { "math_id": 1, "text": "\n\\mathbf{F}(r)=-\\frac{k}{r^{2}}\\mathbf{\\hat{r}};\n" }, { "math_id": 2, "text": "V(r) = - k / r" }, { "math_id": 3, "text": " \\mathbf{A} = \\mathbf{p} \\times \\mathbf{L} - m k \\mathbf{\\hat{r}}," }, { "math_id": 4, "text": "\\mathbf{\\hat{r}}" }, { "math_id": 5, "text": "\\mathbf{\\hat{r}} = \\frac{\\mathbf{r}}{r}" }, { "math_id": 6, "text": "\nE = \\frac{p^{2}}{2m} - \\frac{k}{r} = \\frac{1}{2} mv^{2} - \\frac{k}{r}.\n" }, { "math_id": 7, "text": "\n\\mathbf{e} = \\frac{\\mathbf{A}}{m k} = \\frac{1}{m k}(\\mathbf{p} \\times \\mathbf{L}) - \\mathbf{\\hat{r}}.\n" }, { "math_id": 8, "text": "L^2" }, { "math_id": 9, "text": "\nu \\equiv \\frac{1}{r} = \\frac{km}{L^2} + \\frac{A}{L^2} \\cos\\theta\n" }, { "math_id": 10, "text": "\\theta" }, { "math_id": 11, "text": "\n\\mathbf{A} \\cdot \\mathbf{r} = A \\cdot r \\cdot \\cos\\theta = \n\\mathbf{r} \\cdot \\left( \\mathbf{p} \\times \\mathbf{L} \\right) - mkr,\n" }, { "math_id": 12, "text": "\n\\mathbf{r} \\cdot\\left(\\mathbf{p}\\times \\mathbf{L}\\right) = \n\\left(\\mathbf{r} \\times \\mathbf{p}\\right)\\cdot\\mathbf{L} = \n\\mathbf{L}\\cdot\\mathbf{L}=L^2\n" }, { "math_id": 13, "text": "\n\\frac{1}{r} = \\frac{mk}{L^2} + \\frac{A}{L^{2}} \\cos\\theta" }, { "math_id": 14, "text": "\n\\frac{1}{r} = C \\cdot \\left( 1 + e \\cdot \\cos\\theta \\right)\n" }, { "math_id": 15, "text": "e = \\frac{A}{\\left| mk \\right|} \\geq 0" }, { "math_id": 16, "text": "\nA^2 = m^2 k^2 + 2 m E L^2,\n" }, { "math_id": 17, "text": "\ne^{2} = 1 + \\frac{2L^2}{mk^2}E.\n" }, { "math_id": 18, "text": " mk \\hat{\\mathbf{r}} = \\mathbf{p} \\times \\mathbf{L} - \\mathbf{A} " }, { "math_id": 19, "text": " (mk)^2= A^2+ p^2 L^2 + 2 \\mathbf{L} \\cdot (\\mathbf{p} \\times \\mathbf{A}). " }, { "math_id": 20, "text": " p_x^2 + \\left(p_y - \\frac A L \\right)^2 = \\left( \\frac{mk} L \\right)^2." }, { "math_id": 21, "text": " A > m k" }, { "math_id": 22, "text": "p_x" }, { "math_id": 23, "text": "p_0 = \\sqrt{2m|E|}" }, { "math_id": 24, "text": "\\begin{align}\n\\frac{\\partial}{\\partial L} \\langle h(r) \\rangle & = \\frac{\\partial}{\\partial L} \\left\\{ \\frac{1}{T} \\int_0^T h(r) \\, dt \\right\\} \\\\[1em]\n& = \\frac{\\partial}{\\partial L} \\left\\{ \\frac{m}{L^{2}} \\int_0^{2\\pi} r^2 h(r) \\, d\\theta \\right\\},\n\\end{align}" }, { "math_id": 25, "text": "\nh(r) = \\frac{kL^{2}}{m^{2}c^{2}} \\left( \\frac{1}{r^{3}} \\right).\n" }, { "math_id": 26, "text": "\n\\frac{1}{r} = \\frac{mk}{L^2} \\left( 1 + \\frac{A}{mk} \\cos\\theta \\right)\n" }, { "math_id": 27, "text": "\n\\frac{6 \\pi k^2}{T L^2 c^2},\n" }, { "math_id": 28, "text": "\n\\{ L_i, L_j\\} = \\sum_{s=1}^3 \\varepsilon_{ijs} L_s,\n" }, { "math_id": 29, "text": "\\{A_i,L_j\\}=\\sum_{s=1}^3\\varepsilon_{ijs}A_s." }, { "math_id": 30, "text": "\\{A_i,A_j\\}=-2mH\\sum_{s=1}^3\\varepsilon_{ijs}L_s," }, { "math_id": 31, "text": "H" }, { "math_id": 32, "text": "\\{A_i, H\\} = \\{L_i, H\\} = 0." 
}, { "math_id": 33, "text": "p_0 = \\sqrt{2m|H|}" }, { "math_id": 34, "text": "\n\\{ D_i, L_j\\} = \\sum_{s=1}^3 \\varepsilon_{ijs} D_s.\n" }, { "math_id": 35, "text": "\n\\{ D_i, D_j\\} = \\sum_{s=1}^3 \\varepsilon_{ijs} L_s.\n" }, { "math_id": 36, "text": "\n\\{ D_i, D_j\\} = -\\sum_{s=1}^3 \\varepsilon_{ijs} L_s.\n" }, { "math_id": 37, "text": "|H| = H" }, { "math_id": 38, "text": "|H| = -H" }, { "math_id": 39, "text": " \\hat \\mathbf{A}_{\\mathbf p}=\\imath(\\hat l_{\\mathbf p}+1 )\\mathbf p -\\frac{(p^2+1)}{2}\\imath\\mathbf\\nabla_{\\mathbf p } , " }, { "math_id": 40, "text": " \\hat l_{\\mathbf p }=(\\mathbf p \\mathbf \\nabla_{\\mathbf p} ) " }, { "math_id": 41, "text": " \\begin{align}\nC_1 &= \\mathbf{D} \\cdot \\mathbf{D} + \\mathbf{L} \\cdot \\mathbf{L} = \\frac{mk^2}{2|E|}, \\\\\nC_2 &= \\mathbf{D} \\cdot \\mathbf{L} = 0,\n\\end{align}" }, { "math_id": 42, "text": "\n\\{ C_1, L_i \\} = \\{ C_1, D_i\\} = \n\\{ C_2, L_i \\} = \\{ C_2, D_i \\} = 0.\n" }, { "math_id": 43, "text": "\nA_s = - m k \\hat{r}_s + \\frac{1}{2} \\sum_{i=1}^3 \\sum_{j=1}^3 \\varepsilon_{sij} (p_i \\ell_j + \\ell_j p_i),\n" }, { "math_id": 44, "text": "1/(i\\hbar)" }, { "math_id": 45, "text": "\\begin{align}\nJ_0 &= A_3, \\\\\nJ_{\\pm 1} &= \\mp \\tfrac{1}{\\sqrt{2}} \\left( A_1 \\pm i A_2 \\right).\n\\end{align}" }, { "math_id": 46, "text": "C_1 = - \\frac{m k^2}{2 \\hbar^{2}} H^{-1} - I," }, { "math_id": 47, "text": "\nE_n = - \\frac{m k^2}{2\\hbar^{2} n^2},\n" }, { "math_id": 48, "text": "\n|\\mathbf{e}|^2 = e_1^2 + e_2^2 + e_3^2 + e_4^2.\n" }, { "math_id": 49, "text": "\nds^2 = e_1^2 + e_2^2 + e_3^2 - e_4^2.\n" }, { "math_id": 50, "text": "\\boldsymbol\\eta" }, { "math_id": 51, "text": "\\begin{align}\n\\boldsymbol\\eta & = \\frac{p^2 - p_0^2}{p^2 + p_0^2} \\mathbf{\\hat{w}} + \\frac{2 p_0}{p^2 + p_0^2} \\mathbf{p} \\\\[1em]\n& = \\frac{mk - r p_0^2}{mk} \\mathbf{\\hat{w}} + \\frac{rp_0}{mk} \\mathbf{p},\n\\end{align}" }, { "math_id": 52, "text": "\\mathbf{\\hat{w}}" }, { "math_id": 53, "text": " p_x = p_0 \\frac{\\eta_x}{1 - \\eta_w}, " }, { "math_id": 54, "text": "\\boldsymbol\\eta = (\\eta_w, \\eta_x, \\eta_y)" }, { "math_id": 55, "text": "\\begin{align}\n\\eta_w &= \\operatorname{cn} \\chi \\operatorname{cn} \\psi, \\\\[1ex]\n\n\\eta_x &= \\operatorname{sn} \\chi \\operatorname{dn} \\psi \\cos \\phi, \\\\[1ex]\n\n\\eta_y &= \\operatorname{sn} \\chi \\operatorname{dn} \\psi \\sin \\phi, \\\\[1ex]\n\n\\eta_z &= \\operatorname{dn} \\chi \\operatorname{sn} \\psi,\n\\end{align}" }, { "math_id": 56, "text": "\n\\mathcal{A} = \\mathbf{A} + \\frac{mq}{2} \\left[ \\left( \\mathbf{r} \\times \\mathbf{E} \\right) \\times \\mathbf{r} \\right],\n" }, { "math_id": 57, "text": "\\mathcal{A} \\cdot \\mathbf{E}" }, { "math_id": 58, "text": "\n\\mathcal{A} = \n\\left( \\frac{\\partial \\xi}{\\partial u} \\right) \\left(\\mathbf{p} \\times \\mathbf{L}\\right) +\n\\left[ \\xi - u \\left( \\frac{\\partial \\xi}{\\partial u} \\right)\\right] L^{2} \\mathbf{\\hat{r}},\n" }, { "math_id": 59, "text": "\n\\theta = L \\int^u \\frac{du}{\\sqrt{m^2 c^2 (\\gamma^2 - 1) - L^2 u^{2}}},\n" }, { "math_id": 60, "text": "\n\\mathcal{B} = \\mathbf{L} \\times \\mathcal{A}.\n" }, { "math_id": 61, "text": "\n\\mathcal{W} = \\alpha \\mathcal{A} \\otimes \\mathcal{A} + \\beta \\, \\mathcal{B} \\otimes \\mathcal{B}.\n" }, { "math_id": 62, "text": "\n\\mathbf{F}(r)= -k \\mathbf{r},\n" }, { "math_id": 63, "text": "\n\\mathcal{W} = \\frac{1}{2m} \\mathbf{p} \\otimes \\mathbf{p} + \\frac{k}{2} \\, \\mathbf{r} \\otimes \\mathbf{r},\n" }, { 
"math_id": 64, "text": "\n\\mathcal{A} = \\frac{1}{\\sqrt{mr^2 \\omega_0 A - mr^2 E + L^2}} \\left\\{ \\left( \\mathbf{p} \\times \\mathbf{L} \\right) + \\left(mr\\omega_0 A - mrE \\right) \\mathbf{\\hat{r}} \\right\\},\n" }, { "math_id": 65, "text": "\\omega_0 = \\sqrt{\\frac{k}{m}}" }, { "math_id": 66, "text": "A = (E^2-\\omega^2 L^2)^{1/2} / \\omega." }, { "math_id": 67, "text": "\\mathbf{F}" }, { "math_id": 68, "text": "\n\\mathbf{F} = \\frac{d\\mathbf{p}}{dt} = f(r) \\frac{\\mathbf{r}}{r} = f(r) \\mathbf{\\hat{r}}\n" }, { "math_id": 69, "text": "f(r)" }, { "math_id": 70, "text": "r" }, { "math_id": 71, "text": "\\mathbf{L} = \\mathbf{r} \\times \\mathbf{p}" }, { "math_id": 72, "text": "\\frac{d}{dt}\\mathbf{L} = 0" }, { "math_id": 73, "text": "\n\\frac{d}{dt} \\left( \\mathbf{p} \\times \\mathbf{L} \\right) = \\frac{d\\mathbf{p}}{dt} \\times \\mathbf{L} = f(r) \\mathbf{\\hat{r}} \\times \\left( \\mathbf{r} \\times m \\frac{d\\mathbf{r}}{dt} \\right) = f(r) \\frac{m}{r} \\left[ \\mathbf{r} \\left(\\mathbf{r} \\cdot \\frac{d\\mathbf{r}}{dt} \\right) - r^2 \\frac{d\\mathbf{r}}{dt} \\right],\n" }, { "math_id": 74, "text": "\\mathbf{p} = m \\frac{d\\mathbf{r}}{dt}" }, { "math_id": 75, "text": "\n\\mathbf{r} \\times \\left( \\mathbf{r} \\times \\frac{d\\mathbf{r}}{dt} \\right) = \\mathbf{r} \\left(\\mathbf{r} \\cdot \\frac{d\\mathbf{r}}{dt} \\right) - r^2 \\frac{d\\mathbf{r}}{dt}.\n" }, { "math_id": 76, "text": "\n\\frac{d}{dt} \\left( \\mathbf{r} \\cdot \\mathbf{r} \\right) = 2 \\mathbf{r} \\cdot \\frac{d\\mathbf{r}}{dt} = \\frac{d}{dt} (r^2) = 2r\\frac{dr}{dt}\n" }, { "math_id": 77, "text": "\n\\frac{d}{dt} \\left( \\mathbf{p} \\times \\mathbf{L} \\right) = \n-m f(r) r^2 \\left[ \\frac{1}{r} \\frac{d\\mathbf{r}}{dt} - \\frac{\\mathbf{r}}{r^2} \\frac{dr}{dt}\\right] = -m f(r) r^2 \\frac{d}{dt} \\left( \\frac{\\mathbf{r}}{r}\\right).\n" }, { "math_id": 78, "text": "f(r)=\\frac{-k}{r^{2}}" }, { "math_id": 79, "text": "\n\\frac{d}{dt} \\left( \\mathbf{p} \\times \\mathbf{L} \\right) = \nm k \\frac{d}{dt} \\left( \\frac{\\mathbf{r}}{r}\\right) = \n\\frac{d}{dt} (mk\\mathbf{\\hat{r}}).\n" }, { "math_id": 80, "text": "\n\\frac{d}{dt} \\mathbf{A} = \\frac{d}{dt} \\left( \\mathbf{p} \\times \\mathbf{L} \\right) - \\frac{d}{dt} \\left( mk\\mathbf{\\hat{r}} \\right) = \\mathbf{0}.\n" }, { "math_id": 81, "text": " \\mathbf{L} = m r^2 \\boldsymbol{\\omega}" }, { "math_id": 82, "text": " \\mathbf{L}" }, { "math_id": 83, "text": "\\mathbf{p} \\times \\mathbf{L}" }, { "math_id": 84, "text": "\n \\frac{d}{dt} \\mathbf{p} \\times \\mathbf{L} = \\left( \\frac{-k}{r^2} \\mathbf{\\hat{r}} \\right) \\times \\left(m r^2 \\boldsymbol{\\omega}\\right)\n= m k \\, \\boldsymbol{\\omega} \\times \\mathbf{\\hat{r}} = m k \\,\\frac{d}{dt}\\mathbf{\\hat{r}},\n" }, { "math_id": 85, "text": "\\boldsymbol{\\omega}\\times\\mathbf{\\hat{r}}" }, { "math_id": 86, "text": "\\begin{align}\n\\xi &= r + x, \\\\\n\\eta &= r - x,\n\\end{align}" }, { "math_id": 87, "text": "r = \\sqrt{x^2 + y^2}." 
}, { "math_id": 88, "text": "\\begin{align}\nx &= \\tfrac{1}{2} (\\xi - \\eta), \\\\\ny &= \\sqrt{\\xi\\eta},\n\\end{align}" }, { "math_id": 89, "text": " \\begin{align}\n2\\xi p_\\xi^2 - mk - mE\\xi &= -\\Gamma, \\\\\n2\\eta p_\\eta^2 - mk - mE\\eta &= \\Gamma,\n\\end{align}" }, { "math_id": 90, "text": "\n\\Gamma = p_y (x p_y - y p_x) - mk\\frac{x}{r} = A_x.\n" }, { "math_id": 91, "text": "\n\\delta q_i = \\varepsilon g_i(\\mathbf{q}, \\mathbf{\\dot{q}}, t)\n" }, { "math_id": 92, "text": "\n\\delta L = \\varepsilon \\frac{d}{dt} G(\\mathbf{q}, t)\n" }, { "math_id": 93, "text": "\n\\Gamma = -G + \\sum_i g_i \\left( \\frac{\\partial L}{\\partial \\dot{q}_i}\\right).\n" }, { "math_id": 94, "text": "\n\\delta_s x_i = \\frac{\\varepsilon}{2} \\left[ 2 p_i x_s - x_i p_s - \\delta_{is} \\left( \\mathbf{r} \\cdot \\mathbf{p} \\right) \\right],\n" }, { "math_id": 95, "text": "\n\\delta L = \\frac{1}{2}\\varepsilon mk\\frac{d}{dt} \\left( \\frac{x_s}{r} \\right).\n" }, { "math_id": 96, "text": "\nA_s = \\left[ p^2 x_s - p_s \\ \\left(\\mathbf{r} \\cdot \\mathbf{p}\\right) \\right] - mk \\left( \\frac{x_s}{r} \\right) = \n\\left[ \\mathbf{p} \\times \\left( \\mathbf{r} \\times \\mathbf{p} \\right) \\right]_s - mk \\left( \\frac{x_s}{r} \\right).\n" }, { "math_id": 97, "text": "\nt \\rightarrow \\lambda^{3}t , \\qquad \\mathbf{r} \\rightarrow \\lambda^{2}\\mathbf{r} , \\qquad\\mathbf{p} \\rightarrow \\frac{1}{\\lambda}\\mathbf{p}.\n" }, { "math_id": 98, "text": "\nL \\rightarrow \\lambda L, \\qquad E \\rightarrow \\frac{1}{\\lambda^{2}} E,\n" }, { "math_id": 99, "text": "A^2 = m^2 k^2 e^{2} = m^2 k^2 + 2 m E L^2." }, { "math_id": 100, "text": "\n\\mathbf{e} = \n\\frac{1}{mk} \\left(\\mathbf{p} \\times \\mathbf{L} \\right) - \\mathbf{\\hat{r}} = \n\\frac{m}{k} \\left(\\mathbf{v} \\times \\left( \\mathbf{r} \\times \\mathbf{v} \\right) \\right) - \\mathbf{\\hat{r}},\n" }, { "math_id": 101, "text": "\n\\mathbf{M} = \\mathbf{v} \\times \\mathbf{L} - k\\mathbf{\\hat{r}},\n" }, { "math_id": 102, "text": "\n\\mathbf{D} = \\frac{\\mathbf{A}}{p_{0}} = \n\\frac{1}{\\sqrt{2m|E|}}\n\\left\\{ \\mathbf{p} \\times \\mathbf{L} - m k \\mathbf{\\hat{r}} \\right\\},\n" }, { "math_id": 103, "text": "\n\\mathbf{B} = \\mathbf{p} - \\left(\\frac{mk}{L^{2}r} \\right) \\ \\left( \\mathbf{L} \\times \\mathbf{r} \\right),\n" }, { "math_id": 104, "text": "\n\\mathbf{W} = \\alpha \\mathbf{A} \\otimes \\mathbf{A} + \\beta \\, \\mathbf{B} \\otimes \\mathbf{B},\n" }, { "math_id": 105, "text": "\\otimes" }, { "math_id": 106, "text": "\nW_{ij} = \\alpha A_i A_j + \\beta B_i B_j.\n" }, { "math_id": 107, "text": "\n\\mathbf{L} \\cdot \\mathbf{W} = \n\\alpha \\left( \\mathbf{L} \\cdot \\mathbf{A} \\right) \\mathbf{A} + \\beta \\left( \\mathbf{L} \\cdot \\mathbf{B} \\right) \\mathbf{B} = 0,\n" }, { "math_id": 108, "text": "\n\\left( \\mathbf{L} \\cdot \\mathbf{W} \\right)_j = \\alpha \\left( \\sum_{i=1}^3 L_i A_i \\right) A_j + \\beta \\left( \\sum_{i=1}^3 L_i B_i \\right) B_j = 0.\n" } ]
https://en.wikipedia.org/wiki?curid=719460
719575
May's theorem
Social choice theorem on superiority of majority voting In social choice theory, May's theorem, also called the general possibility theorem, says that majority vote is the unique ranked social choice function between two candidates that satisfies the following criteria: anonymity (all voters are treated equally), neutrality (both candidates are treated equally), and positive responsiveness (breaking a tie in favor of a candidate makes that candidate the winner). The theorem was first published by Kenneth May in 1952. Various modifications have been suggested by others since the original publication. If rated voting is allowed, a wide variety of rules satisfy May's conditions, including score voting or highest median voting rules. Arrow's theorem does not apply to the case of two candidates (when there are trivially no "independent alternatives"), so this possibility result can be seen as the mirror analogue of that theorem. Note that anonymity is a stronger requirement than Arrow's non-dictatorship. Another way of explaining the fact that simple majority voting can successfully deal with at most two alternatives is to cite Nakamura's theorem. The theorem states that the number of alternatives that a rule can deal with successfully is less than the Nakamura number of the rule. The Nakamura number of simple majority voting is 3, except in the case of four voters. Supermajority rules may have greater Nakamura numbers. Formal statement. Let "A" and "B" be two possible choices, often called alternatives or candidates. A "preference" is then simply a choice of whether "A", "B", or neither is preferred. Denote the set of preferences by {"A", "B", 0}, where 0 represents neither. Let "N" be a positive integer. In this context, an "ordinal (ranked)" "social choice function" is a function formula_0 which aggregates individuals' preferences into a single preference. An "N"-tuple ("R"1, …, "R""N") ∈ {"A", "B", 0}"N" of voters' preferences is called a "preference profile". Define a social choice function called "simple majority voting" as follows: the outcome is "A" if more voters strictly prefer "A" to "B" than prefer "B" to "A", the outcome is "B" if more voters strictly prefer "B" to "A", and the outcome is 0 otherwise. May's theorem states that simple majority voting is the unique social welfare function satisfying all three of the following conditions: anonymity (the outcome is unchanged under any permutation of the voters), neutrality (interchanging "A" and "B" in every voter's preference interchanges them in the outcome), and positive responsiveness (if the outcome is "A" or 0 and a single voter changes their preference in favor of "A", the outcome becomes "A"). References.
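To make the formal statement concrete, the following sketch (Python; the encoding of a preference profile as a list of 'A', 'B' and 0 mirrors the notation above, and everything else is an illustrative assumption) implements simple majority voting as an instance of formula_0 and spot-checks anonymity and neutrality on a small profile.

def simple_majority(profile):
    # profile: sequence of preferences, each 'A', 'B' or 0 (no preference)
    a = sum(1 for r in profile if r == 'A')
    b = sum(1 for r in profile if r == 'B')
    return 'A' if a > b else 'B' if b > a else 0

profile = ['A', 'B', 'A', 0, 'A']
print(simple_majority(profile))                    # 'A'

# Anonymity: permuting the voters leaves the outcome unchanged.
print(simple_majority(list(reversed(profile))))    # still 'A'

# Neutrality: swapping the candidates in every ballot swaps the outcome.
swapped = ['B' if r == 'A' else 'A' if r == 'B' else 0 for r in profile]
print(simple_majority(swapped))                    # 'B'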
[ { "math_id": 0, "text": "F : \\{A,B,0\\}^N \\to \\{A,B,0\\}" } ]
https://en.wikipedia.org/wiki?curid=719575
71957559
Favre averaging
Favre averaging is the density-weighted averaging method used in variable-density or compressible turbulent flows in place of the Reynolds averaging. The method was introduced formally by the French scientist A. J. Favre in 1965, although Osborne Reynolds had already introduced density-weighted averaging in 1895. The averaging results in a simple form for the nonlinear convective terms of the Navier-Stokes equations, at the expense of making the diffusion terms complicated. Favre averaged variables. Favre averaging is carried out for all dynamical variables except the pressure. For the velocity components, formula_0, the Favre averaging is defined as formula_1 where the overbar indicates the typical Reynolds averaging, the tilde denotes the Favre averaging and formula_2 is the density field. The Favre decomposition of the velocity components is then written as formula_3 where formula_4 is the fluctuating part in the Favre averaging, which satisfies the condition formula_5, that is to say, formula_6. The normal Reynolds decomposition is given by formula_7, where formula_8 is the fluctuating part in the Reynolds averaging, which satisfies the condition formula_9. The Favre-averaged variables are more difficult to measure experimentally than the Reynolds-averaged ones; however, the two can be related exactly if the correlation between the density and the fluctuating quantity is known, because we can write formula_10 The advantage of Favre-averaged variables is clearly seen by taking the normal averaging of the term formula_11 that appears in the convective term of the Navier-Stokes equations written in conservative form. This is given by formula_12 As we can see, there are five terms in the averaging when expressed in terms of Reynolds-averaged variables, whereas we have only two terms when it is expressed in terms of Favre-averaged variables. References.
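These definitions can be illustrated with a small numerical sketch (Python with NumPy; the synthetic density and velocity samples, and the use of a sample mean in place of the Reynolds average, are assumptions of the example). It checks that the Favre fluctuation satisfies formula_6 and that the two-term Favre form of the averaged convective term in formula_12 equals the direct average of formula_11 for a single component.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
rho = 1.0 + 0.3 * rng.random(n)      # synthetic positive density samples
u = 2.0 + rng.standard_normal(n)     # synthetic samples of one velocity component

# The overbar (Reynolds average) is modelled here by the sample mean.
rho_bar = rho.mean()
u_tilde = (rho * u).mean() / rho_bar  # Favre average
u_pp = u - u_tilde                    # Favre fluctuation u''

print((rho * u_pp).mean())            # ~0: density-weighted mean of the Favre fluctuation vanishes

# Averaged convective term rho*u*u: direct average vs. the two-term Favre form
lhs = (rho * u * u).mean()
rhs = rho_bar * u_tilde * u_tilde + (rho * u_pp * u_pp).mean()
print(lhs, rhs)                       # equal up to round-off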
[ { "math_id": 0, "text": "u_i" }, { "math_id": 1, "text": "\\widetilde{u_i}= \\frac{\\overline{\\rho u_i}}{\\overline \\rho}" }, { "math_id": 2, "text": "\\rho(\\mathbf{x},t)" }, { "math_id": 3, "text": "u_i =\\widetilde{u_i} + u_i''" }, { "math_id": 4, "text": "u_i''" }, { "math_id": 5, "text": "\\widetilde{u_i''}=0" }, { "math_id": 6, "text": "\\overline{\\rho u_i''}=0" }, { "math_id": 7, "text": "u_i =\\overline{u_i} + u_i'" }, { "math_id": 8, "text": "u_i'" }, { "math_id": 9, "text": "\\overline{u_i'}=0" }, { "math_id": 10, "text": "\\widetilde{u_i} = \\overline{u_i} \\left(1+ \\frac{\\overline{\\rho'u_i'}}{\\overline\\rho\\,\\overline{u_i}}\\right)." }, { "math_id": 11, "text": "\\rho u_iu_j" }, { "math_id": 12, "text": "\\begin{align}\n\\overline{\\rho u_iu_j} &=\\overline{\\rho}\\,\\overline{u_i}\\,\\overline{u_j} + \\overline\\rho \\overline{u_i'u_j'}+\\overline{u_i}\\overline{\\rho'u_j'} + \\overline{u_j}\\overline{\\rho'u_i'} + \\overline{\\rho'u_i'u_j'} \\\\\n&= \\overline\\rho \\widetilde{u_i}\\widetilde{u_j} + \\overline{\\rho u_i''u_j''}.\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=71957559
7196522
Lowest common ancestor
Tree node with two other nodes as descendants In graph theory and computer science, the lowest common ancestor (LCA) (also called least common ancestor) of two nodes v and w in a tree or directed acyclic graph (DAG) T is the lowest (i.e. deepest) node that has both v and w as descendants, where we define each node to be a descendant of itself (so if v has a direct connection from w, w is the lowest common ancestor). The LCA of v and w in T is the shared ancestor of v and w that is located farthest from the root. Computation of lowest common ancestors may be useful, for instance, as part of a procedure for determining the distance between pairs of nodes in a tree: the distance from v to w can be computed as the distance from the root to v, plus the distance from the root to w, minus twice the distance from the root to their lowest common ancestor . In a tree data structure where each node points to its parent, the lowest common ancestor can be easily determined by finding the first intersection of the paths from v and w to the root. In general, the computational time required for this algorithm is O(h) where h is the height of the tree (length of longest path from a leaf to the root). However, there exist several algorithms for processing trees so that lowest common ancestors may be found more quickly. Tarjan's off-line lowest common ancestors algorithm, for example, preprocesses a tree in linear time to provide constant-time LCA queries. In general DAGs, similar algorithms exist, but with super-linear complexity. History. The lowest common ancestor problem was defined by Alfred Aho, John Hopcroft, and Jeffrey Ullman (1973), but Dov Harel and Robert Tarjan (1984) were the first to develop an optimally efficient lowest common ancestor data structure. Their algorithm processes any tree in linear time, using a heavy path decomposition, so that subsequent lowest common ancestor queries may be answered in constant time per query. However, their data structure is complex and difficult to implement. Tarjan also found a simpler but less efficient algorithm, based on the union-find data structure, for computing lowest common ancestors of an offline batch of pairs of nodes. Baruch Schieber and Uzi Vishkin (1988) simplified the data structure of Harel and Tarjan, leading to an implementable structure with the same asymptotic preprocessing and query time bounds. Their simplification is based on the principle that, in two special kinds of trees, lowest common ancestors are easy to determine: if the tree is a path, then the lowest common ancestor can be computed simply from the minimum of the levels of the two queried nodes, while if the tree is a complete binary tree, the nodes may be indexed in such a way that lowest common ancestors reduce to simple binary operations on the indices. The structure of Schieber and Vishkin decomposes any tree into a collection of paths, such that the connections between the paths have the structure of a binary tree, and combines both of these two simpler indexing techniques. Omer Berkman and Uzi Vishkin (1993) discovered a completely new way to answer lowest common ancestor queries, again achieving linear preprocessing time with constant query time. 
Their method involves forming an Euler tour of a graph formed from the input tree by doubling every edge, and using this tour to write a sequence of level numbers of the nodes in the order the tour visits them; a lowest common ancestor query can then be transformed into a query that seeks the minimum value occurring within some subinterval of this sequence of numbers. They then handle this range minimum query problem (RMQ) by combining two techniques, one technique based on precomputing the answers to large intervals that have sizes that are powers of two, and the other based on table lookup for small-interval queries. This method was later presented in a simplified form by Michael Bender and Martin Farach-Colton (2000). As had been previously observed by , the range minimum problem can in turn be transformed back into a lowest common ancestor problem using the technique of Cartesian trees. Further simplifications were made by and . Sleator and Tarjan (1983) proposed the dynamic LCA variant of the problem in which the data structure should be prepared to handle LCA queries intermixed with operations that change the tree (that is, rearrange the tree by adding and removing edges). This variant can be solved in formula_0 time in the total size of the tree for all modifications and queries. This is done by maintaining the forest using the dynamic trees data structure with partitioning by size; this then maintains a heavy-light decomposition of each tree, and allows LCA queries to be carried out in logarithmic time in the size of the tree. Linear space and constant search time solution to LCA in trees. As mentioned above, LCA can be reduced to RMQ. An efficient solution to the resulting RMQ problem starts by partitioning the number sequence into blocks. Two different techniques are used for queries across blocks and within blocks. Reduction from LCA to RMQ. Reduction of LCA to RMQ starts by walking the tree. For each node visited, record in sequence its label and depth. Suppose nodes "x" and "y" occur in positions "i" and "j" in this sequence, respectively. Then the LCA of "x" and "y" will be found in position RMQ("i", "j"), where the RMQ is taken over the depth values. Linear space and constant search time algorithm for RMQ reduced from LCA. Although there exists a constant-time and linear-space solution for general RMQ, a simplified solution that makes use of the properties of LCA can be applied. This simplified solution can only be used for RMQ reduced from LCA. Similar to the solution mentioned above, we divide the sequence into blocks formula_1, where each block formula_1 has size formula_2. By splitting the sequence into blocks, the formula_3 query can be solved by considering two different cases: Case 1: if i and j are in different blocks. To answer the formula_3 query in case 1, three groups of values are precomputed to help reduce the query time. First, the minimum element with the smallest index in each block formula_1 is precomputed and denoted as formula_4. The set of all formula_4 takes formula_5 space. Second, given the set of formula_4, the RMQ query for this set is precomputed using the solution with constant time and linearithmic space. There are formula_6 blocks, so the lookup table in that solution takes formula_7 space. Because formula_2, formula_7 = formula_8 space. Hence, the precomputed RMQ query using the solution with constant time and linearithmic space on these blocks only takes formula_8 space.
Third, in each block formula_1, let formula_9 be an index in formula_1 such that formula_10. For all formula_9 from formula_11 until formula_12, block formula_1 is divided into two intervals formula_13 and formula_14. Then the minimum element with the smallest index for the intervals formula_13 and formula_14 in each block formula_1 is precomputed. Such minimum elements are called the prefix min for the interval formula_13 and the suffix min for the interval formula_14. Each iteration of formula_9 computes a pair of prefix min and suffix min. Hence, the total number of prefix mins and suffix mins in a block formula_1 is formula_15. Since there are formula_6 blocks, in total, all prefix min and suffix min arrays take formula_16, which is formula_8 space. In total, it takes formula_8 space to store all three groups of precomputed values mentioned above. Therefore, answering the formula_3 query in case 1 simply takes the minimum of the answers to the following three queries. Let formula_1 be the block that contains the element at index formula_17, and formula_18 the block for index formula_19: the suffix min of formula_1 over the interval formula_20, the RMQ over the formula_21 values of the blocks formula_22, and the prefix min of formula_18 over the interval formula_23. All three queries can be answered in constant time. Hence, case 1 can be answered in linear space and constant time. Case 2: if i and j are in the same block. The sequence of RMQ values reduced from LCA has one property that a general RMQ does not have: the next element is always +1 or -1 from the current element. For example, a depth sequence such as 0, 1, 2, 1, 2, 1, 0 changes by exactly 1 at each step. Therefore, each block formula_1 can be encoded as a bitstring, with 0 representing a depth change of -1 and 1 representing a depth change of +1. This transformation turns a block formula_1 into a bitstring of size formula_24. A bitstring of size formula_24 has formula_25 possible bitstrings. Since formula_2, we have formula_26. Hence, formula_1 is always one of the formula_27 possible bitstrings of size formula_24. Then, for each possible bitstring, we apply the naïve quadratic-space constant-time solution. This takes formula_28 space, which is formula_29. Therefore, answering the formula_3 query in case 2 simply requires finding the corresponding block (which is a bitstring) and performing a table lookup for that bitstring. Hence, case 2 can be solved using linear space with constant search time. Extension to directed acyclic graphs. While originally studied in the context of trees, the notion of lowest common ancestors can be defined for directed acyclic graphs (DAGs), using either of two possible definitions. In both, the edges of the DAG are assumed to point from parents to children. In a DAG ("V", "E"), define a poset ("V", ≤) such that "x" ≤ "y" iff x is reachable from y. The lowest common ancestors of x and y are then the minimum elements under ≤ of the common ancestor set {"z" ∈ "V" | "x" ≤ "z" and "y" ≤ "z"}. In a tree, the lowest common ancestor is unique; in a DAG of n nodes, each pair of nodes may have as many as "n"-2 LCAs, while the existence of an LCA for a pair of nodes is not even guaranteed in arbitrary connected DAGs. A brute-force algorithm for finding lowest common ancestors works as follows: find all ancestors of x and y, then return the maximum element of the intersection of the two sets. Better algorithms exist that, analogous to the LCA algorithms on trees, preprocess a graph to enable constant-time LCA queries. The problem of "LCA existence" can be solved optimally for sparse DAGs by means of an O(|"V"||"E"|) algorithm. A unified framework has been presented for preprocessing directed acyclic graphs to compute "a representative" lowest common ancestor in "a rooted DAG" in constant time.
This framework can achieve near-linear preprocessing times for sparse graphs and is available for public use. Applications. The problem of computing lowest common ancestors of classes in an inheritance hierarchy arises in the implementation of object-oriented programming systems. The LCA problem also finds applications in models of complex systems found in distributed computing. References.
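To tie the preceding sections together, here is a compact sketch (Python; the encoding of the tree as a dict of child lists and all identifiers are assumptions of the example, not a published implementation) of the Euler-tour reduction from LCA to RMQ, answered with the power-of-two sparse-table technique mentioned above. It uses O(n log n) preprocessing space rather than the linear-space block scheme described earlier, but answers each query in constant time.

def preprocess(children, root):
    # Euler tour: visit order, depths, and first occurrence of each node.
    euler, depth, first = [], [], {}
    def dfs(u, d):
        first[u] = len(euler)
        euler.append(u); depth.append(d)
        for v in children.get(u, ()):
            dfs(v, d + 1)
            euler.append(u); depth.append(d)   # return to u after visiting each child
    dfs(root, 0)
    # Sparse table: table[j][i] holds the index of the minimum depth in euler[i : i + 2**j].
    n = len(euler)
    table = [list(range(n))]
    j = 1
    while (1 << j) <= n:
        prev, half = table[j - 1], 1 << (j - 1)
        row = []
        for i in range(n - (1 << j) + 1):
            a, b = prev[i], prev[i + half]
            row.append(a if depth[a] <= depth[b] else b)
        table.append(row)
        j += 1
    return euler, depth, first, table

def lca(u, v, euler, depth, first, table):
    l, r = sorted((first[u], first[v]))
    j = (r - l + 1).bit_length() - 1            # largest power of two not exceeding the range length
    a, b = table[j][l], table[j][r - (1 << j) + 1]
    return euler[a if depth[a] <= depth[b] else b]

children = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F']}
data = preprocess(children, 'A')
print(lca('D', 'E', *data))   # 'B'
print(lca('D', 'F', *data))   # 'A'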
[ { "math_id": 0, "text": "O(\\log N)" }, { "math_id": 1, "text": "B_i" }, { "math_id": 2, "text": "b={1 \\over 2}\\log n" }, { "math_id": 3, "text": "RMQ(i,j)" }, { "math_id": 4, "text": "y_i" }, { "math_id": 5, "text": "O(n/b)" }, { "math_id": 6, "text": "n/b" }, { "math_id": 7, "text": "O({n \\over b} \\log {n \\over b})" }, { "math_id": 8, "text": "O(n)" }, { "math_id": 9, "text": "k_i" }, { "math_id": 10, "text": "0 \\leq ki < b" }, { "math_id": 11, "text": "0" }, { "math_id": 12, "text": "b" }, { "math_id": 13, "text": "[0, k_i)" }, { "math_id": 14, "text": "[k_i, b)" }, { "math_id": 15, "text": "2b" }, { "math_id": 16, "text": "O(2b \\cdot {n \\over b})" }, { "math_id": 17, "text": "i" }, { "math_id": 18, "text": "B_j" }, { "math_id": 19, "text": "j" }, { "math_id": 20, "text": "[i \\mod b, b)" }, { "math_id": 21, "text": "y" }, { "math_id": 22, "text": "\\{ B_{i+1} ... B_{j-1} \\}" }, { "math_id": 23, "text": "[0, j \\mod b)" }, { "math_id": 24, "text": "b-1" }, { "math_id": 25, "text": "2^{b-1}" }, { "math_id": 26, "text": "2^{b-1} \\leq 2^b = 2^{{1 \\over 2}\\log n} = n^{{1 \\over 2}} = \\sqrt{n}" }, { "math_id": 27, "text": "\\sqrt{n}" }, { "math_id": 28, "text": "\\sqrt{n}\\cdot b^2" }, { "math_id": 29, "text": "O(\\sqrt{n}\\cdot(\\log n)^2) \\le O(\\sqrt{n}\\cdot\\sqrt{n}) = O(n)" } ]
https://en.wikipedia.org/wiki?curid=7196522
71965326
Oper (mathematics)
Principal connection In mathematics, an oper is a principal connection, or in more elementary terms a type of differential operator. They were first defined and used by Vladimir Drinfeld and Vladimir Sokolov to study how the KdV equation and related integrable PDEs correspond to algebraic structures known as Kac–Moody algebras. Their modern formulation is due to Drinfeld and Alexander Beilinson. History. Opers were first defined, although not named, in a 1981 Russian paper by Drinfeld and Sokolov on "Equations of Korteweg–de Vries type, and simple Lie algebras". They were later generalized by Drinfeld and Beilinson in 1993, later published as an e-print in 2005. Formulation. Abstract. Let formula_0 be a connected reductive group over the complex plane formula_1, with a distinguished Borel subgroup formula_2. Set formula_3, so that formula_4 is the Cartan group. Denote by formula_5 and formula_6 the corresponding Lie algebras. There is an open formula_7-orbit formula_8 consisting of vectors stabilized by the radical formula_9 such that all of their negative simple-root components are non-zero. Let formula_10 be a smooth curve. A G-oper on formula_10 is a triple formula_11 where formula_12 is a principal formula_0-bundle, formula_13 is a connection on formula_12 and formula_14 is a formula_7-reduction of formula_12, such that the one-form formula_15 takes values in formula_16. Example. Fix formula_17 the Riemann sphere. Working at the level of the algebras, fix formula_18, which can be identified with the space of traceless formula_19 complex matrices. Since formula_20 has only one (complex) dimension, a one-form has only one component, and so an formula_21-valued one form is locally described by a matrix of functions formula_22 where formula_23 are allowed to be meromorphic functions. Denote by formula_24 the space of formula_21 valued meromorphic functions together with an action by formula_25, meromorphic functions valued in the associated Lie group formula_26. The action is by a formal gauge transformation: formula_27 Then opers are defined in terms of a subspace of these connections. Denote by formula_28 the space of connections with formula_29. Denote by formula_30 the subgroup of meromorphic functions valued in formula_31 of the form formula_32 with formula_33 meromorphic. Then for formula_34 it holds that formula_35. It therefore defines an action. The orbits of this action concretely characterize opers. However, generally this description only holds locally and not necessarily globally. Gaudin model. Opers on formula_20 have been used by Boris Feigin, Edward Frenkel and Nicolai Reshetikhin to characterize the spectrum of the Gaudin model. Specifically, for a formula_36-Gaudin model, and defining formula_37 as the Langlands dual algebra, there is a bijection between the spectrum of the Gaudin algebra generated by operators defined in the Gaudin model and an algebraic variety of formula_37 opers. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
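The invariance claimed in the example above can be checked symbolically. The sketch below (Python with SymPy; the function names a, b and f match the symbols used above, and the rest is an assumption of the example) applies the formal gauge transformation formula_27 with a unipotent matrix of the form formula_32 to a connection whose lower-left entry is 1, and confirms that this entry, and hence the oper condition, is preserved.

import sympy as sp

z = sp.symbols('z')
a, b, f = (sp.Function(name)(z) for name in ('a', 'b', 'f'))

A = sp.Matrix([[a, b], [1, -a]])   # connection in oper form: c(z) = 1
g = sp.Matrix([[1, f], [0, 1]])    # unipotent gauge transformation

# Formal gauge transformation g * A = g A g^-1 - g' g^-1
A_new = sp.simplify(g * A * g.inv() - g.diff(z) * g.inv())

print(A_new[1, 0])                   # 1: the oper condition c(z) = 1 is preserved
print(sp.simplify(A_new.trace()))    # 0: the transformed connection stays traceless (sl(2))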
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "\\mathbb{C}" }, { "math_id": 2, "text": "B = B_G \\subset G" }, { "math_id": 3, "text": "N = [B,B]" }, { "math_id": 4, "text": "H = B/N" }, { "math_id": 5, "text": "\\mathfrak{n} < \\mathfrak{b} < \\mathfrak{g}" }, { "math_id": 6, "text": "\\mathfrak{h} = \\mathfrak{b}/\\mathfrak{n}" }, { "math_id": 7, "text": "B" }, { "math_id": 8, "text": "\\mathbf{O}" }, { "math_id": 9, "text": "N\\subset B" }, { "math_id": 10, "text": "X" }, { "math_id": 11, "text": "(\\mathfrak{F}, \\nabla, \\mathfrak{F}_B)" }, { "math_id": 12, "text": "\\mathfrak{F}" }, { "math_id": 13, "text": "\\nabla" }, { "math_id": 14, "text": "\\mathfrak{F}_B" }, { "math_id": 15, "text": "\\nabla/\\mathfrak{F}_B" }, { "math_id": 16, "text": "\\mathbf{O}_{\\mathfrak{F}_B}" }, { "math_id": 17, "text": "X = \\mathbb{P}^1 = \\mathbb{CP}^1" }, { "math_id": 18, "text": "\\mathfrak{g} = \\mathfrak{sl}(2, \\mathbb{C})" }, { "math_id": 19, "text": "2\\times 2" }, { "math_id": 20, "text": "\\mathbb{P}^1" }, { "math_id": 21, "text": "\\mathfrak{sl}(2,\\mathbb{C})" }, { "math_id": 22, "text": "A(z) = \\begin{pmatrix}a(z) & b(z) \\\\ c(z) & -a(z)\\end{pmatrix}" }, { "math_id": 23, "text": "a, b, c" }, { "math_id": 24, "text": "\\text{Conn}_{\\mathfrak{sl}(2,\\mathbb{C})}(\\mathbb{P}^1)" }, { "math_id": 25, "text": "g(z)" }, { "math_id": 26, "text": "G = SL(2, \\mathbb{C})" }, { "math_id": 27, "text": "g(z) * A(z) = g(z)A(z)g(z)^{-1} - g'(z) g(z)^{-1}." }, { "math_id": 28, "text": "\\text{op}_{\\mathfrak{sl}(2,\\mathbb{C})}(\\mathbb{P}^1)" }, { "math_id": 29, "text": "c(z) \\equiv 1" }, { "math_id": 30, "text": "N" }, { "math_id": 31, "text": "SL(2, \\mathbb{C})" }, { "math_id": 32, "text": " \\begin{pmatrix} 1 & f(z) \\\\ 0 & 1 \\end{pmatrix}" }, { "math_id": 33, "text": "f(z)" }, { "math_id": 34, "text": "g(z) \\in N, A(z) \\in \\text{op}_{\\mathfrak{sl}(2,\\mathbb{C})}(\\mathbb{P}^1)," }, { "math_id": 35, "text": "g(z) * A(z) \\in \\text{op}_{\\mathfrak{sl}(2,\\mathbb{C})}(\\mathbb{P}^1)" }, { "math_id": 36, "text": "\\mathfrak{g}" }, { "math_id": 37, "text": "^L\\mathfrak{g}" } ]
https://en.wikipedia.org/wiki?curid=71965326
7196964
Functionally graded material
In materials science Functionally graded materials (FGMs) may be characterized by gradual variation in composition and structure over their volume, resulting in corresponding changes in the properties of the material. The materials can be designed for specific functions and applications. Various approaches based on the bulk (particulate processing), preform processing, layer processing and melt processing are used to fabricate the functionally graded materials. History. The concept of FGM was first considered in Japan in 1984 during a space plane project, where a combination of materials used would serve the purpose of a thermal barrier capable of withstanding a surface temperature of 2000 K and a temperature gradient of 1000 K across a 10 mm section. In recent years this concept has become more popular in Europe, particularly in Germany. A transregional collaborative research center (SFB Transregio) has been funded since 2006 in order to exploit the potential of grading monomaterials, such as steel, aluminium and polypropylene, by using thermomechanically coupled manufacturing processes. General information. FGMs can vary in composition, in structure (for example, porosity), or in both, to produce the resulting gradient. The gradient can be categorized as either continuous or discontinuous, the latter exhibiting a stepwise change. There are several examples of FGMs in nature, including bamboo and bone, which alter their microstructure to create a material property gradient. In biological materials, the gradients can be produced through changes in the chemical composition, structure, interfaces, and through the presence of gradients spanning multiple length scales. Specifically within the variation of chemical compositions, the manipulation of the mineralization, the presence of inorganic ions and biomolecules, and the level of hydration have all been known to cause gradients in plants and animals. The basic structural units of FGMs are elements or material ingredients represented by "maxel". The term maxel was introduced in 2005 by Rajeev Dwivedi and Radovan Kovacevic at the Research Center for Advanced Manufacturing (RCAM). The attributes of a maxel include the location and volume fraction of individual material components. A maxel is also used in the context of the additive manufacturing processes (such as stereolithography, selective laser sintering, fused deposition modeling, etc.) to describe a physical voxel (a portmanteau of the words 'volume' and 'element'), which defines the build resolution of either a rapid prototyping or rapid manufacturing process, or the resolution of a design produced by such fabrication means. The transition between the two materials can be approximated through either a power-law or an exponential-law relation: Power Law: formula_0 where formula_1 is the Young's modulus at the surface of the material, z is the depth from the surface, and k is a non-dimensional exponent (formula_2). Exponential Law: formula_3 where formula_4 indicates a hard surface and formula_5 a soft surface. Applications. There are many areas of application for FGMs. The concept is to make a composite material by varying the microstructure from one material to another with a specific gradient, which enables the material to combine the best properties of both. Whether the aim is thermal or corrosion resistance, or malleability and toughness, the strengths of both constituents can be exploited to help avoid corrosion, fatigue, fracture and stress corrosion cracking. 
There are a myriad of possible applications and industries interested in FGMs. They span from defense, with protective armor, to biomedicine, with implants, to optoelectronics and energy. The aircraft and aerospace industry and the computer circuit industry are very interested in the possibility of materials that can withstand very high thermal gradients. This is normally achieved by using a ceramic layer connected with a metallic layer. The Air Vehicles Directorate has conducted quasi-static bending tests of functionally graded titanium/titanium boride specimens. The test results correlated with finite element analysis (FEA) using a quadrilateral mesh, with each element having its own structural and thermal properties. The Advanced Materials and Processes Strategic Research Programme (AMPSRA) has analysed the production of a thermal barrier coating using ZrO2 and NiCoCrAlY. Their results proved successful, but no results of the analytical model have been published. The rendition of the term that relates to the additive fabrication processes has its origins at the RMRG (Rapid Manufacturing Research Group) at Loughborough University in the United Kingdom. The term forms part of a descriptive taxonomy of terms relating directly to various particulars of the additive CAD-CAM manufacturing processes, originally established as a part of the research conducted by architect Thomas Modeen into the application of the aforementioned techniques in the context of architecture. A gradient of elastic modulus essentially changes the fracture toughness of adhesive contacts. Additionally, there has been an increased focus on how to apply FGMs to biomedical applications, specifically dental and orthopedic implants. For example, bone is an FGM that exhibits a change in elasticity and other mechanical properties between the cortical and cancellous bone. It logically follows that FGMs for orthopedic implants would be ideal for mimicking the performance of bone. FGMs for biomedical applications have the potential benefit of preventing stress concentrations that could lead to biomechanical failure and of improving biocompatibility and biomechanical stability. FGMs in relation to orthopedic implants are particularly important as the common materials used (titanium, stainless steel, etc.) are stiffer than bone and thus pose a risk of creating abnormal physiological conditions that alter the stress concentration at the interface between the implant and the bone. If the implant is too stiff it risks causing bone resorption, while an implant that is too flexible can compromise stability at the bone-implant interface. Numerous FEM simulations have been carried out to understand the possible FGM and mechanical gradients that could be implemented into different orthopedic implants, as the gradients and mechanical properties are highly geometry specific. An example of an FGM for use in orthopedic implants is a carbon-fiber-reinforced polymer (CFRP) matrix with yttria-stabilized zirconia (YSZ). Varying the amount of YSZ present as a filler in the material resulted in a flexural strength gradation ratio of 1.95. This high gradation ratio and overall high flexibility show promise for a supportive material in bone implants. Quite a few FGMs incorporating hydroxyapatite (HA) are being explored because of its osteoconductivity, which assists with osseointegration of implants. 
However, HA exhibits lower fracture strength and toughness compared to bone, which requires it to be used in conjunction with other materials in implants. One study combined HA with alumina and zirconia via a spark plasma process to create an FGM that shows a mechanical gradient as well as good cellular adhesion and proliferation. Modeling and simulation. Numerical methods have been developed for modelling the mechanical response of FGMs, with the finite element method being the most popular one. Initially, the variation of material properties was introduced by means of rows (or columns) of homogeneous elements, leading to a discontinuous step-type variation in the mechanical properties. Later, Santare and Lambros developed functionally graded finite elements, where the mechanical property variation takes place at the element level. Martínez-Pañeda and Gallego extended this approach to commercial finite element software. Contact properties of FGMs can be simulated using the Boundary Element Method (which can be applied both to non-adhesive and adhesive contacts). Molecular dynamics simulation has also been used to study functionally graded materials. M. Islam studied the mechanical and vibrational properties of functionally graded Cu-Ni nanowires using molecular dynamics simulation. The mechanics of functionally graded material structures has been considered by many authors. Recently, a new micromechanical model was developed to calculate the effective Young's modulus of graphene-reinforced composite plates. The model considers the average dimensions of the graphene nanoplates, their weight fraction, and the graphene/matrix ratio in the Representative Volume Element. The dynamic behavior of this functionally graded polymer-based composite reinforced with graphene fillers is crucial for engineering applications. &lt;templatestyles src="Reflist/styles.css" /&gt;
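To make the gradation laws and the element-wise modelling approach described above concrete, here is a minimal one-dimensional sketch in Python/NumPy. All numbers, the power-law exponent and the bar geometry are illustrative assumptions, not data from any study cited above: each finite element of an axially loaded bar is given a homogeneous Young's modulus sampled from a power-law gradation at its midpoint (the simple step-type approach mentioned first), the global stiffness matrix is assembled, and the tip displacement is compared with that of homogeneous bars.

import numpy as np

# Assumed moduli of the two constituents (e.g. a metal and a ceramic), in Pa.
E_metal, E_ceramic = 70e9, 380e9
L, A, F = 0.1, 1e-4, 1e4          # bar length (m), cross-section (m^2), tip load (N)
k = 2.0                           # assumed non-dimensional gradation exponent
n_el = 50                         # number of homogeneous elements

def E_graded(x):
    """Power-law gradation E(x) = E_metal + (E_ceramic - E_metal) * (x/L)^k."""
    return E_metal + (E_ceramic - E_metal) * (x / L) ** k

h = L / n_el
nodes = np.linspace(0.0, L, n_el + 1)

# Assemble the global stiffness matrix; each element is homogeneous,
# with E evaluated at the element midpoint (step-type variation).
K = np.zeros((n_el + 1, n_el + 1))
for e in range(n_el):
    x_mid = 0.5 * (nodes[e] + nodes[e + 1])
    k_e = E_graded(x_mid) * A / h
    K[e:e + 2, e:e + 2] += k_e * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Boundary conditions: fixed at x = 0, axial force F at the tip.
f = np.zeros(n_el + 1)
f[-1] = F
u = np.zeros(n_el + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

print("tip displacement, graded bar   :", u[-1])
print("tip displacement, pure metal   :", F * L / (E_metal * A))
print("tip displacement, pure ceramic :", F * L / (E_ceramic * A))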
[ { "math_id": 0, "text": "E=E_oz^k" }, { "math_id": 1, "text": "E_o" }, { "math_id": 2, "text": "0<k<1" }, { "math_id": 3, "text": "E=E_oe^{\\alpha z}" }, { "math_id": 4, "text": "\\alpha <0" }, { "math_id": 5, "text": "\\alpha >0" } ]
https://en.wikipedia.org/wiki?curid=7196964
71972809
Caitlin Casey
Observational astronomer Caitlin M Casey is an observational astronomer and associate professor at the University of Texas at Austin. She is known for her work in extragalactic astrophysics; she works on the formation and evolution of massive galaxies in the early Universe. Education and career. Casey's interest in astronomy began as a child when she was given the opportunity to visit the planetarium in Rock Bridge High School in her hometown of Columbia, Missouri. Casey completed her bachelor's degrees in physics, astronomy and applied mathematics from the University of Arizona in 2007. She attributes her decision to attend Arizona from first attending their Astronomy Camp during high school. She then obtained her Ph.D. in Astronomy from the University of Cambridge in 2010 under a Gates Cambridge Scholarship. While in Cambridge she served as president of the Gates Scholars' Society from 2009-2010. Casey was subsequently a NASA Hubble Postdoctoral Fellow at the Institute for Astronomy, University of Hawai’i at Mānoa, and then she spent two years as a postdoc at the University of California, Irvine as a McCue Postdoctoral Fellow of Cosmology. Casey became assistant professor in the Department of Astronomy at the University of Texas at Austin in 2015. Since 2021, Casey is an associate professor. Career and research. Casey is known for her research on galaxy formation and evolution, specifically on the most massive and luminous galaxies in the Universe. While in Hawaii, she examined the formation of starburst galaxies, research that was conducted with the largest spectroscopic survey using the W.M. Keck Observatory of submillimeter-luminous galaxies detected by the Herschel Space Observatory. While at the University of California, Irvine Casey authored a review paper on star-forming galaxies. Casey is principal investigator of the COSMOS-Web Survey and the Cosmic Evolution Survey. This work is a collaborative effort with Jeyhan Kartaltepe. The COSMOS-Web Survey is a formula_0 James Webb Space Telescope NIRCam imaging program that aims to reveal the sources of cosmic reionization and was the telescope's largest allocated project in its first year of observations. She presented the initial results of her research with the COSMOS-Web survey in 2023. Casey is an advocate for equity in STEM, creating the TAURUS program, a summer research experience for marginalized students in the summer of 2016. This program is hosted at the University of Texas at Austin at the McDonald Observatory and allows under-represented undergraduate students to get involved with astronomical research. Casey created a workshop designed to spread awareness about bullying, microaggressions and harassment for academic researchers with her colleague Kartik Sheth called The Ethical Gray Zone in 2013. Honors and awards. Casey received the 2018 Newton Lacy Pierce Prize awarded by the American Astronomical Society for impactful work in observational astronomy achieved before age 36. In 2019 she was awarded a Cottrell Scholar Award from the Research Corporation for Science Advancement. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "0.54\\,{\\rm deg}^2" } ]
https://en.wikipedia.org/wiki?curid=71972809
7198
Characteristic subgroup
Subgroup mapped to itself under every automorphism of the parent group In mathematics, particularly in the area of abstract algebra known as group theory, a characteristic subgroup is a subgroup that is mapped to itself by every automorphism of the parent group. Because every conjugation map is an inner automorphism, every characteristic subgroup is normal, though the converse does not hold in general. Examples of characteristic subgroups include the commutator subgroup and the center of a group. Definition. A subgroup "H" of a group "G" is called a characteristic subgroup if for every automorphism "φ" of "G", one has φ("H") ≤ "H"; then write "H" char "G". It would be equivalent to require the stronger condition φ("H") = "H" for every automorphism "φ" of "G", because φ−1("H") ≤ "H" implies the reverse inclusion "H" ≤ φ("H"). Basic properties. Given "H" char "G", every automorphism of "G" induces an automorphism of the quotient group "G/H", which yields a homomorphism Aut("G") → Aut("G"/"H"). If "G" has a unique subgroup "H" of a given index, then "H" is characteristic in "G". Related concepts. Normal subgroup. A subgroup "H" of "G" that is invariant under all inner automorphisms is called normal; it is also called an invariant subgroup. ∀φ ∈ Inn("G"): φ["H"] ≤ "H" Since Inn("G") ⊆ Aut("G") and a characteristic subgroup is invariant under all automorphisms, every characteristic subgroup is normal. However, not every normal subgroup is characteristic. For example, consider the Klein four-group formula_0, written multiplicatively as {"e", "a", "b", "ab"}. Consider the subgroup H = {"e", "a"} and the automorphism T with T("e") = "e", T("a") = "b", T("b") = "a", T("ab") = "ab"; then T("H") is not contained in "H", so "H" is normal (the group is abelian) but not characteristic. Strictly characteristic subgroup. A "strictly characteristic subgroup", or a "distinguished subgroup", is one which is invariant under surjective endomorphisms. For finite groups, surjectivity of an endomorphism implies injectivity, so a surjective endomorphism is an automorphism; thus being "strictly characteristic" is equivalent to "characteristic". This is not the case anymore for infinite groups. Fully characteristic subgroup. For an even stronger constraint, a "fully characteristic subgroup" (also, "fully invariant subgroup"; cf. invariant subgroup), "H", of a group "G", is a subgroup remaining invariant under every endomorphism of "G"; that is, ∀φ ∈ End("G"): φ["H"] ≤ "H". Every group has itself (the improper subgroup) and the trivial subgroup as two of its fully characteristic subgroups. The commutator subgroup of a group is always a fully characteristic subgroup. Every endomorphism of "G" induces an endomorphism of "G/H", which yields a map End("G") → End("G"/"H"). Verbal subgroup. An even stronger constraint is verbal subgroup, which is the image of a fully invariant subgroup of a free group under a homomorphism. More generally, any verbal subgroup is always fully characteristic. For any reduced free group, and, in particular, for any free group, the converse also holds: every fully characteristic subgroup is verbal. Transitivity. The property of being characteristic or fully characteristic is transitive; if "H" is a (fully) characteristic subgroup of "K", and "K" is a (fully) characteristic subgroup of "G", then "H" is a (fully) characteristic subgroup of "G". "H" char "K" char "G" ⇒ "H" char "G". Moreover, while normality is not transitive, it is true that every characteristic subgroup of a normal subgroup is normal. 
"H" char "K" ⊲ "G" ⇒ "H" ⊲ "G" Similarly, while being strictly characteristic (distinguished) is not transitive, it is true that every fully characteristic subgroup of a strictly characteristic subgroup is strictly characteristic. However, unlike normality, if "H" char "G" and "K" is a subgroup of "G" containing "H", then in general "H" is not necessarily characteristic in "K". "H" char "G", "H" &lt; "K" &lt; "G" ⇏ "H" char "K" Containments. Every subgroup that is fully characteristic is certainly strictly characteristic and characteristic; but a characteristic or even strictly characteristic subgroup need not be fully characteristic. The center of a group is always a strictly characteristic subgroup, but it is not always fully characteristic. For example, the finite group of order 12, Sym(3) × formula_1, has a homomorphism taking ("π", "y") to ((1, 2)"y", 0), which takes the center, formula_2, into a subgroup of Sym(3) × 1, which meets the center only in the identity. The relationship amongst these subgroup properties can be expressed as: Subgroup ⇐ Normal subgroup ⇐ Characteristic subgroup ⇐ Strictly characteristic subgroup ⇐ Fully characteristic subgroup ⇐ Verbal subgroup Examples. Finite example. Consider the group "G" S3 × formula_3 (the group of order 12 that is the direct product of the symmetric group of order 6 and a cyclic group of order 2). The center of "G" is isomorphic to its second factor formula_3. Note that the first factor, S3, contains subgroups isomorphic to formula_3, for instance {e, (12)}; let formula_4 be the morphism mapping formula_3 onto the indicated subgroup. Then the composition of the projection of "G" onto its second factor formula_3, followed by "f", followed by the inclusion of S3 into "G" as its first factor, provides an endomorphism of "G" under which the image of the center, formula_3, is not contained in the center, so here the center is not a fully characteristic subgroup of "G". Cyclic groups. Every subgroup of a cyclic group is characteristic. Subgroup functors. The derived subgroup (or commutator subgroup) of a group is a verbal subgroup. The torsion subgroup of an abelian group is a fully invariant subgroup. Topological groups. The identity component of a topological group is always a characteristic subgroup. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{Z}_2 \\times \\mathbb{Z}_2" }, { "math_id": 1, "text": "\\mathbb{Z} / 2 \\mathbb{Z}" }, { "math_id": 2, "text": "1 \\times \\mathbb{Z} / 2 \\mathbb{Z}" }, { "math_id": 3, "text": "\\mathbb{Z}_2" }, { "math_id": 4, "text": "f: \\mathbb{Z}_2<\\rarr \\text{S}_3" } ]
https://en.wikipedia.org/wiki?curid=7198
71992445
Mermin's device
Thought experiment in quantum mechanics In physics, Mermin's device or Mermin's machine is a thought experiment intended to illustrate the non-classical features of nature without making a direct reference to quantum mechanics. The challenge is to reproduce the results of the thought experiment in terms of classical physics. The input of the experiment are particles, starting from a common origin, that reach detectors of a device that are independent from each other, the output are the lights of the device that turn on following a specific set of statistics depending on the configuration of the device. The results of the thought experiment are constructed in such a way to reproduce the result of a Bell test using quantum entangled particles, which demonstrate how quantum mechanics cannot be explained using a local hidden variable theory. In this way Mermin's device is a pedagogical tool to introduce the unconventional features of quantum mechanics to a larger public. History. The original version with two particles and three settings per detector, was first devised in a paper called "Bringing home the atomic world: Quantum mysteries for anybody" authored by the physicist N. David Mermin in 1981. Richard Feynman told Mermin that it was "One of the most beautiful papers in physics". Mermin later described this accolade as "the finest reward of my entire career in physics". Ed Purcell shared Mermin's article with Willard Van Orman Quine, who then asked Mermin to write a version intended for philosophers, which he then produced. Mermin also published a second version of the thought experiment in 1990 based on the GHZ experiment, with three particles and detectors with only two configurations. In 1993, Lucien Hardy devised a paradox that can be made into a Mermin-device-type thought experiment with two detectors and two settings. Original device. Assumptions. In Mermin's original thought experiment, he considers a device consisting of three parts: two detectors A and B, and a source C. The source emits two particles whenever a button is pushed, one particle reaches detector A and the other reaches detector B. The three parts A, B and C are isolated from each other (no connecting pipes, no wires, no antennas) in such a way that the detectors are not signaled when the button of the source has been pushed nor when the other detector has received a particle. Each detector (A and B) has a switch with three configurations labeled (1,2 and 3) and a red and a green light bulb. Either the green or the red light will turn on (never both) when a particle enters the device after a given period of time. The light bulbs only emit light in the direction of the observer working on the device. Additional barriers or instrument can be put in place to check that there is no interference between the three parts (A,B,C), as the parts should remain as independent as possible. Only allowing for a single particle to go from C to A and a single particle from C to B, and nothing else between A and B (no vibrations, no electromagnetic radiation). The experiment runs in the following way. The button of the source C is pushed, particles take some time to travel to the detectors and the detectors flash a light with a color determined by the switch configuration. There are nine total possible configuration of the switches (three for A, three for B). The switches can be changed at any moment during the experiment, even if the particles are still traveling to reach the detectors, but not after the detectors flash a light. 
The distance between the detectors can be changed so that the detectors flash a light at the same time or at different times. If detector A is set to flash a light first, the configuration of the switch of detector B can be changed after A has already flashed (similarly, if B is set to flash first, the settings of A can be changed before A flashes). Results. The results of the experiment (referred to below as Table 1) are, in percentages, as follows: Every time the detectors are set to the same setting, the bulbs in each detector always flash the same colors (either A and B flash red, or A and B flash green) and never opposite colors (A red B green, or A green B red). Every time the detectors are at different settings, the detectors flash the same color a quarter of the time and opposite colors 3/4 of the time. The challenge consists in finding a device that can reproduce these statistics. Hidden variables and classical mechanics. In order to make sense of the data using classical mechanics, one can consider the existence of three variables per particle that are measured by the detectors and follow the percentages above. The particle that goes into detector A has variables formula_0 and the particle that goes into detector B has variables formula_1. These variables determine which color will flash for a specific setting (1, 2 and 3). For example, if the particle that goes in A has variables (R,G,G), then if detector A is set to 1 it will flash red (labelled R), and set to 2 or 3 it will flash green (labelled G). We have 8 possible states: (RRR), (RRG), (RGR), (RGG), (GRR), (GRG), (GGR) and (GGG), where formula_2 in order to reproduce the results of Table 1 when the same setting is selected for both detectors. For any given configuration, if the detector settings were chosen randomly, when the settings of the devices are different (12, 13, 21, 23, 31, 32), the color of their lights would agree 100% of the time for the states (GGG) and (RRR), and for the other states the results would agree 1/3 of the time. Thus we reach an impossibility: there is no possible distribution of these states that would allow the system to flash the same colors 1/4 of the time when the settings are not the same. Therefore, it is not possible to reproduce the results provided in Table 1. Quantum mechanics. Table 1 can be reproduced using quantum mechanics, by means of quantum entanglement. Mermin reveals a possible construction of his device based on David Bohm's version of the Einstein–Podolsky–Rosen paradox. One can prepare the two spin-1/2 particles leaving the source in the maximally entangled singlet Bell state formula_3, where formula_4 (formula_5) is the state where the projection of the spin of particle 1 is aligned (anti-aligned) with a given axis and particle 2 is anti-aligned (aligned) with the same axis. The measurement devices can be replaced with Stern–Gerlach devices, which measure the spin along a given direction. The three different settings determine whether the detectors are vertical or at ±120° to the vertical in the plane perpendicular to the line of flight of the particles. Detector A flashes green when the spin of the measured particle is aligned with the detector's magnetic field and flashes red when anti-aligned. Detector B has the opposite color scheme with respect to A: detector B flashes red when the spin of the measured particle is aligned and flashes green when anti-aligned. Another possibility is to use photons that have two possible polarizations, using polarizers as detectors, as in Aspect's experiment. 
Quantum mechanics predicts a probability of measuring opposite spin projections given by formula_6, where formula_7 is the relative angle between the settings of the detectors. For formula_8 and formula_9 the system reproduces the results of Table 1 while keeping all the assumptions.
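Both the counting argument and the quantum prediction above can be reproduced with a few lines of Python (a self-contained sketch, not code from any of the cited papers). It enumerates the eight possible instruction sets and confirms that, whenever the two switch settings differ, no instruction set can make the lights agree less than 1/3 of the time, whereas the singlet-state prediction cos²(θ/2) gives exactly the required 1/4 at the ±120° settings.

import itertools, math

settings = [(i, j) for i in range(3) for j in range(3) if i != j]  # differing switch pairs

# Classical instruction sets: one colour per setting, identical for both particles
# (forced by the perfect agreement at equal settings).
worst = 1.0
for state in itertools.product('RG', repeat=3):
    agree = sum(state[i] == state[j] for i, j in settings) / len(settings)
    worst = min(worst, agree)
print("minimum possible agreement with differing settings:", worst)   # 1/3 > 1/4

# Quantum prediction for the singlet state: opposite spin projections occur with
# probability cos^2(theta/2); because detector B uses the reversed colour scheme,
# opposite projections show up as the *same* colour.
for theta_deg in (0, 120, 240):
    p_same = math.cos(math.radians(theta_deg) / 2) ** 2
    print(f"theta = {theta_deg:3d} deg -> same colour with probability {p_same:.3f}")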
[ { "math_id": 0, "text": "(a_1,a_2,a_3)" }, { "math_id": 1, "text": "(b_1,b_2,b_3)" }, { "math_id": 2, "text": "(b_1,b_2,b_3)=(a_1,a_2,a_3)" }, { "math_id": 3, "text": "|\\Psi^-\\rangle=\\frac{|\\uparrow\\downarrow\\rangle-|\\downarrow\\uparrow\\rangle}{\\sqrt{2}}" }, { "math_id": 4, "text": "|\\uparrow\\downarrow\\rangle" }, { "math_id": 5, "text": "|\\downarrow\\uparrow\\rangle" }, { "math_id": 6, "text": "P(\\theta)=\\cos^2(\\theta/2)" }, { "math_id": 7, "text": "\\theta" }, { "math_id": 8, "text": "\\theta=0" }, { "math_id": 9, "text": "\\theta=\\pm 120^\\circ" } ]
https://en.wikipedia.org/wiki?curid=71992445
71998469
475 °C embrittlement
Loss of plasticity in ferritic stainless steel Duplex stainless steels are a family of alloys with a two-phase microstructure consisting of both austenitic (face-centred cubic) and ferritic (body-centred cubic) phases. They offer excellent mechanical properties, corrosion resistance, and toughness compared to other types of stainless steel. However, duplex stainless steel can be susceptible to a phenomenon known as 475 °C embrittlement or duplex stainless steel age hardening, a type of aging process that causes loss of plasticity in duplex stainless steel when it is heated in the temperature range around 475 °C. In this temperature range, spontaneous phase separation of the ferrite phase into iron-rich and chromium-rich nanophases occurs, with no change in the mechanical properties of the austenite phase. This type of embrittlement is due to precipitation hardening, which makes the material become brittle and prone to cracking. Duplex stainless steel. Duplex stainless steel is a type of stainless steel that has a two-phase microstructure consisting of both austenitic (face-centred cubic) and ferritic (body-centred cubic) phases. This dual-phase structure gives duplex stainless steel a combination of mechanical and corrosion-resistant properties that are superior to those of either austenitic or ferritic stainless steel alone. The austenitic phase provides the steel with good ductility, high toughness, and high corrosion resistance, especially in acidic and chloride-containing environments. The ferritic phase, on the other hand, provides the steel with good strength, high resistance to stress corrosion cracking, and high resistance to pitting and crevice corrosion. They are therefore used extensively in the offshore oil and gas industry for pipework systems, manifolds, risers, etc. and in the petrochemical industry in the form of pipelines and pressure vessels. The mixture of austenite and ferrite in a duplex stainless steel microstructure is not necessarily in equal proportions; the alloy solidifies as ferrite, which is partially transformed to austenite as the temperature falls. Duplex steels have a higher chromium content compared to austenitic stainless steel, 20–28%; higher molybdenum, up to 5%; lower nickel, up to 9%; and 0.05–0.50% nitrogen. Thus, duplex stainless steel alloys have good corrosion resistance and higher strength than standard austenitic stainless steels such as type 304 or 316. The alpha (α) phase is a ferritic phase with a body-centred cubic (BCC) structure, Imformula_0m [229] space group and 2.866 Å lattice parameter, and has one twinning system {112}&lt;111&gt; and three slip systems {110}&lt;111&gt;, {112}&lt;111&gt; and {123}&lt;111&gt;; however, the last system rarely activates. The gamma (formula_1) phase is austenitic with a face-centred cubic (FCC) structure, Fmformula_0m [225] space group, and 3.66 Å lattice parameter. It normally contains more nickel, copper, and interstitial carbon and nitrogen. Plastic deformation occurs in austenite more readily than in ferrite. During deformation, straight slip bands form in the austenite grains and propagate to the ferrite-austenite grain boundaries, assisting in the slipping of the ferrite phase. Curved slip bands also form due to the bulk-ferrite-grain deformation. The formation of slip bands indicates a concentrated unidirectional slip on certain planes causing a stress concentration. Age hardening by spinodal decomposition. 
Duplex stainless steel can have limited toughness due to its large ferritic grain size, and its tendencies to hardening and embrittlement, i.e., loss of plasticity, at intermediate temperatures, especially around 475 °C. In this temperature range, spinodal decomposition of the supersaturated solid ferrite solution into an iron-rich nanophase (formula_2) and a chromium-rich nanophase (formula_3), accompanied by G-phase precipitation, occurs. This makes the ferrite phase a preferential initiation site for micro-cracks. This is because aging encourages Σ3 {112}&lt;111&gt; ferrite deformation twinning at slow strain rate and room temperature in tensile or compressive deformation, nucleating from local stress concentration sites; parent-twin boundaries, with 60° (in or out) misorientation, are suitable sites for cleavage crack nucleation. Spinodal decomposition refers to the spontaneous separation of a phase into two coherent phases via uphill diffusion, i.e., diffusion from regions of lower concentration to regions of higher concentration, corresponding to a negative effective diffusion coefficient; it proceeds without a barrier to nucleation because the phase is thermodynamically unstable wherever formula_4 is negative (the miscibility gap, i.e. the formula_2 + formula_3 region of the phase diagram), where formula_5 is the Gibbs free energy per mole of solution and x the composition. The decomposition increases hardness and decreases magnetism. A miscibility gap is the region in a phase diagram, below the melting point of each compound, where a single solid phase separates into two distinct stable phases. For 475 °C embrittlement to occur, the chromium content needs to exceed 12%. The addition of nickel accelerates the spinodal decomposition by promoting the iron-rich nanophase formation. Nitrogen changes the distribution of chromium, nickel, and molybdenum in the ferrite phase but does not prevent the phase decomposition. Other elements like molybdenum, manganese, and silicon do not affect the formation of the iron-rich nanophase. However, manganese and molybdenum partition to the iron-rich nanophase, while nickel partitions to the chromium-rich nanophase. Microscopy characterisation. Using a field emission gun transmission electron microscope (FEG-TEM), the nanometre-scale modulated structure of the decomposed ferrite has been revealed: the chromium-rich nanophase gives a bright image and the iron-rich nanophase a darker one. Such imaging also reveals that these modulated nanophases grow coarser with aging time. The decomposed phases start as irregular rounded shapes with no particular arrangement, but with time the chromium-rich nanophase takes a plate shape aligned along the &lt;110&gt; directions. Consequences. Spinodal decomposition increases the hardening of the material due to the misfit between the chromium-rich and iron-rich nanophases, internal stress, and variation of elastic modulus. The formation of coherent precipitates induces an equal but opposite strain, raising the system's free energy depending on the precipitate shape and the elastic properties of the matrix and precipitate. Around a spherical inclusion, the distortion is purely hydrostatic. G-phase precipitates appear prominently at grain boundaries and are a phase rich in nickel, titanium, and silicon, in which chromium and manganese may substitute on titanium sites. G-phase precipitates occur during long-term aging, are encouraged by increasing nickel content in the ferrite phase, and reduce corrosion resistance significantly. 
The G phase has an ellipsoidal morphology, an FCC structure (Fmformula_0m), and an 11.4 Å lattice parameter, with a diameter of less than 50 nm that increases with aging. Thus, the embrittlement is caused by the impediment and locking of dislocations by the spinodally decomposed matrix and by the strain around G-phase precipitates, i.e., by internal stress relaxation through the formation of Cottrell atmospheres. Furthermore, while the ferrite hardness increases with aging time, the hardness of the ductile austenite phase remains nearly unchanged, owing to the faster diffusivity in ferrite compared to austenite. However, austenite undergoes a substitutional redistribution of elements, enhancing galvanic corrosion between the two phases. Treatment. A 550 °C heat treatment can reverse the spinodal decomposition but does not affect the G-phase precipitates. The ferrite matrix spinodal decomposition can also be substantially reversed by introducing an external pulsed electric current, which changes the system's free energy due to the difference in electrical conductivity between the nanophases and leads to the dissolution of G-phase precipitates. Cyclic loading suppresses spinodal decomposition, and radiation accelerates it but changes the decomposition nature from an interconnected network of modulated nanophases to isolated islands. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
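The thermodynamic criterion behind the spinodal decomposition discussed above (negative curvature of the free energy inside the miscibility gap) can be illustrated with a simple regular-solution model. The Python sketch below is generic and illustrative only; the interaction parameter and temperature are assumed round numbers, not fitted Fe-Cr data.

import numpy as np

R = 8.314          # J/(mol K)
T = 748.0          # K, roughly 475 degrees C
Omega = 20000.0    # J/mol, assumed regular-solution interaction parameter

x = np.linspace(0.01, 0.99, 981)   # mole fraction of the second component

# Regular-solution molar Gibbs free energy of mixing and its analytic curvature.
G = R * T * (x * np.log(x) + (1 - x) * np.log(1 - x)) + Omega * x * (1 - x)
d2G = R * T * (1.0 / x + 1.0 / (1 - x)) - 2.0 * Omega   # d^2 G / dx^2

inside = x[d2G < 0]
if inside.size:
    print(f"spinodal region (d2G/dx2 < 0) between x = {inside[0]:.3f} and x = {inside[-1]:.3f}")
    print("inside this range uphill diffusion decomposes the solution spontaneously")
else:
    print("no spinodal region at this temperature: the solution is stable")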
[ { "math_id": 0, "text": "\\bar{3}" }, { "math_id": 1, "text": "\\gamma" }, { "math_id": 2, "text": "\\acute{a}" }, { "math_id": 3, "text": "\\acute{a}\\acute{}" }, { "math_id": 4, "text": "{d^2G \\over dx^2}" }, { "math_id": 5, "text": "G" } ]
https://en.wikipedia.org/wiki?curid=71998469
72003155
Beta-model
A class of "well-behaved" models in set theory In model theory, a mathematical discipline, a β-model (from the French "bon ordre", well-ordering) is a model which is correct about statements of the form ""X" is well-ordered". The term was introduced by Mostowski (1959) as a strengthening of the notion of ω-model. In contrast to the notation for set-theoretic properties named by ordinals, such as formula_0-indescribability, the letter β here is only denotational. In analysis. β-models appear in the study of the reverse mathematics of subsystems of second-order arithmetic. In this context, a β-model of a subsystem of second-order arithmetic is a model M where for any Σ11 formula formula_1 with parameters from M, formula_2 iff formula_3.p. 243 Every β-model of second-order arithmetic is also an ω-model, since working within the model we can prove that &lt; is a well-ordering, so &lt; really is a well-ordering of the natural numbers of the model. There is an incompleteness theorem for β-models: if T is a recursively axiomatizable theory in the language of second-order arithmetic, analogously to how there is a model of T+"there is no model of T" if there is a model of T, there is a β-model of T+"there are no countable coded β-models of T" if there is a β-model of T. A similar theorem holds for βn-models for any natural number formula_4. Axioms based on β-models provide a natural finer division of the strengths of subsystems of second-order arithmetic, and also provide a way to formulate reflection principles. For example, over formula_5, formula_6 is equivalent to the statement "for all formula_7 [of second-order sort], there exists a countable β-model M such that formula_8.p. 253 (Countable ω-models are represented by their sets of integers, and their satisfaction is formalizable in the language of analysis by an inductive definition.) Also, the theory extending KP with a canonical axiom schema for a recursively Mahlo universe (often called formula_9) is logically equivalent to the theory Δ-CA+BI+(Every true Π-formula is satisfied by a β-model of Δ-CA). Additionally, formula_10 proves a connection between β-models and the hyperjump: for all sets formula_7 of integers, formula_7 has a hyperjump iff there exists a countable β-model formula_11 such that formula_8.p. 251 In set theory. A notion of β-model can be defined for models of second-order set theories (such as Morse-Kelley set theory) as a model formula_12 such that the membership relations of formula_12 is well-founded, and for any relation formula_13, formula_14"formula_15 is well-founded" iff formula_15 is in fact well-founded. While there is no least transitive model of MK, there is a least β-model of MK.pp.17,154--156 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\xi" }, { "math_id": 1, "text": "\\phi" }, { "math_id": 2, "text": "(\\omega,M,+,\\times,0,1,<)\\vDash\\phi" }, { "math_id": 3, "text": "(\\omega,\\mathcal P(\\omega),+,\\times,0,1,<)\\vDash\\phi" }, { "math_id": 4, "text": "n\\geq 1" }, { "math_id": 5, "text": "\\mathsf{ATR}_0" }, { "math_id": 6, "text": "\\Pi^1_1\\mathsf{-CA}_0" }, { "math_id": 7, "text": "X" }, { "math_id": 8, "text": "X\\in M" }, { "math_id": 9, "text": "KPM" }, { "math_id": 10, "text": "\\mathsf{ACA}_0" }, { "math_id": 11, "text": "M" }, { "math_id": 12, "text": "(M, \\mathcal X)" }, { "math_id": 13, "text": "R\\in\\mathcal X" }, { "math_id": 14, "text": "(M, \\mathcal X)\\vDash" }, { "math_id": 15, "text": "R" } ]
https://en.wikipedia.org/wiki?curid=72003155
72003352
Fuzzy differential inclusion
Fuzzy differential inclusion is the extension of differential inclusion to fuzzy sets, which were introduced by Lotfi A. Zadeh. A fuzzy differential inclusion has the form formula_0 with formula_1. Suppose formula_2 is a fuzzy-valued continuous function on Euclidean space, taking values in the collection of all normal, upper semi-continuous, convex, compactly supported fuzzy subsets of formula_3. Second order differential. The second-order differential inclusion is formula_4 where formula_5, formula_6 is the trapezoidal fuzzy number formula_7, and the initial value formula_8 is the triangular fuzzy number (-1,0,1). Applications. Fuzzy differential inclusions (FDIs) have applications in several applied fields. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
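As a concrete and deliberately simple illustration of the inclusion formula_0 with formula_1, the sketch below takes the crisp right-hand side f(t, x) = -x and the triangular fuzzy number (-1, 0, 1) as initial value, whose α-cuts are [α - 1, 1 - α]. For this monotone scalar case the attainable set at each time is the interval traced out by the two endpoint solutions, so integrating the endpoint problems numerically and comparing them with the exact solution x(t) = x(0)e^(-t) shows how each α-cut of the solution evolves. This is an illustrative computation written for this article, not code from the referenced literature.

import numpy as np

def euler(x0, t_end=1.0, n=1000):
    """Forward-Euler integration of x' = -x starting from x0."""
    dt = t_end / n
    x = x0
    for _ in range(n):
        x += dt * (-x)
    return x

t_end = 1.0
for alpha in (0.0, 0.5, 1.0):
    lo0, hi0 = alpha - 1.0, 1.0 - alpha      # alpha-cut of the triangular initial value
    lo, hi = euler(lo0, t_end), euler(hi0, t_end)
    exact_lo, exact_hi = lo0 * np.exp(-t_end), hi0 * np.exp(-t_end)
    print(f"alpha = {alpha:.1f}: attainable set at t = 1 is "
          f"[{lo:+.4f}, {hi:+.4f}] (exact [{exact_lo:+.4f}, {exact_hi:+.4f}])")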
[ { "math_id": 0, "text": "\nx'(t) \\in [ f(t , x(t))]^\\alpha\n" }, { "math_id": 1, "text": " \nx(0) \\in [x_0]^\\alpha \n" }, { "math_id": 2, "text": "f(t,x(t))" }, { "math_id": 3, "text": "\\mathbb{R}^n" }, { "math_id": 4, "text": " x''(t) \\in [kx]^ \\alpha " }, { "math_id": 5, "text": " k \\in [K]^ \\alpha" }, { "math_id": 6, "text": "K" }, { "math_id": 7, "text": "(-1,-1/2,0,1/2)" }, { "math_id": 8, "text": "x_0" } ]
https://en.wikipedia.org/wiki?curid=72003352
7200539
Configuration (geometry)
Points and lines with equal incidences In mathematics, specifically projective geometry, a configuration in the plane consists of a finite set of points, and a finite arrangement of lines, such that each point is incident to the same number of lines and each line is incident to the same number of points. Although certain specific configurations had been studied earlier (for instance by Thomas Kirkman in 1849), the formal study of configurations was first introduced by Theodor Reye in 1876, in the second edition of his book "Geometrie der Lage", in the context of a discussion of Desargues' theorem. Ernst Steinitz wrote his dissertation on the subject in 1894, and configurations were popularized by Hilbert and Cohn-Vossen's 1932 book "Anschauliche Geometrie", reprinted in English as "Geometry and the Imagination". Configurations may be studied either as concrete sets of points and lines in a specific geometry, such as the Euclidean or projective planes (these are said to be "realizable" in that geometry), or as a type of abstract incidence geometry. In the latter case they are closely related to regular hypergraphs and biregular bipartite graphs, but with some additional restrictions: every two points of the incidence structure can be associated with at most one line, and every two lines can be associated with at most one point. That is, the girth of the corresponding bipartite graph (the Levi graph of the configuration) must be at least six. Notation. A configuration in the plane is denoted by ("p"γ "ℓ"π), where "p" is the number of points, "ℓ" the number of lines, γ the number of lines per point, and π the number of points per line. These numbers necessarily satisfy the equation formula_0 as this product is the number of point-line incidences ("flags"). Configurations having the same symbol, say ("p"γ "ℓ"π), need not be isomorphic as incidence structures. For instance, there exist three different (93 93) configurations: the Pappus configuration and two less notable configurations. In some configurations, "p" = "ℓ" and consequently, "γ" = π. These are called "symmetric" or "balanced" configurations and the notation is often condensed to avoid repetition. For example, (93 93) abbreviates to (93). Examples. Notable projective configurations include the Fano plane (73), the Möbius–Kantor configuration (83), the Pappus configuration (93), and the Desargues configuration (103). Duality of configurations. The projective dual of a configuration ("p"γ "ℓ"π) is a ("ℓ"π "p"γ) configuration in which the roles of "point" and "line" are exchanged. Types of configurations therefore come in dual pairs, except when taking the dual results in an isomorphic configuration. These exceptions are called "self-dual" configurations and in such cases "p" = "ℓ". The number of ("n"3) configurations. The number of nonisomorphic configurations of type ("n"3), starting at "n" = 7, is given by the sequence 1, 1, 3, 10, 31, 229, 2036, 21399, 245342, ... (sequence in the OEIS) These numbers count configurations as abstract incidence structures, regardless of realizability. As Gropp discusses, nine of the ten (103) configurations, and all of the (113) and (123) configurations, are realizable in the Euclidean plane, but for each "n" ≥ 16 there is at least one nonrealizable ("n"3) configuration. Gropp also points out a long-lasting error in this sequence: an 1895 paper attempted to list all (123) configurations, and found 228 of them, but the 229th configuration, the Gropp configuration, was not discovered until 1988. Constructions of symmetric configurations. There are several techniques for constructing configurations, generally starting from known configurations. 
Some of the simplest of these techniques construct symmetric ("p"γ) configurations. Any finite projective plane of order "n" is an (("n"2 + "n" + 1)"n" + 1) configuration. Let Π be a projective plane of order "n". Remove from Π a point "P" and all the lines of Π which pass through "P" (but not the points which lie on those lines except for "P") and remove a line "ℓ" not passing through "P" and all the points that are on line "ℓ". The result is a configuration of type (("n"2 – 1)"n"). If, in this construction, the line "ℓ" is chosen to be a line which does pass through "P", then the construction results in a configuration of type (("n"2)"n"). Since projective planes are known to exist for all orders "n" which are powers of primes, these constructions provide infinite families of symmetric configurations. Not all configurations are realizable; for instance, a (437) configuration does not exist. However, a construction has been provided which shows that for "k" ≥ 3, a ("p"k) configuration exists for all "p" ≥ 2 "ℓ""k" + 1, where "ℓ""k" is the length of an optimal Golomb ruler of order "k". Unconventional configurations. Higher dimensions. The concept of a configuration may be generalized to higher dimensions, for instance to points and lines or planes in space. In such cases, the restriction that no two points belong to more than one line may be relaxed, because it is possible for two points to belong to more than one plane. Notable three-dimensional configurations are the Möbius configuration, consisting of two mutually inscribed tetrahedra, Reye's configuration, consisting of twelve points and twelve planes, with six points per plane and six planes per point, the Gray configuration consisting of a 3×3×3 grid of 27 points and the 27 orthogonal lines through them, and the Schläfli double six, a configuration with 30 points, 12 lines, two lines per point, and five points per line. Topological configurations. A configuration in the projective plane that is realized by points and pseudolines is called a topological configuration. For instance, it is known that no point-line (194) configuration exists; however, there exists a topological configuration with these parameters. Configurations of points and circles. Another generalization of the concept of a configuration concerns configurations of points and circles, a notable example being the (83 64) Miquel configuration. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
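The projective-plane construction above can be carried out explicitly in the smallest case n = 2: starting from the Fano plane, a (73) configuration, deleting a point P together with the lines through it and then a line ℓ not through P together with its points should leave an ((n2 − 1)n) = (32) configuration, i.e. a triangle. The short self-contained Python sketch below, with the Fano plane hard-coded, performs the deletion and verifies the incidence counts.

from itertools import combinations

# The Fano plane: 7 points, 7 lines, a (7_3) configuration.
points = set(range(1, 8))
lines = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

P = 1                       # point to delete, together with the lines through it
ell = {2, 4, 6}             # a line not through P, deleted together with its points

remaining_points = points - {P} - ell
remaining_lines = [ln - ell for ln in lines if P not in ln and ln != ell]

print("points:", sorted(remaining_points))
print("lines :", [sorted(ln) for ln in remaining_lines])

# Verify the result is a (3_2) configuration: 3 points, 3 lines,
# 2 points on every line, 2 lines through every point,
# and any two points lying on at most one common line.
assert len(remaining_points) == 3 and len(remaining_lines) == 3
assert all(len(ln) == 2 for ln in remaining_lines)
assert all(sum(p in ln for ln in remaining_lines) == 2 for p in remaining_points)
assert all(sum(a in ln and b in ln for ln in remaining_lines) <= 1
           for a, b in combinations(remaining_points, 2))
print("result is a (3_2) configuration (a triangle)")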
[ { "math_id": 0, "text": "p\\gamma = \\ell\\pi\\," } ]
https://en.wikipedia.org/wiki?curid=7200539
72010909
Fuzzy differential equation
Fuzzy differential equations are a generalization of ordinary differential equations in mathematics; they can be formulated as differential inclusions whose right-hand sides are upper hemicontinuous set-valued mappings with convex, compact values arising from fuzzy sets: formula_0 for all formula_1. First order fuzzy differential equation. A first-order fuzzy differential equation with real constant or variable coefficients has the form formula_2 where formula_3 is a real continuous function and formula_4 is a fuzzy continuous function, with initial condition formula_5 such that formula_6. Linear systems of fuzzy differential equations. A linear system of fuzzy differential equations consists of equations of the form formula_7 where the formula_8 are real functions and the formula_9 are fuzzy functions; in summation notation, formula_10 Fuzzy partial differential equations. A fuzzy differential equation with a partial differential operator is formula_11 for all formula_1. Fuzzy fractional differential equation. A fuzzy differential equation with a fractional differential operator is formula_12 for all formula_1 where the order formula_13 is a rational number. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
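For the first-order equation above, one common way to compute with it is to work α-cut by α-cut under the differential-inclusion reading of the lead, which is only one of several interpretations proposed in the literature: with a constant coefficient p > 0 and a fuzzy forcing term whose α-cut is an interval, the reachable set of the scalar inclusion at each time is bounded by the solutions of the two endpoint equations. The Python sketch below is illustrative, with assumed numbers, and is not taken from any referenced source.

import numpy as np

p = 1.0                      # assumed constant coefficient, p > 0
t_end, n = 2.0, 2000
dt = t_end / n

def solve(f_const, x0):
    """Forward Euler for x' = -p*x + f_const."""
    x = x0
    for _ in range(n):
        x += dt * (-p * x + f_const)
    return x

# Assumed triangular fuzzy forcing term centred at 1: alpha-cut [alpha, 2 - alpha].
# Crisp initial condition y0 = 0 for simplicity.
for alpha in (0.0, 0.5, 1.0):
    f_lo, f_hi = alpha, 2.0 - alpha
    x_lo, x_hi = solve(f_lo, 0.0), solve(f_hi, 0.0)
    # Exact endpoint solutions: x(t) = (f/p) * (1 - exp(-p t))
    e_lo = f_lo / p * (1 - np.exp(-p * t_end))
    e_hi = f_hi / p * (1 - np.exp(-p * t_end))
    print(f"alpha = {alpha:.1f}: [x]^alpha at t = 2 is approx "
          f"[{x_lo:.4f}, {x_hi:.4f}] (exact [{e_lo:.4f}, {e_hi:.4f}])")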
[ { "math_id": 0, "text": " dx(t)/dt= F(t,x(t),\\alpha)," }, { "math_id": 1, "text": " \\alpha \\in [0,1] " }, { "math_id": 2, "text": " x'(t) + p(t) x(t) = f(t) " }, { "math_id": 3, "text": "p(t)" }, { "math_id": 4, "text": " f(t) \\colon [t_0 , \\infty) \\rightarrow R_F " }, { "math_id": 5, "text": " y(t_0) = y_0 " }, { "math_id": 6, "text": " y_0 \\in R_F " }, { "math_id": 7, "text": " x(t)'_n = a_n1(t) x_1(t) + ......+ a_nn(t) x_n(t) + f_n(t) " }, { "math_id": 8, "text": "a_ij" }, { "math_id": 9, "text": " f_i" }, { "math_id": 10, "text": " x'_n(t)= \\sum_{i=0}^1 a_{ij} x_i." }, { "math_id": 11, "text": " \\nabla x(t) = F(t,x(t),\\alpha)," }, { "math_id": 12, "text": " \\frac {d^n x(t)} {dt^n}= F(t,x(t),\\alpha)," }, { "math_id": 13, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=72010909
72014499
Daniel Larsen (mathematician)
American mathematician (born 2003) Daniel Larsen (born 2003) is an American mathematician known for proving a 1994 conjecture of W. R. Alford, Andrew Granville and Carl Pomerance on the distribution of Carmichael numbers, commonly known as Bertrand's postulate for Carmichael numbers. Childhood and education. Larsen was born in 2003 to Indiana University Bloomington mathematics professors Michael J. Larsen and Ayelet Lindenstrauss (sister of Elon Lindenstrauss), and grew up in Bloomington, Indiana. He had a strong interest in mathematics as a child, inspired by the mathematical background of both his parents. When Larsen was younger, his father hosted a math circle that taught math on weekends to kids in the neighborhood, and Larsen attended despite being only four years old. He also had a strong interest in other projects, learning violin at age 5 and piano at age 6, along with practicing solving larger configurations of Rubik's Cubes and designing his own coin-sorting robot from Lego. He competed in the Scripps National Spelling Bee twice while in middle school, though he never made it to the final round. While attending Bloomington High School South, he became the youngest accepted contributor to "The New York Times" crossword puzzle in February 2017 and ended up submitting 11 approved puzzles before his graduation from high school. He applied to and became a finalist in the 2022 Regeneron Science Talent Search for his published research on Carmichael numbers and ultimately won 4th place in the competition, winning $100,000 to pay for his college tuition. In the fall of 2022, he began attending university at the Massachusetts Institute of Technology (MIT). Career and research. During his teenage years, after watching a documentary about Yitang Zhang, Larsen became interested in number theory and the twin prime conjecture in particular. The subsequent strengthening of Zhang's method by James Maynard and Terence Tao rekindled his desire to better understand the mathematics involved. He found it too complex at that time, and it was not until reading a paper on Carmichael numbers in February 2021 that he gained insight into the fundamentals of the problem. In November of the same year, Larsen posted a paper titled "Bertrand's Postulate for Carmichael Numbers" on the open-access repository arXiv, in which he adapted the techniques that Maynard and Tao had developed for small gaps between primes to Carmichael numbers, proving an analogue of Bertrand's postulate that bounds the gaps between consecutive Carmichael numbers. He concretely showed that for any formula_0 and sufficiently large formula_1 in terms of formula_2, there will always be at least formula_3 Carmichael numbers between formula_1 and formula_4 He then emailed a copy of the paper to mathematician Andrew Granville and others involved in number theory research. The paper was later published in the journal "International Mathematics Research Notices". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
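For readers unfamiliar with the objects in Larsen's theorem, the following small Python sketch (illustrative only) uses Korselt's criterion — a composite, squarefree n is a Carmichael number exactly when p − 1 divides n − 1 for every prime p dividing n — to list the Carmichael numbers below 100,000; it is the gaps between these numbers that the result above controls.

from sympy import factorint

def is_carmichael(n):
    """Korselt's criterion: n composite, squarefree, and (p - 1) | (n - 1) for all p | n."""
    if n < 3 or n % 2 == 0:
        return False
    factors = factorint(n)
    if len(factors) < 2:                      # primes and prime powers are excluded
        return False
    if any(e > 1 for e in factors.values()):  # must be squarefree
        return False
    return all((n - 1) % (p - 1) == 0 for p in factors)

print([n for n in range(3, 100000, 2) if is_carmichael(n)])
# the list starts 561, 1105, 1729, 2465, 2821, 6601, ...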
[ { "math_id": 0, "text": "{\\displaystyle \\delta >0}" }, { "math_id": 1, "text": "{\\displaystyle x}" }, { "math_id": 2, "text": "{\\displaystyle \\delta }" }, { "math_id": 3, "text": "{\\displaystyle \\exp {\\left({\\frac {\\log {x}}{(\\log \\log {x})^{2+\\delta }}}\\right)}}" }, { "math_id": 4, "text": "{\\displaystyle x+{\\frac {x}{(\\log {x})^{\\frac {1}{2+\\delta }}}}.}" } ]
https://en.wikipedia.org/wiki?curid=72014499
7201497
Serine/threonine-specific protein kinase
Class of protein kinase enzymes A serine/threonine protein kinase (EC 2.7.11.-) is a kinase enzyme, in particular a protein kinase, that phosphorylates the OH group of the amino-acid residues serine or threonine, which have similar side chains. At least 350 of the 500+ human protein kinases are serine/threonine kinases (STK). In enzymology, the term "serine/threonine protein kinase" describes a class of enzymes in the family of transferases that transfer phosphate groups to the oxygen atom of a serine or threonine side chain in proteins. This process is called phosphorylation. Protein phosphorylation in particular plays a significant role in a wide range of cellular processes and is a very important post-translational modification. The chemical reaction performed by these enzymes can be written as ATP + a protein formula_0 ADP + a phosphoprotein. Thus, the two substrates of this enzyme are ATP and a protein, whereas its two products are ADP and a phosphoprotein. The systematic name of this enzyme class is "ATP:protein phosphotransferase (non-specific)". Function. Serine/threonine kinases play a role in the regulation of cell proliferation, programmed cell death (apoptosis), cell differentiation, and embryonic development. Selectivity. While serine/threonine kinases all phosphorylate serine or threonine residues in their substrates, they select specific residues to phosphorylate on the basis of residues that flank the phosphoacceptor site, which together comprise the "consensus sequence". Since the consensus sequence residues of a target substrate only make contact with several key amino acids within the catalytic cleft of the kinase (usually through hydrophobic forces and ionic bonds), a kinase is usually not specific to a single substrate, but instead can phosphorylate a whole "substrate family" which share common recognition sequences. While the catalytic domain of these kinases is highly conserved, the sequence variation that is observed in the kinome (the subset of genes in the genome that encode kinases) provides for recognition of distinct substrates. Many kinases are inhibited by a pseudosubstrate that binds to the kinase like a real substrate but lacks the amino acid to be phosphorylated. When the pseudosubstrate is removed, the kinase can perform its normal function. EC numbers. Many serine/threonine protein kinases do not have their own individual EC numbers and use 2.7.11.1, "non-specific serine/threonine protein kinase". This entry is for any enzyme that phosphorylates proteins while converting ATP to ADP (i.e., ATP:protein phosphotransferases). 2.7.11.37 "protein kinase" was the former generic placeholder and was split into several entries (including 2.7.11.1) in 2005. 2.7.11.70 "protamine kinase" was merged into 2.7.11.1 in 2004. 2.7.11.- is the generic level under which all serine/threonine kinases sit. Types. Types include those acting directly as membrane-bound receptors (receptor protein serine/threonine kinases) and intracellular kinases participating in signal transduction; of the latter, many distinct families exist. Clinical significance. Serine/threonine kinase (STK) expression is altered in many types of cancer. Limited benefit of serine/threonine kinase inhibitors has been demonstrated in ovarian cancer, but studies are ongoing to evaluate their safety and efficacy. Serine/threonine protein kinase p90-kDa ribosomal S6 kinase (RSK) is involved in the development of some prostate cancers. 
Raf inhibition has become the target for new anti-metastatic cancer drugs as they inhibit the MAPK cascade and reduce cell proliferation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=7201497
7201595
Schlegel diagram
Representation of 3D and 4D polytopes In geometry, a Schlegel diagram is a projection of a polytope from formula_0 into formula_1 through a point just outside one of its facets. The resulting entity is a polytopal subdivision of the facet in formula_1 that, together with the original facet, is combinatorially equivalent to the original polytope. The diagram is named for Victor Schlegel, who in 1886 introduced this tool for studying combinatorial and topological properties of polytopes. In dimension 3, a Schlegel diagram is a projection of a polyhedron into a plane figure; in dimension 4, it is a projection of a 4-polytope to 3-space. As such, Schlegel diagrams are commonly used as a means of visualizing four-dimensional polytopes. Construction. The most elementary Schlegel diagram, that of a polyhedron, was described by Duncan Sommerville as follows: A very useful method of representing a convex polyhedron is by plane projection. If it is projected from any external point, since each ray cuts it twice, it will be represented by a polygonal area divided twice over into polygons. It is always possible by suitable choice of the centre of projection to make the projection of one face completely contain the projections of all the other faces. This is called a "Schlegel diagram" of the polyhedron. The Schlegel diagram completely represents the morphology of the polyhedron. It is sometimes convenient to project the polyhedron from a vertex; this vertex is projected to infinity and does not appear in the diagram, the edges through it are represented by lines drawn outwards. Sommerville also considers the case of a simplex in four dimensions: "The Schlegel diagram of simplex in S4 is a tetrahedron divided into four tetrahedra." More generally, a polytope in n-dimensions has a Schlegel diagram constructed by a perspective projection viewed from a point outside of the polytope, above the center of a facet. All vertices and edges of the polytope are projected onto a hyperplane of that facet. If the polytope is convex, a point near the facet will exist which maps the facet outside, and all other facets inside, so no edges need to cross in the projection. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
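The construction just described is easy to carry out numerically. The following Python sketch (an illustration only; NumPy, the choice of a cube as the polytope, and the particular facet and viewpoint are all assumptions, not taken from the sources above) projects the vertices of a cube from a point just outside its top face onto the plane of that face, producing the familiar nested-squares Schlegel diagram of the cube.

import numpy as np

# Vertices of the cube [-1, 1]^3.
vertices = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)], dtype=float)

# Project from a viewpoint just outside the facet z = 1 onto the plane z = 1.
eye = np.array([0.0, 0.0, 1.5])      # centre of projection, slightly above the top face
plane_z = 1.0                        # plane of the chosen facet

def schlegel_point(v, eye=eye, plane_z=plane_z):
    """Intersect the ray from the eye through vertex v with the facet plane."""
    t = (plane_z - eye[2]) / (v[2] - eye[2])
    p = eye + t * (v - eye)
    return p[:2]                     # the diagram lives in the 2D facet plane

diagram = np.array([schlegel_point(v) for v in vertices])
for v, p in zip(vertices, diagram):
    print(v, "->", np.round(p, 3))
# Vertices of the top face map to themselves (outer square); vertices of the
# bottom face land strictly inside it (inner square), so no projected edges
# cross, as described above.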
[ { "math_id": 0, "text": "\\mathbb{R}^d" }, { "math_id": 1, "text": "\\mathbb{R}^{d-1}" } ]
https://en.wikipedia.org/wiki?curid=7201595
72016
Genetic drift
Concept in genetics Genetic drift, also known as random genetic drift, allelic drift or the Wright effect, is the change in the frequency of an existing gene variant (allele) in a population due to random chance. Genetic drift may cause gene variants to disappear completely and thereby reduce genetic variation. It can also cause initially rare alleles to become much more frequent and even fixed. When few copies of an allele exist, the effect of genetic drift is more notable, and when many copies exist, the effect is less notable (due to the law of large numbers). In the middle of the 20th century, vigorous debates occurred over the relative importance of natural selection versus neutral processes, including genetic drift. Ronald Fisher, who explained natural selection using Mendelian genetics, held the view that genetic drift plays at most a minor role in evolution, and this remained the dominant view for several decades. In 1968, population geneticist Motoo Kimura rekindled the debate with his neutral theory of molecular evolution, which claims that most instances where a genetic change spreads across a population (although not necessarily changes in phenotypes) are caused by genetic drift acting on neutral mutations. In the 1990s, constructive neutral evolution was proposed which seeks to explain how complex systems emerge through neutral transitions. Analogy with marbles in a jar. The process of genetic drift can be illustrated using 20 marbles in a jar to represent 20 organisms in a population. Consider this jar of marbles as the starting population. Half of the marbles in the jar are red and half are blue, with each colour corresponding to a different allele of one gene in the population. In each new generation, the organisms reproduce at random. To represent this reproduction, randomly select a marble from the original jar and deposit a new marble with the same colour into a new jar. This is the "offspring" of the original marble, meaning that the original marble remains in its jar. Repeat this process until 20 new marbles are in the second jar. The second jar will now contain 20 "offspring", or marbles of various colours. Unless the second jar contains exactly 10 red marbles and 10 blue marbles, a random shift has occurred in the allele frequencies. If this process is repeated a number of times, the numbers of red and blue marbles picked each generation fluctuates. Sometimes, a jar has more red marbles than its "parent" jar and sometimes more blue. This fluctuation is analogous to genetic drift – a change in the population's allele frequency resulting from a random variation in the distribution of alleles from one generation to the next. In any one generation, no marbles of a particular colour could be chosen, meaning they have no offspring. In this example, if no red marbles are selected, the jar representing the new generation contains only blue offspring. If this happens, the red allele has been lost permanently in the population, while the remaining blue allele has become fixed: all future generations are entirely blue. In small populations, fixation can occur in just a few generations. Probability and allele frequency. The mechanisms of genetic drift can be illustrated with a very simple example. Consider a very large colony of bacteria isolated in a drop of solution. 
The bacteria are genetically identical except for a single gene with two alleles labeled A and B, which are neutral alleles, meaning that they do not affect the bacteria's ability to survive and reproduce; all bacteria in this colony are equally likely to survive and reproduce. Suppose that half the bacteria have allele A and the other half have allele B. Thus, A and B each has an allele frequency of 1/2. The drop of solution then shrinks until it has only enough food to sustain four bacteria. All other bacteria die without reproducing. Among the four that survive, 16 possible combinations for the A and B alleles exist: (A-A-A-A), (B-A-A-A), (A-B-A-A), (B-B-A-A), (A-A-B-A), (B-A-B-A), (A-B-B-A), (B-B-B-A), (A-A-A-B), (B-A-A-B), (A-B-A-B), (B-B-A-B), (A-A-B-B), (B-A-B-B), (A-B-B-B), (B-B-B-B). Since all bacteria in the original solution are equally likely to survive when the solution shrinks, the four survivors are a random sample from the original colony. The probability that each of the four survivors has a given allele is 1/2, and so the probability that any particular allele combination occurs when the solution shrinks is formula_0 (The original population size is so large that the sampling effectively happens with replacement). In other words, each of the 16 possible allele combinations is equally likely to occur, with probability 1/16. Counting the combinations with the same number of A and B gives the following table: As shown in the table, the total number of combinations that have the same number of A alleles as of B alleles is six, and the probability of this combination is 6/16. The total number of other combinations is ten, so the probability of unequal number of A and B alleles is 10/16. Thus, although the original colony began with an equal number of A and B alleles, quite possibly, the number of alleles in the remaining population of four members will not be equal. The situation of equal numbers is actually less likely than unequal numbers. In the latter case, genetic drift has occurred because the population's allele frequencies have changed due to random sampling. In this example, the population contracted to just four random survivors, a phenomenon known as a population bottleneck. The probabilities for the number of copies of allele A (or B) that survive (given in the last column of the above table) can be calculated directly from the binomial distribution with "success" probability (probability of a given allele being present) equal to 1/2: the probability that there are "k" copies of A (or B) alleles in the combination is given by: formula_1 where "n=4" is the number of surviving bacteria. Mathematical models. Mathematical models of genetic drift can be designed using either branching processes or a diffusion equation describing changes in allele frequency in an idealised population. Wright–Fisher model. Consider a gene with two alleles, A or B. In a diploid population consisting of "N" individuals there are 2"N" copies of each gene. An individual can have two copies of the same allele or two different alleles. The frequency of one allele is assigned "p" and the other "q". The Wright–Fisher model (named after Sewall Wright and Ronald Fisher) assumes that generations do not overlap (for example, annual plants have exactly one generation per year) and that each copy of the gene found in the new generation is drawn independently at random from all copies of the gene in the old generation.
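This sampling rule is straightforward to simulate. The following Python sketch (an illustration only; NumPy, the population size, the number of generations and the random seed are arbitrary choices) draws the 2"N" gene copies of each new generation binomially from the allele frequency of the previous one and runs until the allele is lost or fixed.

import numpy as np

rng = np.random.default_rng(0)

def wright_fisher(p0, N, generations):
    """Track the frequency of one allele in a diploid population of N individuals."""
    freqs = [p0]
    p = p0
    for _ in range(generations):
        # Each of the 2N gene copies in the new generation is drawn
        # independently from the old generation: a binomial sample.
        k = rng.binomial(2 * N, p)
        p = k / (2 * N)
        freqs.append(p)
        if p in (0.0, 1.0):          # allele lost or fixed: drift stops
            break
    return freqs

trajectory = wright_fisher(p0=0.5, N=20, generations=1000)
print(f"final frequency {trajectory[-1]} after {len(trajectory) - 1} generations")
# Small N makes fixation or loss happen within relatively few generations,
# as described above; larger N slows the drift down.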
The formula to calculate the probability of obtaining "k" copies of an allele that had frequency "p" in the last generation is then formula_2 where the symbol "!" signifies the factorial function. This expression can also be formulated using the binomial coefficient, formula_3 Moran model. The Moran model assumes overlapping generations. At each time step, one individual is chosen to reproduce and one individual is chosen to die. So in each timestep, the number of copies of a given allele can go up by one, go down by one, or can stay the same. This means that the transition matrix is tridiagonal, which means that mathematical solutions are easier for the Moran model than for the Wright–Fisher model. On the other hand, computer simulations are usually easier to perform using the Wright–Fisher model, because fewer time steps need to be calculated. In the Moran model, it takes "N" timesteps to get through one generation, where "N" is the effective population size. In the Wright–Fisher model, it takes just one. In practice, the Moran and Wright–Fisher models give qualitatively similar results, but genetic drift runs twice as fast in the Moran model. Other models of drift. If the variance in the number of offspring is much greater than that given by the binomial distribution assumed by the Wright–Fisher model, then given the same overall speed of genetic drift (the variance effective population size), genetic drift is a less powerful force compared to selection. Even for the same variance, if higher moments of the offspring number distribution exceed those of the binomial distribution then again the force of genetic drift is substantially weakened. Random effects other than sampling error. Random changes in allele frequencies can also be caused by effects other than sampling error, for example random changes in selection pressure. One important alternative source of stochasticity, perhaps more important than genetic drift, is genetic draft. Genetic draft is the effect on a locus by selection on linked loci. The mathematical properties of genetic draft are different from those of genetic drift. The direction of the random change in allele frequency is autocorrelated across generations. Drift and fixation. The Hardy–Weinberg principle states that within sufficiently large populations, the allele frequencies remain constant from one generation to the next unless the equilibrium is disturbed by migration, genetic mutations, or selection. However, in finite populations, no new alleles are gained from the random sampling of alleles passed to the next generation, but the sampling can cause an existing allele to disappear. Because random sampling can remove, but not replace, an allele, and because random declines or increases in allele frequency influence expected allele distributions for the next generation, genetic drift drives a population towards genetic uniformity over time. When an allele reaches a frequency of 1 (100%) it is said to be "fixed" in the population and when an allele reaches a frequency of 0 (0%) it is lost. Smaller populations achieve fixation faster, whereas in the limit of an infinite population, fixation is not achieved. Once an allele becomes fixed, genetic drift comes to a halt, and the allele frequency cannot change unless a new allele is introduced in the population via mutation or gene flow. Thus even while genetic drift is a random, directionless process, it acts to eliminate genetic variation over time. Rate of allele frequency change due to drift. 
Assuming genetic drift is the only evolutionary force acting on an allele, after "t" generations in many replicated populations, starting with allele frequencies of "p" and "q", the variance in allele frequency across those populations is formula_4 Time to fixation or loss. Assuming genetic drift is the only evolutionary force acting on an allele, at any given time the probability that an allele will eventually become fixed in the population is simply its frequency in the population at that time. For example, if the frequency "p" for allele A is 75% and the frequency "q" for allele B is 25%, then given unlimited time the probability A will ultimately become fixed in the population is 75% and the probability that B will become fixed is 25%. The expected number of generations for fixation to occur is proportional to the population size, such that fixation is predicted to occur much more rapidly in smaller populations. Normally the effective population size, which is smaller than the total population, is used to determine these probabilities. The effective population ("N""e") takes into account factors such as the level of inbreeding, the stage of the lifecycle in which the population is the smallest, and the fact that some neutral genes are genetically linked to others that are under selection. The effective population size may not be the same for every gene in the same population. One forward-looking formula used for approximating the expected time before a neutral allele becomes fixed through genetic drift, according to the Wright–Fisher model, is formula_5 where "T" is the number of generations, "N""e" is the effective population size, and "p" is the initial frequency for the given allele. The result is the number of generations expected to pass before fixation occurs for a given allele in a population with given size ("N""e") and allele frequency ("p"). The expected time for the neutral allele to be lost through genetic drift can be calculated as formula_6 When a mutation appears only once in a population large enough for the initial frequency to be negligible, the formulas can be simplified to formula_7 for average number of generations expected before fixation of a neutral mutation, and formula_8 for the average number of generations expected before the loss of a neutral mutation in a population of actual size N. Time to loss with both drift and mutation. The formulae above apply to an allele that is already present in a population, and which is subject to neither mutation nor natural selection. If an allele is lost by mutation much more often than it is gained by mutation, then mutation, as well as drift, may influence the time to loss. If the allele prone to mutational loss begins as fixed in the population, and is lost by mutation at rate m per replication, then the expected time in generations until its loss in a haploid population is given by formula_9 where formula_10 is Euler's constant. The first approximation represents the waiting time until the first mutant destined for loss, with loss then occurring relatively rapidly by genetic drift, taking time 1/"m" ≫ "N""e". The second approximation represents the time needed for deterministic loss by mutation accumulation. In both cases, the time to fixation is dominated by mutation via the term 1/"m", and is less affected by the effective population size. Versus natural selection. In natural populations, genetic drift and natural selection do not act in isolation; both phenomena are always at play, together with mutation and migration.
Neutral evolution is the product of both mutation and drift, not of drift alone. Similarly, even when selection overwhelms genetic drift, it can only act on variation that mutation provides. While natural selection has a direction, guiding evolution towards heritable adaptations to the current environment, genetic drift has no direction and is guided only by the mathematics of chance. As a result, drift acts upon the genotypic frequencies within a population without regard to their phenotypic effects. In contrast, selection favors the spread of alleles whose phenotypic effects increase survival and/or reproduction of their carriers, lowers the frequencies of alleles that cause unfavorable traits, and ignores those that are neutral. The law of large numbers predicts that when the absolute number of copies of the allele is small (e.g., in small populations), the magnitude of drift on allele frequencies per generation is larger. The magnitude of drift is large enough to overwhelm selection at any allele frequency when the selection coefficient is less than 1 divided by the effective population size. Non-adaptive evolution resulting from the product of mutation and genetic drift is therefore considered to be a consequential mechanism of evolutionary change primarily within small, isolated populations. The mathematics of genetic drift depend on the effective population size, but it is not clear how this is related to the actual number of individuals in a population. Genetic linkage to other genes that are under selection can reduce the effective population size experienced by a neutral allele. With a higher recombination rate, linkage decreases and with it this local effect on effective population size. This effect is visible in molecular data as a correlation between local recombination rate and genetic diversity, and negative correlation between gene density and diversity at noncoding DNA regions. Stochasticity associated with linkage to other genes that are under selection is not the same as sampling error, and is sometimes known as genetic draft in order to distinguish it from genetic drift. Low allele frequency makes alleles more vulnerable to being eliminated by random chance, even overriding the influence of natural selection. For example, while disadvantageous mutations are usually eliminated quickly within the population, new advantageous mutations are almost as vulnerable to loss through genetic drift as are neutral mutations. Not until the allele frequency for the advantageous mutation reaches a certain threshold will genetic drift have no effect. Population bottleneck. A population bottleneck is when a population contracts to a significantly smaller size over a short period of time due to some random environmental event. In a true population bottleneck, the odds for survival of any member of the population are purely random, and are not improved by any particular inherent genetic advantage. The bottleneck can result in radical changes in allele frequencies, completely independent of selection. The impact of a population bottleneck can be sustained, even when the bottleneck is caused by a one-time event such as a natural catastrophe. An interesting example of a bottleneck causing unusual genetic distribution is the relatively high proportion of individuals with total rod cell color blindness (achromatopsia) on Pingelap atoll in Micronesia. After a bottleneck, inbreeding increases. 
This increases the damage done by recessive deleterious mutations, in a process known as inbreeding depression. The worst of these mutations are selected against, leading to the loss of other alleles that are genetically linked to them, in a process of background selection. For recessive harmful mutations, this selection can be enhanced as a consequence of the bottleneck, due to genetic purging. This leads to a further loss of genetic diversity. In addition, a sustained reduction in population size increases the likelihood of further allele fluctuations from drift in generations to come. A population's genetic variation can be greatly reduced by a bottleneck, and even beneficial adaptations may be permanently eliminated. The loss of variation leaves the surviving population vulnerable to any new selection pressures such as disease, climatic change or shift in the available food source, because adapting in response to environmental changes requires sufficient genetic variation in the population for natural selection to take place. There have been many known cases of population bottleneck in the recent past. Prior to the arrival of Europeans, North American prairies were habitat for millions of greater prairie chickens. In Illinois alone, their numbers plummeted from about 100 million birds in 1900 to about 50 birds in the 1990s. The declines in population resulted from hunting and habitat destruction, but a consequence has been a loss of most of the species' genetic diversity. DNA analysis comparing birds from the mid century to birds in the 1990s documents a steep decline in the genetic variation in just the latter few decades. Currently the greater prairie chicken is experiencing low reproductive success. However, the genetic loss caused by bottleneck and genetic drift can increase fitness, as in "Ehrlichia". Over-hunting also caused a severe population bottleneck in the northern elephant seal in the 19th century. Their resulting decline in genetic variation can be deduced by comparing it to that of the southern elephant seal, which were not so aggressively hunted. Founder effect. The founder effect is a special case of a population bottleneck, occurring when a small group in a population splinters off from the original population and forms a new one. The random sample of alleles in the just formed new colony is expected to grossly misrepresent the original population in at least some respects. It is even possible that the number of alleles for some genes in the original population is larger than the number of gene copies in the founders, making complete representation impossible. When a newly formed colony is small, its founders can strongly affect the population's genetic make-up far into the future. A well-documented example is found in the Amish migration to Pennsylvania in 1744. Two members of the new colony shared the recessive allele for Ellis–Van Creveld syndrome. Members of the colony and their descendants tend to be religious isolates and remain relatively insular. As a result of many generations of inbreeding, Ellis–Van Creveld syndrome is now much more prevalent among the Amish than in the general population. The difference in gene frequencies between the original population and colony may also trigger the two groups to diverge significantly over the course of many generations. 
As the difference, or genetic distance, increases, the two separated populations may become distinct, both genetically and phenetically, although not only genetic drift but also natural selection, gene flow, and mutation contribute to this divergence. This potential for relatively rapid changes in the colony's gene frequency led most scientists to consider the founder effect (and by extension, genetic drift) a significant driving force in the evolution of new species. Sewall Wright was the first to attach this significance to random drift and small, newly isolated populations with his shifting balance theory of speciation. Following after Wright, Ernst Mayr created many persuasive models to show that the decline in genetic variation and small population size following the founder effect were critically important for new species to develop. However, there is much less support for this view today since the hypothesis has been tested repeatedly through experimental research and the results have been equivocal at best. History. The role of random chance in evolution was first outlined by Arend L. Hagedoorn and Anna Cornelia Hagedoorn-Vorstheuvel La Brand in 1921. They highlighted that random survival plays a key role in the loss of variation from populations. Fisher (1922) responded to this with the first, albeit marginally incorrect, mathematical treatment of the "Hagedoorn effect". Notably, he expected that many natural populations were too large (an N ~10,000) for the effects of drift to be substantial and thought drift would have an insignificant effect on the evolutionary process. The corrected mathematical treatment and term "genetic drift" was later coined by a founder of population genetics, Sewall Wright. His first use of the term "drift" was in 1929, though at the time he was using it in the sense of a directed process of change, or natural selection. Random drift by means of sampling error came to be known as the "Sewall–Wright effect," though he was never entirely comfortable to see his name given to it. Wright referred to all changes in allele frequency as either "steady drift" (e.g., selection) or "random drift" (e.g., sampling error). "Drift" came to be adopted as a technical term in the stochastic sense exclusively. Today it is usually defined still more narrowly, in terms of sampling error, although this narrow definition is not universal. Wright wrote that the "restriction of "random drift" or even "drift" to only one component, the effects of accidents of sampling, tends to lead to confusion". Sewall Wright considered the process of random genetic drift by means of sampling error equivalent to that by means of inbreeding, but later work has shown them to be distinct. In the early days of the modern evolutionary synthesis, scientists were beginning to blend the new science of population genetics with Charles Darwin's theory of natural selection. Within this framework, Wright focused on the effects of inbreeding on small relatively isolated populations. He introduced the concept of an adaptive landscape in which phenomena such as cross breeding and genetic drift in small populations could push them away from adaptive peaks, which in turn allow natural selection to push them towards new adaptive peaks. Wright thought smaller populations were more suited for natural selection because "inbreeding was sufficiently intense to create new interaction systems through random drift but not intense enough to cause random nonadaptive fixation of genes". 
Wright's views on the role of genetic drift in the evolutionary scheme were controversial almost from the very beginning. One of the most vociferous and influential critics was colleague Ronald Fisher. Fisher conceded genetic drift played some role in evolution, but an insignificant one. Fisher has been accused of misunderstanding Wright's views because in his criticisms Fisher seemed to argue Wright had rejected selection almost entirely. To Fisher, viewing the process of evolution as a long, steady, adaptive progression was the only way to explain the ever-increasing complexity from simpler forms. But the debates have continued between the "gradualists" and those who lean more toward the Wright model of evolution where selection and drift together play an important role. In 1968, Motoo Kimura rekindled the debate with his neutral theory of molecular evolution, which claims that most of the genetic changes are caused by genetic drift acting on neutral mutations. The role of genetic drift by means of sampling error in evolution has been criticized by John H. Gillespie and William B. Provine, who argue that selection on linked sites is a more important stochastic force. Notes and references. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\frac{1}{2} \\cdot \\frac{1}{2} \\cdot \\frac{1}{2} \\cdot \\frac{1}{2} = \\frac{1}{16}.\n" }, { "math_id": 1, "text": "\n{n\\choose k} \\left(\\frac{1}{2}\\right)^k \\left(1-\\frac{1}{2}\\right)^{n-k}={n\\choose k} \\left(\\frac{1}{2}\\right)^n\\!\n" }, { "math_id": 2, "text": "\\frac{(2N)!}{k!(2N-k)!} p^k q^{2N-k} " }, { "math_id": 3, "text": "{2N \\choose k} p^k q^{2N-k} " }, { "math_id": 4, "text": "\nV_t \\approx pq\\left(1-\\exp\\left(-\\frac{t}{2N_e} \\right)\\right)\n" }, { "math_id": 5, "text": "\n\\bar{T}_\\text{fixed} = \\frac{-4N_e(1-p) \\ln (1-p)}{p}\n" }, { "math_id": 6, "text": "\n\\bar{T}_\\text{lost} = \\frac{-4N_ep}{1-p} \\ln p.\n" }, { "math_id": 7, "text": "\n\\bar{T}_\\text{fixed} = 4N_e\n" }, { "math_id": 8, "text": "\n\\bar{T}_\\text{lost} = 2 \\left ( \\frac{N_e}{N} \\right ) \\ln (2N)\n" }, { "math_id": 9, "text": "\n\\bar{T}_\\text{lost} \\approx \\begin{cases}\n\\dfrac 1 m, \\text{ if } mN_e \\ll 1\\\\[8pt]\n\\dfrac{\\ln{(mN_e)}+\\gamma} m \\text{ if } mN_e \\gg 1\n\\end{cases}\n" }, { "math_id": 10, "text": "\\gamma" } ]
https://en.wikipedia.org/wiki?curid=72016
72026399
Ibragimov–Iosifescu conjecture for φ-mixing sequences
Collective name for 2 closely-related conjectures in probability theory Ibragimov–Iosifescu conjecture for φ-mixing sequences in probability theory is the collective name for two closely related conjectures by Ildar Ibragimov and Marius Iosifescu. Conjecture. Let formula_0 be a strictly stationary formula_1-mixing sequence, for which formula_2 and formula_3. Then formula_4 is asymptotically normally distributed. formula_1-mixing coefficients are defined as formula_5, where formula_6 and formula_7 are the formula_8-algebras generated by the formula_9 (respectively formula_10), and formula_1-mixing means that formula_11. Reformulated: Suppose formula_12 is a strictly stationary sequence of random variables such that formula_13 and formula_14 as formula_15 (that is, such that it has finite second moments and formula_16 as formula_15). Per Ibragimov, under these assumptions, if also formula_17 is formula_1-mixing, then a central limit theorem holds. Per a closely related conjecture by Iosifescu, under the same hypothesis, a weak invariance principle holds. Both conjectures can be formulated together in similar terms: Let formula_18 be a strictly stationary, centered, formula_1-mixing sequence of random variables such that formula_19 and formula_20. Then per Ibragimov formula_21, and per Iosifescu formula_22. Also, a related conjecture by Magda Peligrad states that under the same conditions and with formula_23, formula_24.
[ { "math_id": 0, "text": "(X_n,n\\in\\Bbb N)" }, { "math_id": 1, "text": "\\phi" }, { "math_id": 2, "text": "\\mathbb E(X_0^2)<\\infty" }, { "math_id": 3, "text": "\\operatorname{Var}(S_n)\\to +\\infty" }, { "math_id": 4, "text": "S_n:=\\sum_{j=1}^nX_j" }, { "math_id": 5, "text": "\\phi_X(n):=\\sup(|\\mu(B\\mid A)-\\mu(B)|, A\\in\\mathcal F^m, B\\in \\mathcal F_{m+n},m\\in\\Bbb N )" }, { "math_id": 6, "text": "\\mathcal F^m" }, { "math_id": 7, "text": "\\mathcal F_{m+n}" }, { "math_id": 8, "text": "\\sigma" }, { "math_id": 9, "text": "X_j, j\\leqslant m" }, { "math_id": 10, "text": "j\\geqslant m+n" }, { "math_id": 11, "text": "\\phi_X(n)\\to 0" }, { "math_id": 12, "text": "X:=(X_k, k \\in {\\mathbf Z})" }, { "math_id": 13, "text": "EX_0 = 0, \\ EX_0^2 < \\infty" }, { "math_id": 14, "text": "ES_n^2 \\to \\infty" }, { "math_id": 15, "text": "n \\to \\infty" }, { "math_id": 16, "text": "\\operatorname{Var}(X_1 + \\ldots + X_n) \\to \\infty" }, { "math_id": 17, "text": "X" }, { "math_id": 18, "text": "\\{X_n\\}_n" }, { "math_id": 19, "text": "EX^2_1 < \\infty" }, { "math_id": 20, "text": "\\sigma^2_n \\to \\infty" }, { "math_id": 21, "text": "S_n / \\sigma_n \\overset{W}{\\to} N(0, 1)" }, { "math_id": 22, "text": "S_{[n1]} / \\sigma_n \\overset{W}{\\to} W" }, { "math_id": 23, "text": "\\phi_1 < 1" }, { "math_id": 24, "text": "\\overset{\\sim}{W}_n \\overset{W}{\\to} W" } ]
https://en.wikipedia.org/wiki?curid=72026399
72029076
Extrinsic Geometric Flows
Geometry textbook Extrinsic Geometric Flows is an advanced mathematics textbook that overviews geometric flows, mathematical problems in which a curve or surface moves continuously according to some rule. It focuses on extrinsic flows, in which the rule depends on the embedding of a surface into space, rather than intrinsic flows such as the Ricci flow that depend on the internal geometry of the surface and can be defined without respect to an embedding. "Extrinsic Geometric Flows" was written by Ben Andrews, Bennett Chow, Christine Guenther, and Mat Langford, and published in 2020 as volume 206 of Graduate Studies in Mathematics, a book series of the American Mathematical Society. Topics. The book consists of four chapters, roughly divided into four sections: The content within each chapter includes both proofs of the results discussed in the chapter, and references to the mathematics literature; additional references are provided in a commentary section at the end of each chapter, which also provides additional intuition and descriptions of open problems, as well as brief descriptions of additional results in the same area. As well as illustrating the mathematics under discussion with many figures, it humanizes the content by providing photographs of many of the mathematicians that it references. The chapters include exercises, making this book suitable as a graduate textbook. Audience and reception. Although intrinsic flows have been the subject of much recent attention in mathematics after their use by Grigori Perelman to solve both the Poincaré conjecture and the geometrization conjecture, extrinsic flows also have a long history of important applications in mathematics, closely related to the solutions of partial differential equations. Their uses include modeling the growth of biological cells, metallic crystal grains, bubbles in foams, and even "the deformation of rolling stones in a beach". The book's proofs are often simplifications of the proofs in the research literature, but nevertheless it is still quite technical, aimed at graduate students and researchers in geometric analysis. Readers are expected to be familiar with the basics of differential geometry and partial differential equations. There is more material in the book than could be covered in a single course, but it could either form the basis of a multi-course sequence or a topics course that picks out only some of its material. As well as being a textbook, "Extrinsic Geometric Flows" can serve as reference material on flows for specialists in the area. Related works. This is not the first book on geometric flows. Others include: Although "Extrinsic Geometric Flows" is more comprehensive and up-to-date than these works, it omits some of their topics, including anisotropic flows of curves in , applications to the theory of relativity in , and the level-set methods of . References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "y=-\\log\\cos x" }, { "math_id": 1, "text": "\\tfrac{1}{d+2}" } ]
https://en.wikipedia.org/wiki?curid=72029076
72033530
Wilson fermion
Lattice fermion discretisation In lattice field theory, Wilson fermions are a fermion discretization, proposed by Kenneth Wilson in 1974, that avoids the fermion doubling problem. They are widely used, for instance in lattice QCD calculations. An additional so-called Wilson term formula_0 is introduced supplementing the naively discretized Dirac action in formula_1-dimensional Euclidean spacetime with lattice spacing formula_2, Dirac fields formula_3 at every lattice point formula_4, and the vectors formula_5 being unit vectors in the formula_6 direction. The inverse free fermion propagator in momentum space now reads formula_7 where the last addend corresponds to the Wilson term again. It modifies the mass formula_8 of the doublers to formula_9 where formula_10 is the number of momentum components with formula_11. In the continuum limit formula_12 the doublers become very heavy and decouple from the theory. Wilson fermions do not contradict the Nielsen–Ninomiya theorem because they explicitly violate chiral symmetry since the Wilson term does not anti-commute with formula_13. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
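The effect of the Wilson term on the doubler masses can be checked directly from the momentum-space expression above. The Python sketch below (an illustration only; it evaluates just the scalar mass part of the free inverse propagator, with arbitrary values of the mass and lattice spacing) lists the effective mass m + 2l/a at each of the 16 corners of the Brillouin zone in d = 4.

import math
from itertools import product

a = 0.1          # lattice spacing (arbitrary)
m = 0.05         # bare fermion mass (arbitrary)
d = 4            # spacetime dimension

# At the corners of the Brillouin zone each momentum component is either 0 or pi/a.
for corner in product((0.0, math.pi / a), repeat=d):
    # Wilson term contribution: (1/a) * sum_mu (1 - cos(p_mu a))
    wilson = sum(1.0 - math.cos(p * a) for p in corner) / a
    l = sum(1 for p in corner if p != 0.0)   # number of components equal to pi/a
    print(f"l = {l}: effective mass = {m + wilson:.2f}  (expected m + 2l/a = {m + 2 * l / a:.2f})")

# Only the l = 0 corner keeps the physical mass m; the 15 doublers acquire
# masses of order 1/a and decouple as a -> 0, as stated above.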
[ { "math_id": 0, "text": "\n\t\tS_W = -a^{d+1}\\sum_{x,\\mu}\\frac{i}{2a^2}\\left(\\bar\\psi_x\\psi_{x+\\hat\\mu}+\\bar\\psi_{x+\\hat\\mu}\\psi_{x}-2\\bar\\psi_x\\psi_x\\right)\n" }, { "math_id": 1, "text": "d" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "\\psi_x" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "\\hat \\mu" }, { "math_id": 6, "text": "\\mu" }, { "math_id": 7, "text": "\n\t\tD(p) = m + \\frac ia\\sum_\\mu \\gamma_\\mu\\sin\\left(p_\\mu a\\right)+\\frac1a\\sum_\\mu\\left(1-\\cos\\left(p_\\mu a\\right)\\right)\\, " }, { "math_id": 8, "text": "m" }, { "math_id": 9, "text": "\n\t\tm+\\frac{2l}{a}\\, " }, { "math_id": 10, "text": "l" }, { "math_id": 11, "text": "p_\\mu = \\pi/a" }, { "math_id": 12, "text": "a\\rightarrow0" }, { "math_id": 13, "text": "\\gamma_5" } ]
https://en.wikipedia.org/wiki?curid=72033530
7203361
Reduction criterion
In quantum information theory, the reduction criterion is a necessary condition a mixed state must satisfy in order for it to be separable. In other words, the reduction criterion is a "separability criterion". It was first proved and independently formulated in 1999. Violation of the reduction criterion is closely related to the distillability of the state in question. Details. Let "H"1 and "H"2 be Hilbert spaces of finite dimensions "n" and "m" respectively. "L"("Hi") will denote the space of linear operators acting on "Hi". Consider a bipartite quantum system whose state space is the tensor product formula_0 An (un-normalized) mixed state "ρ" is a positive linear operator (density matrix) acting on "H". A linear map Φ: "L"("H"2) → "L"("H"1) is said to be positive if it preserves the cone of positive elements, i.e. if "A" is positive then "Φ"("A") is also. From the one-to-one correspondence between positive maps and entanglement witnesses, we have that a state "ρ" is entangled if and only if there exists a positive map "Φ" such that formula_1 is not positive. Therefore, if "ρ" is separable, then for all positive maps Φ, formula_2 Thus every positive, but not completely positive, map Φ gives rise to a necessary condition for separability in this way. The reduction criterion is a particular example of this. Suppose "H"1 = "H"2. Define the positive map Φ: "L"("H"2) → "L"("H"1) by formula_3 It is known that Φ is positive but not completely positive. So a mixed state "ρ" being separable implies formula_4 Direct calculation shows that the above expression is the same as formula_5 where "ρ"1 is the partial trace of "ρ" with respect to the second system. The dual relation formula_6 is obtained in the analogous fashion. The reduction criterion consists of the above two inequalities. Connection with Fréchet bounds. The above last two inequalities together with lower bounds for "ρ" can be seen as quantum Fréchet inequalities, that is, as the quantum analogues of the classical Fréchet probabilistic bounds, which hold for separable quantum states. The upper bounds are the previous ones formula_7, formula_8, and the lower bounds are the obvious constraint formula_9 together with formula_10, where formula_11 are identity matrices of suitable dimensions. The lower bounds have been obtained in. These bounds are satisfied by separable density matrices, while entangled states can violate them. Entangled states exhibit a form of "stochastic dependence stronger than the strongest classical dependence" and in fact they violate Fréchet-like bounds. It is also worth mentioning that it is possible to give a Bayesian interpretation of these bounds.
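The criterion is simple to test numerically. In the sketch below (an illustration only; NumPy and the two example states are assumptions, and each reduced state is written on its own tensor factor, i.e. the operators checked are ρ1 ⊗ I − ρ and I ⊗ ρ2 − ρ with ρ1 and ρ2 the partial traces over the second and first system respectively, which is the conventional ordering and may differ from the ordering used in the formulas above), a maximally entangled two-qubit state violates the criterion while the maximally mixed state satisfies it.

import numpy as np

def partial_trace(rho, keep):
    """Partial trace of a two-qubit density matrix; keep = 0 or 1."""
    r = rho.reshape(2, 2, 2, 2)                  # row index (i, k), column index (j, l)
    return np.einsum('ikjk->ij', r) if keep == 0 else np.einsum('ikil->kl', r)

def satisfies_reduction(rho):
    I = np.eye(2)
    rho1 = partial_trace(rho, keep=0)            # trace over the second system
    rho2 = partial_trace(rho, keep=1)            # trace over the first system
    ops = [np.kron(rho1, I) - rho, np.kron(I, rho2) - rho]
    return all(np.min(np.linalg.eigvalsh(op)) >= -1e-12 for op in ops)

# Maximally entangled Bell state (|00> + |11>)/sqrt(2).
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
bell = np.outer(psi, psi)

# Maximally mixed (separable) state.
mixed = np.eye(4) / 4

print("Bell state satisfies reduction criterion:", satisfies_reduction(bell))    # False
print("Maximally mixed state satisfies it:      ", satisfies_reduction(mixed))   # True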
[ { "math_id": 0, "text": " H = H_1 \\otimes H_2." }, { "math_id": 1, "text": "(I \\otimes \\Phi)(\\rho)" }, { "math_id": 2, "text": "(I \\otimes \\Phi)(\\rho) \\geq 0." }, { "math_id": 3, "text": "\\Phi(A) = \\operatorname{Tr}A - A." }, { "math_id": 4, "text": "(I \\otimes \\Phi) (\\rho) \\geq 0." }, { "math_id": 5, "text": "I \\otimes \\rho_1 - \\rho \\geq 0" }, { "math_id": 6, "text": "\\rho_2 \\otimes I - \\rho \\geq 0" }, { "math_id": 7, "text": "I \\otimes \\rho_1 \\geq \\rho" }, { "math_id": 8, "text": "\\rho_2 \\otimes I \\geq \\rho" }, { "math_id": 9, "text": "\\rho \\geq 0" }, { "math_id": 10, "text": "\\rho \\geq I \\otimes \\rho_1 + \\rho_2 \\otimes I -I " }, { "math_id": 11, "text": "I" } ]
https://en.wikipedia.org/wiki?curid=7203361
7203375
Harmonic coordinate condition
The harmonic coordinate condition is one of several coordinate conditions in general relativity, which make it possible to solve the Einstein field equations. A coordinate system is said to satisfy the harmonic coordinate condition if each of the coordinate functions "x"α (regarded as scalar fields) satisfies d'Alembert's equation. The parallel notion of a harmonic coordinate system in Riemannian geometry is a coordinate system whose coordinate functions satisfy Laplace's equation. Since d'Alembert's equation is the generalization of Laplace's equation to space-time, its solutions are also called "harmonic". Motivation. The laws of physics can be expressed in a generally invariant form. In other words, the real world does not care about our coordinate systems. However, for us to be able to solve the equations, we must fix upon a particular coordinate system. A coordinate condition selects one (or a smaller set of) such coordinate system(s). The Cartesian coordinates used in special relativity satisfy d'Alembert's equation, so a harmonic coordinate system is the closest approximation available in general relativity to an inertial frame of reference in special relativity. Derivation. In general relativity, we have to use the covariant derivative instead of the partial derivative in d'Alembert's equation, so we get: formula_0 Since the coordinate "x"α is not actually a scalar, this is not a tensor equation. That is, it is not generally invariant. But coordinate conditions must not be generally invariant because they are supposed to pick out (only work for) certain coordinate systems and not others. Since the partial derivative of a coordinate is the Kronecker delta, we get: formula_1 And thus, dropping the minus sign, we get the harmonic coordinate condition (also known as the de Donder gauge after Théophile de Donder): formula_2 This condition is especially useful when working with gravitational waves. Alternative form. Consider the covariant derivative of the density of the reciprocal of the metric tensor: formula_3 The last term formula_4 emerges because formula_5 is not an invariant scalar, and so its covariant derivative is not the same as its ordinary derivative. Rather, formula_6 because formula_7, while formula_8 Contracting ν with ρ and applying the harmonic coordinate condition to the second term, we get: formula_9 Thus, we get that an alternative way of expressing the harmonic coordinate condition is: formula_10 More variant forms. If one expresses the Christoffel symbol in terms of the metric tensor, one gets formula_11 Discarding the factor of formula_12 and rearranging some indices and terms, one gets formula_13 In the context of linearized gravity, this is indistinguishable from these additional forms: formula_14 However, the last two are a different coordinate condition when you go to the second order in "h". Effect on the wave equation. For example, consider the wave equation applied to the electromagnetic vector potential: formula_15 Let us evaluate the right hand side: formula_16 Using the harmonic coordinate condition we can eliminate the right-most term and then continue evaluation as follows: formula_17
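The condition is easy to test for concrete coordinate systems. The SymPy sketch below (an illustration only; it treats the Riemannian analogue mentioned above, flat two-dimensional space, rather than a spacetime metric) computes the contraction of the Christoffel symbols with the inverse metric for polar coordinates and shows that it does not vanish, so polar coordinates are not harmonic, whereas for Cartesian coordinates every Christoffel symbol is zero and the condition holds trivially.

import sympy as sp

def harmonic_condition(coords, g):
    """Return the contractions Gamma^alpha_{beta gamma} g^{beta gamma} for a metric g."""
    n = len(coords)
    ginv = g.inv()
    # Christoffel symbols Gamma^a_{bc} = 1/2 g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
    Gamma = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, c], coords[b])
                                             + sp.diff(g[d, b], coords[c])
                                             - sp.diff(g[b, c], coords[d]))
                               for d in range(n)) / 2)
               for c in range(n)] for b in range(n)] for a in range(n)]
    return [sp.simplify(sum(Gamma[a][b][c] * ginv[b, c]
                            for b in range(n) for c in range(n)))
            for a in range(n)]

r, theta, x, y = sp.symbols('r theta x y', positive=True)

g_polar = sp.Matrix([[1, 0], [0, r**2]])        # flat metric in polar coordinates
g_cart = sp.Matrix([[1, 0], [0, 1]])            # flat metric in Cartesian coordinates

print(harmonic_condition([r, theta], g_polar))  # [-1/r, 0]: polar coordinates are not harmonic
print(harmonic_condition([x, y], g_cart))       # [0, 0]: Cartesian coordinates are harmonic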
[ { "math_id": 0, "text": "0 = \\left(x^\\alpha\\right)_{; \\beta ; \\gamma} g^{\\beta \\gamma} = \\left(\\left(x^\\alpha\\right)_{, \\beta , \\gamma} - \\left(x^\\alpha\\right)_{, \\sigma} \\Gamma^{\\sigma}_{\\beta \\gamma}\\right) g^{\\beta \\gamma} \\,." }, { "math_id": 1, "text": "0 = \\left(\\delta^\\alpha_{\\beta , \\gamma} - \\delta^\\alpha_{\\sigma} \\Gamma^{\\sigma}_{\\beta \\gamma}\\right) g^{\\beta \\gamma} = \\left(0 - \\Gamma^{\\alpha}_{\\beta \\gamma}\\right) g^{\\beta \\gamma} = - \\Gamma^{\\alpha}_{\\beta \\gamma} g^{\\beta \\gamma} \\,." }, { "math_id": 2, "text": "0 = \\Gamma^{\\alpha}_{\\beta \\gamma} g^{\\beta \\gamma} \\,." }, { "math_id": 3, "text": "0 = \\left(g^{\\mu \\nu} \\sqrt {-g}\\right)_{; \\rho} = \\left(g^{\\mu \\nu} \\sqrt {-g}\\right)_{, \\rho} + g^{\\sigma \\nu} \\Gamma^{\\mu}_{\\sigma \\rho} \\sqrt {-g} + g^{\\mu \\sigma} \\Gamma^{\\nu}_{\\sigma \\rho} \\sqrt {-g} - g^{\\mu \\nu} \\Gamma^{\\sigma}_{\\sigma \\rho} \\sqrt {-g} \\,." }, { "math_id": 4, "text": " - g^{\\mu \\nu} \\Gamma^{\\sigma}_{\\sigma \\rho} \\sqrt {-g} " }, { "math_id": 5, "text": " \\sqrt {-g}" }, { "math_id": 6, "text": " \\sqrt {-g}_{; \\rho} = 0 \\!" }, { "math_id": 7, "text": " g^{\\mu \\nu}_{; \\rho} = 0 \\!" }, { "math_id": 8, "text": " \\sqrt {-g}_{, \\rho} = \\sqrt {-g} \\Gamma^{\\sigma}_{\\sigma \\rho} \\,." }, { "math_id": 9, "text": "\\begin{align}\n 0 &= \\left(g^{\\mu \\nu} \\sqrt {-g}\\right)_{, \\nu} + g^{\\sigma \\nu} \\Gamma^{\\mu}_{\\sigma \\nu} \\sqrt {-g} + g^{\\mu \\sigma} \\Gamma^{\\nu}_{\\sigma \\nu} \\sqrt {-g} - g^{\\mu \\nu} \\Gamma^{\\sigma}_{\\sigma \\nu} \\sqrt {-g} \\,\\\\\n &= \\left(g^{\\mu \\nu} \\sqrt {-g}\\right)_{, \\nu} + 0 + g^{\\mu \\alpha} \\Gamma^{\\beta}_{\\alpha \\beta} \\sqrt {-g} - g^{\\mu \\alpha} \\Gamma^{\\beta}_{\\beta \\alpha} \\sqrt {-g} \\,.\n\\end{align}" }, { "math_id": 10, "text": "0 = \\left(g^{\\mu \\nu} \\sqrt {-g}\\right)_{, \\nu} \\,." }, { "math_id": 11, "text": "0 = \\Gamma^{\\alpha}_{\\beta \\gamma} g^{\\beta \\gamma} = \\frac{1}{2} g^{\\alpha \\delta} \\left( g_{\\gamma \\delta , \\beta} + g_{\\beta \\delta , \\gamma} - g_{\\beta \\gamma , \\delta} \\right) g^{\\beta \\gamma} \\,." }, { "math_id": 12, "text": "g^{\\alpha \\delta} \\," }, { "math_id": 13, "text": " g_{\\alpha \\beta , \\gamma} \\, g^{\\beta \\gamma} = \\frac{1}{2} g_{\\beta \\gamma , \\alpha} \\, g^{\\beta \\gamma} \\,." }, { "math_id": 14, "text": "\\begin{align}\n h_{\\alpha \\beta , \\gamma} \\, g^{\\beta \\gamma} &= \\frac12 h_{\\beta \\gamma , \\alpha} \\, g^{\\beta \\gamma} \\,; \\\\\n g_{\\alpha \\beta , \\gamma} \\, \\eta^{\\beta \\gamma} &= \\frac12 g_{\\beta \\gamma , \\alpha} \\, \\eta^{\\beta \\gamma} \\,; \\\\\n h_{\\alpha \\beta , \\gamma} \\, \\eta^{\\beta \\gamma} &= \\frac12 h_{\\beta \\gamma , \\alpha} \\, \\eta^{\\beta \\gamma} \\,.\n\\end{align}" }, { "math_id": 15, "text": "0 = A_{\\alpha ; \\beta ; \\gamma} g^{\\beta \\gamma} \\,." }, { "math_id": 16, "text": "A_{\\alpha ; \\beta ; \\gamma} g^{\\beta \\gamma} = A_{\\alpha ; \\beta , \\gamma} g^{\\beta \\gamma} - A_{\\sigma ; \\beta} \\Gamma^{\\sigma}_{\\alpha \\gamma} g^{\\beta \\gamma} - A_{\\alpha ; \\sigma} \\Gamma^{\\sigma}_{\\beta \\gamma} g^{\\beta \\gamma} \\,." 
}, { "math_id": 17, "text": "\\begin{align}\n A_{\\alpha ; \\beta ; \\gamma} g^{\\beta \\gamma}\n &= A_{\\alpha ; \\beta , \\gamma} g^{\\beta \\gamma} - A_{\\sigma ; \\beta} \\Gamma^{\\sigma}_{\\alpha \\gamma} g^{\\beta \\gamma} \\\\\n &= A_{\\alpha , \\beta , \\gamma} g^{\\beta \\gamma} - A_{\\rho , \\gamma} \\Gamma^{\\rho}_{\\alpha \\beta} g^{\\beta \\gamma} - A_{\\rho} \\Gamma^{\\rho}_{\\alpha \\beta , \\gamma} g^{\\beta \\gamma}\n- A_{\\sigma , \\beta} \\Gamma^{\\sigma}_{\\alpha \\gamma} g^{\\beta \\gamma} \n- A_{\\rho} \\Gamma^{\\rho}_{\\sigma \\beta} \\Gamma^{\\sigma}_{\\alpha \\gamma} g^{\\beta \\gamma} \\,.\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=7203375
72034215
Twisted mass fermion
Lattice fermion discretisation In lattice field theory, twisted mass fermions are a fermion discretization that extends Wilson fermions for two mass-degenerate fermions. They are well established and regularly used in non-perturbative fermion simulations, for instance in lattice QCD. The original motivation for the use of twisted mass fermions in lattice QCD simulations was the observation that the two lightest quarks (up and down) have very similar mass and can therefore be approximated with the same (degenerate) mass. They form a so-called isospin doublet and are both represented by Wilson fermions in the twisted mass formalism. The name-giving twisted mass is used as a numerical trick, assigned to the two quarks with opposite signs. It acts as an infrared regulator, that is, it allows unphysical configurations at low energies to be avoided. In addition, at vanishing physical mass formula_0 (maximal or full twist) it allows formula_1 improvement, getting rid of leading order lattice artifacts linear in the lattice spacing formula_2. The twisted mass Dirac operator is constructed from the (massive) Wilson Dirac operator formula_3 and reads formula_4 where formula_5 is the twisted mass and acts as an infrared regulator (all eigenvalues formula_6 of formula_7 obey formula_8). formula_9 is the third Pauli matrix acting in the flavour space spanned by the two fermions. In the continuum limit formula_10 the twisted mass becomes irrelevant in the physical sector and only appears in the doubler sectors which decouple due to the use of Wilson fermions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
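The infrared-regulator property can be illustrated numerically: because the Wilson operator is γ5-hermitian, the twisted term adds μ² to the square of the operator, so no singular value of the twisted operator can fall below μ. The NumPy sketch below (an illustration only; a random γ5-hermitian matrix stands in for an actual Wilson Dirac operator, and only a single flavour is kept, since the Pauli matrix merely assigns ±μ to the two flavours) checks this bound.

import numpy as np

rng = np.random.default_rng(1)
n = 8                                    # size of the toy one-flavour operator
mu = 0.3                                 # twisted mass

# A toy "gamma_5": diagonal with entries +1 and -1.
gamma5 = np.diag([1.0] * (n // 2) + [-1.0] * (n // 2))

# Build a gamma_5-hermitian "Wilson-like" operator W = gamma5 @ H with H hermitian,
# so that gamma5 @ W @ gamma5 equals W^dagger.
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (H + H.conj().T) / 2
W = gamma5 @ H

D_tw = W + 1j * mu * gamma5              # twisted operator for one flavour (tau_3 = +1)

smin = np.linalg.svd(D_tw, compute_uv=False).min()
print(f"smallest singular value = {smin:.4f} >= mu = {mu}")
# D_tw^dagger @ D_tw = W^dagger @ W + mu^2, so all singular values are bounded below by mu:
assert smin >= mu - 1e-10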
[ { "math_id": 0, "text": "m=0" }, { "math_id": 1, "text": "\\mathcal{O}(a)" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "D_W" }, { "math_id": 4, "text": "\n\tD_\\text{tw} = D_W + i\\mu\\gamma_5\\sigma_3\n" }, { "math_id": 5, "text": "\\mu" }, { "math_id": 6, "text": "\\lambda" }, { "math_id": 7, "text": "D_\\text{tw}" }, { "math_id": 8, "text": "\\lambda\\ge\\mu^2>0" }, { "math_id": 9, "text": "\\sigma_3" }, { "math_id": 10, "text": "a\\rightarrow0" } ]
https://en.wikipedia.org/wiki?curid=72034215
72034350
Ginsparg–Wilson equation
Lattice fermion discretisation In lattice field theory, the Ginsparg–Wilson equation generalizes chiral symmetry on the lattice in a way that approaches the continuum formulation in the continuum limit. The class of fermions whose Dirac operators satisfy this equation is known as Ginsparg–Wilson fermions, with notable examples being overlap, domain wall and fixed point fermions. They are a means to avoid the fermion doubling problem, widely used for instance in lattice QCD calculations. The equation was discovered by Paul Ginsparg and Kenneth Wilson in 1982; however, it was quickly forgotten since no solutions were known at the time. It was only in 1997 and 1998 that the first solutions were found in the form of the overlap and fixed point fermions, at which point the equation rose to prominence. Ginsparg–Wilson fermions do not contradict the Nielsen–Ninomiya theorem because they explicitly violate chiral symmetry. More precisely, the continuum chiral symmetry relation formula_0 (where formula_1 is the massless Dirac operator) is replaced by the Ginsparg–Wilson equation formula_2 which recovers the correct continuum expression as the lattice spacing formula_3 goes to zero. In contrast to Wilson fermions, Ginsparg–Wilson fermions do not modify the inverse fermion propagator additively but multiplicatively, thus lifting the unphysical poles at formula_4. The exact form of this modification depends on the individual realisation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
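One way to see why this relation is a useful substitute for exact chiral symmetry is to multiply it from both sides by the propagator (a standard manipulation, sketched here only for illustration and assuming the Dirac operator is invertible); the anticommutator of the propagator with γ5 then reduces to a local contact term,

\gamma_5 D^{-1} + D^{-1}\gamma_5 = D^{-1}\left(D\gamma_5 + \gamma_5 D\right)D^{-1} = D^{-1}\left(a\,D\gamma_5 D\right)D^{-1} = a\,\gamma_5 ,

so the breaking of chiral symmetry in the propagator is proportional to the lattice spacing and disappears in the continuum limit.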
[ { "math_id": 0, "text": "D\\gamma_5+\\gamma_5 D=0" }, { "math_id": 1, "text": "D" }, { "math_id": 2, "text": "\n\tD\\gamma_5 + \\gamma_5 D = a\\,D\\gamma_5 D\\, " }, { "math_id": 3, "text": "a" }, { "math_id": 4, "text": "p_\\mu = \\pi/a" } ]
https://en.wikipedia.org/wiki?curid=72034350
7203729
D'Alembert's equation
In mathematics, d'Alembert's equation, sometimes also known as Lagrange's equation, is a first order nonlinear ordinary differential equation, named after the French mathematician Jean le Rond d'Alembert. The equation reads as formula_0 where formula_1. After differentiating once, and rearranging we have formula_2 The above equation is linear. When formula_3, d'Alembert's equation is reduced to Clairaut's equation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
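The reduction to a linear equation can be verified symbolically. The SymPy sketch below (an illustration only; the specific choice f(p) = p² and g(p) = p is arbitrary) differentiates y = x f(p) + g(p), uses p = dy/dx to trade the derivative of y for the derivative of x with respect to p, and recovers the linear equation quoted above.

import sympy as sp

p = sp.symbols('p')
x = sp.Function('x')(p)

f = p**2          # an arbitrary choice of f(p)
g = p             # an arbitrary choice of g(p)

y = x * f + g

# Along a solution curve dy/dx = p, so dy/dp = p * dx/dp.
equation = sp.Eq(sp.diff(y, p), p * sp.diff(x, p))

# Solve for dx/dp: this is the linear first-order equation for x(p).
dxdp = sp.solve(equation, sp.diff(x, p))[0]
print(sp.simplify(dxdp))                       # equals (x*f'(p) + g'(p)) / (p - f(p)) for this f, g

# Compare with the general form dx/dp = (x f'(p) + g'(p)) / (p - f(p)):
expected = (x * sp.diff(f, p) + sp.diff(g, p)) / (p - f)
print(sp.simplify(dxdp - expected))            # 0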
[ { "math_id": 0, "text": "y = x f(p) + g(p)" }, { "math_id": 1, "text": "p=dy/dx" }, { "math_id": 2, "text": "\\frac{dx}{dp} + \\frac{x f'(p) + g'(p)}{f(p)-p}=0" }, { "math_id": 3, "text": "f(p)=p" } ]
https://en.wikipedia.org/wiki?curid=7203729
72039476
UPt3
&lt;templatestyles src="Chembox/styles.css"/&gt; Chemical compound UPt3 is an inorganic binary intermetallic crystalline compound of platinum and uranium. Production. It can be synthesised in the following ways: formula_0 formula_1 Physical properties. UPt3 forms crystals of hexagonal symmetry (some studies hypothesize a trigonal structure instead), space group P63/mmc, cell parameters "a" = 0.5766 nm and "c" = 0.4898 nm ("c" should be understood as distance from planes), with a structure similar to nisnite (Ni3Sn) and MgCd3. The compound melts congruently at 1700 °C. The enthalpy of formation of the compound is -111 kJ/mol. At temperatures below 1 K it becomes superconducting, thought to be due to the presence of heavy fermions (heavy quasiparticles associated with the uranium 5f electrons). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathsf{ 3 Pt + U \\ \\xrightarrow{1700^oC}\\ UPt_3 }" }, { "math_id": 1, "text": "\\mathsf{ UO_2 + 2 H_2 + 3 Pt \\ \\xrightarrow{1700^oC}\\ UPt_3 + 2 H_2O }" } ]
https://en.wikipedia.org/wiki?curid=72039476
72041443
Overlap fermion
Lattice fermion discretisation In lattice field theory, overlap fermions are a fermion discretization that avoids the fermion doubling problem. They are a realisation of Ginsparg–Wilson fermions. Initially introduced by Neuberger in 1998, they were quickly taken up for a variety of numerical simulations. By now overlap fermions are well established and regularly used in non-perturbative fermion simulations, for instance in lattice QCD. Overlap fermions with mass formula_0 are defined on a Euclidean spacetime lattice with spacing formula_1 by the overlap Dirac operator formula_2 where formula_3 is the "kernel" Dirac operator obeying formula_4, i.e. formula_3 is formula_5-hermitian. The sign-function usually has to be calculated numerically, e.g. by rational approximations. A common choice for the kernel is formula_6 where formula_7 is the massless Dirac operator and formula_8 is a free parameter that can be tuned to optimise locality of formula_9. Near formula_10 the overlap Dirac operator recovers the correct continuum form (using the Feynman slash notation) formula_11 whereas the unphysical doublers near formula_12 are suppressed by a high mass formula_13 and decouple. Overlap fermions do not contradict the Nielsen–Ninomiya theorem because they explicitly violate chiral symmetry (obeying the Ginsparg–Wilson equation) and locality. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
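Because the sign function squares to the identity, the massless overlap operator satisfies the Ginsparg–Wilson equation exactly, which can be checked numerically. The NumPy sketch below (an illustration only; a small random γ5-hermitian matrix replaces an actual lattice kernel) builds the massless overlap operator via an eigendecomposition of γ5 A and verifies the relation D γ5 + γ5 D = a D γ5 D to machine precision.

import numpy as np

rng = np.random.default_rng(2)
n = 8                                    # size of the toy kernel
a = 1.0                                  # lattice spacing

# Toy gamma_5 and a gamma_5-hermitian kernel A (i.e. gamma5 @ A is hermitian).
gamma5 = np.diag([1.0] * (n // 2) + [-1.0] * (n // 2))
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (H + H.conj().T) / 2                 # hermitian, plays the role of gamma5 @ A
A = gamma5 @ H

# Matrix sign function of gamma5 @ A via its eigendecomposition.
w, V = np.linalg.eigh(gamma5 @ A)
sign = V @ np.diag(np.sign(w)) @ V.conj().T

# Massless overlap operator.
D = (np.eye(n) + gamma5 @ sign) / a

lhs = D @ gamma5 + gamma5 @ D
rhs = a * D @ gamma5 @ D
print("Ginsparg-Wilson violation:", np.linalg.norm(lhs - rhs))   # close to machine precision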
[ { "math_id": 0, "text": "m" }, { "math_id": 1, "text": "a" }, { "math_id": 2, "text": "\n\tD_{\\text{ov}} = \\frac1a \\left(\\left(1+am\\right) \\mathbf{1} + \\left(1-am\\right)\\gamma_5 \\mathrm{sign}[\\gamma_5 A]\\right)\\, " }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "\\gamma_5 A = A^\\dagger\\gamma_5" }, { "math_id": 5, "text": "\\gamma_5" }, { "math_id": 6, "text": "\n\tA = aD - \\mathbf 1(1+s)\\, " }, { "math_id": 7, "text": "D" }, { "math_id": 8, "text": "s\\in\\left(-1,1\\right)" }, { "math_id": 9, "text": "D_\\text{ov}" }, { "math_id": 10, "text": "pa=0" }, { "math_id": 11, "text": "\n\tD_\\text{ov} = m+i\\, {p\\!\\!\\!/}\\frac{1}{1+s}+\\mathcal{O}(a)\\, " }, { "math_id": 12, "text": "pa=\\pi" }, { "math_id": 13, "text": "\n\tD_\\text{ov} = \\frac1a+m+i\\,{p\\!\\!\\!/}\\frac{1}{1-s}+\\mathcal{O}(a)\n" } ]
https://en.wikipedia.org/wiki?curid=72041443
72041620
Domain wall fermion
Lattice fermion discretisation In lattice field theory, domain wall (DW) fermions are a fermion discretization avoiding the fermion doubling problem. They are a realisation of Ginsparg–Wilson fermions in the infinite separation limit formula_0 where they become equivalent to overlap fermions. DW fermions have undergone numerous improvements since Kaplan's original formulation, such as the reinterpretation by Shamir and the generalisation to Möbius DW fermions by Brower, Neff and Orginos. The original formula_1-dimensional Euclidean spacetime is lifted into formula_2 dimensions. The additional dimension of length formula_3 has open boundary conditions and the so-called domain walls form its boundaries. The physics is now found to "live" on the domain walls and the doublers are located on opposite walls, that is, at formula_0 they completely decouple from the system. Kaplan's (and equivalently Shamir's) DW Dirac operator is defined by two addends formula_4 with formula_5 where formula_6 is the chiral projection operator and formula_7 is the canonical Dirac operator in formula_1 dimensions. formula_8 and formula_9 are (multi-)indices in the physical space whereas formula_10 and formula_11 denote the position in the additional dimension. DW fermions do not contradict the Nielsen–Ninomiya theorem because they explicitly violate chiral symmetry (asymptotically obeying the Ginsparg–Wilson equation). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
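The structure of the extra-dimensional part of the operator is easy to assemble explicitly. The NumPy sketch below (an illustration only; the values of L_s and m and the toy 4×4 γ5 are arbitrary, and only the fifth-dimension factor is built, with the physical Dirac operator omitted) constructs it as an L_s × L_s grid of spinor blocks, showing the nearest-neighbour hopping through the chiral projectors and the mass term connecting the two domain walls.

import numpy as np

Ls = 6                                   # extent of the fifth dimension
m = 0.1                                  # mass parameter coupling the two walls
I4 = np.eye(4)

gamma5 = np.diag([1.0, 1.0, -1.0, -1.0]) # toy chiral representation of gamma_5
P_plus = (I4 + gamma5) / 2               # chiral projectors
P_minus = (I4 - gamma5) / 2

# D5[s, r] blocks, each a 4x4 spinor matrix, following the formula above.
D5 = np.zeros((Ls, Ls, 4, 4))
for s in range(Ls):
    for r in range(Ls):
        block = np.zeros((4, 4))
        if s == r:
            block += I4
        if r == s + 1 and s != Ls - 1:
            block -= P_minus             # hopping "forward" in the fifth dimension
        if r == s - 1 and s != 0:
            block -= P_plus              # hopping "backward" in the fifth dimension
        if s == Ls - 1 and r == 0:
            block += m * P_minus         # mass term linking the two walls
        if s == 0 and r == Ls - 1:
            block += m * P_plus
        D5[s, r] = block

# Show which (s, r) blocks are non-zero: a tridiagonal band plus the two corners.
print((np.abs(D5).sum(axis=(2, 3)) > 0).astype(int))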
[ { "math_id": 0, "text": "L_s\\rightarrow\\infty" }, { "math_id": 1, "text": "d" }, { "math_id": 2, "text": "d+1" }, { "math_id": 3, "text": "L_s" }, { "math_id": 4, "text": "\n\tD_\\text{DW}(x,s;y,r) = D(x;y)\\delta_{sr} + \\delta_{xy}D_{d+1}(s;r)\\, " }, { "math_id": 5, "text": "\n\tD_{d+1}(s;r) = \\delta_{sr} - (1-\\delta_{s,L_s-1})P_-\\delta_{s+1,r} - (1-\\delta_{s0})P_+\\delta_{s-1,r} + m\\left(P_-\\delta_{s,L_s-1}\\delta_{0r} + P_+\\delta_{s0}\\delta_{L_s-1,r}\\right)\\, " }, { "math_id": 6, "text": "P_\\pm=(\\mathbf1\\pm\\gamma_5)/2" }, { "math_id": 7, "text": "D" }, { "math_id": 8, "text": "x" }, { "math_id": 9, "text": "y" }, { "math_id": 10, "text": "s" }, { "math_id": 11, "text": "r" } ]
https://en.wikipedia.org/wiki?curid=72041620
7204363
Interchange of limiting operations
In mathematics, the study of interchange of limiting operations is one of the major concerns of mathematical analysis, in that two given limiting operations, say "L" and "M", cannot be "assumed" to give the same result when applied in either order. One of the historical sources for this theory is the study of trigonometric series. Formulation. In symbols, the assumption "LM" = "ML", where the left-hand side means that "M" is applied first, then "L", and "vice versa" on the right-hand side, is not a valid equation between mathematical operators, under all circumstances and for all operands. An algebraist would say that the operations do not commute. The approach taken in analysis is somewhat different. Conclusions that assume limiting operations do 'commute' are called "formal". The analyst tries to delineate conditions under which such conclusions are valid; in other words mathematical rigour is established by the specification of some set of sufficient conditions for the formal analysis to hold. This approach justifies, for example, the notion of uniform convergence. It is relatively rare for such sufficient conditions to be also necessary, so that a sharper piece of analysis may extend the domain of validity of formal results. Professionally speaking, therefore, analysts push the envelope of techniques, and expand the meaning of "well-behaved" for a given context. G. H. Hardy wrote that "The problem of deciding whether two given limit operations are commutative is one of the most important in mathematics". An opinion apparently not in favour of the piece-wise approach, but of leaving analysis at the level of heuristic, was that of Richard Courant. Examples. Examples abound, one of the simplest being that for a double sequence "a""m","n": it is not necessarily the case that the operations of taking the limits as "m" → ∞ and as "n" → ∞ can be freely interchanged. For example take "a""m","n" = 2"m" − "n" in which taking the limit first with respect to "n" gives 0, and with respect to "m" gives ∞. Many of the fundamental results of infinitesimal calculus also fall into this category: the symmetry of partial derivatives, differentiation under the integral sign, and Fubini's theorem deal with the interchange of differentiation and integration operators. One of the major reasons why the Lebesgue integral is used is that theorems exist, such as the dominated convergence theorem, that give sufficient conditions under which integration and limit operation can be interchanged. Necessary and sufficient conditions for this interchange were discovered by Federico Cafiero. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(f_n)" }, { "math_id": 1, "text": "f" }, { "math_id": 2, "text": "f'" } ]
https://en.wikipedia.org/wiki?curid=7204363
7204577
Pullback attractor
In mathematics, the attractor of a random dynamical system may be loosely thought of as a set to which the system evolves after a long enough time. The basic idea is the same as for a deterministic dynamical system, but requires careful treatment because random dynamical systems are necessarily non-autonomous. This requires one to consider the notion of a pullback attractor or attractor in the pullback sense. Set-up and motivation. Consider a random dynamical system formula_0 on a complete separable metric space formula_1, where the noise is chosen from a probability space formula_2 with base flow formula_3. A naïve definition of an attractor formula_4 for this random dynamical system would be to require that for any initial condition formula_5, formula_6 as formula_7. This definition is far too limited, especially in dimensions higher than one. A more plausible definition, modelled on the idea of an omega-limit set, would be to say that a point formula_8 lies in the attractor formula_4 if and only if there exists an initial condition, formula_5, and there is a sequence of times formula_9 such that formula_10 as formula_11. This is not too far from a working definition. However, we have not yet considered the effect of the noise formula_12, which makes the system non-autonomous (i.e. it depends explicitly on time). For technical reasons, it becomes necessary to do the following: instead of looking formula_13 seconds into the "future", and considering the limit as formula_7, one "rewinds" the noise formula_13 seconds into the "past", and evolves the system through formula_13 seconds using the same initial condition. That is, one is interested in the pullback limit formula_14. So, for example, in the pullback sense, the omega-limit set for a (possibly random) set formula_15 is the random set formula_16 Equivalently, this may be written as formula_17 Importantly, in the case of a deterministic dynamical system (one without noise), the pullback limit coincides with the deterministic forward limit, so it is meaningful to compare deterministic and random omega-limit sets, attractors, and so forth. Several examples of pullback attractors of non-autonomous dynamical systems are presented analytically and numerically. Definition. The pullback attractor (or random global attractor) formula_18 for a random dynamical system is a formula_19-almost surely unique random set such that formula_26 almost surely. There is a slight abuse of notation in the above: the first use of "dist" refers to the Hausdorff semi-distance from a point to a set, formula_27 whereas the second use of "dist" refers to the Hausdorff semi-distance between two sets, formula_28 As noted in the previous section, in the absence of noise, this definition of attractor coincides with the deterministic definition of the attractor as the minimal compact invariant set that attracts all bounded deterministic sets. Theorems relating omega-limit sets to attractors. The attractor as a union of omega-limit sets. If a random dynamical system has a compact random absorbing set formula_29, then the random global attractor is given by formula_30 where the union is taken over all bounded sets formula_25. Bounding the attractor within a deterministic set. Crauel (1999) proved that if the base flow formula_31 is ergodic and formula_32 is a deterministic compact set with formula_33 then formula_34 formula_19-almost surely. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\varphi" }, { "math_id": 1, "text": "(X, d)" }, { "math_id": 2, "text": "(\\Omega, \\mathcal{F}, \\mathbb{P})" }, { "math_id": 3, "text": "\\vartheta : \\mathbb{R} \\times \\Omega \\to \\Omega" }, { "math_id": 4, "text": "\\mathcal{A}" }, { "math_id": 5, "text": "x_{0} \\in X" }, { "math_id": 6, "text": "\\varphi(t, \\omega) x_{0} \\to \\mathcal{A}" }, { "math_id": 7, "text": "t \\to + \\infty" }, { "math_id": 8, "text": "a \\in X" }, { "math_id": 9, "text": "t_{n} \\to + \\infty" }, { "math_id": 10, "text": "d \\left( \\varphi(t_{n}, \\omega) x_{0}, a \\right) \\to 0" }, { "math_id": 11, "text": "n \\to \\infty" }, { "math_id": 12, "text": "\\omega" }, { "math_id": 13, "text": "t" }, { "math_id": 14, "text": "\\lim_{t \\to + \\infty} \\varphi (t, \\vartheta_{-t} \\omega)" }, { "math_id": 15, "text": "B(\\omega) \\subseteq X" }, { "math_id": 16, "text": "\\Omega_{B} (\\omega) := \\left\\{ x \\in X \\left| \\exists t_{n} \\to + \\infty, \\exists b_{n} \\in B(\\vartheta_{-t_{n}} \\omega) \\mathrm{\\,s.t.\\,} \\varphi (t_{n}, \\vartheta_{-t_{n}} \\omega) b_{n} \\to x \\mathrm{\\,as\\,} n \\to \\infty \\right. \\right\\}." }, { "math_id": 17, "text": "\\Omega_{B} (\\omega) = \\bigcap_{t \\geq 0} \\overline{\\bigcup_{s \\geq t} \\varphi (s, \\vartheta_{-s} \\omega) B(\\vartheta_{-s} \\omega)}." }, { "math_id": 18, "text": "\\mathcal{A} (\\omega)" }, { "math_id": 19, "text": "\\mathbb{P}" }, { "math_id": 20, "text": "\\mathcal{A} (\\omega) \\subseteq X" }, { "math_id": 21, "text": "\\omega \\mapsto \\mathrm{dist} (x, \\mathcal{A} (\\omega))" }, { "math_id": 22, "text": "(\\mathcal{F}, \\mathcal{B}(X))" }, { "math_id": 23, "text": "x \\in X" }, { "math_id": 24, "text": "\\varphi (t, \\omega) ( \\mathcal{A} (\\omega) ) = \\mathcal{A} (\\vartheta_{t} \\omega)" }, { "math_id": 25, "text": "B \\subseteq X" }, { "math_id": 26, "text": "\\lim_{t \\to + \\infty} \\mathrm{dist} \\left( \\varphi (t, \\vartheta_{-t} \\omega) (B), \\mathcal{A} (\\omega) \\right) = 0" }, { "math_id": 27, "text": "\\mathrm{dist} (x, A) := \\inf_{a \\in A} d(x, a)," }, { "math_id": 28, "text": "\\mathrm{dist} (B, A) := \\sup_{b \\in B} \\inf_{a \\in A} d(b, a)." }, { "math_id": 29, "text": "K" }, { "math_id": 30, "text": "\\mathcal{A} (\\omega) = \\overline{\\bigcup_{B} \\Omega_{B} (\\omega)}," }, { "math_id": 31, "text": "\\vartheta" }, { "math_id": 32, "text": "D \\subseteq X" }, { "math_id": 33, "text": "\\mathbb{P} \\left( \\mathcal{A} (\\cdot) \\subseteq D \\right) > 0," }, { "math_id": 34, "text": "\\mathcal{A} (\\omega) = \\Omega_{D} (\\omega)" } ]
https://en.wikipedia.org/wiki?curid=7204577
7204602
Pfister form
Quadratic form In mathematics, a Pfister form is a particular kind of quadratic form, introduced by Albrecht Pfister in 1965. In what follows, quadratic forms are considered over a field "F" of characteristic not 2. For a natural number "n", an "n"-fold Pfister form over "F" is a quadratic form of dimension 2"n" that can be written as a tensor product of quadratic forms formula_0 for some nonzero elements "a"1, ..., "a""n" of "F". (Some authors omit the signs in this definition; the notation here simplifies the relation to Milnor K-theory, discussed below.) An "n"-fold Pfister form can also be constructed inductively from an ("n"−1)-fold Pfister form "q" and a nonzero element "a" of "F", as formula_1. So the 1-fold and 2-fold Pfister forms look like: formula_2. formula_3 For "n" ≤ 3, the "n"-fold Pfister forms are norm forms of composition algebras. In that case, two "n"-fold Pfister forms are isomorphic if and only if the corresponding composition algebras are isomorphic. In particular, this gives the classification of octonion algebras. The "n"-fold Pfister forms additively generate the "n"-th power "I" "n" of the fundamental ideal of the Witt ring of "F". Characterizations. A quadratic form "q" over a field "F" is multiplicative if, for vectors of indeterminates x and y, we can write "q"(x)."q"(y) = "q"(z) for some vector z of rational functions in the x and y over "F". Isotropic quadratic forms are multiplicative. For anisotropic quadratic forms, Pfister forms are multiplicative, and conversely. For "n"-fold Pfister forms with "n" ≤ 3, this had been known since the 19th century; in that case "z" can be taken to be bilinear in "x" and "y", by the properties of composition algebras. It was a remarkable discovery by Pfister that "n"-fold Pfister forms for all "n" are multiplicative in the more general sense here, involving rational functions. For example, he deduced that for any field "F" and any natural number "n", the set of sums of 2"n" squares in "F" is closed under multiplication, using that the quadratic form formula_4 is an "n"-fold Pfister form (namely, formula_5). Another striking feature of Pfister forms is that every isotropic Pfister form is in fact hyperbolic, that is, isomorphic to a direct sum of copies of the hyperbolic plane formula_6. This property also characterizes Pfister forms, as follows: If "q" is an anisotropic quadratic form over a field "F", and if "q" becomes hyperbolic over every extension field "E" such that "q" becomes isotropic over "E", then "q" is isomorphic to "a"φ for some nonzero "a" in "F" and some Pfister form φ over "F". Connection with "K"-theory. Let "k""n"("F") be the "n"-th Milnor "K"-group modulo 2. There is a homomorphism from "k""n"("F") to the quotient "I""n"/"I""n"+1 in the Witt ring of "F", given by formula_7 where the image is an "n"-fold Pfister form. The homomorphism is surjective, since the Pfister forms additively generate "I""n". One part of the Milnor conjecture, proved by Orlov, Vishik and Voevodsky, states that this homomorphism is in fact an isomorphism "k""n"("F") ≅ "I""n"/"I""n"+1. That gives an explicit description of the abelian group "I""n"/"I""n"+1 by generators and relations. The other part of the Milnor conjecture, proved by Voevodsky, says that "k""n"("F") (and hence "I""n"/"I""n"+1) maps isomorphically to the Galois cohomology group "H""n"("F", F2). Pfister neighbors. 
A Pfister neighbor is an anisotropic form σ which is isomorphic to a subform of "a"φ for some nonzero "a" in "F" and some Pfister form φ with dim φ &lt; 2 dim σ. The associated Pfister form φ is determined up to isomorphism by σ. Every anisotropic form of dimension 3 is a Pfister neighbor; an anisotropic form of dimension 4 is a Pfister neighbor if and only if its discriminant in "F"*/("F"*)2 is trivial. A field "F" has the property that every 5-dimensional anisotropic form over "F" is a Pfister neighbor if and only if it is a linked field. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\langle\\!\\langle a_1, a_2, \\ldots , a_n \\rangle\\!\\rangle \\cong \\langle 1, -a_1 \\rangle \\otimes \\langle 1, -a_2 \\rangle \\otimes \\cdots \\otimes \\langle 1, -a_n \\rangle," }, { "math_id": 1, "text": "q \\oplus (-a)q" }, { "math_id": 2, "text": "\\langle\\!\\langle a\\rangle\\!\\rangle\\cong \\langle 1, -a \\rangle = x^2 - ay^2" }, { "math_id": 3, "text": "\\langle\\!\\langle a,b\\rangle\\!\\rangle\\cong \\langle 1, -a, -b, ab \\rangle = x^2 - ay^2 - bz^2 + abw^2." }, { "math_id": 4, "text": "x_1^2 +\\cdots + x_{2^n}^2" }, { "math_id": 5, "text": "\\langle\\!\\langle -1, \\ldots , -1 \\rangle\\!\\rangle" }, { "math_id": 6, "text": "\\langle 1, -1 \\rangle" }, { "math_id": 7, "text": " \\{a_1,\\ldots,a_n\\} \\mapsto \\langle\\!\\langle a_1, a_2, \\ldots , a_n \\rangle\\!\\rangle ," } ]
https://en.wikipedia.org/wiki?curid=7204602
7204725
Base flow (random dynamical systems)
In mathematics, the base flow of a random dynamical system is the dynamical system defined on the "noise" probability space that describes how to "fast forward" or "rewind" the noise when one wishes to change the time at which one "starts" the random dynamical system. Definition. In the definition of a random dynamical system, one is given a family of maps formula_0 on a probability space formula_1. The measure-preserving dynamical system formula_2 is known as the base flow of the random dynamical system. The maps formula_3 are often known as shift maps since they "shift" time. The base flow is often ergodic. The parameter formula_4 may be chosen to run over Each map formula_3 is required Furthermore, as a family, the maps formula_3 satisfy the relations In other words, the maps formula_3 form a commutative monoid (in the cases formula_20 and formula_21) or a commutative group (in the cases formula_22 and formula_23). Example. In the case of random dynamical system driven by a Wiener process formula_24, where formula_1 is the two-sided classical Wiener space, the base flow formula_0 would be given by formula_25. This can be read as saying that formula_3 "starts the noise at time formula_4 instead of time 0". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\vartheta_{s} : \\Omega \\to \\Omega" }, { "math_id": 1, "text": "(\\Omega, \\mathcal{F}, \\mathbb{P})" }, { "math_id": 2, "text": "(\\Omega, \\mathcal{F}, \\mathbb{P}, \\vartheta)" }, { "math_id": 3, "text": "\\vartheta_{s}" }, { "math_id": 4, "text": "s" }, { "math_id": 5, "text": "\\mathbb{R}" }, { "math_id": 6, "text": "[0, + \\infty) \\subsetneq \\mathbb{R}" }, { "math_id": 7, "text": "\\mathbb{Z}" }, { "math_id": 8, "text": "\\mathbb{N} \\cup \\{ 0 \\}" }, { "math_id": 9, "text": "(\\mathcal{F}, \\mathcal{F})" }, { "math_id": 10, "text": "E \\in \\mathcal{F}" }, { "math_id": 11, "text": "\\vartheta_{s}^{-1} (E) \\in \\mathcal{F}" }, { "math_id": 12, "text": "\\mathbb{P}" }, { "math_id": 13, "text": "\\mathbb{P} (\\vartheta_{s}^{-1} (E)) = \\mathbb{P} (E)" }, { "math_id": 14, "text": "\\vartheta_{0} = \\mathrm{id}_{\\Omega} : \\Omega \\to \\Omega" }, { "math_id": 15, "text": "\\Omega" }, { "math_id": 16, "text": "\\vartheta_{s} \\circ \\vartheta_{t} = \\vartheta_{s + t}" }, { "math_id": 17, "text": "t" }, { "math_id": 18, "text": "\\vartheta_{s}^{-1} = \\vartheta_{-s}" }, { "math_id": 19, "text": "- s" }, { "math_id": 20, "text": "s \\in \\mathbb{N} \\cup \\{ 0 \\}" }, { "math_id": 21, "text": "s \\in [0, + \\infty)" }, { "math_id": 22, "text": "s \\in \\mathbb{Z}" }, { "math_id": 23, "text": "s \\in \\mathbb{R}" }, { "math_id": 24, "text": "W : \\mathbb{R} \\times \\Omega \\to X" }, { "math_id": 25, "text": "W (t, \\vartheta_{s} (\\omega)) = W (t + s, \\omega) - W(s, \\omega)" } ]
https://en.wikipedia.org/wiki?curid=7204725
72048
X-ray fluorescence
Emission of secondary X-rays from a material excited by high-energy X-rays X-ray fluorescence (XRF) is the emission of characteristic "secondary" (or fluorescent) X-rays from a material that has been excited by being bombarded with high-energy X-rays or gamma rays. The phenomenon is widely used for elemental analysis and chemical analysis, particularly in the investigation of metals, glass, ceramics and building materials, and for research in geochemistry, forensic science, archaeology and art objects such as paintings. Underlying physics. When materials are exposed to short-wavelength X-rays or to gamma rays, ionization of their component atoms may take place. Ionization consists of the ejection of one or more electrons from the atom, and may occur if the atom is exposed to radiation with an energy greater than its ionization energy. X-rays and gamma rays can be energetic enough to expel tightly held electrons from the inner orbitals of the atom. The removal of an electron in this way makes the electronic structure of the atom unstable, and electrons in higher orbitals "fall" into the lower orbital to fill the hole left behind. In falling, energy is released in the form of a photon, the energy of which is equal to the energy difference of the two orbitals involved. Thus, the material emits radiation, which has energy characteristic of the atoms present. The term "fluorescence" is applied to phenomena in which the absorption of radiation of a specific energy results in the re-emission of radiation of a different energy (generally lower). Characteristic radiation. Each element has electronic orbitals of characteristic energy. Following removal of an inner electron by an energetic photon provided by a primary radiation source, an electron from an outer shell drops into its place. There are a limited number of ways in which this can happen, as shown in Figure 1. The main transitions are given names: an L→K transition is traditionally called Kα, an M→K transition is called Kβ, an M→L transition is called Lα, and so on. Each of these transitions yields a fluorescent photon with a characteristic energy equal to the difference in energy of the initial and final orbital. The wavelength of this fluorescent radiation can be calculated from Planck's Law: formula_0 The fluorescent radiation can be analysed either by sorting the energies of the photons (energy-dispersive analysis) or by separating the wavelengths of the radiation (wavelength-dispersive analysis). Once sorted, the intensity of each characteristic radiation is directly related to the amount of each element in the material. This is the basis of a powerful technique in analytical chemistry. Figure 2 shows the typical form of the sharp fluorescent spectral lines obtained in the wavelength-dispersive method (see Moseley's law). Primary radiation sources. In order to excite the atoms, a source of radiation is required, with sufficient energy to expel tightly held inner electrons. Conventional X-ray generators, based on electron bombardment of a heavy metal (i.e. tungsten or rhodium) target are most commonly used, because their output can readily be "tuned" for the application, and because higher power can be deployed relative to other techniques. X-ray generators in the range 20–60 kV are used, which allow excitation of a broad range of atoms. 
The continuous spectrum consists of "bremsstrahlung" radiation: radiation produced when high-energy electrons passing through the tube are progressively decelerated by the material of the tube anode (the "target"). A typical tube output spectrum is shown in Figure 3. For portable XRF spectrometers, copper target is usually bombared with high energy electrons, that are produced either by impact laser or by pyroelectric crystals. Alternatively, gamma ray sources, based on radioactive isotopes (such as 109Cd, 57Co, 55Fe, 238Pu and 241Am) can be used without the need for an elaborate power supply, allowing for easier use in small, portable instruments. When the energy source is a synchrotron or the X-rays are focused by an optic like a polycapillary, the X-ray beam can be very small and very intense. As a result, atomic information on the sub-micrometer scale can be obtained. Dispersion. In energy-dispersive analysis, the fluorescent X-rays emitted by the material sample are directed into a solid-state detector which produces a "continuous" distribution of pulses, the voltages of which are proportional to the incoming photon energies. This signal is processed by a multichannel analyzer (MCA) which produces an accumulating digital spectrum that can be processed to obtain analytical data. In wavelength-dispersive analysis, the fluorescent X-rays emitted by the sample are directed into a diffraction grating-based monochromator. The diffraction grating used is usually a single crystal. By varying the angle of incidence and take-off on the crystal, a small X-ray wavelength range can be selected. The wavelength obtained is given by Bragg's law: formula_1 where "d" is the spacing of atomic layers parallel to the crystal surface. Detection. In energy-dispersive analysis, dispersion and detection are a single operation, as already mentioned above. Proportional counters or various types of solid-state detectors (PIN diode, Si(Li), Ge(Li), silicon drift detector SDD) are used. They all share the same detection principle: An incoming X-ray photon ionizes a large number of detector atoms with the amount of charge produced being proportional to the energy of the incoming photon. The charge is then collected and the process repeats itself for the next photon. Detector speed is obviously critical, as all charge carriers measured have to come from the same photon to measure the photon energy correctly (peak length discrimination is used to eliminate events that seem to have been produced by two X-ray photons arriving almost simultaneously). The spectrum is then built up by dividing the energy spectrum into discrete bins and counting the number of pulses registered within each energy bin. EDXRF detector types vary in resolution, speed and the means of cooling (a low number of free charge carriers is critical in the solid state detectors): proportional counters with resolutions of several hundred eV cover the low end of the performance spectrum, followed by PIN diode detectors, while the Si(Li), Ge(Li) and SDDs occupy the high end of the performance scale. In wavelength-dispersive analysis, the single-wavelength radiation produced by the monochromator is passed into a photomultiplier (a detector similar to a Geiger counter) which counts individual photons as they pass through. The counter is a chamber containing a gas that is ionized by X-ray photons. 
A central electrode is charged at (typically) +1700 V with respect to the conducting chamber walls, and each photon triggers a pulse-like cascade of current across this field. The signal is amplified and transformed into an accumulating digital count. These counts are then processed to obtain analytical data. X-ray intensity. The fluorescence process is inefficient, and the secondary radiation is much weaker than the primary beam. Furthermore, the secondary radiation from lighter elements is of relatively low energy (long wavelength) and has low penetrating power, and is severely attenuated if the beam passes through air for any distance. Because of this, for high-performance analysis, the path from tube to sample to detector is maintained under vacuum (around 10 Pa residual pressure). This means in practice that most of the working parts of the instrument have to be located in a large vacuum chamber. The problems of maintaining moving parts in vacuum, and of rapidly introducing and withdrawing the sample without losing vacuum, pose major challenges for the design of the instrument. For less demanding applications, or when the sample is damaged by a vacuum (e.g. a volatile sample), a helium-swept X-ray chamber can be substituted, with some loss of low-Z (Z = atomic number) intensities. Chemical analysis. The use of a primary X-ray beam to excite fluorescent radiation from the sample was first proposed by Glocker and Schreiber in 1928. Today, the method is used as a non-destructive analytical technique, and as a process control tool in many extractive and processing industries. In principle, the lightest element that can be analysed is beryllium (Z = 4), but due to instrumental limitations and low X-ray yields for the light elements, it is often difficult to quantify elements lighter than sodium (Z = 11), unless background corrections and very comprehensive inter-element corrections are made. Energy dispersive spectrometry. In energy-dispersive spectrometers (EDX or EDS), the detector allows the determination of the energy of the photon when it is detected. Detectors historically have been based on silicon semiconductors, in the form of lithium-drifted silicon crystals, or high-purity silicon wafers. Si(Li) detectors. These consist essentially of a 3–5 mm thick silicon junction type p-i-n diode (same as PIN diode) with a bias of −1000 V across it. The lithium-drifted centre part forms the non-conducting i-layer, where Li compensates the residual acceptors which would otherwise make the layer p-type. When an X-ray photon passes through, it causes a swarm of electron-hole pairs to form, and this causes a voltage pulse. To obtain sufficiently low conductivity, the detector must be maintained at low temperature, and liquid-nitrogen cooling must be used for the best resolution. With some loss of resolution, the much more convenient Peltier cooling can be employed. Wafer detectors. More recently, high-purity silicon wafers with low conductivity have become routinely available. Cooled by the Peltier effect, this provides a cheap and convenient detector, although the liquid nitrogen cooled Si(Li) detector still has the best resolution (i.e. ability to distinguish different photon energies). Amplifiers. The pulses generated by the detector are processed by pulse-shaping amplifiers. 
It takes time for the amplifier to shape the pulse for optimum resolution, and there is therefore a trade-off between resolution and count-rate: long processing time for good resolution results in "pulse pile-up" in which the pulses from successive photons overlap. Multi-photon events are, however, typically more drawn out in time (photons did not arrive exactly at the same time) than single photon events and pulse-length discrimination can thus be used to filter most of these out. Even so, a small number of pile-up peaks will remain and pile-up correction should be built into the software in applications that require trace analysis. To make the most efficient use of the detector, the tube current should be reduced to keep multi-photon events (before discrimination) at a reasonable level, e.g. 5–20%. Processing. Considerable computer power is dedicated to correcting for pulse-pile up and for extraction of data from poorly resolved spectra. These elaborate correction processes tend to be based on empirical relationships that may change with time, so that continuous vigilance is required in order to obtain chemical data of adequate precision. Digital pulse processors are widely used in high performance nuclear instrumentation. They are able to effectively reduce pile-up and base line shifts, allowing for easier processing. A low pass filter is integrated, improving the signal to noise ratio. The Digital Pulse Processor requires a significant amount of energy to run, but it provides precise results. Usage. EDX spectrometers are different from WDX spectrometers in that they are smaller, simpler in design and have fewer engineered parts, however the accuracy and resolution of EDX spectrometers are lower than for WDX. EDX spectrometers can also use miniature X-ray tubes or gamma sources, which makes them cheaper and allows miniaturization and portability. This type of instrument is commonly used for portable quality control screening applications, such as testing toys for lead (Pb) content, sorting scrap metals, and measuring the lead content of residential paint. On the other hand, the low resolution and problems with low count rate and long dead-time makes them inferior for high-precision analysis. They are, however, very effective for high-speed, multi-elemental analysis. Field Portable XRF analysers currently on the market weigh less than 2 kg, and have limits of detection on the order of 2 parts per million of lead (Pb) in pure sand. Using a Scanning Electron Microscope and using EDX, studies have been broadened to organic based samples such as biological samples and polymers. Wavelength dispersive spectrometry. In wavelength dispersive spectrometers (WDX or WDS), the photons are separated by diffraction on a single crystal before being detected. Although wavelength dispersive spectrometers are occasionally used to scan a wide range of wavelengths, producing a spectrum plot as in EDS, they are usually set up to make measurements only at the wavelength of the emission lines of the elements of interest. This is achieved in two different ways: Sample preparation. In order to keep the geometry of the tube-sample-detector assembly constant, the sample is normally prepared as a flat disc, typically of diameter 20–50 mm. This is located at a standardized, small distance from the tube window. Because the X-ray intensity follows an inverse-square law, the tolerances for this placement and for the flatness of the surface must be very tight in order to maintain a repeatable X-ray flux. 
Ways of obtaining sample discs vary: metals may be machined to shape, minerals may be finely ground and pressed into a tablet, and glasses may be cast to the required shape. A further reason for obtaining a flat and representative sample surface is that the secondary X-rays from lighter elements often only emit from the top few micrometres of the sample. In order to further reduce the effect of surface irregularities, the sample is usually spun at 5–20 rpm. It is necessary to ensure that the sample is sufficiently thick to absorb the entire primary beam. For higher-Z materials, a few millimetres thickness is adequate, but for a light-element matrix such as coal, a thickness of 30–40 mm is needed. Monochromators. The common feature of monochromators is the maintenance of a symmetrical geometry between the sample, the crystal and the detector. In this geometry the Bragg diffraction condition is obtained. The X-ray emission lines are very narrow (see figure 2), so the angles must be defined with considerable precision. This is achieved in two ways: Flat crystal with Söller collimators. A Söller collimator is a stack of parallel metal plates, spaced a few tenths of a millimeter apart. To improve angular resolution, one must lengthen the collimator, and/or reduce the plate spacing. This arrangement has the advantage of simplicity and relatively low cost, but the collimators reduce intensity and increase scattering, and reduce the area of sample and crystal that can be "seen". The simplicity of the geometry is especially useful for variable-geometry monochromators. Curved crystal with slits. The Rowland circle geometry ensures that the slits are both in focus, but in order for the Bragg condition to be met at all points, the crystal must first be bent to a radius of 2R (where R is the radius of the Rowland circle), then ground to a radius of R. This arrangement allows higher intensities (typically 8-fold) with higher resolution (typically 4-fold) and lower background. However, the mechanics of keeping Rowland circle geometry in a variable-angle monochromator is extremely difficult. In the case of fixed-angle monochromators (for use in simultaneous spectrometers), crystals bent to a logarithmic spiral shape give the best focusing performance. The manufacture of curved crystals to acceptable tolerances increases their price considerably. Crystal materials. An intuitive understanding of X-ray diffraction can be obtained from the Bragg model of diffraction. In this model, a given reflection is associated with a set of evenly spaced sheets running through the crystal, usually passing through the centers of the atoms of the crystal lattice. The orientation of a particular set of sheets is identified by its three Miller indices ("h", "k", "l"), and let their spacing be noted by "d". William Lawrence Bragg proposed a model in which the incoming X-rays are scattered specularly (mirror-like) from each plane; from that assumption, X-rays scattered from adjacent planes will combine constructively (constructive interference) when the angle θ between the plane and the X-ray results in a path-length difference that is an integer multiple "n" of the X-ray wavelength λ.(Fig.7) formula_2 The desirable characteristics of a diffraction crystal are: Crystals with simple structures tend to give the best diffraction performance. Crystals containing heavy atoms can diffract well, but also fluoresce more in the higher energy region, causing interference. 
Crystals that are water-soluble, volatile or organic tend to give poor stability. Commonly used crystal materials include LiF (lithium fluoride), ADP (ammonium dihydrogen phosphate), Ge (germanium), Si (silicon), graphite, InSb (indium antimonide), PE ("tetrakis"-(hydroxymethyl)-methane, also known as pentaerythritol), KAP (potassium hydrogen phthalate), RbAP (rubidium hydrogen phthalate) and TlAP (thallium(I) hydrogen phthalate). In addition, there is an increasing use of "layered synthetic microstructures" (LSMs), which are "sandwich" structured materials comprising successive thick layers of low atomic number matrix, and monatomic layers of a heavy element. These can in principle be custom-manufactured to diffract any desired long wavelength, and are used extensively for elements in the range Li to Mg. In scientific methods that use X-ray/neutron or electron diffraction the before mentioned planes of a diffraction can be doubled to display higher order reflections. The given planes, resulting from Miller indices, can be calculated for a single crystal. The resulting values for "h,k and l" are then called Laue indices. So a single crystal can be variable in the way, that many reflection configurations of that crystal can be used to reflect different energy ranges. The Germanium (Ge111) crystal, for example, can also be used as a Ge333, Ge444 and more. For that reason the corresponding indices used for a particular experimental setup always get noted behind the crystal material(e.g. Ge111, Ge444) Notice, that the Ge222 configuration is forbidden due to diffraction rules stating, that all allowed reflections must be with all odd or all even Miller indices that, combined, result in formula_3, where formula_4 is the order of reflection. Elemental analysis lines. The spectral lines used for elemental analysis of chemicals are selected on the basis of intensity, accessibility by the instrument, and lack of line overlaps. Typical lines used, and their wavelengths, are as follows: Other lines are often used, depending on the type of sample and equipment available. Structural analysis lines. X-ray diffraction (XRD) is still the most used method for structural analysis of chemical compounds. Yet, with increasing detail on the relation of formula_5-line spectra and the surrounding chemical environment of the ionized metal atom, measurements of the so-called valence-to-core (V2C) energy region become increasingly viable. Scientists noted that after ionization of 3d-transition metal atom, the formula_5-line intensities and energies shift with oxidation state of the metal and with the species of ligand(s). Spin states in a compound tend to affect this kind of measurement. This means, that by intense study of these spectral lines, one can obtain several crucial pieces of information from a sample. Especially, if there are references that have been studied in detail and can be used to make out differences. The information collected from this kind of measurement include: These measurements are mostly done at synchrotron facilities, although a number of so-called "in-lab"-spectrometers have been developed and used for pre-beamtime (time at a synchrotron) measurements. Detectors. Detectors used for wavelength dispersive spectrometry need to have high pulse processing speeds in order to cope with the very high photon count rates that can be obtained. 
In addition, they need sufficient energy resolution to allow filtering-out of background noise and spurious photons from the primary beam or from crystal fluorescence. There are four common types of detector: Gas flow proportional counters are used mainly for detection of longer wavelengths. Gas flows through it continuously. Where there are multiple detectors, the gas is passed through them in series, then led to waste. The gas is usually 90% argon, 10% methane ("P10"), although the argon may be replaced with neon or helium where very long wavelengths (over 5 nm) are to be detected. The argon is ionised by incoming X-ray photons, and the electric field multiplies this charge into a measurable pulse. The methane suppresses the formation of fluorescent photons caused by recombination of the argon ions with stray electrons. The anode wire is typically tungsten or nichrome of 20–60 μm diameter. Since the pulse strength obtained is essentially proportional to the ratio of the detector chamber diameter to the wire diameter, a fine wire is needed, but it must also be strong enough to be maintained under tension so that it remains precisely straight and concentric with the detector. The window needs to be conductive, thin enough to transmit the X-rays effectively, but thick and strong enough to minimize diffusion of the detector gas into the high vacuum of the monochromator chamber. Materials often used are beryllium metal, aluminised PET film and aluminised polypropylene. Ultra-thin windows (down to 1 μm) for use with low-penetration long wavelengths are very expensive. The pulses are sorted electronically by "pulse height selection" in order to isolate those pulses deriving from the secondary X-ray photons being counted. Sealed gas detectors are similar to the gas flow proportional counter, except that the gas does not flow through it. The gas is usually krypton or xenon at a few atmospheres pressure. They are applied usually to wavelengths in the 0.15–0.6 nm range. They are applicable in principle to longer wavelengths, but are limited by the problem of manufacturing a thin window capable of withstanding the high pressure difference. Scintillation counters consist of a scintillating crystal (typically of sodium iodide doped with thallium) attached to a photomultiplier. The crystal produces a group of scintillations for each photon absorbed, the number being proportional to the photon energy. This translates into a pulse from the photomultiplier of voltage proportional to the photon energy. The crystal must be protected with a relatively thick aluminium/beryllium foil window, which limits the use of the detector to wavelengths below 0.25 nm. Scintillation counters are often connected in series with a gas flow proportional counter: the latter is provided with an outlet window opposite the inlet, to which the scintillation counter is attached. This arrangement is particularly used in sequential spectrometers. Semiconductor detectors can be used in theory, and their applications are increasing as their technology improves, but historically their use for WDX has been restricted by their slow response (see EDX). Extracting analytical results. At first sight, the translation of X-ray photon count-rates into elemental concentrations would appear to be straightforward: WDX separates the X-ray lines efficiently, and the rate of "generation" of secondary photons is proportional to the element concentration. 
However, the number of photons "leaving the sample" is also affected by the physical properties of the sample: so-called "matrix effects". These fall broadly into three categories: All elements "absorb" X-rays to some extent. Each element has a characteristic absorption spectrum which consists of a "saw-tooth" succession of fringes, each step-change of which has wavelength close to an emission line of the element. Absorption attenuates the secondary X-rays leaving the sample. For example, the mass absorption coefficient of silicon at the wavelength of the aluminium Kα line is 50 m2/kg, whereas that of iron is 377 m2/kg. This means that fluorescent X-rays generated by a given concentration of aluminium in a matrix of iron are absorbed about seven times more (that is 377/50) compared with the fluorescent X-rays generated by the same concentration of aluminium, but in a silicon matrix. That would lead to about one seventh of the count rate, once the X-rays are detected. Fortunately, mass absorption coefficients are well known and can be calculated. However, to calculate the absorption for a multi-element sample, the composition must be known. For analysis of an unknown sample, an iterative procedure is therefore used. To derive the mass absorption accurately, data for the concentration of elements not measured by XRF may be needed, and various strategies are employed to estimate these. As an example, in cement analysis, the concentration of oxygen (which is not measured) is calculated by assuming that all other elements are present as standard oxides. Enhancement occurs where the secondary X-rays emitted by a heavier element are sufficiently energetic to stimulate additional secondary emission from a lighter element. This phenomenon can also be modelled, and corrections can be made provided that the full matrix composition can be deduced. Sample macroscopic effects consist of effects of inhomogeneities of the sample, and unrepresentative conditions at its surface. Samples are ideally homogeneous and isotropic, but they often deviate from this ideal. Mixtures of multiple crystalline components in mineral powders can result in absorption effects that deviate from those calculable from theory. When a powder is pressed into a tablet, the finer minerals concentrate at the surface. Spherical grains tend to migrate to the surface more than do angular grains. In machined metals, the softer components of an alloy tend to smear across the surface. Considerable care and ingenuity are required to minimize these effects. Because they are artifacts of the method of sample preparation, these effects can not be compensated by theoretical corrections, and must be "calibrated in". This means that the calibration materials and the unknowns must be compositionally and mechanically similar, and a given calibration is applicable only to a limited range of materials. Glasses most closely approach the ideal of homogeneity and isotropy, and for accurate work, minerals are usually prepared by dissolving them in a borate glass, and casting them into a flat disc or "bead". Prepared in this form, a virtually universal calibration is applicable. Further corrections that are often employed include background correction and line overlap correction. The background signal in an XRF spectrum derives primarily from scattering of primary beam photons by the sample surface. Scattering varies with the sample mass absorption, being greatest when mean atomic number is low. 
When measuring trace amounts of an element, or when measuring on a variable light matrix, background correction becomes necessary. This is really only feasible on a sequential spectrometer. Line overlap is a common problem, bearing in mind that the spectrum of a complex mineral can contain several hundred measurable lines. Sometimes it can be overcome by measuring a less-intense, but overlap-free line, but in certain instances a correction is inevitable. For instance, the Kα is the only usable line for measuring sodium, and it overlaps the zinc Lβ (L2-M4) line. Thus zinc, if present, must be analysed in order to properly correct the sodium value. Other spectroscopic methods using the same principle. It is also possible to create a characteristic secondary X-ray emission using other incident radiation to excite the sample: When radiated by an X-ray beam, the sample also emits other radiations that can be used for analysis: The de-excitation also ejects Auger electrons, but Auger electron spectroscopy (AES) normally uses an electron beam as the probe. Confocal microscopy X-ray fluorescence imaging is a newer technique that allows control over depth, in addition to horizontal and vertical aiming, for example, when analysing buried layers in a painting. Instrument qualification. A 2001 review, addresses the application of portable instrumentation from QA/QC perspectives. It provides a guide to the development of a set of SOPs if regulatory compliance guidelines are not available. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\lambda = \\frac{h c}{E} " }, { "math_id": 1, "text": " n \\cdot \\lambda = 2 d \\cdot \\sin(\\theta)" }, { "math_id": 2, "text": "2 d\\sin\\theta = n\\lambda." }, { "math_id": 3, "text": "4n" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "K_{\\beta}" }, { "math_id": 6, "text": "K_{\\beta1,3}" }, { "math_id": 7, "text": "K_{\\beta'}" }, { "math_id": 8, "text": "K_{\\beta2,5}" }, { "math_id": 9, "text": "K_{\\beta''}" } ]
https://en.wikipedia.org/wiki?curid=72048
72049053
TUM School of Computation, Information and Technology
The TUM School of Computation, Information and Technology (CIT) is a school of the Technical University of Munich, established in 2022 by the merger of three former departments. As of 2022, it is structured into the Department of Mathematics, the Department of Computer Engineering, the Department of Computer Science, and the Department of Electrical Engineering. Department of Mathematics. The Department of Mathematics (MATH) is located at the Garching campus. History. Mathematics was taught from the beginning at the "Polytechnische Schule in München" and the later "Technische Hochschule München". Otto Hesse was the department's first professor for calculus, analytical geometry and analytical mechanics. Over the years, several institutes for mathematics were formed. In 1974, the Institute of Geometry was merged with the Institute of Mathematics to form the Department of Mathematics, and informatics, which had been part of the Institute of Mathematics, became a separate department. Research Groups. As of 2022, the research groups at the department are: Department of Computer Science. The Department of Computer Science (CS) is located at the Garching campus. History. The first courses in computer science at the Technical University of Munich were offered in 1967 at the Department of Mathematics, when Friedrich L. Bauer introduced a two-semester lecture titled "Information Processing". In 1968, Klaus Samelson started offering a second lecture cycle titled "Introduction to Informatics". By 1992, the computer science department had separated from the Department of Mathematics to form an independent Department of Informatics. In 2002, the department relocated from its old campus in the Munich city center to the new building on the Garching campus. In 2017, the Department celebrated "50 Years of Informatics Munich" with a series of lectures and ceremonies, together with the Ludwig Maximilian University of Munich and the Bundeswehr University Munich. Chairs. As of 2022, the department consists of the following chairs: Notable people. Seven faculty members of the Department of Informatics have been awarded the Gottfried Wilhelm Leibniz Prize, one of the highest endowed research prizes in Germany with a maximum of €2.5 million per award: Friedrich L. Bauer was awarded the 1988 IEEE Computer Society Computer Pioneer Award for inventing the stack data structure. Gerd Hirzinger was awarded the 2005 IEEE Robotics and Automation Society Pioneer Award. Hans-Arno Jacobsen and Burkhard Rost were awarded the Alexander von Humboldt Professorship in 2011 and 2008, respectively. Rudolf Bayer was known for inventing the B-tree and Red–black tree. Department of Electrical Engineering. The Department of Electrical Engineering (EE) is located at the Munich campus. History. The first lectures in the field of electricity at the "Polytechnische Schule München" were given as early as 1876 by the physicist Wilhelm von Bezold. Over the years, as the field of electrical engineering became increasingly important, a separate department for electrical engineering emerged within the mechanical engineering department. In 1967, the department was renamed the Faculty of Mechanical and Electrical Engineering, and six electrical engineering departments were permanently established. In April 1974, the formal establishment of the new TUM Department of Electrical and Computer Engineering took place. 
While still located in the Munich campus, a new building is currently in construction on the Garching campus and the department is expected to move by 2025. Professorships. As of 2022, the department consists of the following chairs and professorships: Department of Computer Engineering. The Department of Computer Engineering was separated from the former Department of Electrical and Computer Engineering as the result of merger into the School of Computation, Information and Technology. Professorships. As of 2022, the department consists of the following chairs and professorships: Building. The Department of Computer Science shares a building with the Department of Mathematics. In the building, two massive parabolic slides run from the fourth floor to the ground floor. Their shape corresponds to the equation formula_0 and is supposed to represent the "connection of science and art". Rankings. The Department of Computer Science has been consistently rated the top computer science department in Germany by major rankings. Globally, it ranks No. 29 (QS), No. 10 (THE), and within No. 51-75 (ARWU). In the 2020 national CHE University Ranking, the department is among the top rated departments for computer science and business informatics, being rated in the top group for the majority of criteria. The Department of Mathematics has been rated as one of the top mathematics departments in Germany, ranking 43rd in the world and 2nd in Germany (after the University of Bonn) in the QS World University Rankings, and within No. 51-75 in the Academic Ranking of World Universities. In Statistics &amp; Operational Research, QS ranks TUM first in Germany and 28th in the world. The Departments of Electrical and Computer Engineering are leading in Germany. In Electrical &amp; Electronic Engineering, TUM is rated 18th worldwide by QS and 22nd by ARWU. In engineering as a whole, TUM is ranked 20th globally and 1st nationally in the Times Higher Education World University Rankings. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "z=y=hx^2/d^2" } ]
https://en.wikipedia.org/wiki?curid=72049053
7204992
Norm variety
In mathematics, a norm variety is a particular type of algebraic variety "V" over a field "F", introduced for the purposes of algebraic K-theory by Voevodsky. The idea is to relate Milnor K-theory of "F" to geometric objects "V", having function fields "F"("V") that 'split' given 'symbols' (elements of Milnor K-groups). The formulation is that "p" is a given prime number, different from the characteristic of "F", and a symbol is the class mod "p" of an element formula_0 of the "n"-th Milnor K-group. A field extension is said to "split" the symbol, if its image in the K-group for that field is 0. The conditions on a norm variety "V" are that "V" is irreducible and a non-singular complete variety. Further it should have dimension "d" equal to formula_1 The key condition is in terms of the "d"-th Newton polynomial "s""d", evaluated on the (algebraic) total Chern class of the tangent bundle of "V". This number formula_2 should not be divisible by "p"2, it being known it is divisible by "p". Examples. These include ("n" = 2) cases of the Severi–Brauer variety and ("p" = 2) Pfister forms. There is an existence theorem in the general case (paper of Markus Rost cited). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\{a_1, \\dots, a_n\\}\\ " }, { "math_id": 1, "text": "p^{n - 1} - 1.\\ " }, { "math_id": 2, "text": "s_d(V)\\ " } ]
https://en.wikipedia.org/wiki?curid=7204992
7205088
Absorbing set (random dynamical systems)
In mathematics, an absorbing set for a random dynamical system is a subset of the phase space. A dynamical system is a system in which a function describes the time dependence of a point in a geometrical space. The absorbing set eventually contains the image of any bounded set under the cocycle ("flow") of the random dynamical system. As with many concepts related to random dynamical systems, it is defined in the pullback sense. Definition. Consider a random dynamical system "φ" on a complete separable metric space ("X", "d"), where the noise is chosen from a probability space (Ω, Σ, P) with base flow "θ" : R × Ω → Ω. A random compact set "K" : Ω → 2"X" is said to be absorbing if, for all "d"-bounded deterministic sets "B" ⊆ "X", there exists a () random time "τ""B" : Ω → 0, +∞) such that formula_0 This is a definition in the pullback sense, as indicated by the use of the negative time shift "θ"−"t".
[ { "math_id": 0, "text": "t \\geq \\tau_{B} (\\omega) \\implies \\varphi (t, \\theta_{-t} \\omega) B \\subseteq K(\\omega)." } ]
https://en.wikipedia.org/wiki?curid=7205088
7205312
Discrete series representation
Type of group representation for locally compact groups In mathematics, a discrete series representation is an irreducible unitary representation of a locally compact topological group "G" that is a subrepresentation of the left regular representation of "G" on L²("G"). In the Plancherel measure, such representations have positive measure. The name comes from the fact that they are exactly the representations that occur discretely in the decomposition of the regular representation. Properties. If "G" is unimodular, an irreducible unitary representation ρ of "G" is in the discrete series if and only if one (and hence all) matrix coefficient formula_0 with "v", "w" non-zero vectors is square-integrable on "G", with respect to Haar measure. When "G" is unimodular, the discrete series representation has a formal dimension "d", with the property that formula_1 for "v", "w", "x", "y" in the representation. When "G" is compact this coincides with the dimension when the Haar measure on "G" is normalized so that "G" has measure 1. Semisimple groups. Harish-Chandra (1965, 1966) classified the discrete series representations of connected semisimple groups "G". In particular, such a group has discrete series representations if and only if it has the same rank as a maximal compact subgroup "K". In other words, a maximal torus "T" in "K" must be a Cartan subgroup in "G". (This result required that the center of "G" be finite, ruling out groups such as the simply connected cover of SL(2,R).) It applies in particular to special linear groups; of these only SL(2,R) has a discrete series (for this, see the representation theory of SL(2,R)). Harish-Chandra's classification of the discrete series representations of a semisimple connected Lie group is given as follows. If "L" is the weight lattice of the maximal torus "T", a sublattice of "it" where "t" is the Lie algebra of "T", then there is a discrete series representation for every vector "v" of "L" + ρ, where ρ is the Weyl vector of "G", that is not orthogonal to any root of "G". Every discrete series representation occurs in this way. Two such vectors "v" correspond to the same discrete series representation if and only if they are conjugate under the Weyl group "W""K" of the maximal compact subgroup "K". If we fix a fundamental chamber for the Weyl group of "K", then the discrete series representation are in 1:1 correspondence with the vectors of "L" + ρ in this Weyl chamber that are not orthogonal to any root of "G". The infinitesimal character of the highest weight representation is given by "v" (mod the Weyl group "W""G" of "G") under the Harish-Chandra correspondence identifying infinitesimal characters of "G" with points of "t" ⊗ C/"W""G". So for each discrete series representation, there are exactly |"W""G"|/|"W""K"| discrete series representations with the same infinitesimal character. Harish-Chandra went on to prove an analogue for these representations of the Weyl character formula. In the case where "G" is not compact, the representations have infinite dimension, and the notion of "character" is therefore more subtle to define since it is a Schwartz distribution (represented by a locally integrable function), with singularities. The character is given on the maximal torus "T" by formula_2 When "G" is compact this reduces to the Weyl character formula, with "v" = "λ" + "ρ" for "λ" the highest weight of the irreducible representation (where the product is over roots α having positive inner product with the vector "v"). 
Harish-Chandra's regularity theorem implies that the character of a discrete series representation is a locally integrable function on the group. Limit of discrete series representations. Points "v" in the coset "L" + ρ orthogonal to roots of "G" do not correspond to discrete series representations, but those not orthogonal to roots of "K" are related to certain irreducible representations called limit of discrete series representations. There is such a representation for every pair ("v","C") where "v" is a vector of "L" + ρ orthogonal to some root of "G" but not orthogonal to any root of "K" corresponding to a wall of "C", and "C" is a Weyl chamber of "G" containing "v". (In the case of discrete series representations there is only one Weyl chamber containing "v" so it is not necessary to include it explicitly.) Two pairs ("v","C") give the same limit of discrete series representation if and only if they are conjugate under the Weyl group of "K". Just as for discrete series representations "v" gives the infinitesimal character. There are at most |"W""G"|/|"W""K"| limit of discrete series representations with any given infinitesimal character. Limit of discrete series representations are tempered representations, which means roughly that they only just fail to be discrete series representations. Constructions of the discrete series. Harish-Chandra's original construction of the discrete series was not very explicit. Several authors later found more explicit realizations of the discrete series.
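The remark above that the character reduces to the Weyl character formula when "G" is compact can be checked in the simplest compact example. The following sympy sketch (an illustration added here, not part of Harish-Chandra's construction) takes G = SU(2), whose Weyl group has two elements and which has a single positive root, and verifies that the alternating sum divided by the root factor equals the trace of the (λ+1)-dimensional irreducible representation:

```python
import sympy as sp

q = sp.symbols('q')   # q stands for e^{i*theta} on the maximal torus of SU(2)

for lam in range(5):
    v = lam + 1                      # v = lambda + rho, with rho = 1
    num = q**v - q**(-v)             # alternating sum over the Weyl group {+1, -1}
    den = q - 1/q                    # the single positive-root factor
    character = sp.cancel(num / den) # Weyl character formula for SU(2)
    # Trace of the (lam+1)-dimensional irreducible representation on the torus:
    trace = sum(q**k for k in range(-lam, lam + 1, 2))
    assert sp.simplify(character - trace) == 0
    assert trace.subs(q, 1) == lam + 1   # value at the identity = dimension
print("Weyl character formula checked for SU(2), highest weights 0..4")
```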
[ { "math_id": 0, "text": "\\langle \\rho(g)\\cdot v, w \\rangle \\," }, { "math_id": 1, "text": "d\\int \\langle \\rho(g)\\cdot v, w \\rangle \\overline{\\langle \\rho(g)\\cdot x, y \\rangle}dg =\\langle v, x \\rangle\\overline{\\langle w, y \\rangle}" }, { "math_id": 2, "text": "(-1)^{\\frac{\\dim(G)-\\dim(K)}{2}} {\\sum_{w\\in W_K}\\det(w)e^{w(v)}\\over \\prod_{(v,\\alpha)>0} \\left (e^{\\frac{\\alpha}{2}}-e^{-\\frac{\\alpha}{2}} \\right )}" } ]
https://en.wikipedia.org/wiki?curid=7205312
72054885
Tetraphenyllead
&lt;templatestyles src="Chembox/styles.css"/&gt; Chemical compound Tetraphenyllead is an organolead compound with the chemical formula Pb(C6H5)4, often abbreviated PbPh4. It is a white solid. Preparation. Tetraphenyllead can be produced by the reaction of phenylmagnesium bromide and lead chloride in diethyl ether. This is the method by which P. Pfeiffer and P. Truskier first produced tetraphenyllead in 1904. formula_0 Reactions. A solution of hydrogen chloride in ethanol can react with tetraphenyllead, replacing some of the phenyl groups with chlorine atoms: formula_1 formula_2 Just like tetrabutyllead, tetraphenyllead and sulfur react explosively at 150 °C, producing diphenyl sulfide and lead sulfide: formula_3 Tetraphenyllead reacts with iodine in chloroform to produce triphenyllead iodide.
[ { "math_id": 0, "text": "\\mathrm{ 4\\ (C_6H_5)MgBr +\\ 2\\ PbCl_2\\ \\xrightarrow[Et_2O]{} \\ Pb(C_6H_5)_4 +\\ Pb +\\ 4\\ MgBrCl }" }, { "math_id": 1, "text": "\\mathrm{ Pb(C_6H_5)_4 +\\ HCl\\ \\xrightarrow[Ethanol]{} \\ Pb(C_6H_5)_3Cl +\\ C_6H_6 }" }, { "math_id": 2, "text": "\\mathrm{ Pb(C_6H_5)_3Cl +\\ HCl\\ \\xrightarrow[Ethanol]{} \\ Pb(C_6H_5)_2Cl_2 +\\ C_6H_6 }" }, { "math_id": 3, "text": "\\mathrm{ Pb(C_6H_5)_4 +\\ 3\\ S\\ \\xrightarrow[]{} \\ PbS +\\ 2\\ S(C_6H_5)_2 }" } ]
https://en.wikipedia.org/wiki?curid=72054885
7205947
Young stellar object
Star in its early stage of evolution Young stellar object (YSO) denotes a star in its early stage of evolution. This class consists of two groups of objects: protostars and pre-main-sequence stars. Classification by spectral energy distribution. A star forms by accumulation of material that falls onto a protostar from a circumstellar disk or envelope. Material in the disk is cooler than the surface of the protostar, so it radiates at longer wavelengths of light, producing excess infrared emission. As material in the disk is depleted, the infrared excess decreases. Thus, YSOs are usually classified into evolutionary stages based on the slope of their spectral energy distribution in the mid-infrared, using a scheme introduced by Lada (1987). He proposed three classes (I, II and III), based on the values of intervals of spectral index formula_0: formula_1. Here formula_2 is wavelength, and formula_3 is flux density. The index formula_0 is calculated in the wavelength interval of 2.2–20 formula_4 (near- and mid-infrared region). Andre "et al." (1993) discovered a class 0: objects with strong submillimeter emission, but very faint at formula_5. Greene "et al." (1994) added a fifth class of "flat spectrum" sources. In this scheme, Class 0 sources are undetectable at formula_6, Class I sources have formula_7, flat-spectrum sources have formula_8, Class II sources have formula_9, and Class III sources have formula_10. This classification schema roughly reflects the evolutionary sequence. It is believed that the most deeply embedded Class 0 sources evolve towards the Class I stage, dissipating their circumstellar envelopes. Eventually they become optically visible on the stellar birthline as pre-main-sequence stars. Class II objects have circumstellar disks and correspond roughly to classical T Tauri stars, while Class III stars have lost their disks and correspond approximately to weak-line T Tauri stars. Objects in an intermediate stage, where disks can only be detected at longer wavelengths (e.g., at formula_11), are known as transition-disk objects. Characteristics. YSOs are also associated with early star evolution phenomena: jets and bipolar outflows, masers, Herbig–Haro objects, and protoplanetary disks (circumstellar disks or proplyds). Classification of YSOs by mass. These stars may be differentiated by mass: massive YSOs, intermediate-mass YSOs, and brown dwarfs. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
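The classification boundaries above translate directly into a small computation. The following sketch (illustrative only; the photometric values and the inclusive handling of boundary equalities are assumptions, not part of the article) estimates the spectral index α from two infrared flux-density measurements and assigns a Lada/Greene class:

```python
import numpy as np

def spectral_index(wavelengths_um, flux_densities):
    """Slope of log(lambda * F_lambda) versus log(lambda) from infrared photometry
    (wavelengths in micrometres, flux densities in arbitrary consistent units)."""
    lam = np.asarray(wavelengths_um, dtype=float)
    f_lam = np.asarray(flux_densities, dtype=float)
    alpha, _ = np.polyfit(np.log10(lam), np.log10(lam * f_lam), 1)
    return alpha

def lada_class(alpha):
    """Classify a YSO by its mid-infrared spectral index (boundaries as listed above)."""
    if alpha > 0.3:
        return "Class I"
    if alpha >= -0.3:
        return "Flat spectrum"
    if alpha >= -1.6:
        return "Class II"
    return "Class III"

# Hypothetical photometry at 2.2 and 20 micrometres.
alpha = spectral_index([2.2, 20.0], [1.0, 2.5])
print(alpha, lada_class(alpha))
```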
[ { "math_id": 0, "text": "\\alpha \\," }, { "math_id": 1, "text": "\\alpha=\\frac{d\\log(\\lambda F_\\lambda)}{d\\log(\\lambda)}" }, { "math_id": 2, "text": "\\lambda \\," }, { "math_id": 3, "text": "F_\\lambda" }, { "math_id": 4, "text": "{\\mu}m" }, { "math_id": 5, "text": "{\\lambda}<10{\\mu}m" }, { "math_id": 6, "text": "{\\lambda}<20{\\mu}m" }, { "math_id": 7, "text": "{\\alpha}>0.3" }, { "math_id": 8, "text": "0.3>{\\alpha}>-0.3" }, { "math_id": 9, "text": "-0.3>{\\alpha}>-1.6" }, { "math_id": 10, "text": "{\\alpha}<-1.6" }, { "math_id": 11, "text": "24{\\mu}m" } ]
https://en.wikipedia.org/wiki?curid=7205947
72062517
Hunter Snevily
American mathematician and physics professor Hunter Snevily (1956–2013) was an American mathematician with expertise and contributions in set theory, graph theory, discrete geometry, and Ramsey theory on the integers. Education and career. Hunter received his undergraduate degree from Emory University in 1981, and his Ph.D. degree from the University of Illinois Urbana-Champaign under the supervision of Douglas West in 1991. After a postdoctoral fellowship at Caltech, where he mentored many students, Hunter took a faculty position at the University of Idaho in 1993, where he was a professor until 2010. He retired early while fighting Parkinson's disease, but continued research in mathematics until his last days. Mathematics research. The following are some of Hunter's most important contributions: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "{\\mathcal L}" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "\\{A_1,A_2,\\ldots,A_m\\}" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "|A_i \\cap A_j| \\in {\\mathcal L}" }, { "math_id": 5, "text": "i \\neq j" }, { "math_id": 6, "text": "m \\leq \\sum_{i=0}^k{n-1 \\choose i}" }, { "math_id": 7, "text": "{\\mathcal F}" }, { "math_id": 8, "text": "\\{b_1,b_2,\\ldots,b_k\\}\\in{\\mathcal F}" }, { "math_id": 9, "text": "\\{a_1,a_2,\\ldots,a_k\\}\\in{\\mathcal F}" }, { "math_id": 10, "text": "a_i \\leq b_i" }, { "math_id": 11, "text": "1 \\leq i \\leq k" }, { "math_id": 12, "text": "x" }, { "math_id": 13, "text": "G" }, { "math_id": 14, "text": "\\{a_1,a_2,\\ldots,a_k\\}" }, { "math_id": 15, "text": "\\{b_1,b_2,\\ldots,b_k\\}" }, { "math_id": 16, "text": "\\pi" }, { "math_id": 17, "text": "[k]" }, { "math_id": 18, "text": "a_1+b_{\\pi}(1), a_2+b_{\\pi}(2), \\ldots, a_k+b_{\\pi}(k)" } ]
https://en.wikipedia.org/wiki?curid=72062517
7206425
Itô isometry
In mathematics, the Itô isometry, named after Kiyoshi Itô, is a crucial fact about Itô stochastic integrals. One of its main applications is to enable the computation of variances for random variables that are given as Itô integrals. Let formula_0 denote the canonical real-valued Wiener process defined up to time formula_1, and let formula_2 be a stochastic process that is adapted to the natural filtration formula_3 of the Wiener process. Then formula_4 where formula_5 denotes expectation with respect to classical Wiener measure. In other words, the Itô integral, as a function from the space formula_6 of square-integrable adapted processes to the space formula_7 of square-integrable random variables, is an isometry of normed vector spaces with respect to the norms induced by the inner products formula_8 and formula_9 As a consequence, the Itô integral respects these inner products as well, i.e. we can write formula_10 for formula_11 .
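The isometry can be illustrated numerically. The sketch below (an illustration only; the choice of integrand X_t = W_t, the horizon T = 1, and the discretization are assumptions made here) approximates both sides of the identity by Monte Carlo simulation of Brownian paths; both estimates should be close to T²/2 = 0.5:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 400, 20_000
dt = T / n_steps

# Brownian increments and paths; the adapted integrand is taken to be X_t = W_t.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])  # left endpoints, as the Itô integral requires

ito_integrals = np.sum(W_left * dW, axis=1)              # Itô sums approximating ∫_0^T W_t dW_t
lhs = np.mean(ito_integrals**2)                          # E[(∫ X_t dW_t)^2]
rhs = np.mean(np.sum(W_left**2, axis=1) * dt)            # E[∫ X_t^2 dt]
print(lhs, rhs, T**2 / 2)                                # all three should be close to 0.5
```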
[ { "math_id": 0, "text": "W : [0, T] \\times \\Omega \\to \\mathbb{R}" }, { "math_id": 1, "text": "T > 0" }, { "math_id": 2, "text": "X : [0, T] \\times \\Omega \\to \\mathbb{R}" }, { "math_id": 3, "text": "\\mathcal{F}_{*}^{W}" }, { "math_id": 4, "text": "\\operatorname{E} \\left[ \\left( \\int_0^T X_t \\, \\mathrm{d} W_t \\right)^2 \\right] = \\operatorname{E} \\left[ \\int_0^T X_t^2 \\, \\mathrm{d} t \\right]," }, { "math_id": 5, "text": "\\operatorname{E}" }, { "math_id": 6, "text": "L^2_{\\mathrm{ad}} ([0,T] \\times \\Omega)" }, { "math_id": 7, "text": "L^2 (\\Omega)" }, { "math_id": 8, "text": "\n\\begin{align}\n( X, Y )_{L^2_{\\mathrm{ad}} ([0,T] \\times \\Omega)} & := \\operatorname{E} \\left( \\int_0^T X_t \\, Y_t \\, \\mathrm{d} t \\right)\n\\end{align}\n" }, { "math_id": 9, "text": "( A, B )_{L^2 (\\Omega)} := \\operatorname{E} ( A B ) ." }, { "math_id": 10, "text": "\\operatorname{E} \\left[ \\left( \\int_0^T X_t \\, \\mathrm{d} W_t \\right) \\left( \\int_0^T Y_t \\, \\mathrm{d} W_t \\right) \\right] = \\operatorname{E} \\left[ \\int_0^T X_t Y_t \\, \\mathrm{d} t \\right]" }, { "math_id": 11, "text": "X, Y \\in L^2_{\\mathrm{ad}} ([0,T] \\times \\Omega)" } ]
https://en.wikipedia.org/wiki?curid=7206425
7207519
Functional square root
Function that, applied twice, gives another function In mathematics, a functional square root (sometimes called a half iterate) is a square root of a function with respect to the operation of function composition. In other words, a functional square root of a function "g" is a function "f" satisfying "f"("f"("x")) = "g"("x") for all "x". Notation. Notations expressing that "f" is a functional square root of "g" are "f" = "g"^[1/2] and "f" = "g"^(1/2). History. The problem goes back to Charles Babbage's study of the functional equation "f"("f"("x")) = "x", now known as Babbage's functional equation. The conjugate of any solution "f" by an arbitrary invertible function Ψ is also a solution. In other words, the group of all invertible functions on the real line formula_0 acts on the subset consisting of solutions to Babbage's functional equation by conjugation. Solutions. A systematic procedure to produce "arbitrary" functional "n"-roots (including arbitrary real, negative, and infinitesimal "n") of functions formula_1 relies on the solutions of Schröder's equation. Infinitely many trivial solutions exist when the domain of a root function "f" is allowed to be sufficiently larger than that of "g". [Figure: iterates of the sine function — sin[2]("x") = sin(sin("x")) (red curve); sin[1]("x") = sin("x") = rin(rin("x")) (blue curve); the half-iterate sin[1/2]("x") = rin("x") = qin(qin("x")) (orange curve); the quarter-iterate sin[1/4]("x") = qin("x") (black curve above the orange curve); sin[–1]("x") = arcsin("x") (dashed curve).] Examples. A functional square root of the "n"th Chebyshev polynomial, formula_2, is formula_3, which in general is not a polynomial. formula_4 is a functional square root of formula_5. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
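The two examples above can be checked numerically. The following sketch (illustrative; n = 3 and the test intervals are choices made here, and the Chebyshev check is restricted to the range where the principal branch of arccos composes cleanly) verifies both functional square roots:

```python
import numpy as np

n = 3
T3 = lambda x: 4*x**3 - 3*x                        # Chebyshev polynomial T_3
f  = lambda x: np.cos(np.sqrt(n) * np.arccos(x))   # claimed half-iterate of T_n

# With the principal branch of arccos, f(f(x)) = T_n(x) holds where
# sqrt(n)*arccos(x) stays in [0, pi], i.e. on [cos(pi/sqrt(n)), 1].
x = np.linspace(np.cos(np.pi / np.sqrt(n)) + 1e-9, 1.0, 1001)
assert np.allclose(f(f(x)), T3(x), atol=1e-8)

# f(x) = x/(sqrt(2) + x(1 - sqrt(2))) is a functional square root of g(x) = x/(2 - x).
s = np.sqrt(2)
half = lambda x: x / (s + x * (1 - s))
g    = lambda x: x / (2 - x)
x = np.linspace(0.0, 1.0, 1001)
assert np.allclose(half(half(x)), g(x))
print("functional square roots verified on the stated intervals")
```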
[ { "math_id": 0, "text": "\\mathbb{R}" }, { "math_id": 1, "text": "g: \\mathbb{C}\\rarr \\mathbb{C}" }, { "math_id": 2, "text": "g(x)=T_n(x)" }, { "math_id": 3, "text": "f(x) = \\cos{(\\sqrt{n}\\arccos(x))}" }, { "math_id": 4, "text": "f(x) = x / (\\sqrt{2} + x(1-\\sqrt{2}))" }, { "math_id": 5, "text": "g(x)=x / (2-x)" } ]
https://en.wikipedia.org/wiki?curid=7207519
72077175
Sum of two cubes
Mathematical polynomial formula In mathematics, the sum of two cubes is a cubed number added to another cubed number. Factorization. Every sum of cubes may be factored according to the identity formula_0 in elementary algebra. Binomial numbers generalize this factorization to higher odd powers. "SOAP" method. The mnemonic "SOAP", standing for "Same, Opposite, Always Positive", is sometimes used to memorize the correct placement of the addition and subtraction symbols while factorizing cubes. When applying this method to the factorization, "Same" represents the first term with the same sign as the original expression, "Opposite" represents the second term with the opposite sign as the original expression, and "Always Positive" represents the third term and is always positive. Proof. Starting with the expression formula_1, it is multiplied by "a" and by "b": formula_2 Distributing "a" and "b" over formula_1 gives formula_3 and cancelling the like terms leaves formula_4 Similarly for the difference of cubes, formula_5 Fermat's last theorem. Fermat's last theorem in the case of exponent 3 states that the sum of two non-zero integer cubes does not result in a non-zero integer cube. The first recorded proof of the exponent 3 case was given by Euler. Taxicab and Cabtaxi numbers. Taxicab numbers are numbers that can be expressed as a sum of two positive integer cubes in "n" distinct ways. The smallest taxicab number, after Ta(1), is 1729, expressed as formula_6 or formula_7. The smallest taxicab number expressed in 3 different ways is 87,539,319, expressed as formula_8, formula_9 or formula_10. Cabtaxi numbers are numbers that can be expressed as a sum of two cubes of integers (positive, negative, or zero) in "n" ways. The smallest cabtaxi number, after Cabtaxi(1), is 91, expressed as formula_11 or formula_12. The smallest cabtaxi number expressed in 3 different ways is 4104, expressed as formula_13, formula_14 or formula_15. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
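The factorization and the first taxicab example can be checked mechanically. The sketch below (illustrative only; the search bound of 30 is an arbitrary choice large enough to contain both representations of 1729) verifies the identity symbolically and brute-forces the smallest sum of two positive cubes expressible in two distinct ways:

```python
import itertools
from collections import defaultdict
import sympy as sp

# Symbolic check of the sum-of-cubes identity.
a, b = sp.symbols('a b')
assert sp.expand((a + b) * (a**2 - a*b + b**2) - (a**3 + b**3)) == 0

# Brute-force search for the smallest number expressible as a sum of two
# positive cubes in two distinct ways (the taxicab number Ta(2)).
ways = defaultdict(list)
for i, j in itertools.combinations_with_replacement(range(1, 30), 2):
    ways[i**3 + j**3].append((i, j))
ta2 = min(n for n, reps in ways.items() if len(reps) >= 2)
print(ta2, ways[ta2])   # 1729, [(1, 12), (9, 10)]
```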
[ { "math_id": 0, "text": " a^3 + b^3 = (a + b)(a^2 - ab + b^2) " }, { "math_id": 1, "text": "a^2-ab+b^2" }, { "math_id": 2, "text": " (a+b)(a^2-ab+b^2) = a(a^2-ab+b^2) + b(a^2-ab+b^2). " }, { "math_id": 3, "text": " a^3 - a^2 b + ab^2 + a^2b - ab^2 + b^3 " }, { "math_id": 4, "text": " a^3 + b^3 " }, { "math_id": 5, "text": "\n\\begin{align}\n (a-b)(a^2+ab+b^2) & = a(a^2+ab+b^2) - b(a^2+ab+b^2) \\\\\n& = a^3 + a^2 b + ab^2 \\; - a^2b - ab^2 - b^3 \\\\\n& = a^3 - b^3.\n\\end{align}" }, { "math_id": 6, "text": "1^3 +12^3" }, { "math_id": 7, "text": "9^3 + 10^3" }, { "math_id": 8, "text": "436^3 + 167^3" }, { "math_id": 9, "text": "423^3 + 228^3" }, { "math_id": 10, "text": "414^3 + 255^3" }, { "math_id": 11, "text": "3^3 + 4^3" }, { "math_id": 12, "text": "6^3 - 5^3" }, { "math_id": 13, "text": "16^3 + 2^3" }, { "math_id": 14, "text": "15^3 + 9^3" }, { "math_id": 15, "text": "-12^3+18^3" } ]
https://en.wikipedia.org/wiki?curid=72077175
72080313
Folgar-Tucker Model
Differential equation The Folgar–Tucker equation (FTE) is a widespread and commercially applied model describing fiber orientation in injection molding simulations of fiber composites. The equation is based on Jeffery's equation for fibers suspended in melts but, in addition, accounts for fiber–fiber interactions. Tucker and Advani then integrate over an ensemble of fibers and hence obtain an evolution equation for the orientation (alignment) tensor as a field. A compact way to express it is formula_0 The scalar quantities are the shear rate formula_1, the interaction coefficient C (for an isotropic diffusion) and the parameter formula_2 accounting for the fiber aspect ratio. formula_3 is a fourth-order tensor. Normally, formula_3 is expressed as a function of A; finding the best-suited function is known as the closure problem. D and W are respectively the symmetric and antisymmetric parts of the velocity gradient, while 1 represents the unit tensor. formula_4 denotes a contraction over two indices. Thus the Folgar–Tucker equation is a differential equation for the second-order tensor A, namely the orientation tensor. This evolution equation is formulated in the framework of continuum mechanics and is coupled to the velocity field. Since different closure forms can be inserted, many formulations of the equation are possible. For most closure forms the FTE results in a nonlinear differential equation (though a lemma to linearize it for some popular closures has been introduced). Analytical solutions to some versions of the FTE involve exponential, trigonometric and hyperbolic functions. Numerically, the FTE is also solved in commercial software for injection molding simulations. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
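As a rough illustration of how the equation is integrated in practice, the following sketch (not from the cited literature; the simple shear flow, the quadratic closure A4 : D ≈ A (A : D), and all parameter values are assumptions made here) solves the Folgar–Tucker equation numerically with scipy:

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma_dot = 1.0   # scalar shear rate (illustrative value)
xi = 1.0          # aspect-ratio parameter in the slender-fiber limit (assumption)
C_I = 0.01        # fiber-fiber interaction coefficient (assumption)

# Simple shear flow: velocity gradient, rate-of-deformation D and vorticity W.
L = np.array([[0.0, gamma_dot, 0.0],
              [0.0, 0.0,       0.0],
              [0.0, 0.0,       0.0]])
D = 0.5 * (L + L.T)
W = 0.5 * (L - L.T)
I = np.eye(3)

def folgar_tucker_rhs(t, a_flat):
    A = a_flat.reshape(3, 3)
    A4_D = A * np.tensordot(A, D)          # quadratic closure: A4 : D ~ A (A : D)
    dA = (W @ A - A @ W
          + xi * (D @ A + A @ D - 2.0 * A4_D)
          + 2.0 * C_I * gamma_dot * (I - 3.0 * A))
    return dA.ravel()

A0 = I / 3.0                               # isotropic initial fiber orientation
sol = solve_ivp(folgar_tucker_rhs, (0.0, 200.0), A0.ravel(), rtol=1e-8, atol=1e-10)
A_end = sol.y[:, -1].reshape(3, 3)
print(np.trace(A_end))                     # the trace of the orientation tensor stays 1
print(A_end)                               # orientation tensor after prolonged shearing
```

Swapping in another closure only changes the line computing A4_D; the rest of the right-hand side is taken directly from the compact form of the equation given above.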
[ { "math_id": 0, "text": "\n \\frac{dA}{dt}=W\\cdot A-A\\cdot W+\\xi(D\\cdot A+ A\\cdot D-2A_4:D)+2C\\dot{\\gamma}(1-3A).\n" }, { "math_id": 1, "text": "\\dot{\\gamma}" }, { "math_id": 2, "text": "\\xi" }, { "math_id": 3, "text": "A_4" }, { "math_id": 4, "text": ":" } ]
https://en.wikipedia.org/wiki?curid=72080313
72085266
Control coefficient (biochemistry)
In chemistry, control coefficients are used to describe how much influence (i.e., control) a given reaction step has on the steady-state flux or species concentration level. In practice, this can be accomplished by changing the expression level of a given enzyme and measuring the resulting changes in flux and metabolite levels. Control coefficients form a central component of metabolic control analysis. There are two primary control coefficients: The simplest way to look at control coefficients is as the scaled derivatives of the steady-state change in an observable with respect to a change in enzyme activity. For example, the flux control coefficients can be written as: formula_0 while the concentration control coefficients can be written as: formula_1 Control coefficients can have any value that includes negative and positive values. A negative value indicates that the observable in question decreases as a result of the change in enzyme activity. In theory, other observables, such as growth rate, or even combinations of observables, can be defined using a control coefficient. But flux and concentration control coefficients are by far the most commonly used. The approximation in terms of percentages makes control coefficients easier to measure and more intuitively understandable. Control coefficients are useful because they tell us how much influence each enzyme or protein has in a biochemical reaction network. It is important to note that control coefficients are not fixed values but will change depending on the state of the pathway or organism. If an organism shifts to a new nutritional source, then the control coefficients in the pathway will change. Formal Definition. One criticism of the concept of the control coefficient as defined above is that it is dependent on being described relative to a change in enzyme activity. Instead, the Berlin school defined control coefficients in terms of changes to local rates brought about by any suitable parameter, which could include changes to enzyme levels or the action of drugs. Hence a more general definition is given by the following expressions: formula_2 and concentration control coefficients by formula_3 In the above expression, formula_4 could be any convenient parameter. For example, a drug, changes in enzyme expression etc. The advantage is that the control coefficient becomes independent of the applied perturbation. For control coefficients defined in terms of changes in enzyme expression, it is often assumed that the effect on the local rate by changes to the enzyme activity is proportional so that: formula_5 Relationship to rate-limiting steps. In normal usage, the rate-limiting step or rate-determining step is defined as the slowest step of a chemical reaction that determines the speed (rate) at which the overall reaction proceeds. The flux control coefficients do not measure this kind of rate-limitingness. For example, in a linear chain of reactions at steady-state, all steps carry the same flux. That is, there is no slow or fast step with respect to the rate or speed of a reaction. The flux control coefficient, instead, measures how much influence a given step has on the steady-state flux. A step with a high flux control coefficient means that changing the activity of the step (by changing the expression level of the enzyme) will have a large effect on the steady-state flux through the pathway and vice versa. Historically the concept of the rate-limiting steps was also related to the notion of the master step. 
However, this drew much criticism due to a misunderstanding of the concept of the steady-state. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
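Because control coefficients are scaled derivatives, they can be estimated numerically by perturbing an enzyme activity and recomputing the steady state. The sketch below uses a hypothetical two-step pathway with linear kinetics (the kinetic form and all rate constants are assumptions made purely for illustration) and checks that the two flux control coefficients sum to 1, as the flux summation theorem of metabolic control analysis predicts:

```python
# Hypothetical pathway  X0 -> S -> X1  with the boundary species X0, X1 held fixed.
#   v1 = e1*(k1*X0 - k1r*S)   reversible, linear kinetics
#   v2 = e2*k2*S              irreversible, linear kinetics
k1, k1r, k2, X0 = 2.0, 0.5, 1.0, 1.0

def steady_state_flux(e1, e2):
    # Setting v1 = v2 gives the steady-state concentration of S, then the flux J = v2.
    S = e1 * k1 * X0 / (e1 * k1r + e2 * k2)
    return e2 * k2 * S

def flux_control_coefficient(which, e1=1.0, e2=1.0, rel=1e-6):
    """Scaled finite-difference estimate of C^J_ei = (dJ/dei) * (ei/J)."""
    J0 = steady_state_flux(e1, e2)
    if which == 1:
        J1 = steady_state_flux(e1 * (1 + rel), e2)
        return (J1 - J0) / (e1 * rel) * e1 / J0
    J1 = steady_state_flux(e1, e2 * (1 + rel))
    return (J1 - J0) / (e2 * rel) * e2 / J0

C1 = flux_control_coefficient(1)
C2 = flux_control_coefficient(2)
print(C1, C2, C1 + C2)   # about 0.667, 0.333, and 1.0 (summation theorem)
```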
[ { "math_id": 0, "text": "C_{e_i}^J=\\frac{d J}{d e_i} \\frac{e_i}{J}=\\frac{d \\ln J}{d \\ln e_i} \\approx \\frac{J \\%}{e_i \\%} " }, { "math_id": 1, "text": "C_{e_i}^{s_j}=\\frac{d s_j}{d e_i} \\frac{e_i}{s_j}=\\frac{d \\ln s_j}{d \\ln e_i} \\approx \\frac{s_j \\%}{e_i \\%}" }, { "math_id": 2, "text": " C^J_{v_i} = \\left( \\frac{dJ}{dp} \\frac{p}{J} \\right) \\bigg/ \\left( \\frac{\\partial v_i}{\\partial p}\\frac{p}{v_i} \\right) = \\frac{d\\ln J}{d\\ln v_i} " }, { "math_id": 3, "text": " C^s_{v_i} = \\left( \\frac{ds}{dp} \\frac{p}{s} \\right) \\bigg/ \\left( \\frac{\\partial v_i}{\\partial p} \\frac{p}{v_i} \\right) = \\frac{d\\ln s}{d\\ln v_i} " }, { "math_id": 4, "text": "p" }, { "math_id": 5, "text": " C^X_{v_i} = C^X_{e_i} " } ]
https://en.wikipedia.org/wiki?curid=72085266
72093353
Carol E. Anway
American physicist Carol Elizabeth Anway (also published as Carol Anway-Wiese; born 1965) is a retired American physicist known for her work on computational industrial physics for Boeing, and particularly on lightning protection for airplanes. Education and career. Anway grew up in Superior, Wisconsin, and studied physics and mathematics at Hamline University in Minnesota. She completed a Ph.D. in physics at the University of California, Los Angeles in 1995. Her dissertation, "Search for Rare Decays of the B Meson at 1.8-TeV formula_0 Collisions at CDF", concerned particle physics experiments on the Collider Detector at Fermilab, where she was part of a team that discovered the top quark. Her research was supervised by Thomas Müller, who later became a professor at the Karlsruhe Institute of Technology. After completing her Ph.D. she worked for Boeing, on military aircraft and on lightning protection for commercial aircraft. She retired from Boeing in 2020. Recognition. Anway was named as a Fellow of the American Physical Society (APS) in 2018, after a nomination from the APS Forum on Industrial &amp; Applied Physics, "for revolutionary advances in the areas of computational industrial physics, specifically in advanced simulation tools enabling modeling and predictive behavior of sensor and communication architectures in highly complex systems". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "p\\bar p" } ]
https://en.wikipedia.org/wiki?curid=72093353
7209841
Desargues configuration
Geometric configuration of ten points and lines In geometry, the Desargues configuration is a configuration of ten points and ten lines, with three points per line and three lines per point. It is named after Girard Desargues. The Desargues configuration can be constructed in two dimensions from the points and lines occurring in Desargues's theorem, in three dimensions from five planes in general position, or in four dimensions from the 5-cell, the four-dimensional regular simplex. It has a large group of symmetries, taking any point to any other point and any line to any other line. It is also self-dual, meaning that if the points are replaced by lines and vice versa using projective duality, the same configuration results. Graphs associated with the Desargues configuration include the Desargues graph (its graph of point-line incidences) and the Petersen graph (its graph of non-incident lines). The Desargues configuration is one of ten different configurations with ten points and lines, three points per line, and three lines per point, nine of which can be realized in the Euclidean plane. Constructions. Two dimensions. Two triangles formula_0 and formula_1 are said to be in perspective centrally if the lines formula_2, formula_3, and formula_4 meet in a common point, called the "center of perspectivity". They are in perspective axially if the intersection points of the corresponding triangle sides, formula_5, formula_6, and formula_7 all lie on a common line, the "axis of perspectivity". Desargues's theorem in geometry states that these two conditions are equivalent: if two triangles are in perspective centrally then they must also be in perspective axially, and vice versa. When this happens, the ten points and ten lines of the two perspectivities (the six triangle vertices, three crossing points, and center of perspectivity, and the six triangle sides, three lines through corresponding pairs of vertices, and axis of perspectivity) together form an instance of the Desargues configuration. Three dimensions. Although it may be embedded in two dimensions, the Desargues configuration has a very simple construction in three dimensions: for any configuration of five planes in general position in Euclidean space, the ten points where three planes meet and the ten lines formed by the intersection of two of the planes together form an instance of the configuration. This construction is closely related to the property that every projective plane that can be embedded into a 3-dimensional projective space obeys Desargues' theorem. This three-dimensional realization of the Desargues configuration is also called the complete pentahedron. Four dimensions. The 5-cell or pentatope (a regular simplex in four dimensions) has five vertices, ten edges, ten triangular ridges (2-dimensional faces), and five tetrahedral facets; the edges and ridges touch each other in the same pattern as the Desargues configuration. Extend each of the edges of the 5-cell to the line that contains it (its affine hull), similarly extend each triangle of the 5-cell to the 2-dimensional plane that contains it, and intersect these lines and planes by a three-dimensional hyperplane that neither contains nor is parallel to any of them. Each line intersects the hyperplane in a point, and each plane intersects the hyperplane in a line; these ten points and lines form an instance of the Desargues configuration. Symmetries. 
Although Desargues' theorem chooses different roles for its ten lines and points, the Desargues configuration itself is more symmetric: "any" of the ten points may be chosen to be the center of perspectivity, and that choice determines which six points will be the vertices of triangles and which line will be the axis of perspectivity. The Desargues configuration has a symmetry group formula_8 of order 120; that is, there are 120 different ways of permuting the points and lines of the configuration in a way that preserves its point-line incidences. The three-dimensional construction of the Desargues configuration makes these symmetries more readily apparent: if the configuration is generated from five planes in general position in three dimensions, then each of the 120 different permutations of these five planes corresponds to a symmetry of the configuration. The Desargues configuration is self-dual, meaning that it is possible to find a correspondence from points of one Desargues configuration to lines of a second configuration, and from lines of the first configuration to points of a second configuration, in such a way that all of the configuration's incidences are preserved. Graphs. The Levi graph of the Desargues configuration, a graph having one vertex for each point or line in the configuration, is known as the Desargues graph. Because of the symmetries and self-duality of the Desargues configuration, the Desargues graph is a symmetric graph. draws a different graph for this configuration, with ten vertices representing its ten lines, and with two vertices connected by an edge whenever the corresponding two lines do not meet at one of the points of the configuration. Alternatively, the vertices of this graph may be interpreted as representing the points of the Desargues configuration, in which case the edges connect pairs of points for which the line connecting them is not part of the configuration. This publication marks the first known appearance of the Petersen graph in the mathematical literature, 12 years before Julius Petersen's use of the same graph as a counterexample to an edge coloring problem. Related configurations. As a projective configuration, the Desargues configuration has the notation (103103), meaning that each of its ten points is incident to three lines and each of its ten lines is incident to three points. Its ten points can be viewed in a unique way as a pair of mutually inscribed pentagons, or as a self-inscribed decagon. The Desargues graph, a 20-vertex bipartite symmetric cubic graph, is so called because it can be interpreted as the Levi graph of the Desargues configuration, with a vertex for each point and line of the configuration and an edge for every incident point-line pair. There also exist eight other (103103) configurations (that is, sets of points and lines in the Euclidean plane with three lines per point and three points per line) that are not incidence-isomorphic to the Desargues configuration, one of which is shown at right. A tenth configuration exists as an abstract finite geometry but cannot be realized using Euclidean points and lines. In all of these configurations, each point has three other points that are not collinear with it. But in the Desargues configuration, these three points are always collinear with each other (if the chosen point is the center of perspectivity, then the three points form the axis of perspectivity) while in the other configuration shown in the illustration these three points form a triangle of three lines. 
As with the Desargues configuration, the other depicted configuration can be viewed as a pair of mutually inscribed pentagons. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
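The three-dimensional construction described earlier, from five planes in general position, can be checked numerically. The following sketch (an illustration only; randomly generated planes stand in for any five planes in general position) computes the ten triple-intersection points and verifies that each of the ten pairwise-intersection lines contains exactly three of them:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Five planes in general position, each given by a unit normal n and offset d: {x : n.x = d}.
normals = rng.normal(size=(5, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
offsets = rng.normal(size=5)

# Ten points: one for each triple of planes (solve the 3x3 linear system).
points = {
    triple: np.linalg.solve(normals[list(triple)], offsets[list(triple)])
    for triple in itertools.combinations(range(5), 3)
}

# Ten lines: one for each pair of planes.  A point lies on the line of a pair
# exactly when it satisfies both plane equations, i.e. when the pair is a
# subset of the point's defining triple.
for pair in itertools.combinations(range(5), 2):
    on_line = [t for t in points if set(pair) <= set(t)]
    assert len(on_line) == 3                     # three points per line
    for t in on_line:
        p = points[t]
        assert all(abs(normals[i] @ p - offsets[i]) < 1e-8 for i in pair)

print("10 points and 10 lines, with 3 points on each line and 3 lines through each point")
```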
[ { "math_id": 0, "text": "ABC" }, { "math_id": 1, "text": "abc" }, { "math_id": 2, "text": "Aa" }, { "math_id": 3, "text": "Bb" }, { "math_id": 4, "text": "Cc" }, { "math_id": 5, "text": "X=AB\\cap ab" }, { "math_id": 6, "text": "Y=AC\\cap ac" }, { "math_id": 7, "text": "Z=BC\\cap bc" }, { "math_id": 8, "text": "S_5" } ]
https://en.wikipedia.org/wiki?curid=7209841
72100476
Sparse identification of non-linear dynamics
Data-driven algorithm Sparse identification of nonlinear dynamics (SINDy) is a data-driven algorithm for obtaining dynamical systems from data. Given a series of snapshots of a dynamical system and its corresponding time derivatives, SINDy performs a sparsity-promoting regression (such as LASSO) on a library of nonlinear candidate functions of the snapshots against the derivatives to find the governing equations. This procedure relies on the assumption that most physical systems only have a few dominant terms which dictate the dynamics, given an appropriately selected coordinate system and quality training data. It has been applied to identify the dynamics of fluids, based on proper orthogonal decomposition, as well as other complex dynamical systems, such as biological networks. Mathematical Overview. First, consider a dynamical system of the form formula_0 where formula_1 is a state vector (snapshot) of the system at time formula_2 and the function formula_3 defines the equations of motion and constraints of the system. The time derivative may be either prescribed or numerically approximated from the snapshots. With formula_4 and formula_5 sampled at formula_6 equidistant points in time (formula_7), these can be arranged into matrices of the form formula_8 and similarly for formula_9. Next, a library formula_10 of nonlinear candidate functions of the columns of formula_11 is constructed, which may contain constant, polynomial, or more exotic functions (like trigonometric and rational terms, and so on): formula_12 The number of possible model structures arising from this library is combinatorially high. formula_3 is then replaced by formula_10 and a matrix of coefficient vectors formula_13 determining the active terms in formula_3: formula_14 Because only a few terms are expected to be active at each point in time, an assumption is made that formula_3 admits a sparse representation in formula_10. This then becomes an optimization problem of finding a sparse formula_15 which optimally embeds formula_9. In other words, a parsimonious model is obtained by performing least squares regression on the system above with sparsity-promoting (formula_16) regularization formula_17 where formula_18 is a regularization parameter. Finally, the sparse set of coefficients formula_19 can be used to reconstruct the dynamical system: formula_20
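A minimal sketch of the procedure is given below. It is not a reference implementation: the example system, the polynomial library, and the use of sequentially thresholded least squares in place of LASSO as the sparsity-promoting regression are all choices made here for brevity.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical ground truth: a linearly damped oscillator.
def true_rhs(t, s):
    x, y = s
    return [-0.1 * x + 2.0 * y, -2.0 * x - 0.1 * y]

t_eval = np.linspace(0, 25, 2500)
sol = solve_ivp(true_rhs, (0, 25), [2.0, 0.0], t_eval=t_eval, rtol=1e-10, atol=1e-10)
X = sol.y.T                                       # snapshot matrix (m x n)
X_dot = np.array([true_rhs(0, s) for s in X])     # derivatives (treated here as prescribed)

# Candidate library Theta(X): polynomials up to degree 2.
def library(X):
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x*x, x*y, y*y])

names = ["1", "x", "y", "x^2", "xy", "y^2"]
Theta = library(X)

# Sequentially thresholded least squares: a simple sparsity-promoting regression.
def stlsq(Theta, dXdt, threshold=0.05, iters=10):
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)[0]
    return Xi

Xi = stlsq(Theta, X_dot)
for k, var in enumerate(["dx/dt", "dy/dt"]):
    terms = [f"{Xi[i, k]:+.3f} {names[i]}" for i in range(len(names)) if Xi[i, k] != 0]
    print(var, "=", " ".join(terms))
```

With exact derivatives and a library containing the true terms, the thresholded regression recovers the coefficients −0.1 and ±2; with noisy or numerically differentiated data, the threshold and regularization must be tuned.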
[ { "math_id": 0, "text": "\\dot{\\textbf{x}}=\\frac{d}{dt}\\textbf{x}(t)=\\textbf{f}(\\textbf{x}(t))," }, { "math_id": 1, "text": "\\textbf{x}(t)\\in\\mathbb{R}^n" }, { "math_id": 2, "text": "t" }, { "math_id": 3, "text": "\\textbf{f}(\\textbf{x}(t))" }, { "math_id": 4, "text": "\\textbf{x}" }, { "math_id": 5, "text": "\\dot{\\textbf{x}}" }, { "math_id": 6, "text": "m" }, { "math_id": 7, "text": "t_1,t_2,\\cdots,t_m" }, { "math_id": 8, "text": "\\bf{X}=\\begin{bmatrix}\n\\bf{x}^T(t_1) \\\\ \\bf{x}^T(t_2) \\\\ \\vdots \\\\ \\bf{x}^T(t_m)\n\\end{bmatrix} =\n\\begin{bmatrix}x_1(t_1)&x_2(t_1)&\\cdots&x_n(t_1)\\\\\nx_1(t_2)&x_2(t_2)&\\cdots&x_n(t_2)\\\\\n\\vdots&\\vdots&\\ddots&\\vdots \\\\\nx_1(t_m)&x_2(t_m)&\\cdots&x_n(t_m)\n\\end{bmatrix}," }, { "math_id": 9, "text": "\\dot{\\textbf{X}}" }, { "math_id": 10, "text": "\\bf{\\Theta}(\\textbf{X})" }, { "math_id": 11, "text": "\\textbf{X}" }, { "math_id": 12, "text": "\\ \\ \\ \\bf{\\Theta}(\\bf{X})=\\begin{bmatrix}\n\\vline&\\vline&\\vline&\\vline& &\\vline&\\vline& \\\\\n1&\\bf{X}&\\bf{X}^2&\\bf{X}^3&\\cdots & \\sin(\\bf{X})&\\cos(\\bf{X})&\\cdots\\\\\n\\vline&\\vline&\\vline&\\vline& &\\vline&\\vline&\n\\end{bmatrix}" }, { "math_id": 13, "text": "\\bf{\\Xi}=\\left[\\bf{\\xi}_1 \\bf{\\xi}_2 \\cdots \\bf{\\xi}_n \\right]" }, { "math_id": 14, "text": "\\dot{\\bf{X}}=\\bf{\\Theta}(\\bf{X})\\bf{\\Xi}" }, { "math_id": 15, "text": "\\bf{\\Xi}" }, { "math_id": 16, "text": "L_1" }, { "math_id": 17, "text": "\\bf{\\xi}_k=\\underset{\\bf{\\xi}'_k}{\\arg\\min}\\left|\\left|\\dot{\\bf{X}}_k-\\bf{\\Theta}(\\bf{X})\\bf{\\xi}'_k\\right|\\right|_2 +\\lambda \\left|\\left|\\bf{\\xi}'_k\\right|\\right|_1," }, { "math_id": 18, "text": "\\lambda" }, { "math_id": 19, "text": "\\bf{\\xi}_k" }, { "math_id": 20, "text": "\\dot{x}_k=\\bf{\\Theta}(\\bf{x})\\bf{\\xi}_k" } ]
https://en.wikipedia.org/wiki?curid=72100476
7210212
Stochastic simulation
A stochastic simulation is a simulation of a system that has variables that can change stochastically (randomly) with individual probabilities. Realizations of these random variables are generated and inserted into a model of the system. Outputs of the model are recorded, and then the process is repeated with a new set of random values. These steps are repeated until a sufficient amount of data is gathered. In the end, the distribution of the outputs shows the most probable estimates as well as a frame of expectations regarding what ranges of values the variables are more or less likely to fall in. Often random variables inserted into the model are created on a computer with a random number generator (RNG). The U(0,1) uniform distribution outputs of the random number generator are then transformed into random variables with probability distributions that are used in the system model. Etymology. "Stochastic" originally meant "pertaining to conjecture"; from Greek stokhastikos "able to guess, conjecturing": from stokhazesthai "guess"; from stokhos "a guess, aim, target, mark". The sense of "randomly determined" was first recorded in 1934, from German Stochastik. Discrete-event simulation. In order to determine the next event in a stochastic simulation, the rates of all possible changes to the state of the model are computed, and then ordered in an array. Next, the cumulative sum of the array is taken, and the final cell contains the number R, where R is the total event rate. This cumulative array is now a discrete cumulative distribution, and can be used to choose the next event by picking a random number z~U(0,R) and choosing the first event, such that z is less than the rate associated with that event. Probability distributions. A probability distribution is used to describe the potential outcome of a random variable. Limits the outcomes where the variable can only take on discrete values. Bernoulli distribution. A random variable X is Bernoulli-distributed with parameter p if it has two possible outcomes usually encoded 1 (success or default) or 0 (failure or survival) where the probabilities of success and failure are formula_0 and formula_1 where formula_2. To produce a random variable X with a Bernoulli distribution from a U(0,1) uniform distribution made by a random number generator, we define formula_3 such that the probability for formula_4 and formula_5. Example: Toss of coin. Define formula_6 For a fair coin, both realizations are equally likely. We can generate realizations of this random variable X from a formula_7 uniform distribution provided by a random number generator (RNG) by having formula_8 if the RNG outputs a value between 0 and 0.5 and formula_9 if the RNG outputs a value between 0.5 and 1. formula_10 Of course, the two outcomes may not be equally likely (e.g. success of medical treatment). Binomial distribution. A binomial distributed random variable Y with parameters "n" and "p" is obtained as the sum of "n" independent and identically Bernoulli-distributed random variables "X"1, "X"2, ..., "X""n" Example: A coin is tossed three times. Find the probability of getting exactly two heads. This problem can be solved by looking at the sample space. There are three ways to get two heads. &lt;templatestyles src="Block indent/styles.css"/&gt;HHH, HHT, HTH, THH, TTH, THT, HTT, TTT The answer is 3/8 (= 0.375). Poisson distribution. A poisson process is a process where events occur randomly in an interval of time or space. 
The probability distribution for Poisson processes with constant rate "λ" per time interval is given by the following equation: formula_11 Defining formula_12 as the number of events that occur in the time interval formula_13, formula_14 It can be shown that the inter-arrival times for events are exponentially distributed, with a cumulative distribution function (CDF) of formula_15. The inverse of the exponential CDF is given by formula_16 where formula_17 is a formula_18 uniformly distributed random variable. Simulating a Poisson process with a constant rate formula_19 for the number of events formula_20 that occur in the interval formula_21 can be carried out with the following algorithm: begin with formula_22 and formula_23; generate a uniform random number "u" and update the time according to formula_24; if formula_25, stop, otherwise set formula_26 and repeat from the second step. Methods. Direct and first reaction methods. Published by Dan Gillespie in 1977, the direct method is a linear search on the cumulative array. See Gillespie algorithm. Gillespie's Stochastic Simulation Algorithm (SSA) is essentially an exact procedure for numerically simulating the time evolution of a well-stirred chemically reacting system by taking proper account of the randomness inherent in such a system. It is rigorously based on the same microphysical premise that underlies the chemical master equation and gives a more realistic representation of a system's evolution than the deterministic reaction rate equation (RRE) represented mathematically by ODEs. As with the chemical master equation, the SSA converges, in the limit of large numbers of reactants, to the same solution as the law of mass action. Next reaction method. Published in 2000 by Gibson and Bruck, the next reaction method improves over the first reaction method by reducing the amount of random numbers that need to be generated. To make the sampling of reactions more efficient, an indexed priority queue is used to store the reaction times. To make the computation of reaction propensities more efficient, a dependency graph is also used. This dependency graph tells which reaction propensities to update after a particular reaction has fired. While more efficient, the next reaction method requires more complex data structures than either direct simulation or the first reaction method. Optimised and sorting direct methods. Published in 2004 and 2005. These methods sort the cumulative array to reduce the average search depth of the algorithm. The former runs a presimulation to estimate the firing frequency of reactions, whereas the latter sorts the cumulative array on-the-fly. Logarithmic direct method. Published in 2006. This is a binary search on the cumulative array, thus reducing the worst-case time complexity of reaction sampling to O(log M). Partial-propensity methods. Published in 2009, 2010, and 2011 (Ramaswamy 2009, 2010, 2011). These use factored-out, partial reaction propensities to reduce the computational cost so that it scales with the number of species in the network, rather than the (larger) number of reactions. Four variants exist. The use of partial-propensity methods is limited to elementary chemical reactions, i.e., reactions with at most two different reactants. Every non-elementary chemical reaction can be equivalently decomposed into a set of elementary ones, at the expense of a linear (in the order of the reaction) increase in network size. Approximate Methods. A general drawback of stochastic simulations is that for big systems, too many events happen which cannot all be taken into account in a simulation. The following methods can dramatically improve simulation speed by some approximations. τ leaping method.
Since the SSA method keeps track of each transition, it would be impractical to implement for certain applications due to high time complexity. Gillespie proposed an approximation procedure, the tau-leaping method which decreases computational time with minimal loss of accuracy. Instead of taking incremental steps in time, keeping track of "X"("t") at each time step as in the SSA method, the tau-leaping method leaps from one subinterval to the next, approximating how many transitions take place during a given subinterval. It is assumed that the value of the leap, τ, is small enough that there is no significant change in the value of the transition rates along the subinterval ["t", "t" + "τ"]. This condition is known as the leap condition. The tau-leaping method thus has the advantage of simulating many transitions in one leap while not losing significant accuracy, resulting in a speed up in computational time. Conditional Difference Method. This method approximates reversible processes (which includes random walk/diffusion processes) by taking only net rates of the opposing events of a reversible process into account. The main advantage of this method is that it can be implemented with a simple if-statement replacing the previous transition rates of the model with new, effective rates. The model with the replaced transition rates can thus be solved, for instance, with the conventional SSA. Continuous simulation. While in discrete state space it is clearly distinguished between particular states (values) in continuous space it is not possible due to certain continuity. The system usually change over time, variables of the model, then change continuously as well. Continuous simulation thereby simulates the system over time, given differential equations determining the rates of change of state variables. Example of continuous system is "the predator/prey model" or cart-pole balancing Probability distributions. Normal distribution. The random variable X is said to be normally distributed with parameters μ and σ, abbreviated by "X" ∈ "N"("μ", "σ"2), if the density of the random variable is given by the formula formula_27 Many things actually are normally distributed, or very close to it. For example, height and intelligence are approximately normally distributed; measurement errors also often have a normal distribution. Exponential distribution. Exponential distribution describes the time between events in a Poisson process, i.e. a process in which events occur continuously and independently at a constant average rate. The exponential distribution is popular, for example, in queuing theory when we want to model the time we have to wait until a certain event takes place. Examples include the time until the next client enters the store, the time until a certain company defaults or the time until some machine has a defect. Student's t-distribution. Student's t-distribution are used in finance as probabilistic models of assets returns. The density function of the t-distribution is given by the following equation: formula_28 where formula_29 is the number of "degrees of freedom" and formula_30 is the gamma function. For large values of "n", the t-distribution doesn't significantly differ from a standard normal distribution. Usually, for values "n" &gt; 30, the t-distribution is considered as equal to the standard normal distribution. Combined simulation. It is often possible to model one and the same system by use of completely different world views. 
Discrete event simulation of a problem as well as continuous event simulation of it (continuous simulation with the discrete events that disrupt the continuous flow) may lead eventually to the same answers. Sometimes however, the techniques can answer different questions about a system. If we necessarily need to answer all the questions, or if we don't know what purposes is the model going to be used for, it is convenient to apply combined continuous/discrete methodology. Similar techniques can change from a discrete, stochastic description to a deterministic, continuum description in a time-and space dependent manner. The use of this technique enables the capturing of noise due to small copy numbers, while being much faster to simulate than the conventional Gillespie algorithm. Furthermore, the use of the deterministic continuum description enables the simulations of arbitrarily large systems. Monte Carlo simulation. Monte Carlo is an estimation procedure. The main idea is that if it is necessary to know the average value of some random variable and its distribution cannot be stated, and if it is possible to take samples from the distribution, we can estimate it by taking the samples, independently, and averaging them. If there are sufficient samples, then the law of large numbers says the average must be close to the true value. The central limit theorem says that the average has a Gaussian distribution around the true value. As a simple example, suppose we need to measure area of a shape with a complicated, irregular outline. The Monte Carlo approach is to draw a square around the shape and measure the square. Then we throw darts into the square, as uniformly as possible. The fraction of darts falling on the shape gives the ratio of the area of the shape to the area of the square. In fact, it is possible to cast almost any integral problem, or any averaging problem, into this form. It is necessary to have a good way to tell if you're inside the outline, and a good way to figure out how many darts to throw. Last but not least, we need to throw the darts uniformly, i.e., using a good random number generator. Application. There are wide possibilities for use of Monte Carlo Method: Random number generators. For simulation experiments (including Monte Carlo) it is necessary to generate random numbers (as values of variables). The problem is that the computer is highly deterministic machine—basically, behind each process there is always an algorithm, a deterministic computation changing inputs to outputs; therefore it is not easy to generate uniformly spread random numbers over a defined interval or set. A random number generator is a device capable of producing a sequence of numbers which cannot be "easily" identified with deterministic properties. This sequence is then called a "sequence of stochastic numbers". The algorithms typically rely on pseudorandom numbers, computer generated numbers mimicking true random numbers, to generate a realization, one possible outcome of a process. Methods for obtaining random numbers have existed for a long time and are used in many different fields (such as gaming). However, these numbers suffer from a certain bias. Currently the best methods expected to produce truly random sequences are natural methods that take advantage of the random nature of quantum phenomena. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
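The dart-throwing description above can be made concrete with a few lines of code. The following sketch (illustrative only) estimates the area of the unit disk, and hence π, by sampling points uniformly in an enclosing square:

```python
import numpy as np

rng = np.random.default_rng(42)
n_darts = 1_000_000

# Throw darts uniformly into the square [-1, 1] x [-1, 1] that encloses the unit disk.
darts = rng.uniform(-1.0, 1.0, size=(n_darts, 2))
inside = np.sum(darts[:, 0]**2 + darts[:, 1]**2 <= 1.0)

square_area = 4.0
disk_area_estimate = square_area * inside / n_darts
print(disk_area_estimate, np.pi)   # the estimate approaches pi as n_darts grows
```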
[ { "math_id": 0, "text": "P(X = 1) = p" }, { "math_id": 1, "text": "P(X = 0) = 1 - p" }, { "math_id": 2, "text": "0 \\leq p \\leq 1" }, { "math_id": 3, "text": "X = \\begin{cases} 1, & \\text{if } 0 \\leq U < p \\\\ 0, & \\text{if } 1 \\geq U \\geq p \\end{cases} " }, { "math_id": 4, "text": "P(X = 1) = P(0 \\leq U < p) = p" }, { "math_id": 5, "text": "P(X = 0) = P(1 \\geq U \\geq p) = 1 - p" }, { "math_id": 6, "text": "\nX = \\begin{cases}\n1 & \\text{if heads comes up} \\\\\n0 & \\text{if tails comes up}\n\\end{cases}\n" }, { "math_id": 7, "text": "U(1,0)" }, { "math_id": 8, "text": "X = 1" }, { "math_id": 9, "text": "X = 0" }, { "math_id": 10, "text": "\\begin{align}\n P (X = 1) &= P(0 \\leq U < 1/2) = 1/2 \\\\\n P (X = 0) &= P(1 \\geq U \\geq 1/2) = 1/2\n\\end{align}" }, { "math_id": 11, "text": "P(k \\text{ events in interval}) = \\frac{\\lambda^k e^{-\\lambda}}{k!}" }, { "math_id": 12, "text": "N(t)" }, { "math_id": 13, "text": "t" }, { "math_id": 14, "text": "P(N(t) = k) = \\frac{(t\\lambda)^{k}}{k!}e^{-t\\lambda}" }, { "math_id": 15, "text": "F(t) = 1 - e^{-t\\lambda}" }, { "math_id": 16, "text": "t = -\\frac{1}{\\lambda}\\ln(u)" }, { "math_id": 17, "text": "u" }, { "math_id": 18, "text": "U(0,1)" }, { "math_id": 19, "text": "\\lambda" }, { "math_id": 20, "text": "N" }, { "math_id": 21, "text": "[t_\\text{start},t_\\text{end}]" }, { "math_id": 22, "text": "N = 0" }, { "math_id": 23, "text": "t = t_\\text{start}" }, { "math_id": 24, "text": "t = t - \\ln(u) / \\lambda" }, { "math_id": 25, "text": "t > t_\\text{end}" }, { "math_id": 26, "text": "N = N + 1" }, { "math_id": 27, "text": "f_X(x) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}} e^{ -\\frac{(x-\\mu)^2}{2\\sigma^2} } , \\quad x \\in \\Reals." }, { "math_id": 28, "text": "f(t) = \\frac{\\Gamma(\\frac{\\nu+1}{2})} {\\sqrt{\\nu\\pi}\\,\\Gamma(\\frac{\\nu}{2})} \\left(1+\\frac{t^2}{\\nu} \\right)^{-\\frac{\\nu+1}{2}}," }, { "math_id": 29, "text": "\\nu" }, { "math_id": 30, "text": "\\Gamma" } ]
https://en.wikipedia.org/wiki?curid=7210212
72106025
P4-metric
The P4 metric enables performance evaluation of a binary classifier. It is calculated from precision, recall, specificity and NPV (negative predictive value). P4 is designed in a similar way to the F1 metric, but it addresses the criticisms leveled against F1. It may be perceived as its extension. Like the other known metrics, P4 is a function of: TP (true positives), TN (true negatives), FP (false positives), FN (false negatives). Justification. The key concept of P4 is to leverage the four key conditional probabilities: formula_0 - the probability that the sample is positive, provided the classifier result was positive. formula_1 - the probability that the classifier result will be positive, provided the sample is positive. formula_2 - the probability that the classifier result will be negative, provided the sample is negative. formula_3 - the probability the sample is negative, provided the classifier result was negative. The main assumption behind this metric is that a properly designed binary classifier should give results for which all the probabilities mentioned above are close to 1. P4 is designed so that formula_4 requires all of these probabilities to equal 1. It also goes to zero when any of these probabilities go to zero. Definition. P4 is defined as a harmonic mean of four key conditional probabilities: formula_5 In terms of TP,TN,FP,FN it can be calculated as follows: formula_6 It follows from the definition that formula_7: formula_8 is achieved only when all four key probabilities are close to 1, while formula_9 whenever any of them approaches zero. Evaluation of the binary classifier performance. Evaluating the performance of a binary classifier is a multidisciplinary concept. It spans from the evaluation of medical and psychiatric tests to machine learning classifiers from a variety of fields. Thus, many metrics in use exist under several names, some of them having been defined independently. &lt;templatestyles src="Reflist/styles.css" /&gt; Examples, comparing with the other metrics. Dependency table for selected metrics ("true" means depends, "false" - does not depend): Metrics that do not depend on a given probability are prone to misrepresentation when it approaches 0. Example 1: Rare disease detection test. Let us consider a medical test aimed at detecting a rare disease. The population size is 100,000, and 0.05% of the population is infected. Test performance: 95% of all positive individuals are classified correctly (TPR=0.95) and 95% of all negative individuals are classified correctly (TNR=0.95). In such a case, due to high population imbalance, in spite of having high test accuracy (0.95), the probability that an individual who has been classified as positive is in fact positive is very low: formula_10 And now we can observe how this low probability is reflected in some of the metrics: formula_11, formula_12, formula_13, formula_14. Example 2: Image recognition - cats vs dogs. We are training a neural-network-based image classifier. We are considering only two types of images: containing dogs (labeled as 0) and containing cats (labeled as 1). Thus, our goal is to distinguish between cats and dogs. The classifier overpredicts in favor of cats ("positive" samples): 99.99% of cats are classified correctly and only 1% of dogs are classified correctly. The image dataset consists of 100000 images, 90% of which are pictures of cats and 10% are pictures of dogs. In such a situation, the probability that a picture containing a dog will be classified correctly is quite low: formula_15 Not all of the metrics notice this low probability: formula_16, formula_17, formula_18, formula_19. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
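The definition in terms of TP, TN, FP and FN is straightforward to implement. The sketch below (illustrative; the fractional counts are an assumption used to mirror Example 1, so the printed values agree with the quoted ones only up to rounding) computes P4 alongside F1:

```python
def p4(tp, tn, fp, fn):
    """P4 as the harmonic mean of precision, recall, specificity and NPV,
    computed directly from the confusion-matrix counts."""
    return 4 * tp * tn / (4 * tp * tn + (tp + tn) * (fp + fn))

def f1(tp, tn, fp, fn):
    return 2 * tp / (2 * tp + fp + fn)

# Counts roughly matching Example 1: 100 000 individuals, 0.05 % infected, TPR = TNR = 0.95.
tp, fn = 47.5, 2.5
tn, fp = 94952.5, 4997.5
print(round(p4(tp, tn, fp, fn), 4), round(f1(tp, tn, fp, fn), 4))
```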
[ { "math_id": 0, "text": "P(+ \\mid C{+})" }, { "math_id": 1, "text": "P(C{+} \\mid +)" }, { "math_id": 2, "text": "P(C{-} \\mid -)" }, { "math_id": 3, "text": "P(- \\mid C{-})" }, { "math_id": 4, "text": "\\mathrm{P}_4 = 1" }, { "math_id": 5, "text": "\\mathrm{P}_4 = \\frac{4}{\\frac{1}{P(+ \\mid C{+})} + \\frac{1}{P(C{+} \\mid +)} + \\frac{1}{P(C{-} \\mid -)} + \\frac{1}{P(- \\mid C{-})}} = \\frac{4}{\\frac{1}{\\mathit{precision}} + \\frac{1}{\\mathit{recall}} + \\frac{1}{\\mathit{specificity}} + \\frac{1}{\\mathit{NPV}}}" }, { "math_id": 6, "text": "\\mathrm{P}_4 = \\frac{4\\cdot\\mathrm{TP}\\cdot\\mathrm{TN}}{4\\cdot\\mathrm{TP}\\cdot\\mathrm{TN} + (\\mathrm{TP} + \\mathrm{TN}) \\cdot (\\mathrm{FP} + \\mathrm{FN})}" }, { "math_id": 7, "text": "\\mathrm{P}_4 \\in [0,1]" }, { "math_id": 8, "text": "\\mathrm{P}_4 \\approx 1" }, { "math_id": 9, "text": "\\mathrm{P}_4 \\approx 0" }, { "math_id": 10, "text": "P(+ \\mid C{+}) = 0.0095 " }, { "math_id": 11, "text": "\\mathrm{P}_4 = 0.0370" }, { "math_id": 12, "text": "\\mathrm{F}_1 = 0.0188" }, { "math_id": 13, "text": "\\mathrm{J} = \\mathbf{0.9100}" }, { "math_id": 14, "text": "\\mathrm{MK} = 0.0095" }, { "math_id": 15, "text": "P(C-|-) = 0.01" }, { "math_id": 16, "text": "\\mathrm{P}_4 = 0.0388" }, { "math_id": 17, "text": "\\mathrm{F}_1 = \\mathbf{0.9478}" }, { "math_id": 18, "text": "\\mathrm{J} = 0.0099 " }, { "math_id": 19, "text": "\\mathrm{MK} = \\mathbf{0.8183} " } ]
https://en.wikipedia.org/wiki?curid=72106025
72112048
Ddbar lemma
Theorem in complex geometry In complex geometry, the formula_0 lemma (pronounced ddbar lemma) is a mathematical lemma about the de Rham cohomology class of a complex differential form. The formula_0-lemma is a result of Hodge theory and the Kähler identities on a compact Kähler manifold. Sometimes it is also known as the formula_1-lemma, due to the use of a related operator formula_2, with the relation between the two operators being formula_3 and so formula_4.1.17Lem 5.50 Statement. The formula_0 lemma asserts that if formula_5 is a compact Kähler manifold and formula_6 is a complex differential form of bidegree (p,q) (with formula_7) whose class formula_8 is zero in de Rham cohomology, then there exists a form formula_9 of bidegree (p-1,q-1) such that formula_10 where formula_11 and formula_12 are the Dolbeault operators of the complex manifold formula_13.Ch VI Lem 8.6 ddbar potential. The form formula_14 is called the formula_0-potential of formula_15. The inclusion of the factor formula_16 ensures that formula_17 is a "real" differential operator, that is if formula_15 is a differential form with real coefficients, then so is formula_14. This lemma should be compared to the notion of an exact differential form in de Rham cohomology. In particular if formula_18 is a closed differential k-form (on any smooth manifold) whose class is zero in de Rham cohomology, then formula_19 for some differential (k-1)-form formula_20 called the formula_21-potential (or just potential) of formula_15, where formula_21 is the exterior derivative. Indeed, since the Dolbeault operators sum to give the exterior derivative formula_22 and square to give zero formula_23, the formula_0-lemma implies that formula_24, refining the formula_21-potential to the formula_0-potential in the setting of compact Kähler manifolds. Proof. The formula_0-lemma is a consequence of Hodge theory applied to a compact Kähler manifold. The Hodge theorem for an elliptic complex may be applied to any of the operators formula_25 and respectively to their Laplace operators formula_26. To these operators one can define spaces of harmonic differential forms given by the kernels: formula_27 The Hodge decomposition theorem asserts that there are three orthogonal decompositions associated to these spaces of harmonic forms, given by formula_28 where formula_29 are the formal adjoints of formula_30 with respect to the Riemannian metric of the Kähler manifold, respectively.Thm. 3.2.8 These decompositions hold separately on any compact complex manifold. The importance of the manifold being Kähler is that there is a relationship between the Laplacians of formula_31 and hence of the orthogonal decompositions above. In particular on a compact Kähler manifold formula_32 which implies an orthogonal decomposition formula_33 where there are the further relations formula_34 relating the spaces of formula_35 and formula_36-harmonic forms.Prop. 3.1.12 As a result of the above decompositions, one can prove the following lemma. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Lemma (formula_0-lemma) — Let formula_37 be a formula_21-closed (p,q)-form on a compact Kähler manifold formula_13. Then the following are equivalent: The proof is as follows.Cor. 3.2.10 Let formula_39 be a closed (p,q)-form on a compact Kähler manifold formula_5. It follows quickly that (d) implies (a), (b), and (c). Moreover, the orthogonal decompositions above imply that any of (a), (b), or (c) imply (e). Therefore, the main difficulty is to show that (e) implies (d). 
To that end, suppose that formula_15 is orthogonal to the subspace formula_40. Then formula_41. Since formula_15 is formula_21-closed and formula_42, it is also formula_12-closed (that is formula_43). If formula_44 where formula_45 and formula_46 is contained in formula_47 then since this sum is from an orthogonal decomposition with respect to the inner product formula_48 induced by the Riemannian metric, formula_49 or in other words formula_50 and formula_51. Thus it is the case that formula_52. This allows us to write formula_53 for some differential form formula_54. Applying the Hodge decomposition for formula_11 to formula_55, formula_56 where formula_57 is formula_58-harmonic, formula_59 and formula_60. The equality formula_61 implies that formula_57 is also formula_62-harmonic and therefore formula_63. Thus formula_64. However, since formula_15 is formula_21-closed, it is also formula_11-closed. Then using a similar trick to above, formula_65 also applying the Kähler identity that formula_66. Thus formula_67 and setting formula_68 produces the formula_0-potential. Local version. A local version of the formula_0-lemma holds and can be proven without the need to appeal to the Hodge decomposition theorem.Ex 1.3.3, Rmk 3.2.11 It is the analogue of the Poincaré lemma or Dolbeault–Grothendieck lemma for the formula_0 operator. The local formula_0-lemma holds over any domain on which the aforementioned lemmas hold. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Lemma (Local formula_0-lemma) — Let formula_13 be a complex manifold and formula_37 be a differential form of bidegree (p,q) for formula_7. Then formula_15 is formula_21-closed if and only if for every point formula_69 there exists an open neighbourhood formula_70 containing formula_71 and a differential form formula_72 such that formula_38 on formula_73. The proof follows quickly from the aforementioned lemmas. Firstly observe that if formula_15 is locally of the form formula_74 for some formula_14 then formula_75 because formula_76, formula_77, and formula_78. On the other hand, suppose formula_15 is formula_21-closed. Then by the Poincaré lemma there exists an open neighbourhood formula_73 of any point formula_69 and a form formula_79 such that formula_80. Now writing formula_81 for formula_82 and formula_83 note that formula_84 and comparing the bidegrees of the forms in formula_85 implies that formula_86 and formula_87 and that formula_88. After possibly shrinking the size of the open neighbourhood formula_73, the Dolbeault–Grothendieck lemma may be applied to formula_89 and formula_90 (the latter because formula_91) to obtain local forms formula_92 such that formula_93 and formula_94. Noting then that formula_95 this completes the proof as formula_96 where formula_97. Bott–Chern cohomology. The Bott–Chern cohomology is a cohomology theory for compact complex manifolds which depends on the operators formula_11 and formula_12, and measures the extent to which the formula_0-lemma fails to hold. In particular when a compact complex manifold is a Kähler manifold, the Bott–Chern cohomology is isomorphic to the Dolbeault cohomology, but in general it contains more information. The Bott–Chern cohomology groups of a compact complex manifold are defined by formula_98 Since a differential form which is both formula_11 and formula_12-closed is formula_21-closed, there is a natural map formula_99 from Bott–Chern cohomology groups to de Rham cohomology groups. 
There are also maps to the formula_11 and formula_12 Dolbeault cohomology groups formula_100. When the manifold formula_13 satisfies the formula_0-lemma, for example if it is a compact Kähler manifold, then the above maps from Bott–Chern cohomology to Dolbeault cohomology are isomorphisms, and furthermore the map from Bott–Chern cohomology to de Rham cohomology is injective. As a consequence, there is an isomorphism formula_101 whenever formula_13 satisfies the formula_0-lemma. In this way, the kernels of the maps above measure the failure of the manifold formula_13 to satisfy the formula_0-lemma, and in particular measure the failure of formula_13 to be a Kähler manifold. Consequences for bidegree (1,1). The most significant consequence of the formula_0-lemma occurs when the complex differential form has bidegree (1,1). In this case the lemma states that an exact differential form formula_102 has a formula_0-potential given by a smooth function formula_103: formula_104 In particular this occurs in the case where formula_105 is a Kähler form restricted to a small open subset formula_106 of a Kähler manifold (this case follows from the local version of the lemma), where the aforementioned Poincaré lemma ensures that it is an exact differential form. This leads to the notion of a Kähler potential, a locally defined function which completely specifies the Kähler form. Another important case is when formula_107 is the difference of two Kähler forms which are in the same de Rham cohomology class formula_108. In this case formula_109 in de Rham cohomology, so the formula_0-lemma applies. By allowing (differences of) Kähler forms to be completely described using a single function, which is automatically a plurisubharmonic function, the study of compact Kähler manifolds can be undertaken using techniques of pluripotential theory, for which many analytical tools are available. For example, the formula_0-lemma is used to rephrase the Kähler–Einstein equation in terms of potentials, transforming it into a complex Monge–Ampère equation for the Kähler potential. ddbar manifolds. Complex manifolds which are not necessarily Kähler but still happen to satisfy the formula_0-lemma are known as formula_0-manifolds. For example, compact complex manifolds which are Fujiki class C satisfy the formula_0-lemma but are not necessarily Kähler. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
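As a concrete illustration of a ddbar-potential in the bidegree (1,1) case discussed above (a standard textbook computation, not drawn from this article's references), the Euclidean Kähler form on complex coordinate space admits the squared norm of the coordinates as a global potential:

```latex
% The Euclidean Kaehler form on C^n admits the global
% \partial\bar\partial-potential \varphi(z) = |z|^2
% (some authors include an extra factor 1/2 in the normalisation).
\[
  \varphi(z) \;=\; \sum_{j=1}^{n} z_j \bar z_j ,
  \qquad
  i\,\partial\bar\partial\varphi
  \;=\; i\,\partial\Bigl(\sum_{j=1}^{n} z_j \, d\bar z_j\Bigr)
  \;=\; i\sum_{j=1}^{n} dz_j \wedge d\bar z_j
  \;=\; \omega_{0} .
\]
```

Restricting a Kähler form to a small coordinate chart and applying the local version of the lemma produces exactly this kind of locally defined Kähler potential.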
[ { "math_id": 0, "text": "\\partial \\bar \\partial" }, { "math_id": 1, "text": "dd^c" }, { "math_id": 2, "text": "d^c = -\\frac{i}{2}(\\partial - \\bar \\partial)" }, { "math_id": 3, "text": "i\\partial \\bar \\partial = dd^c" }, { "math_id": 4, "text": "\\alpha = dd^c \\beta" }, { "math_id": 5, "text": "(X,\\omega)" }, { "math_id": 6, "text": "\\alpha \\in \\Omega^{p,q}(X)" }, { "math_id": 7, "text": "p,q\\ge 1" }, { "math_id": 8, "text": "[\\alpha] \\in H_{dR}^{p+q}(X,\\mathbb{C})" }, { "math_id": 9, "text": "\\beta\\in \\Omega^{p-1,q-1}(X)" }, { "math_id": 10, "text": "\\alpha = i\\partial \\bar \\partial \\beta," }, { "math_id": 11, "text": "\\partial" }, { "math_id": 12, "text": "\\bar \\partial" }, { "math_id": 13, "text": "X" }, { "math_id": 14, "text": "\\beta" }, { "math_id": 15, "text": "\\alpha" }, { "math_id": 16, "text": "i" }, { "math_id": 17, "text": "i\\partial \\bar \\partial" }, { "math_id": 18, "text": "\\alpha\\in \\Omega^k(X)" }, { "math_id": 19, "text": "\\alpha = d\\gamma" }, { "math_id": 20, "text": "\\gamma" }, { "math_id": 21, "text": "d" }, { "math_id": 22, "text": "d = \\partial + \\bar \\partial" }, { "math_id": 23, "text": "\\partial^2 = \\bar \\partial^2 = 0" }, { "math_id": 24, "text": "\\gamma = \\bar \\partial \\beta " }, { "math_id": 25, "text": "d, \\partial, \\bar \\partial" }, { "math_id": 26, "text": "\\Delta_d, \\Delta_{\\partial}, \\Delta_{\\bar \\partial}" }, { "math_id": 27, "text": "\\begin{align}\n\\mathcal{H}_d^k &= \\ker \\Delta_d : \\Omega^k(X) \\to \\Omega^k(X)\\\\\n\\mathcal{H}_{\\partial}^{p,q} &= \\ker \\Delta_{\\partial}: \\Omega^{p,q}(X) \\to \\Omega^{p,q}(X)\\\\\n\\mathcal{H}_{\\bar \\partial}^{p,q} &= \\ker \\Delta_{\\bar \\partial}: \\Omega^{p,q}(X) \\to \\Omega^{p,q}(X)\\\\\n\\end{align} " }, { "math_id": 28, "text": "\\begin{align}\n\\Omega^k(X) &= \\mathcal{H}_d^k \\oplus \\operatorname{im} d \\oplus \\operatorname{im} d^*\\\\\n\\Omega^{p,q}(X) &= \\mathcal{H}_{\\partial}^{p,q} \\oplus \\operatorname{im} \\partial \\oplus \\operatorname{im} \\partial^*\\\\\n\\Omega^{p,q}(X) &= \\mathcal{H}_{\\bar \\partial}^{p,q} \\oplus \\operatorname{im} \\bar \\partial \\oplus \\operatorname{im} \\bar \\partial^*\n\\end{align} " }, { "math_id": 29, "text": "d^*, \\partial^*, \\bar \\partial^* " }, { "math_id": 30, "text": "d,\\partial, \\bar\\partial " }, { "math_id": 31, "text": "d,\\partial,\\bar \\partial " }, { "math_id": 32, "text": "\\Delta_d = 2 \\Delta_{\\partial} = 2 \\Delta_{\\bar \\partial} " }, { "math_id": 33, "text": "\\mathcal{H}_d^k = \\bigoplus_{p+q=k} \\mathcal{H}_{\\partial}^{p,q} = \\bigoplus_{p+q=k} \\mathcal{H}_{\\bar \\partial}^{p,q} " }, { "math_id": 34, "text": "\\mathcal{H}_{\\partial}^{p,q} = \\overline{\\mathcal{H}_{\\bar \\partial}^{q,p}}" }, { "math_id": 35, "text": "\\partial " }, { "math_id": 36, "text": "\\bar \\partial " }, { "math_id": 37, "text": "\\alpha\\in \\Omega^{p,q}(X)" }, { "math_id": 38, "text": "\\alpha = i \\partial \\bar \\partial \\beta" }, { "math_id": 39, "text": "\\alpha\\in \\Omega^{p,q}(X) " }, { "math_id": 40, "text": "\\mathcal{H}_{\\bar \\partial}^{p,q} \\subset \\Omega^{p,q}(X)" }, { "math_id": 41, "text": "\\alpha \\in \\operatorname{im} \\bar \\partial \\oplus \\operatorname{im} \\bar \\partial^*" }, { "math_id": 42, "text": "d=\\partial + \\bar \\partial" }, { "math_id": 43, "text": "\\bar \\partial \\alpha = 0" }, { "math_id": 44, "text": "\\alpha = \\alpha' + \\alpha''" }, { "math_id": 45, "text": "\\alpha' \\in \\operatorname{im} \\bar \\partial" }, { "math_id": 46, "text": 
"\\alpha'' = \\bar \\partial^* \\gamma" }, { "math_id": 47, "text": "\\operatorname{im} \\bar \\partial^*" }, { "math_id": 48, "text": "\\langle - , - \\rangle" }, { "math_id": 49, "text": "\\langle \\alpha'', \\alpha''\\rangle = \\langle \\alpha, \\alpha'' \\rangle = \\langle \\alpha, \\bar \\partial^* \\gamma \\rangle = \\langle \\bar \\partial \\alpha, \\gamma \\rangle = 0" }, { "math_id": 50, "text": "\\|\\alpha''\\|^2 = 0" }, { "math_id": 51, "text": "\\alpha'' = 0" }, { "math_id": 52, "text": "\\alpha=\\alpha'\\in \\operatorname{im} \\bar \\partial" }, { "math_id": 53, "text": "\\alpha = \\bar \\partial \\eta" }, { "math_id": 54, "text": "\\eta \\in \\Omega^{p,q-1}(X)" }, { "math_id": 55, "text": "\\eta" }, { "math_id": 56, "text": "\\eta = \\eta_0 + \\partial \\eta' + \\partial^* \\eta''" }, { "math_id": 57, "text": "\\eta_0" }, { "math_id": 58, "text": "\\Delta_\\partial" }, { "math_id": 59, "text": "\\eta'\\in \\Omega^{p-1,q-1}(X)" }, { "math_id": 60, "text": "\\eta'' \\in \\Omega^{p+1,q-1}(X)" }, { "math_id": 61, "text": "\\Delta_\\bar \\partial = \\Delta_\\partial" }, { "math_id": 62, "text": "\\Delta_{\\bar \\partial}" }, { "math_id": 63, "text": "\\bar \\partial \\eta_0 = \\bar \\partial^* \\eta_0 = 0" }, { "math_id": 64, "text": "\\alpha = \\bar \\partial \\partial \\eta' + \\bar \\partial \\partial^* \\eta''" }, { "math_id": 65, "text": "\\langle \\bar \\partial \\partial^* \\eta'', \\bar \\partial \\partial^* \\eta''\\rangle = \\langle \\alpha, \\bar \\partial \\partial^* \\eta'' \\rangle = - \\langle \\alpha, \\partial^* \\bar \\partial \\eta'' \\rangle = - \\langle \\partial \\alpha, \\bar \\partial \\eta'' \\rangle = 0," }, { "math_id": 66, "text": "\\bar \\partial \\partial^* = -\\partial^* \\bar \\partial " }, { "math_id": 67, "text": "\\alpha = \\bar \\partial \\partial \\eta'" }, { "math_id": 68, "text": "\\beta = i \\eta' " }, { "math_id": 69, "text": "p\\in X" }, { "math_id": 70, "text": "U\\subset X" }, { "math_id": 71, "text": "p" }, { "math_id": 72, "text": "\\beta\\in \\Omega^{p-1,q-1}(U)" }, { "math_id": 73, "text": "U" }, { "math_id": 74, "text": "\\alpha = i\\partial \\bar \\partial \\beta" }, { "math_id": 75, "text": "d\\alpha = d(i\\partial \\bar \\partial \\beta) = i (\\partial + \\bar \\partial)(\\partial \\bar \\partial \\beta) = 0" }, { "math_id": 76, "text": "\\partial^2 = 0" }, { "math_id": 77, "text": "\\bar \\partial^2=0" }, { "math_id": 78, "text": "\\partial \\bar \\partial = - \\bar \\partial \\partial" }, { "math_id": 79, "text": "\\gamma\\in \\Omega^{p+q-1}(U)" }, { "math_id": 80, "text": "\\alpha = d \\gamma" }, { "math_id": 81, "text": "\\gamma = \\gamma' + \\gamma''" }, { "math_id": 82, "text": "\\gamma'\\in \\Omega^{p-1,q}(X)" }, { "math_id": 83, "text": "\\gamma'' \\in \\Omega^{p,q-1}(X)" }, { "math_id": 84, "text": "d\\alpha = (\\partial + \\bar \\partial) \\alpha = 0" }, { "math_id": 85, "text": "d\\alpha" }, { "math_id": 86, "text": "\\bar \\partial \\gamma' = 0" }, { "math_id": 87, "text": "\\partial \\gamma'' = 0" }, { "math_id": 88, "text": "\\alpha = \\partial \\gamma' + \\bar \\partial \\gamma''" }, { "math_id": 89, "text": "\\gamma'" }, { "math_id": 90, "text": "\\overline{\\gamma''}" }, { "math_id": 91, "text": "\\overline{ \\partial \\gamma''} = \\bar \\partial (\\overline{\\gamma''}) = 0" }, { "math_id": 92, "text": "\\eta', \\eta''\\in \\Omega^{p-1,q-1}(X)" }, { "math_id": 93, "text": "\\gamma' = \\bar \\partial \\eta'" }, { "math_id": 94, "text": "\\overline{\\gamma''} = \\bar \\partial \\eta''" }, { "math_id": 95, "text": 
"\\gamma'' = \\partial \\overline{\\eta''}" }, { "math_id": 96, "text": "\\alpha = \\partial \\bar \\partial \\eta' + \\bar \\partial \\partial \\overline{\\eta''} = i\\partial \\bar \\partial \\beta" }, { "math_id": 97, "text": "\\beta = -i \\eta' + i \\overline{\\eta''}" }, { "math_id": 98, "text": "H_{BC}^{p,q}(X) = \\frac{ \\ker (\\partial: \\Omega^{p,q} \\to \\Omega^{p+1,q}) \\cap \\ker (\\bar \\partial: \\Omega^{p,q} \\to \\Omega^{p,q+1})}{\\operatorname{im} (\\partial \\bar \\partial: \\Omega^{p-1,q-1} \\to \\Omega^{p,q})}." }, { "math_id": 99, "text": "H_{BC}^{p,q}(X) \\to H_{dR}^{p+q}(X,\\mathbb{C})" }, { "math_id": 100, "text": "H_{BC}^{p,q}(X) \\to H_{\\partial}^{p,q}(X), H_{\\bar \\partial}^{p,q}(X)" }, { "math_id": 101, "text": "H_{dR}^{k}(X,\\mathbb{C}) = \\bigoplus_{p+q=k} H_{BC}^{p,q}(X)" }, { "math_id": 102, "text": "\\alpha\\in \\Omega^{1,1}(X)" }, { "math_id": 103, "text": "f\\in C^{\\infty}(X,\\mathbb{C})" }, { "math_id": 104, "text": "\\alpha = i\\partial \\bar \\partial f." }, { "math_id": 105, "text": "\\alpha = \\omega" }, { "math_id": 106, "text": "U \\subset X" }, { "math_id": 107, "text": "\\alpha = \\omega - \\omega'" }, { "math_id": 108, "text": "[\\omega] = [\\omega']" }, { "math_id": 109, "text": "[\\alpha] = [\\omega] - [\\omega'] = 0" } ]
https://en.wikipedia.org/wiki?curid=72112048
72117613
Roberts's triangle theorem
On triangles in line arrangements Roberts's triangle theorem, a result in discrete geometry, states that every simple arrangement of formula_0 lines has at least formula_1 triangular faces. Thus, three lines form a triangle, four lines form at least two triangles, five lines form at least three triangles, etc. It is named after Samuel Roberts, a British mathematician who published it in 1889. Statement and example. The theorem states that every simple arrangement of formula_0 lines in the Euclidean plane has at least formula_1 triangular faces. Here, an arrangement is simple when it has no two parallel lines and no three lines through the same point. A face is one of the polygons formed by the arrangement, but not crossed by any of its lines. Faces may be bounded or infinite, but only the bounded faces with exactly three sides count as triangles for the purposes of the theorem. One way to form an arrangement of formula_0 lines with exactly formula_1 triangular faces is to choose the lines to be tangent to a semicircle. For lines arranged in this way, the only triangles are the ones formed by three lines with consecutive points of tangency. The other faces of this arrangement are either bounded quadrilaterals, or unbounded. As the formula_0 lines have formula_1 consecutive triples, they also have formula_1 triangles. Proof. Branko Grünbaum found the proof in Roberts's original paper "unconvincing", and credits the first correct proof of Roberts's theorem to Robert W. Shannon, in 1979. He presents instead the following more elementary argument, first published in Russian by Alexei Belov. It depends implicitly on a simpler version of the same theorem, according to which every simple arrangement of three or more lines has at least one triangular face. This follows easily by induction from the fact that adding a line to an arrangement cannot decrease the number of triangular faces: if the line cuts an existing triangle, one of the resulting two pieces is again a triangle. If it were true more strongly that adding a line always increased the number of triangles, then a similar induction would prove Roberts's theorem, but it is not true. There exist arrangements for which, after adding a line, the number of triangles remains unchanged. Instead, Belov uses the following argument. If the formula_0 given lines are all moved without changing their slopes, their new positions can be described by a system of formula_0 real numbers, the offsets of each line from its original position. For each triangular face, there is a linear equation on the offsets of its three lines that, if satisfied, causes the face to retain its original area. If there could be fewer than formula_1 triangles, then (because there would be more variables than equations constraining them) it would be possible to fix two of the lines in place and find a simultaneous linear motion of all remaining lines, keeping their slopes fixed, that preserves all of the triangle areas. Such a motion must pass through arrangements that are not simple, for instance when one of the moving lines passes over the crossing point of the two fixed lines. At the time when the moving lines first form a non-simple arrangement, three or more lines meet at a point. Just before these lines meet, this subset of lines would have a triangular face (also present in the original arrangement) whose area approaches zero. But this contradicts the invariance of the face areas. 
The contradiction shows that the assumption that there are fewer than formula_1 triangles cannot be true. Related results. Whereas Roberts's theorem concerns the fewest possible triangles made by a given number of lines, the related Kobon triangle problem concerns the largest number possible. The two problems differ already for formula_2, where Roberts's theorem guarantees that three triangles will exist, but the solution to the Kobon triangle problem has five triangles. Roberts's theorem can be generalized from simple line arrangements to some non-simple arrangements, to arrangements in the projective plane rather than in the Euclidean plane, and to arrangements of hyperplanes in higher-dimensional spaces. Beyond line arrangements, the same bound as Roberts's theorem holds for arrangements of pseudolines. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
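The counting step of Belov's argument can be written out schematically as follows; the offset variables and the area functionals are our notation for this sketch, not Roberts's or Belov's.

```latex
% Offsets t_1, ..., t_n of the n lines (slopes kept fixed); each triangular
% face T contributes one linear condition A_T(t) = 0 saying its area is unchanged.
\[
  t_1 = t_2 = 0 \quad\text{(two lines held fixed)},
  \qquad
  A_T(t_1,\dots,t_n) = 0 \quad\text{for every triangle } T .
\]
% If there were fewer than n-2 triangles, the space of admissible motions would satisfy
\[
  \dim\bigl\{\, t \in \mathbb{R}^{n} : t_1 = t_2 = 0,\ A_T(t) = 0 \text{ for all } T \,\bigr\}
  \;\ge\; (n-2) - \#\{\text{triangles}\} \;>\; 0 ,
\]
% so a nontrivial area-preserving motion of the lines would exist, leading to
% the contradiction described in the proof above.
```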
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "n-2" }, { "math_id": 2, "text": "n=5" } ]
https://en.wikipedia.org/wiki?curid=72117613
72117807
Hintikka set
In mathematical logic, a Hintikka set is a set of logical formulas whose elements satisfy the following properties: the set contains no atomic formula together with its negation; for every formula of conjunctive type in the set, both of its components are also in the set; and for every formula of disjunctive type in the set, at least one of its components is also in the set. The exact meaning of "conjunctive-type" and "disjunctive-type" is defined by the method of semantic tableaux. Hintikka sets arise when attempting to prove completeness of propositional logic using semantic tableaux. They are named after Jaakko Hintikka. Propositional Hintikka sets. In a semantic tableau for propositional logic, Hintikka sets can be defined using uniform notation for propositional tableaux. The elements of a propositional Hintikka set S satisfy the following conditions: no propositional variable and its negation both belong to S; if a formula of conjunctive type formula_0 belongs to S, then both of its components formula_1 belong to S; and if a formula of disjunctive type formula_2 belongs to S, then at least one of its components formula_3 belongs to S. If a set S is a Hintikka set, then S is satisfiable. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
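The conditions above can be made concrete with a small sketch; the formula representation and the `components` helper below are illustrative choices of ours, covering only conjunction, disjunction and negation.

```python
# A minimal sketch (not from the cited literature): checking the propositional
# Hintikka conditions for finite sets of formulas built from atoms, Not, And, Or.
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Not:
    arg: object

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class Or:
    left: object
    right: object

def components(f):
    """Uniform notation: ('alpha', parts) for conjunctive-type formulas,
    ('beta', parts) for disjunctive-type ones, (None, ()) otherwise."""
    if isinstance(f, And):
        return "alpha", (f.left, f.right)
    if isinstance(f, Not) and isinstance(f.arg, Or):
        return "alpha", (Not(f.arg.left), Not(f.arg.right))
    if isinstance(f, Not) and isinstance(f.arg, Not):
        return "alpha", (f.arg.arg, f.arg.arg)  # double negation: both components equal
    if isinstance(f, Or):
        return "beta", (f.left, f.right)
    if isinstance(f, Not) and isinstance(f.arg, And):
        return "beta", (Not(f.arg.left), Not(f.arg.right))
    return None, ()

def is_hintikka(s):
    for f in s:
        if isinstance(f, Atom) and Not(f) in s:  # no variable together with its negation
            return False
        kind, parts = components(f)
        if kind == "alpha" and not all(p in s for p in parts):
            return False
        if kind == "beta" and not any(p in s for p in parts):
            return False
    return True

p, q = Atom("p"), Atom("q")
print(is_hintikka({And(p, q), p, q}))  # True: both conjuncts are present
print(is_hintikka({Or(p, q), q}))      # True: one disjunct is present
print(is_hintikka({And(p, q), p}))     # False: the conjunct q is missing
```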
[ { "math_id": 0, "text": "\\alpha " }, { "math_id": 1, "text": "\\alpha_1, \\alpha_2" }, { "math_id": 2, "text": "\\beta " }, { "math_id": 3, "text": "\\beta_1,\\beta_2" } ]
https://en.wikipedia.org/wiki?curid=72117807
72120709
Certain answer
In database theory and knowledge representation, the set of certain answers to a given query is the intersection of the query's answers over all the complete databases that are consistent with a given knowledge base. The notion of certain answer, investigated in database theory since the 1970s, is defined in the context of the open world assumption, where the given knowledge base is assumed to be incomplete. Intuitively, certain answers are the answers that are always returned when querying a given knowledge base, considering both the extensional knowledge and the possible implications inferred by automatic reasoning, regardless of the specific interpretation. Definition. In the literature, the set of certain answers is usually defined as follows: formula_0 where: formula_1 is the query, formula_2 is the given (incomplete) database, formula_3 ranges over complete databases, and formula_4 is the set of all complete databases that are consistent with formula_2 (its possible worlds). In description logics, such a set may be defined in a similar way as follows: Given an ontology formula_5 and a query formula_6 on formula_7, formula_8 is the set of tuples formula_9 such that, for each model formula_10 of formula_7, we have that formula_11. Where: formula_12 is the TBox and formula_13 is the ABox of the ontology, formula_14 is the set of individual names (constants) of the ontology, formula_16 is a tuple of such individuals, and formula_15 is the query with its free variables replaced by formula_16.
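The set-theoretic definition above can be illustrated with a small sketch; the relations and the set of possible worlds below are invented for the example and are not taken from the literature.

```python
# A minimal sketch: certain answers computed as the intersection of a query's
# answers over every complete database that is consistent with the incomplete one.

def certain_answers(query, possible_worlds):
    """Intersect query(D') over all complete databases D' in the possible worlds."""
    answer_sets = [set(query(world)) for world in possible_worlds]
    return set.intersection(*answer_sets) if answer_sets else set()

def works_in_sales(db):
    """Query: which persons work in the 'sales' department?"""
    return {person for (person, dept) in db if dept == "sales"}

# Incomplete knowledge: Alice works in sales; Bob works either in sales or in HR.
possible_worlds = [
    {("alice", "sales"), ("bob", "sales")},
    {("alice", "sales"), ("bob", "hr")},
]

print(certain_answers(works_in_sales, possible_worlds))  # {'alice'}
```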
[ { "math_id": 0, "text": "cert_\\cap(Q,D) = \\bigcap \\left\\{ Q(D')| D' \\!\\in [\\![ D ]\\!] \\right\\}" }, { "math_id": 1, "text": "Q" }, { "math_id": 2, "text": "D" }, { "math_id": 3, "text": "D'" }, { "math_id": 4, "text": "[\\![ D ]\\!]" }, { "math_id": 5, "text": "\\mathcal{K}=\\langle\\mathcal{T},\\mathcal{A}\\rangle" }, { "math_id": 6, "text": "q(\\vec x)" }, { "math_id": 7, "text": "\\mathcal{K}" }, { "math_id": 8, "text": "cert(q,\\mathcal{K})" }, { "math_id": 9, "text": "\\vec a \\subseteq \\Gamma" }, { "math_id": 10, "text": "\\mathcal{I}" }, { "math_id": 11, "text": "\\mathcal{I}\\models q[\\vec a]" }, { "math_id": 12, "text": "\\mathcal{T}" }, { "math_id": 13, "text": "\\mathcal{A}" }, { "math_id": 14, "text": "\\Gamma" }, { "math_id": 15, "text": "q[\\vec a]" }, { "math_id": 16, "text": "\\vec a" } ]
https://en.wikipedia.org/wiki?curid=72120709
72123000
SPINA-GR
Insulin receptor gain, biomarker SPINA-GR is a calculated biomarker for insulin sensitivity. It represents insulin receptor gain. How to determine GR. The index is derived from a mathematical model of insulin-glucose homeostasis. For diagnostic purposes, it is calculated from fasting insulin and glucose concentrations with: formula_0. ["I"](∞): Fasting insulin plasma concentration (μU/mL) ["G"](∞): Fasting blood glucose concentration (mg/dL) "G"1: Parameter for pharmacokinetics (154.93 s/L) "D""R": EC50 of insulin at its receptor (1.6 nmol/L) "G""E": Effector gain (50 s/mol) "P"(∞): Constitutive endogenous glucose production (150 μmol/s) Clinical significance. Validity. Compared to healthy volunteers, SPINA-GR is significantly reduced in persons with prediabetes and diabetes mellitus, and it correlates with the M value in glucose clamp studies, triceps skinfold, subscapular skinfold and (better than HOMA-IR and QUICKI) with the two-hour value in oral glucose tolerance testing (OGTT), glucose rise in OGTT, waist-to-hip ratio, body fat content (measured via DXA) and the HbA1c fraction. Clinical utility. Both in the FAST study, an observational case-control sequencing study including 300 persons from Germany, and in a large sample from the NHANES study, SPINA-GR differed more clearly between subjects with and without diabetes than the corresponding HOMA-IR, HOMA-IS and QUICKI indices. Scientific implications and other uses. Together with the secretory capacity of pancreatic beta cells (SPINA-GBeta), SPINA-GR provides the foundation for the definition of a fasting-based disposition index of insulin-glucose homeostasis (SPINA-DI). In combination with SPINA-GBeta and whole-exome sequencing, calculating SPINA-GR helped to identify a new form of monogenetic diabetes (MODY) that is characterised by primary insulin resistance and results from a missense variant of the type 2 ryanodine receptor (RyR2) gene (p.N2291D). Pathophysiological implications. In lean subjects it is significantly higher than in obese persons. In several populations, SPINA-GR correlated with the area under the glucose curve and 2-hour concentrations of glucose, insulin and proinsulin in oral glucose tolerance testing, concentrations of free fatty acids, ghrelin and adiponectin, and the HbA1c fraction. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
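A minimal sketch of this calculation is shown below, plugging the structural parameters quoted above into the fasting-based formula; the unit conversions for insulin (1 μU/mL taken as 6.0 pmol/L) and glucose (mg/dL divided by 18.016 to obtain mmol/L) are common assumptions and are not part of the cited definition.

```python
# A minimal sketch of the SPINA-GR calculation (not an official implementation).
# The structural parameters are those quoted above; the unit conversions are
# assumptions made for this example.

G1 = 154.93   # s/L, pharmacokinetic parameter
D_R = 1.6e-9  # mol/L (1.6 nmol/L), EC50 of insulin at its receptor
G_E = 50.0    # s/mol, effector gain
P = 150e-6    # mol/s (150 umol/s), constitutive endogenous glucose production

def spina_gr(insulin_uU_per_mL: float, glucose_mg_per_dL: float) -> float:
    """Return SPINA-GR (insulin receptor gain) in mol/s."""
    i = insulin_uU_per_mL * 6.0e-12        # assumed conversion to mol/L
    g = glucose_mg_per_dL / 18.016 * 1e-3  # assumed conversion to mol/L
    return (G1 * P * (D_R + i)) / (G_E * i * g) - D_R / (G_E * i) - 1.0 / G_E

# Example: fasting insulin 10 uU/mL and fasting glucose 90 mg/dL
print(round(spina_gr(10.0, 90.0), 2))      # about 2.0 mol/s for these inputs
```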
[ { "math_id": 0, "text": "{\\widehat{G}}_{R}=\\frac{{G}_{1}P(\\infty )({D}_{R}+\\left[I\\right](\\infty ))}{{G}_{E}\\left[I\\right](\\infty )[G](\\infty )}-\\frac{{D}_{R}}{{G}_{E}[I](\\infty )}-\\frac{1}{{G}_{E}}" } ]
https://en.wikipedia.org/wiki?curid=72123000
72123015
SPINA-GBeta
Calculated biomarker SPINA-GBeta is a calculated biomarker for pancreatic beta cell function. It represents the maximum amount of insulin that beta cells can produce per unit of time (e.g. per second). How to determine GBeta. The index is derived from a mathematical model of insulin-glucose homeostasis. For diagnostic purposes, it is calculated from fasting insulin and glucose concentrations with: formula_0. ["I"](∞): Fasting insulin plasma concentration (μU/mL) ["G"](∞): Fasting blood glucose concentration (mg/dL) "D""β": EC50 for glucose at beta cells (7 mmol/L) "G"3: Parameter for pharmacokinetics (58.8 s/L) Clinical significance. Validity. SPINA-GBeta significantly correlates with the M value in glucose clamp studies and (better than HOMA-Beta) with the two-hour value in oral glucose tolerance testing (OGTT), glucose rise in OGTT, subscapular skinfold, truncal fat content and the HbA1c fraction. It has the additional advantage that it circumvents the HOMA-blind zone, which renders the calculation of HOMA-Beta impossible if the fasting glucose concentration is 3.5 mmol/L (63 mg/dL) or below. Unlike HOMA-Beta, SPINA-GBeta can be sensibly calculated in the whole range of measurements. Reliability. In repeated measurements, SPINA-GBeta had higher retest reliability than HOMA-Beta, a measurement for beta cell function from the homeostasis model assessment. Clinical utility. In the FAST study, an observational case-control sequencing study including 300 persons from Germany, SPINA-GBeta differed more clearly between subjects with and without diabetes than the corresponding HOMA-Beta index. Scientific implications and other uses. Together with the reconstructed insulin receptor gain (SPINA-GR), SPINA-GBeta provides the foundation for the definition of a fasting-based disposition index of insulin-glucose homeostasis (SPINA-DI). In combination with SPINA-GR and whole-exome sequencing, calculating SPINA-GBeta helped to identify a new form of monogenetic diabetes (MODY) that is characterised by primary insulin resistance and results from a missense variant of the type 2 ryanodine receptor (RyR2) gene (p.N2291D). Pathophysiological implications. In several populations, SPINA-GBeta correlated with the area under the glucose curve and 2-hour concentrations of glucose, insulin and proinsulin in oral glucose tolerance testing, concentrations of free fatty acids, ghrelin and adiponectin, and the HbA1c fraction. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
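A minimal sketch of this calculation is shown below, using the parameters quoted above; the unit conversions for insulin (1 μU/mL taken as 6.0 pmol/L) and glucose (mg/dL divided by 18.016 to obtain mmol/L) are common assumptions and are not part of the cited definition.

```python
# A minimal sketch of the SPINA-GBeta calculation (not an official implementation).
# The parameters are those quoted above; the unit conversions are assumptions
# made for this example.

D_BETA = 7.0  # mmol/L, EC50 for glucose at beta cells
G3 = 58.8     # s/L, pharmacokinetic parameter

def spina_gbeta(insulin_uU_per_mL: float, glucose_mg_per_dL: float) -> float:
    """Return SPINA-GBeta (maximum insulin secretory capacity) in pmol/s."""
    i = insulin_uU_per_mL * 6.0     # assumed conversion to pmol/L
    g = glucose_mg_per_dL / 18.016  # assumed conversion to mmol/L
    return i * (D_BETA + g) / (G3 * g)

# Example: fasting insulin 10 uU/mL and fasting glucose 90 mg/dL (about 5 mmol/L)
print(round(spina_gbeta(10.0, 90.0), 2))   # about 2.45 pmol/s for these inputs
```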
[ { "math_id": 0, "text": "{\\widehat{G}}_{\\beta }=\\frac{[I](\\infty )({D}_{\\beta }+\\left[G\\right](\\infty ))}{{G}_{3}[G](\\infty )}" } ]
https://en.wikipedia.org/wiki?curid=72123015
72132724
Unimog 405
The Unimog 405 is a vehicle of the Unimog-series by Mercedes-Benz, made by Daimler Truck Holding AG. Developed in the 1990s, the Unimog 405 has been in production since 2000. Originally, DaimlerChrysler produced the Unimog at Gaggenau; in 2002, production was moved to Wörth am Rhein. The Unimog 405 is the implement carrier version of the Unimog and the successor to most previous Unimogs. Although retaining many characteristics typical of the Unimog, the 405's axle and chassis design concept with control arms instead of torque tubes marks a "paradigmatic change" in Unimog design. The Unimog 405 can legally be classified as either a 7.5-tonne lorry (C1), a 40-tonne lorry (C), or agricultural tractor (T). It is produced alongside the heavy-duty, off-road lorry-like Unimog 437.4, which features a different technical design. The Unimog 405 has been made in three major variants: UGN (2000–2016), LUG (2007–2013), and UGE (since 2013). In total, 22 types of the Unimog 405 have been made, with two types (405.210 and 405.230) exclusively sold on the North-American market as the Freightliner Unimog U 500. Unimog 405 types and model family. Throughout its more than 20-year-long production, the Unimog 405 has been developed into an implement carrier family that comprises various different "variants", "models", and "types". In total, three different "variants" of the Unimog 405 were made. DaimlerChrysler originally introduced the Unimog 405 in the "UGN" variant (Unimog Geräteträger Neu; new Unimog implement carrier), which was succeeded by the "UGE" variant in 2013. From 2007 until 2013 the "LUG" variant (leichter Unimog-Geräteträger; light Unimog implement carrier) was made alongside the "UGN" variant. In total, 13,930 Unimog 405 of the LUG and UGN variants were made; the UGE is still in production (as of November 2022) and thus has varying production figures. The "model" designation (i.e. the sales designation) of all Unimog 405 vehicles is a three-digit number with a U-prefix, e.g. "U 318". The only exception is the "LUG" variant of the Unimog, which was sold as the "U 20". In the "model" designation, the first digit is roughly indicative of the maximum permissible payload: 2: 7.5–9.3 t, 3: 7.5–10.2 t, 4: 12–12.5 t, 5: 15.5–16 t. Since the introduction of the "UGE" variant in 2014, the second and third digit have been used to indicate tenths of the approximate engine power output in DIN-PS. For example, an "Unimog U 318" is a "UGE" variant with a engine, and may be classified as a 7.5-tonne lorry. There is one exception to this; the "U 300" model was sold as the "U 290" model in some markets. The UGE model designation scheme includes the "U 427" and "U 435" models. These designations are also used as the series' designations for the Unimog 427, a predecessor of the Unimog 405, and for the Unimog 435. The aforementioned Unimogs 427 and 435 are different from the Unimog discussed in this article, the Unimog 405. The "types"' designation in Unimog 405 is indicative of frame length (and thus wheelbase), the engine model, as well as the maximum permissible axle load. It consists of a six-digit number that begins with "405" and ends with a types' number. For instance, the Unimog U 318's types' designation (3000 mm wheelbase, OM 934 LA engine, low-payload axles) is "405.104". Compared with previous Unimogs, the 405 has a different axle concept, with different designs for low (U 2xx, U 3xx) and high (U 4xx, U 5xx) payload models. 
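The UGE-era model designation scheme described above can be illustrated with a small sketch; the decoder below applies only the two rules quoted in this section and is purely illustrative, ignoring exceptions such as the U 20 or the U 290.

```python
# A toy decoder for UGE-era sales designations: the first digit gives the rough
# payload class quoted above, and the second and third digits give one tenth of
# the approximate engine power in DIN-PS. Illustrative only.
PAYLOAD_CLASS = {
    "2": "7.5-9.3 t",
    "3": "7.5-10.2 t",
    "4": "12-12.5 t",
    "5": "15.5-16 t",
}

def decode_model(designation: str) -> dict:
    """Decode a sales designation such as 'U 318'."""
    digits = designation.replace("U", "").strip()
    return {
        "payload class": PAYLOAD_CLASS.get(digits[0], "unknown"),
        "approx. power (DIN-PS)": int(digits[1:]) * 10,
    }

print(decode_model("U 318"))  # {'payload class': '7.5-10.2 t', 'approx. power (DIN-PS)': 180}
print(decode_model("U 530"))  # {'payload class': '15.5-16 t', 'approx. power (DIN-PS)': 300}
```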
The types' designation scheme comes with different types' designations for different payload options. For instance, the 405.2xx types' designation is only used for high-payload U 5xx models. The types' designation also allows determining the Unimog 405's "variant". Types, variants, and models overview. "Figures of UGN and LUG according to Unimog engineer Achim Vogt" Brabus Unimog U 500 Black Edition. In December 2005, Mercedes-Benz presented the Brabus Unimog U 500 Black Edition at the Dubai Motor Show. The Brabus Unimog U 500 Black Edition is a luxury version of the Unimog, heavily modified by Brabus. It features chrome bumpers, exhaust tips, an all-black exterior, new body panels, and a giant steel rollover protection bar. The interior panels are made of carbon fiber, and the upholstery is made of alcantara, leather, and luxurious cloth. The windows are tinted. Many interior parts – such as the infotainment system, sound system, and steering wheel – were also used in the contemporary Mercedes-Benz S-Class, the W 220. Although based on the U 500 model, the Brabus Unimog U 500 Black Edition has the maximum permissible mass rating of the Unimog U 400, which is 11,990 kg. It is equipped with the 205 kW version of the OM 906 six-cylinder engine and the automatic shifting option for the gearbox; the top speed is 120 km/h. In late 2005, Mercedes-Benz announced that the Brabus Unimog U 500 Black Edition would be put into series production if demand for the vehicle existed. One year later, the German newspaper Die Welt wrote that six Unimog U Black Edition were sold, with three being in production. The total production figure was said to be not greater than ten. 404-based 405.110. In the 1950s and 1960s, the former Daimler-Benz AG used the "Unimog U 405.110" types' designation for a prototype of an armoured personnel carrier, based on the Unimog 404 chassis. Although bearing the same fundamental design principle, the 404-based Unimog 405.110 is technically very different from the modern Unimog 405, and thus not included in the Unimog 405 family. History. 1994 – 2000: Development phase. The planning of the Unimog 405 project began in 1994. According to Unimog engineer Carl-Heinz Vogler, a market analysis showed that Unimog customers demanded a more dedicated implement carrier version of the Unimog. At Daimler-Benz it was decided to make a forward-control vehicle with reduced off-road capability but improved implement carrier capabilities. This marks a paradigmatic change in Unimog design. The new Unimog series was supposed to have four options for mounting implements: in front, in back, on the bed, and in between the axles. Also, the new implement carrier Unimog, which would later become the 405, was planned to have a much more sophisticated single- or dual-circuit hydraulics system than previous Unimogs. Eventually, in 1996, the board of directors and the works council agreed upon developing and producing the new Unimog series at the Mercedes-Benz-Werk Gaggenau. Original development was carried out within 39 months, from 1996 until 2000. The former DaimlerChrysler developed a product requirements document together with selected customers and equipment manufacturers. Key points were, among others, increased driver comfort and functionality, one-man operation, increased payload, tractor operation, road-rail operation, compatibility with existing tools, and increased economy.
DaimlerChrysler engineers used computerised design tools such as CAD, FEM, as well as driving dynamics and road simulation in the Unimog 405's development; previous Unimog series – such as the Unimog 406 – were mostly developed with conventional (i.e. largely non-computerised) engineering methods. 2000 – 2007: Introduction of the UGN. In March 2000, the Unimog 405 was officially presented to the public as the new Unimog implement carrier (UGN). Series production of the U 300 and U 400 models commenced in April 2000; the U 500 model followed in February 2001. In 2002, the Unimog 405 was introduced in North America as the "Freightliner Unimog U 500" for the 2003 model year. During summer that year, production was moved from Gaggenau to Wörth am Rhein. On 26 August 2002, the first Unimog 405 – a U 400 – rolled off the Wörth plant's assembly line. DaimlerChrysler designed two types specifically for the North American market, the 405.210 short wheelbase type and 405.230 long wheelbase type. They have about 500 parts that are different compared to the German Unimog 405 types, most notably, an OM 906 engine tuned to comply with US emissions regulations. In 2007, the Unimog 405 was discontinued in the United States. In total, 183 units were made. 2006 – 2014: Changes to the UGN and short-lived LUG. 2006 brought changes to European motor vehicle legislation which eventually resulted in the Unimog 405 being updated. To comply with the then-new Euro IV emissions standard, the Unimog 405 was equipped with a selective catalytic reduction catalyst. DaimlerChrysler incremented the types's designations by one, e.g. the "405.100" became the "405.101". In September 2006, the Unimog U 20 was presented at the 61st IAA-Nutzfahrzeuge in Hannover. Series production eventually commenced on 23 October 2007, and the first U 20 was delivered in January 2008. Daimler discontinued the Unimog U 20 in 2014 when the Euro VI emissions standard came into effect. It was succeeded by the U 216 and U 218 models, which are slightly bigger in size. 2013 – present: UGE. In April 2013, Daimler AG introduced the UGE variant of the Unimog 405. The UGE is technically similar to the UGN, but comes standard with the OM 934 or OM 936 engines, and has a different exterior styling. The model naming scheme was changed with the UGE, as described above. Series production began on 22 August 2013. In summer 2021, Daimler Truck Holding presented new top models of the Unimog 405 range, the U 435 and the U 535. They are fitted with a Euro VIe-compliant version of the OM 936 Diesel engine that is rated at 260 kW, which is a boost in power of 40 kW compared to the previous top models, the U 430 and U 530. The increase in power was achieved by increasing the BMEP to  MPa. To manage the high torque output, Daimler Truck Holding designed a new gearbox for the U 435 and U 535 models, the UG 130. Another new feature to the Unimog 405 range is the optional, self-levelling hydropneumatic suspension. The U 435 and U 535 models were made available in early 2022. Also in 2022, the U 327 model was added to the lighter U 3xx model range. Technically, it is an Unimog U 427. It has the Unimog U 427's axles, dimensions, and is equipped with the 200 kW version of the straight-6 OM 936 Diesel engine. Previously, the U 2xx and U 3xx models were only offered with 110, 115, 130 or the 170 kW engine options. Technical description. 
Unlike previous Unimogs, the Unimog 405 is, seen from a technical perspective, primarily designed as a two-axle, off-road capable implement carrier rather than a dedicated off-road vehicle that can mount implements. The payload is also significantly greater than in previous Unimogs; the total permissible mass of the Unimog 405 is up to 16,500 kg. This is primarily achieved through the Unimog 405's frame and axle design. Chassis. The Unimog 405 has a conventional flat ladder frame made from E 500 TM HSLA steel. It was designed using FEM technology. Like in conventional lorries, the frame has bolted closed cross profiles which result in a high torsion stiffness. This allows more payload than the bent frame of previous Unimogs. DaimlerChrysler designed the frame with different length and wheelbase options. The axles are coil-sprung portal axles with hydraulic shock absorbers, and have dedicated transverse and longitudinal control arms as well as a pair of longitudinal control arms with an integrated roll stabiliser. This is different from previous Unimogs in which torque tubes are used as longitudinal control arms. This design gives the Unimog 405 significantly improved on-road handling. In total, DaimlerChrysler designed six different axles with two different axle load capabilities for the Unimog 405: The AU4 and HU4 front- and rear axles for the U 2xx and U 3xx models, the AU5 and HU5 front- and rear axles for the U 4xx and U 500 models, and the AU6 and HU6 front- and rear axles for the U 527 / U 530 / U 535. The latter four axles make the frame sit slightly higher (30 mm). The track width of 1734 mm / 1768 mm is narrow enough for road-rail use. The U 4xx and U 5xx models can be equipped with two AU5 front axles as a factory option (to enable all-wheel steering), and are also available with hydropneumatic axle suspension instead of traditional coil spring suspension. Many Unimog 405 models come standard with 315/80 R 22.5 tyres, but smaller and larger tyres can also be fitted to the 405. A tyre pressure control system is not standard, but available as a factory option. The UGN U 300 has a combined hydraulic/pneumatic braking system; all other models are equipped with a solely pneumatic braking system. The Unimog 405 has disc brakes on all wheels, and automatic load-dependent brake force control (ALB) as well as anti-lock braking (ABS). The parking brake is spring-loaded, and acts on the rear wheels. The centre, rear axle, and front axle differentials are lockable to allow different amounts of torque to be sent to the wheels. The 405 has a hydraulically power-assisted, "LS 6 Bk" recirculating ball steering system. The reduction (formula_0) of the steering gearbox is steering-angle dependent in order to improve handling at high speeds and maneuvering in tight spots. In its default configuration, the Unimog 405 is a left-hand drive vehicle. An either-hand drive version – which can be switched, by the driver, with a simple "flick of the wrist" from left-hand drive to right-hand drive (and vice versa) – is available as a factory option. In the either-hand drive version, the entire steering column, gauge cluster, and pedals can be moved by the driver without special tools. Cab. The Unimog 405 has been made with two different cabs, a forward-control cab (LUG variant) and a short-bonnet cab (UGN and UGE variants). Both cabs have one-row seating; a crew cab is not available from the factory. UGN and UGE. The cab found in the UGN and UGE variants was developed by Dornier Consulting.
It makes the 405 a short-bonnet lorry with the engine installed just below the cab. The bonnet is unusually short and steep for a short-bonnet lorry, and the cab is fitted with a rather large, slightly curved but almost upright windscreen. This design does not only enable good forward visibility, but also allows the driver to sit "behind" the front axle for better 360° visibility. The bonnet's steepness is possible because the engine radiators are installed behind the front wheels, and not underneath the bonnet. The cab is made of fibre-reinforced polymer (FRP), which makes the cab light, stiff, and rigid, and allows unconventionally shaped body parts that would not be possible if the cab was made of steel. The polymers used are carbon-based, and the fibres are conventional glass fibres. In total, the cab is made of only ten different types of body parts (14 body parts in total) that are glued together in the Unimog plant in Wörth. For maintenance, the cab can be tilted. An "extending" passenger side door – which gives an implement operator better visibility – was available as a factory option. A major difference between the – otherwise more or less identical UGN and UGE cabs – is the windscreen wiper design. In the UGN, the windscreen wipers are installed below the windscreen, whereas the UGE variant has them installed above the windscreen. LUG. With the Unimog U 20 type 405.050 LUG variant of the Unimog 405, Daimler introduced the first forward-control Unimog. It shares its cab with the Mercedes-Benz Accelo lorry. Compared with other 405s, the U 20's cab is 110 mm shorter, and installed much more up front. Therefore, the U 20's front overhang is about 200 mm longer, and the driver's seat is installed approximately 600 mm further forward. Unlike the conventional cab, the U 20 cab is made of steel. It was made in Brazil and shipped to Wörth for assembly. Gearbox and drivetrain. Powertrain principle. The Unimog 405 has a powertrain principle with two mechanical torque paths; a third, hydraulic torque take-off is available as a factory option. The mechanical torque path is designed with two gearboxes, one up front and one centre gearbox; the engine is installed in between these gearboxes. While the torque sent to the front gearbox can only be sent to front implements (with the front PTO), the torque sent to the centre gearbox can be sent into three different paths: To a direct engine power take-off, for instance, for powering hydraulic systems; to either low or high-speed implements such as crane/excavator implements or firebrigade centrifugal pumps; and to the wheels. Unlike previous Unimogs, which have switchable all-wheel drive, the Unimog 405 has permanent all-wheel drive. UG 100/8 gearbox. Except for the U 535 and U 435, all Unimog 405 models have a synchromesh UG 100/8 gearbox with helical cut gears and electronically controlled, pneumatic shifting with manual gear selection. This gearbox has a modular design and allows various different factory options. In its base configuration, the gearbox has "four" gears and "two" ranges. A separate reverse gear unit is flange-mounted to the UG 100/8 gearbox. This design results in eight forward and eight reverse gears. The reverse gear unit has a different gear ratio for the reverse range, and the last two gears are electronically locked by default to prevent reverse speeds beyond 30 km/h. In road-rail Unimogs, all reverse gears are usable; the highest reverse speed is 61.9 km/h. 
By default, the UG 100/8 gearbox is mated with a single-disc dry clutch (i.e. foot-operated with a clutch pedal). An additional three range gearbox – giving the Unimog dedicated drive, work, and crawl ranges – is available as a factory option; it extends the number of gears to 24 forward and 24 reverse (of which, as described, two are electronically locked). With the additional three range gearbox, the speed range is 0.1–85 km/h; the top speed is electronically limited. UG 130 gearbox. The UG 130 gearbox is the standard gearbox in the U 535 and U 435 models. It is similar to the UG 100 gearbox in terms of design, but is capable of accepting a higher input torque of around 1300 N·m. Hydrodynamic torque converter and dry single-disc clutch. For road-rail use, the Unimog 405 is offered with a combined dry single-disc clutch and hydrodynamic torque converter. From the engine, the torque is sent to the hydrodynamic torque converter which provides a two-times torque multiplication, before the torque is sent through the dry single-disc clutch to the gearbox. This design allows setting off with a very high tow hitch load (e.g. laden trucks) without wearing out the dry single-disc clutch. When a certain speed is reached, a lock-up clutch in the hydrodynamic torque converter is engaged, and only the dry single-disc clutch is used for transmitting torque to the gearbox. Unimogs equipped with this system have automatic clutch actuation, and, therefore, have no clutch pedal; gear shifting is still manual. An automated version of the gearbox is available as a factory option. Hydrostatic drive. A hydrostatic drive is available as a factory option. It is flange-mounted to the UG 100/8 gearbox and fully independent of the engine (and thus front PTO) speed. Its electronic control system allows indefinitely adjustable speeds of 0...25 km/h, and it is operated either by a hand lever, or with the accelerator pedal. Front PTO gearbox. The front gearbox is a single-speed gearbox with a reduction of formula_1, resulting in a rated PTO speed of 1000/min. Torque is sent to this gearbox through a driveshaft, directly from the engine. To disengage the front gearbox, it is equipped with a torsion clutch and a hydraulic multiplate clutch. The PTO speed can be electronically locked at 540/min; the maximum take-off power is 150 kW. Engines. The Unimog 405 is powered by Mercedes-Benz OM 900 series engines. The OM 900 series consists of four- and six-cylinder Diesel engines and has been made in several different versions. In the Unimog 405, the OM 904, OM 906, OM 934, and OM 936 have been used. OM 904 and OM 906. The OM 904 four-cylinder and OM 906 six-cylinder engines were the first OM 900 series engines, introduced in 1996. They have a 102 mm cylinder bore and 130 mm piston stroke, resulting in a displacement of  cm³ (OM 904) or  cm³ (OM 906). Both engine models share the same design principle; they are water-cooled straight engines with a reverse-flow cylinder head made of cast iron. The OM 904/906 has a three-valve OHV valvetrain, with two intake valves, and one exhaust valve per cylinder. In addition to that, the engine has a decompression valve and an exhaust throttle valve for engine braking. The valves are actuated by an in-block camshaft, pushrods, and rocker arms. Instead of a conventional injection pump, the OM 904/906 engines have a unit injection pump system with one unit injection pump per cylinder. The unit injection pump creates the injection pressure with a camshaft-driven plunger. 
The actual fuel injector is connected to the unit injection pump with a short fuel line. This is different from Volkswagen's Pumpe-Düse system that combines the fuel injector and the unit injection pump into a single unit injector device. Nonetheless, the Mercedes-Benz unit injection pump system also has electronically actuated solenoid valves that can be freely actuated during the unit injection pump plunger actuation, allowing multiple injections per power stroke. The highest injection pressure is up to 160 MPa. In order to achieve high efficiency, Daimler-Benz installed a single wastegate turbocharger and an air-to-air intercooler. The turbocharger's exhaust turbine is relatively small to achieve good turbo response; it has a fixed turbine geometry, and a wastegate. At 1300/min, the OM 906's BMEP reaches its maximum of 2.17 MPa which is achieved through a high turbo boost of 285 kPa. The rated fuel consumption of the OM 906 is 188 g/(kW·h) at 1450/min and a BMEP of 1.6 MPa. When the Unimog 405 was first introduced, it was available with the 110 and 130 kW versions of the OM 904, and the 170 and 205 kW versions of the OM 906, which were all Euro-III compliant. Throughout the vehicle's production run, all engines were increased in power by 5 kW (except for the 130 kW version), and updated to comply with the Euro IV and V emissions standards. For reducing nitrogen oxides from the exhaust, the Euro IV and V models were fitted with a selective catalytic reduction device. The Freightliner Unimog has an OM 906 engine detuned to 194 kW to comply with the EPA4 emissions standard. It was not updated to a newer emissions standard, because DaimlerChrysler discontinued the Unimog on the North American market in 2007, prior to the EPA7 emission standard taking effect. Both the OM 904 and the OM 906 can operate on diesel fuel, or rapeseed oil methyl esters (RME). OM 934 and OM 936. In 2012, Daimler AG introduced the Euro VI OM 934 and OM 936 Diesel engines, which succeeded the OM 904 and OM 906 Diesel engines. Daimler has been using the OM 934/936 in the Unimog 405 since 2013; the engines are offered with power outputs of 115, 130, 170 (OM 934), or 200, 220, and 260 kW (OM 936). The maximum engine braking power is 178 kW (OM 934) / 302 kW (OM 936). As in the predecessor engine family, the OM 934 and OM 936 share their basic design with a 110 mm bore and a 135 mm stroke, resulting in displacements of  cm³ (OM 934) or  cm³ (OM 936). The compression is ε=17.6, and the engines are turbocharged. Instead of the unit injection pump system, the engines feature a common-rail injection system with electronically controlled solenoid valves and an injection pressure of up to 240 MPa. This system allows up to five injections per power stroke. The cylinder head design is also vastly different from the OM 904/906. Daimler designed the new engines with a crossflow cylinder head with dual overhead camshafts and four valves per cylinder. The exhaust camshaft is fitted with the variable valve timing system from the Mercedes-Benz M 272 V6 Otto engine. The variable valve timing is used in the exhaust gas treatment system for increasing exhaust gas temperatures. Daimler fitted the engine with a diesel particulate filter and selective catalytic reduction. Hydraulic system. The Unimog 405 can be equipped with two hydraulic systems, the "power" hydraulic system and the "implement" hydraulic system. Both of these systems have two hydraulic circuits, giving the Unimog 405 four independent hydraulic circuits in total. 
The combined total power output of the hydraulic system is 130 kW. Excess energy in the hydraulic system that is not converted into mechanical work is dissipated with a hydraulic oil cooler. A joystick is used for operating the hydraulic system. Power hydraulic system. The power hydraulic system is designed for implements that require continuous high power in the 40 kW range. It is installed below the bed area, in between the longitudinal frame members, and can be uninstalled whenever it is not required. Unlike the implement hydraulic system, the power hydraulic system is permanently powered through a driveshaft from the engine, and thus independent of the selected gear or vehicle speed. The power hydraulic system has an open and a closed hydraulic circuit. In the open circuit, a high volumetric flow rate is maintained which requires a big hydraulic oil reservoir. It is used for powering implements that require the aforementioned high volumetric flow rate, such as hydraulic cylinders in excavator or crane implements. Implements that do not require a high volumetric flow rate can be powered with the more compact closed circuit. Implement hydraulic system. The implement hydraulic system is designed for powering smaller implements used with the Unimog 405. Different numbers of hydraulic pumps and different numbers of hydraulic valves can be fitted to the hydraulic circuits in the system to match implement requirements. The power output is relatively low, at &lt;20 kW per circuit. Electric and electronic systems. The Unimog 405 has a 24 V electrical system. It comes standard with a three-phase AC 28 V alternator that provides a current of 80 A; 100 A is available as a factory option. As is standard in many lorries, the Unimog 405 has two 12 V lead acid batteries in a series circuit. The amount of electric charge the batteries can provide is 115 A·h; 135 A·h batteries are also available. For powering implements, Unimog 405 models are equipped with 12 V or 24 V sockets in the front bumper, in the cab, at the rear, and at the battery box (that is installed behind the cab). All Unimog 405 models have a CAN bus system and electronic control units for the engine, the instrument cluster, the gearbox, all power take-off systems, and a separate electronic tachograph. Both the accelerator pedal and the shift lever are electronic and have no mechanical connection to the unit injection pumps or gearbox. Technical specifications. The Unimog 405 is a small-series production vehicle and has been offered with various different special factory options; the following technical specifications tables only list the most common models with standard factory options. UGN. "All technical specifications according to Wischhof et al. (2000). Note that this table only includes standard options. The OM 906 LA was also offered in the U 300 on request." LUG. "Technical specifications according to Achim Vogt" UGE. "Engine specifications according to Andreescu et al.; all other data according to Daimler AG"
[ { "math_id": 0, "text": "i" }, { "math_id": 1, "text": "i=2.139" } ]
https://en.wikipedia.org/wiki?curid=72132724
72142349
Ammitocyon
Extinct genus of mammals Ammitocyon is a genus of large-sized carnivoran mammals, belonging to the Amphicyonidae ("bear dogs"), that lived during the Late Miocene in what is now Spain. It is notable for its extreme adaptations towards hypercarnivory and its extremely robust skeleton, and it was one of the last surviving members of its family. History and naming. "Ammitocyon" was described in 2021 by Morales et al. based on comparatively complete remains, originally referred to "Thaumastocyon", enabling a greater understanding of the subfamily Thaumastocyoninae, hitherto only known from fragmentary material. The holotype is the pair of hemimandibles BAT-3'09.1239 and BAT-3'08.604, which belong to the same individual. BAT-3'10.1689 (a skull with strong signs of corrosion, especially in the dorsal region, belonging to a senile individual) and BAT-3'11.453 (a complete mandible belonging to the same individual), as well as the isolated left m2 BAT-3'09.1124, have been designated as paratypes. Furthermore, postcranial remains are known and await a closer description. The name "Ammitocyon" is a combination of "Ammit", an ancient Egyptian goddess, chosen as her mix of crocodile, hippopotamus, and lion features reminded the authors of the fossil species, and the Ancient Greek κύων (kúōn, "dog"). The species name, καινός (kainós), also derives from Ancient Greek and translates to "of a new kind". Geographical and temporal distribution. The type and so far only locality of the genus is Batallones-3, part of the Cerro de los Batallones site complex located in Madrid, Spain. BAT-3 is the most recent deposit of the series, dating to the late Vallesian (MN 10), circa 9.1 Ma. The Batallones are a complex of cavities, which held water even in harsh droughts, trapping herbivores that came down to drink. Their carcasses then attracted a large variety of carnivores, which then got stuck themselves. As a result, over 90% of mammalian fossils in BAT-3 are remains of carnivorans. Description. "Ammitocyon" was a lion-sized taxon, with its weight estimated at 231 kg. It is characterized by its extremely robust build, with powerful legs almost unmatched among caniforms, and by its adaptations to hypercarnivory. Its chin and muzzle are sturdy, and its snout is huge, with a wide nasal aperture. The dentition is sectorial. The reduction of its premolars, typical for thaumastocyonines, is taken to an extreme: it has completely lost P2/P3 and p1-3, as well as its third molars (M3 and m3). It possesses a premaxilla more robust than that found in "Magericyon". Both the frontal and jugal possess well-developed postorbital processes, resulting in a relatively large eye socket, which is more enclosed than that of similarly sized arctoid carnivorans. The bulla is large and somewhat inflated and the sagittal crest well-developed and convex, stretching from the post-orbital constriction to the nuchal crests. As in "Ysengrinia americana" and some temnocyonines, the palate is caudally expanded beyond the molars. The mandible is long and robust, with a strong symphysis; it is dorsoventrally slender between the canine and p4, and buccolingually thick under m1 and m2. In typical amphicyonids, including thaumastocyonines, the depth of the masseteric fossa surpasses the basal level of the postcanine row, while in "Ammitocyon" it does not exceed the depth of the margo alveolaris. A relatively deep rim encloses it completely in its ventral region, separating the fossa from the unique flat region in a premasseteric position.
Another unique feature is the broad concavity in the ventral margin, extending from the genial tuberosity to the mesial base of the ascending ramus. The dental formula of "A. kainos" is formula_0. The upper canine is robust and shows buccolingual compression, while its large and curved lower canines are buccolingually elongated and have broad roots, relative to the height of their crown. The P4 is elongated, possessing a large parastyle and a well-developed lingual root for the protocone. The M1 and M2 meanwhile are reduced, showing a triangular outline in combination with the P4, and the incisor battery is extremely wide, each incisor being well separated from the others, similar to that of barbourofelids. The I3 is considerably larger than the I2, which seems to be larger than the not yet recovered I1. Their most striking feature is the buccolingual width, which is much more developed than in any other carnivoran studied. In comparison to the M1, the P4 is large. The m2 is elongated, possessing a much larger paraconid than that of other thaumastocyonines, and the species lacks an M3. Its scapula is robust and almost square-shaped and possesses a large subscapular fossa and a scapular spine that occupies almost its whole length, while the dorsally developed acromion does not go past the glenoid cavity. The robust humerus possesses a long, wide deltoid tuberosity and a large lateral supracondylar crest, but no entepicondylar foramen. The distal epiphysis is wide. Both its radius and ulna are short and robust, with large muscular attachments along the diaphysis. The manus is robust and short, especially towards the distal segments. Both the carpals and the metapodials are short and robust, with the latter possessing flattened distal epiphyses. The phalanges are extremely robust, with an almost rounded cross-section. The pelvis and femur are both robust but fragmentary, with the latter being relatively short. The robust tibia possesses a wide proximal epiphysis and a short diaphysis, and the triangular cross-section of its distal diaphysis shows relatively oblique, deep, and reinforced articulation sulci for the astragalus. Its pes is wide and short, particularly towards the distal segments, while the robust tarsals have few contact facets. The short calcaneus has solid articulation facets for the astragalus and a concave distal one for the cuboid. Almost no distal neck is present at the astragalus, but it has a convex articular facet for the navicular. The articulation facet between the cuboid and navicular is absent. Articulations gradually become less mobile towards the distal parts of the limb, with some being ankylosed or completely immobile. It possesses short and robust metatarsals, with flattened distal epiphyses, as well as short and extremely robust phalanges, with an almost rounded cross-section and a short and thickened claw. Phylogeny and evolution. "Ammitocyon" belongs to the subfamily Thaumastocyoninae, originally erected by Hürzeler (1940), which is defined by the complete suppression of the m1 metaconid, the reduction of the premolars (except the p4, which is reinforced), and the oblique abrasion of the teeth, and whose members show hypercarnivorous tendencies. Within the subfamily its sister taxon is "Thaumastocyon". Below is the cladogram based on cranial, mandibular, and dental characters, after Morales et al., 2021: Paleobiology. "Ammitocyon" possesses many traits that can be found in other carnivorans, but their association with each other is exclusive to this genus. 
Its chewing apparatus is completely unique, with no equivalent existing among other hypercarnivores. It shows strong dental simplification, and a second carnassial pair is formed by the m1 talonid and the m2 occluding with the M1-M2 buccal wall. The robustness of the canines and incisors combined with a slender postcanine dentition is a common feature in thaumastocyonines, but it is far more developed in "Ammitocyon", with its extremely sectorial dentition, than in related taxa such as "Thaumastocyon" or "Crassidia". The insertion of the m. masseter into the mandible is narrowed, and the mandibular attachment area for the m. masseter pars zygomaticomandibularis posterior, the muscle responsible for the main grinding movements of the jaw, has almost completely disappeared. This, in addition to the development of large temporalis and digastricus muscles, may be a sign that "Ammitocyon" favored slicing over chewing, which is consistent with the shape of its teeth. A biomechanical analysis suggests the mandible was more resistant to mediolateral than to dorsoventral movements. The mesial region constitutes the least resistant part of the mandible, and the symphysis is the only part capable of resisting both dorsoventral and lateral movements. The result is that "Ammitocyon" occupies a morphospace not known in any other amphicyonid, canid, or other carnivoran included in the study, with a mandibular ramus more resistant to mediolateral bending, but less resistant to dorsoventral bending, than that of any of the species it has been compared with. This is in accordance with the large incisors and wide canines, which may have acted as a throat or muzzle clamp and helped to tear off the prey's skin. The large buccolingual resistance to bending suggests an ability to tear off flesh while violently moving the head from side to side, similar to movements observed in pinnipeds. The low resistance to dorsoventral bending, together with the P4 and m1 forming a shearing surface and the M1 and m2 acting as a second cutting area, shows that it was likely unable to feed on hard matter, such as bones, unlike many other amphicyonids. The postcranial skeleton has not been closely described, but it is extremely robust, even exceeding that of bears in many areas, possessing short limbs and metapodials not unlike those seen in "Thylacosmilus". Strengthened articulations in some of its bones prevent lateral movements, especially in the hind limbs, with some features being similar to those of cursorial mammals. Some of its metapodials completely lack flexing ability. The shortness of its phalanges means the animal could not walk in a digitigrade position, as the surface area would have been too small to carry its weight. Nevertheless, reconstructing the taxon as plantigrade also leads to some discrepancies that only further studies can solve. Paleoecology. The paleoenvironment of Batallones-3 has been reconstructed as a mesic C3 woodland with patches of grassland. "Ammitocyon" shared this habitat with several other large carnivorans, each of them surpassing 100 kg in weight: the somewhat smaller bear dog "Magericyon anceps", the ursid "Indarctos arctoides", and the lion-sized saber-toothed cat "Machairodus aphanistus". The carnivoran assemblage was further enriched by the leopard-sized "Promegantereon ogygia", the mustelid "Eomellivora piveteaui", comparable to a brown hyaena in size, as well as the smaller felid "Leptofelis vallesiensis", the hyaena "Protictitherium crassum", the mephitid "Promephitis", and a number of smaller mustelids. 
The most common large herbivore is the equid "Hipparion", although the suid "Microstonyx", the moschid "Micromeryx", the antelope "Austroportax", and another, unidentified bovid, as well as indeterminate rhinoceros remains, have also been found. A variety of rodents and the lagomorph "Prolagus" make up the small mammal fauna, while non-mammals are represented by the sea eagle "Haliaeetus", the monitor lizard "Varanus marathonensis", and the giant tortoise "Titanochelon". Among the four very large carnivorans, all of which likely were solitary hunters, only "Magericyon" differs significantly from the others in regard to prey, as it predominantly hunted in open woodlands and commonly consumed "Austroportax", while "Machairodus", "Indarctos", and "Ammitocyon" most frequently preyed on "Hipparion" in wooded areas. The much smaller "Eomellivora" seems also to have commonly fed on the equid, though it lived in more open areas, and has been reconstructed as feeding on smaller prey, perhaps suggesting it may have taken foals or scavenged. The coexistence of these large predators despite overlapping resources and habitats may have been the result of high biomass availability. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{3.1.2.2.}{3.1.1.2.}" } ]
https://en.wikipedia.org/wiki?curid=72142349
72149189
Richmann's law
Richmann's law, sometimes referred to as Richmann's rule, Richmann's mixing rule, Richmann's rule of mixture or Richmann's law of mixture, is a physical law for calculating the mixing temperature that results when multiple bodies are brought together. It is named after the Baltic German physicist Georg Wilhelm Richmann, who published the relationship in 1750, establishing the first general equation for calorimetric calculations. Origin. Through experimental measurements, Richmann determined that the following relationship holds when water of different temperatures is mixed: formula_0 Solving this for the mixture temperature yields the original form of Richmann's law. Here formula_1 and formula_2 are the masses of the two mixture components, formula_3 and formula_4 are their respective initial temperatures, and formula_5 is the mixture temperature. This observation is called "Richmann's law" in the narrower sense and applies in principle to all substances of the same state of aggregation. According to this, the mixing temperature is the weighted arithmetic mean of the temperatures of the two initial components. Richmann's rule of mixing can also be applied in reverse, for example to the question of the ratio in which quantities of water of given temperatures must be mixed to obtain water of a desired temperature. Determining the quantities formula_1 and formula_2 required for this purpose, given a total quantity formula_6, is accomplished with the mixing cross. The corresponding formulas, obtained from the above equation by rearrangement, are: formula_7 or formula_8. For the mixing ratio, this gives: formula_9. The physical background of the mixing rule is the fact that the heat energy of a substance is directly proportional to its mass and its absolute temperature. The proportionality factor is the specific heat capacity, which depends on the nature of the substance and which was not described until some time after Richmann's discovery, by Joseph Black. Thus, the validity of the formula is limited to mixtures of the same substance, since it assumes a uniform specific heat capacity. Another condition is that both components be uniformly warm throughout and that there be no appreciable heat exchange with their surroundings. If one wants to mix two substances with different but known specific heat capacities, one can formulate the mixing rule more generally, as shown below. General formulation. Under the condition that no change of the state of aggregation occurs and the system is closed, i.e., in particular, there is no heat exchange with the environment, the following holds: formula_10 where formula_11 and formula_12 represent the specific enthalpies of the respective components. If the specific heat capacities formula_13 and formula_14 can be assumed to be constant, this can be transformed to formula_15 Solving this for the mixture temperature gives the general form of Richmann's law. In a wider sense this equation is also referred to as "Richmann's law" because it simply extends Richmann's established relationship to include the specific heat capacity, thus allowing the calculation of the mixing temperature of different substances. If the heat capacities are not constant over the entire temperature range, the above formula can be used with an average heat capacity for component formula_16: formula_17. In this formula, formula_18 with formula_19 or formula_20 represents the specific heat capacity of the respective component, which may be temperature dependent. 
Application of the formula may require an iterative procedure to determine the mixture temperature, since the average heat capacity is also temperature dependent.
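The calculation itself is elementary. The following Python sketch is purely illustrative (it is not part of Richmann's formulation); it evaluates the general form of the rule with constant specific heat capacities and reduces to the original mixing rule when the two capacities are equal.

def mixing_temperature(m1, T1, m2, T2, c1=1.0, c2=1.0):
    """Equilibrium temperature of two bodies with constant specific heat capacities.

    With c1 == c2 this reduces to Richmann's original rule,
    T_m = (m1*T1 + m2*T2) / (m1 + m2).
    """
    return (m1 * c1 * T1 + m2 * c2 * T2) / (m1 * c1 + m2 * c2)

# Example: mixing 2 kg of water at 80 degrees C with 3 kg of water at 20 degrees C
print(mixing_temperature(2.0, 80.0, 3.0, 20.0))  # 44.0

Because the relation is linear in the temperatures, the same function can be inverted by hand to recover the mixing-cross calculation described above.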
[ { "math_id": 0, "text": "\nm_{1} \\cdot T_1 + m_2 \\cdot T_2 = (m_{1} + m_{2})\\cdot T_\\mathrm{m}\n" }, { "math_id": 1, "text": "m_1" }, { "math_id": 2, "text": "m_2" }, { "math_id": 3, "text": "T_1" }, { "math_id": 4, "text": "T_2" }, { "math_id": 5, "text": "T_m" }, { "math_id": 6, "text": "M=m_1+m_2" }, { "math_id": 7, "text": "m_1=M \\frac{T_m-T_2}{T_1-T_2}" }, { "math_id": 8, "text": "m_2=M \\frac{T_m-T_1}{T_2-T_1}" }, { "math_id": 9, "text": "\\frac{m_1}{m_2} = -M \\frac{T_2-T_m}{T_1-T_m}" }, { "math_id": 10, "text": "\\begin{align}\nQ_\\text{dispensed} & = Q_\\text{absorbed}\\\\\nm_{1} \\cdot \\left( h_{1}(T_1) - h_1(T_\\mathrm m) \\right) & = m_{2} \\cdot \\left( h_{2}(T_\\mathrm{m}) - h_{2}(T_{2}) \\right)\\\\\n\\end{align}" }, { "math_id": 11, "text": "h_1(T)" }, { "math_id": 12, "text": "h_2(T)" }, { "math_id": 13, "text": "c_1" }, { "math_id": 14, "text": "c_2" }, { "math_id": 15, "text": "\n\\begin{align}\nm_{1}\\cdot c_{1}\\cdot (T_{1}-T_\\mathrm{m}) & = m_{2}\\cdot c_{2}\\cdot (T_\\mathrm{m}-T_{2})\n\\end{align}\n" }, { "math_id": 16, "text": "i" }, { "math_id": 17, "text": "\\bar{c}_{i} = \\frac{\\int_{T_\\mathrm{m}}^{T_{i}} c_{i}(T) \\, \\mathrm dT }{T_{i} - T_\\mathrm{m}} " }, { "math_id": 18, "text": "c_i(T)" }, { "math_id": 19, "text": "i=1" }, { "math_id": 20, "text": "2" } ]
https://en.wikipedia.org/wiki?curid=72149189
7215216
Enriched Xenon Observatory
Particle physics experiment The Enriched Xenon Observatory (EXO) is a particle physics experiment searching for neutrinoless double beta decay of xenon-136 at WIPP near Carlsbad, New Mexico, U.S. Neutrinoless double beta decay (0νββ) detection would prove the Majorana nature of neutrinos and impact the neutrino mass values and ordering. These are important open topics in particle physics. EXO currently has a 200-kilogram liquid xenon time projection chamber (EXO-200) with R&D efforts on a ton-scale experiment (nEXO). Two-neutrino double beta decay of xenon has been detected, and limits have been set for 0νββ. Overview. EXO measures the rate of neutrinoless decay events above the expected background of similar signals, to find or limit the double beta decay half-life, which is related to the effective neutrino mass through nuclear matrix elements. A limit on the effective neutrino mass below 0.01 eV would determine the neutrino mass ordering: the effective neutrino mass depends on the lightest neutrino mass in such a way that such a bound would indicate the normal mass hierarchy. The expected rate of 0νββ events is very low, so background radiation is a significant problem. WIPP provides a thick rock overburden, equivalent to an even greater depth of water, to screen incoming cosmic rays. Lead shielding and a cryostat also protect the setup. The neutrinoless decays would appear as a narrow spike in the energy spectrum around the xenon Q-value (Qββ = 2457.8 keV), which is fairly high and lies above most gamma decays. EXO-200. History. EXO-200 was designed with a goal of less than 40 events per year within two standard deviations of the expected decay energy. This background was achieved by selecting and screening all materials for radiopurity. Originally the vessel was to be made of Teflon, but the final design uses thin, ultra-pure copper. EXO-200 was relocated from Stanford to WIPP in the summer of 2007. Assembly and commissioning continued until the end of 2009, with data taking beginning in May 2011. Calibration was done using 228Th, 137Cs, and 60Co gamma sources. Design. The EXO-200 prototype uses a cylindrical copper time projection chamber filled with pure liquid xenon. Xenon is a scintillator, so decay particles produce prompt light which is detected by avalanche photodiodes, providing the event time. A large electric field drives ionization electrons to wires for collection. The time between the light signal and the first charge collection determines the z coordinate of the event, while a grid of wires determines the radial and angular coordinates. Results. The background from Earth radioactivity (Th/U) and 137Xe contamination led to ≈2×10−3 counts/(keV·kg·yr) in the detector. An energy resolution of 1.53% near Qββ was achieved. In August 2011, EXO-200 was the first experiment to observe double beta decay of 136Xe, with a half-life of 2.11×1021 years. This is the slowest directly observed process. An improved half-life of 2.165 ±0.016(stat) ±0.059(sys) × 1021 years was published in 2014. In 2012, EXO set a lower limit on the neutrinoless double beta decay half-life of 1.6×1025 years. A revised analysis of run 2 data with 100 kg·yr exposure, reported in the June issue of "Nature", reduced the limit on the half-life to 1.1×1025 yr and on the mass to 450 meV. This was used to confirm the power of the design and validate the proposed expansion, and data taking continued for an additional two years. EXO-200 performed two scientific runs, Phase I (2011-2014) and, after upgrades, Phase II (2016-2018), for a total exposure of 234.1 kg·yr. 
No evidence of neutrinoless double beta decay has been found in the combined Phase I and II data, giving a lower bound of formula_0 years for the half-life and an upper bound of 239 meV on the effective neutrino mass. Phase II was the final operation of EXO-200. nEXO. A ton-scale experiment, nEXO ("next EXO"), must overcome many backgrounds. The EXO collaboration is exploring many possibilities to do so, including barium tagging in liquid xenon. Any double beta decay event will leave behind a daughter barium ion, while backgrounds, such as radioactive impurities or neutrons, will not. Requiring a barium ion at the location of an event would therefore eliminate these backgrounds. Tagging of a single barium ion has been demonstrated, and progress has been made on a method for extracting ions out of the liquid xenon. A freezing probe method has been demonstrated, and gaseous tagging is also being developed. The 2014 EXO-200 paper indicated that a 5000 kg TPC could improve the background through xenon self-shielding and better electronics. The diameter would be increased to 130 cm, much larger than the attenuation length for gamma rays, and a water tank would be added as shielding and a muon veto. Radiopure copper for nEXO has been completed. The experiment is planned for installation in the SNOLAB "Cryopit". An October 2017 paper details the experiment and discusses the sensitivity and discovery potential of nEXO for neutrinoless double beta decay. Details on the ionization readout of the TPC have also been published. The pre-Conceptual Design Report (pCDR) for nEXO was published in 2018. The planned location is SNOLAB, Canada. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
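As an illustration of the position reconstruction described in the Design section above, the depth of an event in a liquid-xenon time projection chamber follows from the delay between the prompt scintillation light and the arrival of the drifted ionization charge. The sketch below is purely illustrative; the drift velocity is an assumed round number, not an EXO-200 specification.

DRIFT_VELOCITY_MM_PER_US = 1.7  # assumed electron drift velocity in liquid xenon, mm per microsecond

def event_depth_mm(t_light_us, t_charge_us):
    """Depth of the event from the charge-collection wires, given the drift time."""
    return DRIFT_VELOCITY_MM_PER_US * (t_charge_us - t_light_us)

# Example: a 60 microsecond delay between scintillation and charge collection
print(event_depth_mm(0.0, 60.0))  # 102.0 (millimetres, under the assumed drift velocity)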
[ { "math_id": 0, "text": "3.5 \\cdot 10^{25}" } ]
https://en.wikipedia.org/wiki?curid=7215216
72163862
Kähler identities
In complex geometry, the Kähler identities are a collection of identities between operators on a Kähler manifold relating the Dolbeault operators and their adjoints, contraction and wedge operators of the Kähler form, and the Laplacians of the Kähler metric. The Kähler identities combine with results of Hodge theory to produce a number of relations on de Rham and Dolbeault cohomology of compact Kähler manifolds, such as the Lefschetz hyperplane theorem, the hard Lefschetz theorem, the Hodge-Riemann bilinear relations, and the Hodge index theorem. They are also, again combined with Hodge theory, important in proving fundamental analytical results on Kähler manifolds, such as the formula_0-lemma, the Nakano inequalities, and the Kodaira vanishing theorem. History. The Kähler identities were first proven by W. V. D. Hodge, appearing in his book on harmonic integrals in 1941. The modern notation of formula_1 was introduced by André Weil in the first textbook on Kähler geometry, "Introduction à L’Étude des Variétés Kähleriennes."42 The operators. A Kähler manifold formula_2 admits a large number of operators on its algebra of complex differential forms formula_3 built out of the smooth structure (S), complex structure (C), and Riemannian structure (R) of formula_4. The construction of these operators is standard in the literature on complex differential geometry; each operator requires only some of these structures for its definition. Differential operators. The following operators are differential operators and arise out of the smooth and complex structure of formula_4: the exterior derivative formula_5, the Dolbeault operator formula_6 given by the formula_7-part of the exterior derivative, and the conjugate Dolbeault operator formula_8 given by its formula_9-part. The Dolbeault operators are related directly to the exterior derivative by the formula formula_10. The characteristic property of the exterior derivative that formula_11 then implies formula_12 and formula_13. Some sources make use of the following operator to phrase the Kähler identities: formula_14. This operator is useful as the Kähler identities for formula_16 can be deduced from the more succinctly phrased identities of formula_15 by comparing bidegrees. It is also useful for the property that formula_17. It can be defined in terms of the complex structure operator formula_18 by the formula formula_19 Tensorial operators. The following operators are tensorial in nature, that is, they are operators which only depend on the value of the complex differential form at a point. In particular they can each be defined as operators between vector spaces of forms formula_20 at each point formula_21 individually: the conjugation operator formula_22, the Lefschetz operator formula_23 defined by formula_24, and the Hodge star operator formula_26. The direct sum decomposition of the complex differential forms into those of bidegree (p,q) manifests a number of projection operators: the projections formula_27 and formula_28 onto forms of a fixed degree and bidegree respectively, the counting operator formula_29, and the operator formula_30. Notice the last operator is the extension of the almost complex structure formula_18 of the Kähler manifold to higher degree complex differential forms, where one recalls that formula_32 for a formula_7-form and formula_33 for a formula_9-form, so formula_18 acts with factor formula_34 on a formula_35-form. Adjoints. The Riemannian metric on formula_4, as well as its natural orientation arising from the complex structure, can be used to define formal adjoints of the above differential and tensorial operators. These adjoints may be defined either through integration by parts or by explicit formulas using the Hodge star operator formula_36. 
To define the adjoints by integration, note that the Riemannian metric on formula_4 defines an formula_37-inner product on formula_38 according to the formula formula_39 where formula_40 is the inner product on the exterior products of the cotangent space of formula_4 induced by the Riemannian metric. Using this formula_37-inner product, formal adjoints of any of the above operators (denoted by formula_41) can be defined by the formula formula_42 When the Kähler manifold is non-compact, the formula_37-inner product makes formal sense provided at least one of formula_43 is compactly supported. In particular one obtains the following formal adjoint operators of the above differential and tensorial operators, together with explicit formulae for these adjoints in terms of the Hodge star operator formula_36: the adjoint formula_44 of the exterior derivative, given by formula_45; the adjoint formula_46 of the Dolbeault operator, given by formula_47; the adjoint formula_48 of the conjugate Dolbeault operator, given by formula_49; the adjoint formula_50 of formula_15, given by formula_51; and the adjoint formula_52 of the Lefschetz operator, given by formula_53. The last operator, the adjoint of the Lefschetz operator, is known as the "contraction operator" with the Kähler form formula_25, and is commonly denoted by formula_1. Laplacians. Built out of the operators and their formal adjoints are a number of Laplace operators corresponding to formula_54 and formula_55: namely formula_56, formula_57, and formula_58. Each of the above Laplacians is a self-adjoint operator. Real and complex operators. Even though the complex structure (C) is necessary to define the operators above, they may nevertheless be applied to real differential forms formula_59. When the resulting form also has real coefficients, the operator is said to be a "real operator". This can be further characterised in two ways: if the complex conjugate of the operator is the operator itself, or if the operator commutes with the almost-complex structure formula_18 acting on complex differential forms. The composition of two real operators is real. The complex conjugates of the above operators are as follows: formula_60 and formula_61; formula_62 and formula_63, and correspondingly formula_64 and formula_65 are conjugate to each other; formula_66 and formula_67; formula_68, formula_69, formula_70, and formula_71; and formula_72, formula_73, and formula_74. Thus formula_75 are all real operators. Moreover, in the Kähler case, formula_76 and formula_77 are real. In particular if any of these operators is denoted by formula_41, then formula_78, where formula_18 is the complex structure operator above. The identities. The Kähler identities are a list of commutator relationships between the above operators. Explicitly we write formula_79 for the commutator of two of the above operators acting on formula_80. The Kähler identities are essentially local identities on the Kähler manifold, and hold even in the non-compact case. Indeed they can be proven in the model case of a Kähler metric on formula_81 and transferred to any Kähler manifold using the key property that the Kähler condition formula_82 implies that the Kähler metric takes the standard form up to second order. Since the Kähler identities are first order identities in the Kähler metric, the corresponding commutator relations on formula_81 imply the Kähler identities locally on any Kähler manifold.Ch 0 §7 When the Kähler manifold is compact the identities can be combined with Hodge theory to conclude many results about the cohomology of the manifold. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Kähler identities§3.1§5.1Ch V §4 — Let formula_2 be a Kähler manifold. Then the following identities hold: formula_83, formula_84, formula_85, and formula_86; formula_87, formula_88, formula_89, and formula_90; and formula_91. Moreover the Laplacian formula_92 commutes with all of the operators formula_93 formula_1 and with the projections formula_94. Furthermore the operators formula_96 and formula_15 satisfy the identities formula_97, formula_98, formula_99, and formula_100. The above Kähler identities can be upgraded in the case where the differential operators formula_101 are paired with a Chern connection on a holomorphic vector bundle formula_102. 
If formula_103 is a Hermitian metric on formula_104 and formula_105 is a Dolbeault operator defining the holomorphic structure of formula_104, then the unique compatible Chern connection formula_106 and its formula_7-part formula_107 satisfy formula_108. Denote the curvature form of the Chern connection by formula_109. The formal adjoints may be defined similarly to above using an formula_37-inner product where the Hermitian metric is combined with the inner product on forms. In this case all the Kähler identities, sometimes called the "Nakano identities",Lem 5.2.3 hold without change, except for the following: formula_110, formula_111, formula_112, and formula_113.Ch VII §1§5.1 In particular note that when the Chern connection associated to formula_114 is a flat connection, so that the curvature formula_115, one still obtains the relationship that formula_116. Primitive cohomology and representation of sl(2,C). In addition to the commutation relations contained in the Kähler identities, some of the above operators satisfy other interesting commutation relations. In particular recall the Lefschetz operator formula_117, the contraction operator formula_1, and the counting operator formula_118 above. Then one can show the following commutation relations: formula_119, formula_120, and formula_121.Prop 1.2.26 Comparing with the Lie algebra formula_122, one sees that formula_123 form an sl2-triple, and therefore the algebra formula_31 of complex differential forms on a Kähler manifold becomes a representation of formula_122. The Kähler identities imply the operators formula_124 all commute with formula_95 and therefore preserve the harmonic forms inside formula_31. In particular when the Kähler manifold is compact, by applying the Hodge decomposition the triple of operators formula_123 descends to give an sl2-triple on the de Rham cohomology of X. In the language of representation theory of formula_122, the operator formula_117 is the "raising operator" and formula_1 is the "lowering operator". When formula_4 is compact, it is a consequence of Hodge theory that the cohomology groups formula_125 are finite-dimensional. Therefore the cohomology formula_126 admits a direct sum decomposition into irreducible finite-dimensional representations of formula_122.Ch V §3 Any such irreducible representation comes with a "primitive element", which is an element formula_127 such that formula_128. The primitive cohomology of formula_4 is given by formula_129 The primitive cohomology also admits a direct sum splitting formula_130 Hard Lefschetz decomposition. The representation theory of formula_122 completely describes an irreducible representation in terms of its primitive element. If formula_131 is a non-zero primitive element, then, since differential forms vanish above dimension formula_132, the chain formula_133 eventually terminates after finitely many powers of formula_117. This defines a finite-dimensional vector space formula_134 which has an formula_122-action induced from the triple formula_123. This is the irreducible representation corresponding to formula_127. Applying this simultaneously to each primitive cohomology group, the splitting of cohomology formula_135 into its irreducible representations becomes known as the hard Lefschetz decomposition of the compact Kähler manifold. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Hard Lefschetz decompositionThm 5.27Prop 3.3.13Ch VI Thm 8.17 — Let formula_2 be a compact Kähler manifold. 
Then the de Rham cohomology of formula_4 admits an orthogonal direct sum decomposition formula_136 This decomposition is compatible with the Hodge decomposition into Dolbeault cohomology groups: formula_137 In addition: if formula_138 then formula_139; the map formula_140 is injective for formula_141, as is formula_142 for formula_143; the maps formula_144 and formula_145 are isomorphisms, again for formula_141; and the primitive cohomology may be characterised as formula_146 and formula_147. By the Kähler identities paired with a holomorphic vector bundle, in the case where the holomorphic bundle is flat, the Hodge decomposition extends to the twisted de Rham cohomology groups formula_148 and the Dolbeault cohomology groups formula_149. The triple formula_123 still acts as an sl2-triple on the bundle-valued cohomology, and a version of the hard Lefschetz decomposition holds in this case.Thm 5.31 Nakano inequalities. The Nakano inequalities are a pair of inequalities associated to inner products of harmonic differential forms with the curvature of a Chern connection on a holomorphic vector bundle over a compact Kähler manifold. In particular let formula_150 be a Hermitian holomorphic vector bundle over a compact Kähler manifold formula_151, and let formula_152 denote the curvature of the associated Chern connection. The Nakano inequalities state that if formula_153 is harmonic, that is, formula_154, then formula_155 and formula_156.Ch VI Prop 2.5 These inequalities may be proven by applying the Kähler identities coupled to a holomorphic vector bundle as described above. In the case where formula_157 is an ample line bundle, the Chern curvature formula_158 is itself a Kähler metric on formula_4. Applying the Nakano inequalities in this case proves the Kodaira–Nakano vanishing theorem for compact Kähler manifolds. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
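For ease of reference, the basic first-order commutator identities invoked in the theorem above, between the Dolbeault operators, their adjoints, and the Lefschetz and contraction operators, can be written out explicitly as follows; this is only a restatement of the formulas already listed for this article, in the conventions used here.

\begin{aligned}
&[\Lambda, \bar\partial] = -i\,\partial^*, \qquad [\Lambda, \partial] = i\,\bar\partial^*, \qquad [\bar\partial^*, L] = i\,\partial, \qquad [\partial^*, L] = -i\,\bar\partial, \\
&[\partial, L] = [\bar\partial, L] = 0, \qquad [\partial^*, \Lambda] = [\bar\partial^*, \Lambda] = 0, \qquad \Delta_d = 2\Delta_\partial = 2\Delta_{\bar\partial}.
\end{aligned}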
[ { "math_id": 0, "text": "\\partial \\bar \\partial" }, { "math_id": 1, "text": "\\Lambda" }, { "math_id": 2, "text": "(X,\\omega,J)" }, { "math_id": 3, "text": "\\Omega(X) := \\bigoplus_{k \\ge 0} \\Omega^{k}(X,\\mathbb{C}) = \\bigoplus_{p,q\\ge 0} \\Omega^{p,q}(X)" }, { "math_id": 4, "text": "X" }, { "math_id": 5, "text": "d:\\Omega^k(X,\\mathbb{C}) \\to \\Omega^{k+1}(X,\\mathbb{C})" }, { "math_id": 6, "text": "\\partial:\\Omega^{p,q}(X) \\to \\Omega^{p+1,q}(X)" }, { "math_id": 7, "text": "(1,0)" }, { "math_id": 8, "text": "\\bar \\partial:\\Omega^{p,q}(X) \\to \\Omega^{p,q+1}(X)" }, { "math_id": 9, "text": "(0,1)" }, { "math_id": 10, "text": "d=\\partial + \\bar \\partial" }, { "math_id": 11, "text": "d^2 = 0" }, { "math_id": 12, "text": "\\partial^2 = \\bar \\partial^2 = 0" }, { "math_id": 13, "text": "\\partial \\bar \\partial = - \\bar \\partial \\partial" }, { "math_id": 14, "text": "d^c = -\\frac{i}{2} (\\partial - \\bar \\partial): \\Omega^{p,q}(X) \\to \\Omega^{p+1,q}(X) \\oplus \\Omega^{p,q+1}(X)" }, { "math_id": 15, "text": "d^c" }, { "math_id": 16, "text": "\\partial, \\bar \\partial" }, { "math_id": 17, "text": "dd^c = i \\partial \\bar \\partial" }, { "math_id": 18, "text": "J" }, { "math_id": 19, "text": "d^c = J^{-1} \\circ d \\circ J ." }, { "math_id": 20, "text": "\\Lambda^{p,q}_x := \\Lambda^p T_{1,0}^* X_x \\otimes \\Lambda^q T_{0,1}^* X_x" }, { "math_id": 21, "text": "x\\in X" }, { "math_id": 22, "text": "\\bar \\cdot: \\Omega^{p,q}(X) \\to \\Omega^{q,p}(X)" }, { "math_id": 23, "text": "L: \\Omega^{p,q}(X) \\to \\Omega^{p+1,q+1}(X)" }, { "math_id": 24, "text": "L(\\alpha) := \\omega \\wedge \\alpha" }, { "math_id": 25, "text": "\\omega" }, { "math_id": 26, "text": "\\star: \\Omega^{p,q}(X) \\to \\Omega^{n-q,n-p}(X)" }, { "math_id": 27, "text": "\\Pi_k: \\Omega(X) \\to \\Omega^k(X,\\mathbb{C})" }, { "math_id": 28, "text": "\\Pi_{p,q}: \\Omega^k(X,\\mathbb{C}) \\to \\Omega^{p,q}(X)" }, { "math_id": 29, "text": "\\Pi = \\sum_{k=0}^{2n} (k-n) \\Pi_k: \\Omega(X) \\to \\Omega(X)" }, { "math_id": 30, "text": "J = \\sum_{p,q=0}^n i^{p-q} \\Pi_{p,q}" }, { "math_id": 31, "text": "\\Omega(X)" }, { "math_id": 32, "text": "J(\\alpha) = i\\alpha" }, { "math_id": 33, "text": "J(\\alpha) = -i\\alpha" }, { "math_id": 34, "text": "i^{p-q}" }, { "math_id": 35, "text": "(p,q)" }, { "math_id": 36, "text": "\\star" }, { "math_id": 37, "text": "L^2" }, { "math_id": 38, "text": "\\Omega^{p,q}(X)" }, { "math_id": 39, "text": " \\langle \\langle \\alpha,\\beta \\rangle \\rangle_{L^2} = \\int_X \\langle \\alpha, \\beta \\rangle \\frac{\\omega^n}{n!}" }, { "math_id": 40, "text": "\\langle \\alpha, \\beta\\rangle" }, { "math_id": 41, "text": "T" }, { "math_id": 42, "text": "\\langle \\langle T\\alpha, \\beta\\rangle \\rangle_{L^2} = \\langle \\langle \\alpha, T^* \\beta\\rangle \\rangle_{L^2}." 
}, { "math_id": 43, "text": "\\alpha, \\beta" }, { "math_id": 44, "text": "d^*: \\Omega^k(X,\\mathbb{C}) \\to \\Omega^{k-1}(X,\\mathbb{C})" }, { "math_id": 45, "text": "d^* = -\\star \\circ d \\circ \\star " }, { "math_id": 46, "text": "\\partial^*: \\Omega^{p,q}(X) \\to \\Omega^{p-1,q}(X)" }, { "math_id": 47, "text": "\\partial^* = - \\star \\circ \\bar \\partial \\circ \\star " }, { "math_id": 48, "text": "\\bar \\partial^*: \\Omega^{p,q}(X) \\to \\Omega^{p,q-1}(X)" }, { "math_id": 49, "text": "\\bar \\partial^* = - \\star \\circ \\partial \\circ \\star" }, { "math_id": 50, "text": "{d^c}^*: \\Omega^{k}(X,\\mathbb{C}) \\to \\Omega^{k+1}(X,\\mathbb{C})" }, { "math_id": 51, "text": "{d^c}^* = - \\star \\circ d^c \\circ \\star" }, { "math_id": 52, "text": "L^* = \\Lambda: \\Omega^{p,q}(X) \\to \\Omega^{p-1,q-1}(X)" }, { "math_id": 53, "text": "\\Lambda = \\star^{-1} \\circ L \\circ \\star" }, { "math_id": 54, "text": "d,\\partial" }, { "math_id": 55, "text": "\\bar \\partial" }, { "math_id": 56, "text": "\\Delta_d:= dd^* + d^* d: \\Omega^k(X,\\mathbb{C}) \\to \\Omega^k(X,\\mathbb{C})" }, { "math_id": 57, "text": "\\Delta_\\partial:= \\partial \\partial^* + \\partial^* \\partial: \\Omega^{p,q}(X) \\to \\Omega^{p,q}(X)" }, { "math_id": 58, "text": "\\Delta_\\bar \\partial:= \\bar \\partial \\bar \\partial^* + \\bar \\partial^* \\bar \\partial: \\Omega^{p,q}(X) \\to \\Omega^{p,q}(X)" }, { "math_id": 59, "text": "\\alpha \\in \\Omega^k(X,\\mathbb{R}) \\subset \\Omega^k(X,\\mathbb{C})" }, { "math_id": 60, "text": " \\bar d = d" }, { "math_id": 61, "text": "\\overline{d^*} = d^*" }, { "math_id": 62, "text": "\\overline{(\\partial)} = \\bar \\partial" }, { "math_id": 63, "text": "\\overline{(\\bar \\partial)} = \\partial" }, { "math_id": 64, "text": "\\partial^*" }, { "math_id": 65, "text": "\\bar \\partial^*" }, { "math_id": 66, "text": "\\overline{d^c} = d^c" }, { "math_id": 67, "text": "\\overline{{d^c}^*} = {d^c}^*" }, { "math_id": 68, "text": "\\bar \\star = \\star" }, { "math_id": 69, "text": "\\bar J = J" }, { "math_id": 70, "text": "\\bar L = L" }, { "math_id": 71, "text": "\\bar \\Lambda = \\Lambda" }, { "math_id": 72, "text": "\\bar \\Delta_d = \\Delta_d" }, { "math_id": 73, "text": "\\bar \\Delta_\\partial = \\Delta_\\bar \\partial" }, { "math_id": 74, "text": "\\bar \\Delta_\\bar \\partial = \\Delta_\\partial" }, { "math_id": 75, "text": "d,d^*, d^c, {d^c}^*, \\star, L, \\Lambda, \\Delta_d" }, { "math_id": 76, "text": " \\Delta_\\partial" }, { "math_id": 77, "text": " \\Delta_\\bar \\partial" }, { "math_id": 78, "text": "[T,J]=0" }, { "math_id": 79, "text": "[T,S] = T\\circ S - S \\circ T" }, { "math_id": 80, "text": "\\Omega(X) = \\Omega^{\\bullet}(X,\\mathbb{C})" }, { "math_id": 81, "text": "\\mathbb{C}^n" }, { "math_id": 82, "text": "d\\omega = 0" }, { "math_id": 83, "text": "[\\bar \\partial, L] = 0" }, { "math_id": 84, "text": "[\\partial, L]=0" }, { "math_id": 85, "text": " [\\bar \\partial^*, \\Lambda] = 0" }, { "math_id": 86, "text": "[\\partial^*, \\Lambda] = 0" }, { "math_id": 87, "text": "[\\bar \\partial^*, L] = i \\partial" }, { "math_id": 88, "text": "[\\partial^*, L] = - i \\bar \\partial" }, { "math_id": 89, "text": "[\\Lambda, \\bar \\partial] = -i\\partial^*" }, { "math_id": 90, "text": "[\\Lambda, \\partial] = i \\bar \\partial^*" }, { "math_id": 91, "text": " \\Delta_d = 2 \\Delta_{\\partial} = 2 \\Delta_{\\bar \\partial}" }, { "math_id": 92, "text": "\\Delta = \\Delta_d" }, { "math_id": 93, "text": "\\star, \\partial, \\bar \\partial, \\partial^*, \\bar 
\\partial^*, L," }, { "math_id": 94, "text": "\\Pi^{p,q}" }, { "math_id": 95, "text": "\\Delta_d" }, { "math_id": 96, "text": "d" }, { "math_id": 97, "text": "[\\Lambda, d] = -2{d^c}^*" }, { "math_id": 98, "text": "[L, d] = 0" }, { "math_id": 99, "text": "[\\Lambda,d^c] = 0" }, { "math_id": 100, "text": "[L, d^*] = 2d^c" }, { "math_id": 101, "text": "d, \\partial, \\bar \\partial" }, { "math_id": 102, "text": "E \\to X" }, { "math_id": 103, "text": "h" }, { "math_id": 104, "text": "E" }, { "math_id": 105, "text": "\\bar \\partial_E" }, { "math_id": 106, "text": "D_E" }, { "math_id": 107, "text": "\\partial_E" }, { "math_id": 108, "text": "D_E = \\partial_E + \\bar\\partial_E" }, { "math_id": 109, "text": "F" }, { "math_id": 110, "text": "[L, \\Delta_{\\bar \\partial_E}] = - i F \\wedge -" }, { "math_id": 111, "text": "[L, \\Delta_{\\partial_E}] = i F \\wedge -" }, { "math_id": 112, "text": "\\Delta_{\\bar \\partial_E} + \\Delta_{\\partial_E} = \\Delta_{D_E}" }, { "math_id": 113, "text": "\\Delta_{\\bar \\partial_E} - \\Delta_{\\partial_E} = [iF\\wedge -, \\Lambda]" }, { "math_id": 114, "text": "(h,\\bar \\partial_E)" }, { "math_id": 115, "text": "F=0" }, { "math_id": 116, "text": "\\Delta_{D_E} = 2 \\Delta_{\\partial_E} = 2 \\Delta_{\\bar \\partial_E}" }, { "math_id": 117, "text": "L" }, { "math_id": 118, "text": "\\Pi" }, { "math_id": 119, "text": "[\\Pi, L] = 2L" }, { "math_id": 120, "text": "[\\Pi, \\Lambda] = -2\\Lambda" }, { "math_id": 121, "text": "[L, \\Lambda] = \\Pi" }, { "math_id": 122, "text": "\\mathfrak{sl}(2,\\mathbb{C})" }, { "math_id": 123, "text": "\\{\\Pi, L, \\Lambda\\}" }, { "math_id": 124, "text": "\\Pi, L, \\Lambda" }, { "math_id": 125, "text": "H^i(X,\\mathbb{C})" }, { "math_id": 126, "text": "H(X) = \\bigoplus_{k=0}^{2n} H^i(X,\\mathbb{C}) = \\bigoplus_{p,q\\ge 0} H^{p,q}(X)" }, { "math_id": 127, "text": "\\alpha" }, { "math_id": 128, "text": "\\Lambda \\alpha = 0" }, { "math_id": 129, "text": "P^k(X,\\mathbb{C}) = \\{ \\alpha \\in H^k(X,\\mathbb{C}) \\mid \\Lambda \\alpha = 0\\}, \\quad P^{p,q}(X) = P^k(X,\\mathbb{C}) \\cap H^{p,q}(X)." }, { "math_id": 130, "text": "P^k(X,\\mathbb{C}) = \\bigoplus_{p+q=k} P^{p,q}(X)." }, { "math_id": 131, "text": "\\alpha\\in P^k(X,\\mathbb{C})" }, { "math_id": 132, "text": "2n" }, { "math_id": 133, "text": "\\alpha, L(\\alpha), L^2(\\alpha), \\dots" }, { "math_id": 134, "text": "V(\\alpha) = \\operatorname{span} \\langle\\alpha, L(\\alpha), L^2(\\alpha), \\dots \\rangle" }, { "math_id": 135, "text": "H(X)" }, { "math_id": 136, "text": " H^k(X,\\mathbb{C}) = \\bigoplus_{i\\ge 0} L^i (P^{k-2i}(X,\\mathbb{C}))." }, { "math_id": 137, "text": " H^{p,q}(X) = \\bigoplus_{i\\ge 0} L^i(P^{p-i,q-i}(X))." 
}, { "math_id": 138, "text": "k>n" }, { "math_id": 139, "text": "P^k(X,\\mathbb{C}) = 0" }, { "math_id": 140, "text": "L^{n-k}: P^k(X,\\mathbb{C}) \\to H^{2n-k}(X,\\mathbb{C})" }, { "math_id": 141, "text": "k\\le n" }, { "math_id": 142, "text": "L^{n-k}: P^{p,q}(X) \\to H^{p+n-k,q+n-k}(X,\\mathbb{C})" }, { "math_id": 143, "text": "p+q=k" }, { "math_id": 144, "text": "L^{n-k}: H^k(X,\\mathbb{C}) \\to H^{2n-k}(X,\\mathbb{C})" }, { "math_id": 145, "text": "L^{n-k}: H^{p,q}(X,\\mathbb{C}) \\to H^{p+n-k,q+n-k}(X,\\mathbb{C})" }, { "math_id": 146, "text": "P^k(X,\\mathbb{C}) = \\{ \\alpha \\in H^k(X,\\mathbb{C}) \\mid L^{n-k+1}\\alpha = 0 \\}" }, { "math_id": 147, "text": "P^{p,q}(X) = \\{ \\alpha \\in H^{p,q}(X) \\mid L^{n-p-q+1} \\alpha = 0\\}" }, { "math_id": 148, "text": "H_{dR}^k(X,E)" }, { "math_id": 149, "text": "H^{p,q}(X,E)" }, { "math_id": 150, "text": "(E,h)" }, { "math_id": 151, "text": "(X,\\omega)" }, { "math_id": 152, "text": "F(h)" }, { "math_id": 153, "text": "\\alpha \\in \\Omega^{p,q}(X)" }, { "math_id": 154, "text": "\\Delta_{\\bar \\partial} \\alpha = 0" }, { "math_id": 155, "text": "i\\langle \\langle F(h) \\wedge \\Lambda(\\alpha), \\alpha \\rangle \\rangle_{L^2} \\le 0" }, { "math_id": 156, "text": "i\\langle \\langle \\Lambda(F(h)\\wedge \\alpha), \\alpha \\rangle \\rangle_{L^2} \\ge 0" }, { "math_id": 157, "text": "E=L" }, { "math_id": 158, "text": "iF(h)" } ]
https://en.wikipedia.org/wiki?curid=72163862
7216822
Contact order
The contact order of a protein is a measure of the locality of the inter-amino acid contacts in the protein's native state tertiary structure. It is calculated as the average sequence distance between residues that form native contacts in the folded protein divided by the total length of the protein. Higher contact orders indicate longer folding times, and low contact order has been suggested as a predictor of potential downhill folding, or protein folding that occurs without a free energy barrier. This effect is thought to be due to the lower loss of conformational entropy associated with the formation of local as opposed to nonlocal contacts. Relative contact order (CO) is formally defined as: formula_0 where "N" is the total number of contacts, Δ"Si,j" is the sequence separation, in residues, between contacting residues "i" and "j", and "L" is the total number of residues in the protein. The value of contact order typically ranges from 5% to 25% for single-domain proteins, with lower contact order belonging to mainly helical proteins, and higher contact order belonging to proteins with a high beta-sheet content. Protein structure prediction methods are more accurate in predicting the structures of proteins with low contact orders. This may be partly because low contact order proteins tend to be small, but is likely to be explained by the smaller number of possible long-range residue-residue interactions to be considered during global optimization procedures that minimize an energy function. Even successful structure prediction methods such as the Rosetta method overproduce low-contact-order structure predictions compared to the distributions observed in experimentally determined protein structures. The percentage of the natively folded contact order can also be used as a measure of the "nativeness" of folding transition states. Phi value analysis in concert with molecular dynamics has produced transition-state models whose contact order is close to that of the folded state in proteins that are small and fast-folding. Further, contact orders in transition states as well as those in native states are highly correlated with overall folding time. In addition to their role in structure prediction, contact orders can themselves be predicted based on a sequence alignment, which can be useful in classifying the fold of a novel sequence with some degree of homology to known sequences.
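The definition translates directly into a short computation. The following Python sketch is illustrative only; the toy contact list is invented, and in practice the contacts would be extracted from the native structure, for example as residue pairs whose atoms lie within a chosen distance cutoff.

def relative_contact_order(num_residues, contacts):
    """Relative CO = (1 / (L * N)) * sum of |i - j| over the native contacts (i, j)."""
    separations = [abs(i - j) for i, j in contacts]
    return sum(separations) / (num_residues * len(separations))

# Toy example: a 10-residue chain with four native contacts (not a real protein)
print(relative_contact_order(10, [(1, 4), (2, 9), (3, 7), (5, 10)]))  # 0.475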
[ { "math_id": 0, "text": "CO={1 \\over {L\\cdot N}}\\sum^{N}\\Delta S_{i,j}" } ]
https://en.wikipedia.org/wiki?curid=7216822
7217102
Harmonic coordinates
In Riemannian geometry, a branch of mathematics, harmonic coordinates are a certain kind of coordinate chart on a smooth manifold, determined by a Riemannian metric on the manifold. They are useful in many problems of geometric analysis due to their regularity properties. In two dimensions, certain harmonic coordinates known as isothermal coordinates have been studied since the early 1800s. Harmonic coordinates in higher dimensions were developed initially in the context of Lorentzian geometry and general relativity by Albert Einstein and Cornelius Lanczos (see harmonic coordinate condition). Following the work of Dennis DeTurck and Jerry Kazdan in 1981, they began to play a significant role in the geometric analysis literature, although Idzhad Sabitov and S.Z. Šefel had made the same discovery five years earlier. Definition. Let ("M", "g") be a Riemannian manifold of dimension n. One says that a coordinate chart ("x"1, ..., "x""n"), defined on an open subset U of M, is harmonic if each individual coordinate function "x""i" is a harmonic function on U. That is, one requires that formula_0 where ∆"g" is the Laplace–Beltrami operator. Trivially, the coordinate system is harmonic if and only if, as a map "U" → ℝ"n", the coordinates are a harmonic map. A direct computation with the local definition of the Laplace-Beltrami operator shows that ("x"1, ..., "x""n") is a harmonic coordinate chart if and only if formula_1 in which Γ are the Christoffel symbols of the given chart. Relative to a fixed "background" coordinate chart ("V", "y"), one can view ("x"1, ..., "x""n") as a collection of functions "x" ∘ "y"−1 on an open subset of Euclidean space. The metric tensor relative to x is obtained from the metric tensor relative to y by a local calculation having to do with the first derivatives of "x" ∘ "y"−1, and hence the Christoffel symbols relative to x are calculated from second derivatives of "x" ∘ "y"−1. So both definitions of harmonic coordinates, as given above, have the qualitative character of having to do with second-order partial differential equations for the coordinate functions. Existence and basic theory. Harmonic coordinates always exist (locally), a result which follows easily from standard results on the existence and regularity of solutions of elliptic partial differential equations. In particular, the equation ∆"g""u""j" = 0 has a solution in some open set around any given point p, such that "u"("p") and "du""p" are both prescribed. The basic regularity theorem concerning the metric in harmonic coordinates is that if the components of the metric are in the Hölder space "C""k", α when expressed in some coordinate chart, regardless of the smoothness of the chart itself, then the transition function from that coordinate chart to any harmonic coordinate chart will be in the Hölder space "C""k" + 1, α. In particular this implies that the metric will also be in "C""k", α relative to harmonic coordinate charts. As was first discovered by Cornelius Lanczos in 1922, relative to a harmonic coordinate chart, the Ricci curvature is given by formula_3 The fundamental aspect of this formula is that, for any fixed i and j, the first term on the right-hand side is an elliptic operator applied to the locally defined function "g""ij". 
So it is automatic from elliptic regularity, and in particular the Schauder estimates, that if g is "C"2 and Ric(g) is "C""k", α relative to a harmonic coordinate chart, then g is "C""k" + 2, α relative to the same chart. More generally, if g is "C""k", α (with k larger than one) and Ric(g) is "C""l", α relative to some coordinate charts, then the transition function to a harmonic coordinate chart will be "C""k" + 1, α, and so Ric(g) will be "C"min("l", "k"), α in harmonic coordinate charts. So, by the previous result, g will be "C"min("l", "k") + 2, α in harmonic coordinate charts. As a further application of Lanczos' formula, it follows that an Einstein metric is analytic in harmonic coordinates. In particular, this shows that any Einstein metric on a smooth manifold automatically determines an analytic structure on the manifold, given by the collection of harmonic coordinate charts. Due to the above analysis, in discussing harmonic coordinates it is standard to consider Riemannian metrics which are at least twice continuously differentiable. However, with the use of more exotic function spaces, the above results on existence and regularity of harmonic coordinates can be extended to settings where the metric has very weak regularity. Harmonic coordinates in asymptotically flat spaces. Harmonic coordinates were used by Robert Bartnik to understand the geometric properties of asymptotically flat Riemannian manifolds. Suppose that one has a complete Riemannian manifold ("M", "g"), and that there is a compact subset K of M together with a diffeomorphism Φ from "M" ∖ "K" to ℝ"n" ∖ "B""R"(0), such that Φ*"g", relative to the standard Euclidean metric δ on ℝ"n" ∖ "B""R"(0), has eigenvalues which are uniformly bounded above and below by positive numbers, and such that (Φ*"g")("x") converges, in some precise sense, to δ as x diverges to infinity. Such a diffeomorphism is known as a "structure at infinity" or as "asymptotically flat coordinates" for ("M", "g"). Bartnik's primary result is that the collection of asymptotically flat coordinates (if nonempty) has a simple asymptotic structure, in that the transition function between any two asymptotically flat coordinates is approximated, near infinity, by an affine transformation. This is significant in establishing that the ADM energy of an asymptotically flat Riemannian manifold is a geometric invariant which does not depend on a choice of asymptotically flat coordinates. The key tool in establishing this fact is the approximation of arbitrary asymptotically flat coordinates for ("M", "g") by asymptotically flat coordinates which are harmonic. The key technical work is in the establishment of a Fredholm theory for the Laplace-Beltrami operator, when acting between certain Banach spaces of functions on M which decay at infinity. Then, given any asymptotically flat coordinates Φ, from the fact that formula_4 which decays at infinity, it follows from the Fredholm theory that there are functions "z""k" which decay at infinity such that Δ"g"Φ"k" = Δ"g""z""k", and hence that Φ"k" − "z""k" are harmonic. This provides the desired asymptotically flat harmonic coordinates. Bartnik's primary result then follows from the fact that the vector space of asymptotically-decaying harmonic functions on M has dimension "n" + 1, which has the consequence that any two asymptotically flat harmonic coordinates on M are related by an affine transformation. Bartnik's work is predicated on the existence of asymptotically flat coordinates. 
Building upon his methods, Shigetoshi Bando, Atsushi Kasue, and Hiraku Nakajima showed that the decay of the curvature in terms of the distance from a point, together with polynomial growth of the volume of large geodesic balls and the simple connectivity of their complements, implies the existence of asymptotically flat coordinates. The essential point is that their geometric assumptions, via some of the results discussed below on harmonic radius, give good control over harmonic coordinates on regions near infinity. By the use of a partition of unity, these harmonic coordinates can be patched together to form a single coordinate chart, which is the main objective. Harmonic radius. A foundational result, due to Michael Anderson, is that given a smooth Riemannian manifold, any positive number α between 0 and 1 and any positive number Q, there is a number r which depends on α, on Q, on upper and lower bounds of the Ricci curvature, on the dimension, and on a positive lower bound for the injectivity radius, such that any geodesic ball of radius less than r is the domain of harmonic coordinates, relative to which the "C"1, α size of g and the uniform closeness of g to the Euclidean metric are both controlled by Q. This can also be reformulated in terms of "norms" of pointed Riemannian manifolds, where the "C"1, α-norm at a scale r corresponds to the optimal value of Q for harmonic coordinates whose domains are geodesic balls of radius r. Various authors have found versions of such "harmonic radius" estimates, both before and after Anderson's work. The essential aspect of the proof is the analysis, via standard methods of elliptic partial differential equations, of the Lanczos formula for the Ricci curvature in a harmonic coordinate chart. So, loosely speaking, the use of harmonic coordinates shows that Riemannian manifolds can be covered by coordinate charts in which the local representations of the Riemannian metric are controlled only by the qualitative geometric behavior of the Riemannian manifold itself. Following ideas set forth by Jeff Cheeger in 1970, one can then consider sequences of Riemannian manifolds which are uniformly geometrically controlled, and using the coordinates, one can assemble a "limit" Riemannian manifold. Due to the nature of such "Riemannian convergence", it follows, for instance, that up to diffeomorphism there are only finitely many smooth manifolds of a given dimension which admit Riemannian metrics with fixed bounds on Ricci curvature and diameter and a fixed positive lower bound on the injectivity radius. Such estimates on harmonic radius are also used to construct geometrically controlled cutoff functions, and hence partitions of unity as well. For instance, to control the second covariant derivative of a function by a locally defined second partial derivative, it is necessary to control the first derivative of the local representation of the metric. Such constructions are fundamental in studying the basic aspects of Sobolev spaces on noncompact Riemannian manifolds. References. &lt;templatestyles src="Refbegin/styles.css" /&gt; Footnotes &lt;templatestyles src="Reflist/styles.css" /&gt; Textbooks Articles
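To make the definition above concrete, writing the Laplace–Beltrami operator in local coordinates shows that harmonicity of a chart is a divergence-type condition on the components of the metric. The following is a standard computation, stated here only as an illustration of the definition:

\Delta^g u = \frac{1}{\sqrt{\det g}}\,\partial_i\!\left(\sqrt{\det g}\; g^{ij}\,\partial_j u\right),
\qquad\text{so that}\qquad
\Delta^g x^k = 0 \iff \partial_i\!\left(\sqrt{\det g}\; g^{ik}\right) = 0 \quad \text{for each } k.

This is the form in which the harmonic coordinate condition usually appears in the general relativity literature.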
[ { "math_id": 0, "text": "\\Delta^g x^i = 0.\\," }, { "math_id": 1, "text": "\\sum_{i=1}^n\\sum_{j=1}^ng^{ij}\\Gamma_{ij}^k = 0\\text{ for all }k=1,\\ldots,n," }, { "math_id": 2, "text": "2\\sum_{i=1}^n\\sum_{j=1}^ng^{ij}\\frac{\\partial g_{jk}}{\\partial x^i}=\\sum_{i=1}^n\\sum_{j=1}^ng^{ij}\\frac{\\partial g_{ij}}{\\partial x^k}." }, { "math_id": 3, "text": "R_{ij}=-\\frac{1}{2}\\sum_{a,b=1}^n g^{ab}\\frac{\\partial^2g_{ij}}{\\partial x^a\\partial x^b}+\\partial g\\ast\\partial g\\ast g^{-1}\\ast g^{-1}." }, { "math_id": 4, "text": "\\Delta^g\\Phi^k=-\\sum_{i=1}^n\\sum_{j=1}^n g^{ij}\\Gamma_{ij}^k," } ]
https://en.wikipedia.org/wiki?curid=7217102
7219034
Semipermutable subgroup
In mathematics, in algebra, in the realm of group theory, a subgroup formula_0 of a finite group formula_1 is said to be semipermutable if formula_0 commutes with every subgroup formula_2 whose order is relatively prime to that of formula_0. Clearly, every permutable subgroup of a finite group is semipermutable. The converse, however, is not necessarily true.
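The definition can be checked by brute force in very small groups. The following Python sketch is illustrative only; the choice of the symmetric group S3 and of the subgroup A3 is arbitrary, and the code simply tests whether a subgroup H permutes, as a set, with every subgroup K of coprime order, which is the semipermutability condition.

from itertools import combinations, permutations
from math import gcd

def compose(p, q):
    """Composition of permutations stored as tuples: (p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

def is_closed(subset):
    return all(compose(a, b) in subset for a in subset for b in subset)

def product_set(A, B):
    return {compose(a, b) for a in A for b in B}

G = list(permutations(range(3)))  # the symmetric group S3, of order 6

# Any nonempty finite subset closed under the group operation is a subgroup.
subgroups = [set(c) for r in range(1, len(G) + 1)
             for c in combinations(G, r) if is_closed(set(c))]

def is_semipermutable(H):
    return all(product_set(H, K) == product_set(K, H)
               for K in subgroups if gcd(len(K), len(H)) == 1)

def parity(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p))) % 2

A3 = {p for p in G if parity(p) == 0}  # the alternating group A3, of order 3
print(is_semipermutable(A3))           # True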
[ { "math_id": 0, "text": "H" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "K" } ]
https://en.wikipedia.org/wiki?curid=7219034
72198733
Bishop's graph
Mathematical graph relating to chess In mathematics, a bishop's graph is a graph that represents all legal moves of the chess piece the bishop on a chessboard. Each vertex represents a square on the chessboard and each edge represents a legal move of the bishop; that is, there is an edge between two vertices (squares) if they occupy a common diagonal. When the chessboard has dimensions formula_0, then the induced graph is called the formula_0 bishop's graph. Properties. The fact that the chessboard has squares of two colors, say red and black, such that squares that are horizontally or vertically adjacent have opposite colors, implies that the bishop's graph has two connected components, whose vertex sets are the red and the black squares, respectively. The reason is that the bishop's diagonal moves do not allow it to change colors, but by one or more moves a bishop can get from any square to any other of the same color. The two components are isomorphic if the board has a side of even length, but not if both sides are odd. A component of the bishop's graph can be treated as a rook's graph on a diamond if the original board is square and has sides of odd length, because if the red squares (say) are turned 45 degrees, the bishop's moves become horizontal and vertical, just like those of the rook. Domination. A square is said to be attacked by a bishop if the bishop can get to that square in exactly one move. A dominating set is an arrangement of bishops such that every square is attacked or occupied by one of those bishops. An independent dominating set is a dominating set in which no bishop attacks any other. The minimum number of bishops needed to dominate a square board of side "n" is exactly "n", and this is also the smallest number of bishops that can form an independent dominating set. By contrast, a total domination set, which is a dominating set for which every square, including those occupied by bishops, is attacked by one of the bishops, requires more bishops; on the square board of side "n" ≥ 3, the least size of a total dominating set is formula_1 about 1/3 larger than a minimum dominating set. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
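The structural facts above are easy to verify computationally for small boards. The following Python sketch is illustrative only: it builds the bishop's graph of an m-by-n board, with an edge between two squares exactly when they lie on a common diagonal, and counts its connected components; for the ordinary 8-by-8 board it finds the two colour classes.

def bishop_graph(m, n):
    """Adjacency sets of the m-by-n bishop's graph."""
    squares = [(r, c) for r in range(m) for c in range(n)]
    adj = {s: set() for s in squares}
    for r1, c1 in squares:
        for r2, c2 in squares:
            if (r1, c1) != (r2, c2) and abs(r1 - r2) == abs(c1 - c2):
                adj[(r1, c1)].add((r2, c2))
    return adj

def count_components(adj):
    seen, count = set(), 0
    for start in adj:
        if start not in seen:
            count += 1
            stack = [start]
            while stack:
                v = stack.pop()
                if v not in seen:
                    seen.add(v)
                    stack.extend(adj[v] - seen)
    return count

print(count_components(bishop_graph(8, 8)))  # 2: the light-square and dark-square components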
[ { "math_id": 0, "text": "m\\times n" }, { "math_id": 1, "text": "2\\lceil2(n-1)/3\\rceil," } ]
https://en.wikipedia.org/wiki?curid=72198733
72202102
DisCoCat
Mathematical framework for natural language processing DisCoCat (Categorical Compositional Distributional) is a mathematical framework for natural language processing which uses category theory to unify distributional semantics with the principle of compositionality. The grammatical derivations in a categorial grammar (usually a pregroup grammar) are interpreted as linear maps acting on the tensor product of word vectors to produce the meaning of a sentence or a piece of text. String diagrams are used to visualise information flow and reason about natural language semantics. History. The framework was first introduced by Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark as an application of categorical quantum mechanics to natural language processing. It started with the observation that pregroup grammars and quantum processes shared a common mathematical structure: they both form a rigid category (also known as a non-symmetric compact closed category). As such, they both benefit from a graphical calculus, which allows a purely diagrammatic reasoning. Although the analogy with quantum mechanics was kept informal at first, it eventually led to the development of quantum natural language processing. Definition. There are multiple definitions of DisCoCat in the literature, depending on the choice made for the compositional aspect of the model. The common denominator between all the existent versions, however, always involves a categorical definition of DisCoCat as a structure-preserving functor from a category of grammar to a category of semantics, which usually encodes the distributional hypothesis. The original paper used the categorical product of FinVect with a pregroup seen as a posetal category. This approach has some shortcomings: all parallel arrows of a posetal category are equal, which means that pregroups cannot distinguish between different grammatical derivations for the same syntactically ambiguous sentence. A more intuitive manner of saying the same is that one works with diagrams rather than with partial orders when describing grammar. This problem is overcome when one considers the free rigid category formula_0 generated by the pregroup grammar. That is, formula_0 has generating objects for the words and the basic types of the grammar, and generating arrows formula_1 for the dictionary entries which assign a pregroup type formula_2 to a word formula_3. The arrows formula_4 are grammatical derivations for the sentence formula_5 which can be represented as string diagrams with cups and caps, i.e. adjunction units and counits. With this definition of pregroup grammars as free rigid categories, DisCoCat models can be defined as strong monoidal functors formula_6. Spelling things out in detail, they assign a finite dimensional vector space formula_7 to each basic type formula_8 and a vector formula_9 in the appropriate tensor product space to each dictionary entry formula_1 where formula_10 (objects for words are sent to the monoidal unit, i.e. formula_11). The meaning of a sentence formula_4 is then given by a vector formula_12 which can be computed as the contraction of a tensor network. The reason behind the choice of formula_13 as the category of semantics is that vector spaces are the usual setting of distributional reading in computational linguistics and natural language processing. 
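The tensor-network contraction mentioned above can be carried out with ordinary multilinear algebra. The toy sketch below is purely illustrative: the dimensions, the random vectors standing in for distributional word vectors, and the example sentence are made up, and the transitive verb is given the usual pregroup typing, so that its interpretation lives in N ⊗ S ⊗ N.

```python
# Toy sketch of a DisCoCat-style meaning computation with numpy.
import numpy as np

dim_n, dim_s = 4, 2          # dimensions of the noun space N and the sentence space S

rng = np.random.default_rng(0)
alice = rng.random(dim_n)    # noun vectors in N (stand-ins for distributional vectors)
code = rng.random(dim_n)

# A transitive verb is interpreted as a tensor in N (x) S (x) N.
writes = rng.random((dim_n, dim_s, dim_n))

# The cups of the grammatical derivation for "alice writes code" become index
# contractions: subject against the verb's first wire, object against its last wire.
sentence = np.einsum('i,isj,j->s', alice, writes, code)
print(sentence.shape)        # (2,)  -- a vector in the sentence space S
```

The einsum call plays the role of the cups in the string diagram, leaving a single vector in the sentence space as the meaning of the sentence.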
The underlying idea of the distributional hypothesis, that "a word is characterized by the company it keeps", is particularly relevant when assigning meaning to words such as adjectives or verbs, whose semantic connotation depends strongly on context. Variations. Variations of DisCoCat have been proposed with a different choice for the grammar category. The main motivation behind this lies in the fact that pregroup grammars have been proved to be weakly equivalent to context-free grammars. One such variation chooses Combinatory categorial grammar as the grammar category. List of linguistic phenomena. The DisCoCat framework has been used to study the following phenomena from linguistics. Applications in NLP. The DisCoCat framework has been applied to solve the following tasks in natural language processing. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbf{G}" }, { "math_id": 1, "text": "w \\to t" }, { "math_id": 2, "text": "t" }, { "math_id": 3, "text": "w" }, { "math_id": 4, "text": "f: w_1 \\dots w_n \\to s" }, { "math_id": 5, "text": "w_1 \\dots w_n" }, { "math_id": 6, "text": "F : \\mathbf{G} \\to \\mathbf{FinVect}" }, { "math_id": 7, "text": "F(x)" }, { "math_id": 8, "text": "x" }, { "math_id": 9, "text": "F(w) \\in F(t) = F(t_1) \\otimes \\dots \\otimes F(t_n)" }, { "math_id": 10, "text": "t = t_1 \\dots t_n" }, { "math_id": 11, "text": "F(w) = 1" }, { "math_id": 12, "text": "F(f) \\in F(s)" }, { "math_id": 13, "text": "\\mathbf{FinVect}" } ]
https://en.wikipedia.org/wiki?curid=72202102
72203783
Arnold conjecture
Mathematical conjecture The Arnold conjecture, named after mathematician Vladimir Arnold, is a mathematical conjecture in the field of symplectic geometry, a branch of differential geometry. Strong Arnold conjecture. Let formula_0 be a closed (compact without boundary) symplectic manifold. For any smooth function formula_1, the symplectic form formula_2 induces a Hamiltonian vector field formula_3 on formula_4 defined by the formula formula_5 The function formula_6 is called a Hamiltonian function. Suppose there is a smooth 1-parameter family of Hamiltonian functions formula_7, formula_8. This family induces a 1-parameter family of Hamiltonian vector fields formula_9 on formula_4. The family of vector fields integrates to a 1-parameter family of diffeomorphisms formula_10. Each individual formula_11 is called a Hamiltonian diffeomorphism of formula_4. The strong Arnold conjecture states that the number of fixed points of a Hamiltonian diffeomorphism of formula_4 is greater than or equal to the minimal number of critical points of a smooth function on formula_4. Weak Arnold conjecture. Let formula_0 be a closed symplectic manifold. A Hamiltonian diffeomorphism formula_12 is called nondegenerate if its graph intersects the diagonal of formula_13 transversely. For nondegenerate Hamiltonian diffeomorphisms, one variant of the Arnold conjecture says that the number of fixed points is at least equal to the minimal number of critical points of a Morse function on formula_4, called the Morse number of formula_4. In view of the Morse inequalities, the Morse number is greater than or equal to the sum of Betti numbers over a field formula_14, namely formula_15. The weak Arnold conjecture says that formula_16 for formula_17 a nondegenerate Hamiltonian diffeomorphism. Arnold–Givental conjecture. The Arnold–Givental conjecture, named after Vladimir Arnold and Alexander Givental, gives a lower bound on the number of intersection points of two Lagrangian submanifolds L and formula_18 in terms of the Betti numbers of formula_19, given that formula_18 intersects L transversely and formula_18 is Hamiltonian isotopic to L. Let formula_0 be a compact formula_20-dimensional symplectic manifold, let formula_21 be a compact Lagrangian submanifold of formula_4, and let formula_22 be an anti-symplectic involution, that is, a diffeomorphism formula_22 such that formula_23 and formula_24, whose fixed point set is formula_19. Let formula_25, formula_8 be a smooth family of Hamiltonian functions on formula_4. This family generates a 1-parameter family of diffeomorphisms formula_10 by flowing along the Hamiltonian vector field associated to formula_26. The Arnold–Givental conjecture states that if formula_27 intersects formula_19 transversely, then formula_28. Status. The Arnold–Givental conjecture has been proved for several special cases. References. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
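As a concrete illustration of the weak form (a standard worked example added here, not drawn from a specific source in this article), consider the two-dimensional torus with its usual area form; the torus case is classically associated with the work of Conley and Zehnder.

```latex
% Worked example: the weak Arnold conjecture on the 2-torus T^2 = R^2 / Z^2.
% Its Betti numbers are b_0 = 1, b_1 = 2, b_2 = 1, so the conjectured bound reads
\[
  \#\{\text{fixed points of }\varphi\}
  \;\ge\; \sum_{i=0}^{2}\dim H_i\!\left(T^2;\mathbb{F}\right)
  \;=\; 1 + 2 + 1 \;=\; 4
\]
% for every nondegenerate Hamiltonian diffeomorphism \varphi of (T^2, dx \wedge dy).
% A translation of the torus is a symplectomorphism with no fixed points at all,
% but it is not Hamiltonian, so it does not contradict the bound.
```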
[ { "math_id": 0, "text": "(M, \\omega)" }, { "math_id": 1, "text": "H: M \\to {\\mathbb R}" }, { "math_id": 2, "text": "\\omega" }, { "math_id": 3, "text": "X_H" }, { "math_id": 4, "text": "M" }, { "math_id": 5, "text": "\\omega( X_H, \\cdot) = dH." }, { "math_id": 6, "text": "H" }, { "math_id": 7, "text": "H_t \\in C^\\infty(M)" }, { "math_id": 8, "text": "t \\in [0,1]" }, { "math_id": 9, "text": "X_{H_t}" }, { "math_id": 10, "text": "\\varphi_t: M \\to M" }, { "math_id": 11, "text": "\\varphi_t" }, { "math_id": 12, "text": "\\varphi:M \\to M" }, { "math_id": 13, "text": "M\\times M" }, { "math_id": 14, "text": "{\\mathbb F}" }, { "math_id": 15, "text": "\\sum_{i=0}^{2n} \\dim H_i (M; {\\mathbb F})" }, { "math_id": 16, "text": "\\# \\{ \\text{fixed points of } \\varphi \\} \\geq \\sum_{i=0}^{2n} \\dim H_i (M; {\\mathbb F})" }, { "math_id": 17, "text": "\\varphi : M \\to M" }, { "math_id": 18, "text": "L'" }, { "math_id": 19, "text": "L" }, { "math_id": 20, "text": "2n" }, { "math_id": 21, "text": "L \\subset M" }, { "math_id": 22, "text": "\\tau : M \\to M" }, { "math_id": 23, "text": "\\tau^* \\omega = -\\omega" }, { "math_id": 24, "text": "\\tau^2 = \\text{id}_M" }, { "math_id": 25, "text": "H_t\\in C^\\infty(M)" }, { "math_id": 26, "text": "H_t" }, { "math_id": 27, "text": "\\varphi_1(L)" }, { "math_id": 28, "text": "\\# (\\varphi_1(L) \\cap L) \\geq \\sum_{i=0}^n \\dim H_i(L; \\mathbb Z / 2 \\mathbb Z)" }, { "math_id": 29, "text": "(M, L) = (\\mathbb{CP}^n, \\mathbb{RP}^n)" } ]
https://en.wikipedia.org/wiki?curid=72203783
7220589
Bundle map
In mathematics, a bundle map (or bundle morphism) is a morphism in the category of fiber bundles. There are two distinct, but closely related, notions of bundle map, depending on whether the fiber bundles in question have a common base space. There are also several variations on the basic theme, depending on precisely which category of fiber bundles is under consideration. In the first three sections, we will consider general fiber bundles in the category of topological spaces. Then in the fourth section, some other examples will be given. Bundle maps over a common base. Let formula_0 and formula_1 be fiber bundles over a space "M". Then a bundle map from "E" to "F" over "M" is a continuous map formula_2 such that formula_3. That is, the diagram should commute. Equivalently, for any point "x" in "M", formula_4 maps the fiber formula_5 of "E" over "x" to the fiber formula_6 of "F" over "x". General morphisms of fiber bundles. Let π"E":"E"→ "M" and π"F":"F"→ "N" be fiber bundles over spaces "M" and "N" respectively. Then a continuous map formula_7 is called a bundle map from "E" to "F" if there is a continuous map "f":"M"→ "N" such that the diagram commutes, that is, formula_8. In other words, formula_4 is fiber-preserving, and "f" is the induced map on the space of fibers of "E": since π"E" is surjective, "f" is uniquely determined by formula_4. For a given "f", such a bundle map formula_4 is said to be a bundle map "covering f". Relation between the two notions. It follows immediately from the definitions that a bundle map over "M" (in the first sense) is the same thing as a bundle map covering the identity map of "M". Conversely, general bundle maps can be reduced to bundle maps over a fixed base space using the notion of a pullback bundle. If π"F":"F"→ "N" is a fiber bundle over "N" and "f":"M"→ "N" is a continuous map, then the pullback of "F" by "f" is a fiber bundle "f"*"F" over "M" whose fiber over "x" is given by ("f"*"F")"x" = "F""f"("x"). It then follows that a bundle map from "E" to "F" covering "f" is the same thing as a bundle map from "E" to "f"*"F" over "M". Variants and generalizations. There are two kinds of variation of the general notion of a bundle map. First, one can consider fiber bundles in a different category of spaces. This leads, for example, to the notion of a smooth bundle map between smooth fiber bundles over a smooth manifold. Second, one can consider fiber bundles with extra structure in their fibers, and restrict attention to bundle maps which preserve this structure. This leads, for example, to the notion of a (vector) bundle homomorphism between vector bundles, in which the fibers are vector spaces, and a bundle map "φ" is required to be a linear map on each fiber. In this case, such a bundle map "φ" (covering "f") may also be viewed as a section of the vector bundle Hom("E","f*F") over "M", whose fiber over "x" is the vector space Hom("Ex","F""f"("x")) (also denoted "L"("Ex","F""f"("x"))) of linear maps from "Ex" to "F""f"("x"). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
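A standard example, added here for illustration, is the differential of a smooth map, which assembles the pointwise derivatives into a single bundle map between tangent bundles.

```latex
% Example: for a smooth map f : M -> N between smooth manifolds, the differential
%   df : TM -> TN,   df|_x = d_x f : T_x M -> T_{f(x)} N,
% is a vector bundle map covering f: it is fibre-preserving, linear on each fibre,
% and satisfies \pi_{TN} \circ df = f \circ \pi_{TM}, as in the commuting square
\[
  \begin{array}{ccc}
    TM & \xrightarrow{\ df\ } & TN \\
    \downarrow{\scriptstyle \pi_{TM}} & & \downarrow{\scriptstyle \pi_{TN}} \\
    M & \xrightarrow{\ \ f\ \ } & N
  \end{array}
\]
% Equivalently, by the pullback construction described above, df may be viewed as a
% bundle map TM -> f^*TN over M, i.e. a section of the bundle Hom(TM, f^*TN).
```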
[ { "math_id": 0, "text": "\\pi_E\\colon E \\to M" }, { "math_id": 1, "text": "\\pi_F\\colon F \\to M" }, { "math_id": 2, "text": "\\varphi\\colon E \\to F" }, { "math_id": 3, "text": " \\pi_F\\circ\\varphi = \\pi_E " }, { "math_id": 4, "text": "\\varphi" }, { "math_id": 5, "text": "E_x= \\pi_E^{-1}(\\{x\\})" }, { "math_id": 6, "text": "F_x= \\pi_F^{-1}(\\{x\\})" }, { "math_id": 7, "text": "\\varphi : E \\to F" }, { "math_id": 8, "text": " \\pi_F\\circ\\varphi = f\\circ\\pi_E " } ]
https://en.wikipedia.org/wiki?curid=7220589
72211204
Transformer ratio arm bridge
Type of bridge circuit The transformer ratio arm bridge or TRA bridge is a type of bridge circuit for measuring electronic components, using a.c. It can be designed to work in terms of either impedance or admittance. It can be used on resistors, capacitors and inductors, measuring minor as well as major terms, e.g. series resistance in capacitors. It is probably the most accurate type of bridge available, being capable of the precision needed, for example, when checking secondary component standards against national standards. Like all bridges, the TRA bridge involves comparing an unknown component against a standard. Like all a.c. bridges, it requires a signal source and a null detector. The accuracy of this class of bridge depends on the ratio of the turns on one or more transformers. A notable advantage is that normal stray capacitance across the transformer, including lead capacitance, may affect the sensitivity of the bridge but does not affect its measuring accuracy. History. The invention of the TRA bridge is credited to Alan Blumlein in his UK patent 323037 (published 1929), and this class of bridge is sometimes known as a "Blumlein bridge", although links to earlier types of bridge can be seen. Blumlein's first patent was for a capacitance-measuring bridge: Fig. 1 is redrawn from one of the diagrams in the patent. Subsequently the ratio arm principle was applied more generally, to other classes of electronic components and at frequencies up to r.f., and with many variations in how the unknown component was connected to the transformer or transformers. Blumlein himself was responsible for several further related patents. He made his first bridge while employed by the British company Standard Telephones and Cables, which did not manufacture test instruments. TRA bridges have since been made by many specialist manufacturers, including Boonton, ESI (formerly Brown Engineering and BECO), General Radio, Marconi Instruments, H. W. Sullivan (now part of Megger) and Wayne Kerr. Principle. One possible configuration using two transformers is shown in Fig. 2. (The two transformers allow both the signal source and the null detector to be isolated from the measured component.) The unknown formula_0 and the standard formula_1 are both driven by T1, feeding currents to the primary of T2. Because of the winding sense of the two halves of the T2 primary, these currents are in antiphase. If formula_0 and formula_1 have the same value and are fed from the same tap on T1, the antiphase currents cancel out perfectly and the null detector will show balance. When formula_0 and formula_1 are unequal, balance can be approached by connecting formula_1 to a different tap on the T1 secondary. An exact balance may be achieved by using two or more standards connected to suitable taps. Fig. 2 shows formula_0 and formula_1 as single components. Fig. 3 shows separate standards for conductance formula_2 and susceptance formula_3, allowing minor as well as major terms of formula_4 to be resolved. The standards are shown as variable components connected to fixed taps on the T1 secondary, but bridges can equally be made with fixed standards connected to variable taps. The unknown component too may be connected to a tap part-way along the T1 secondary. Also the numbers of turns on the two arms of the T1 secondary are not necessarily equal, and likewise those on the T2 primary. 
Combinations of these various options offer great flexibility of construction, allowing measurements over a wide range of values while using only a small number of standards – essentially one per significant figure of the resistance or conductance value and one per significant figure of the reactance or susceptance value. In Fig. 3, at balance formula_5 The bridge may be balanced (nulled) by manual switching of the standards, but "autobalance" bridges, in which the switching is wholly or partially automated, are also made. Detailed example. The operation of a universal TRA bridge is best explained on the basis of an actual product, the Wayne Kerr B221 bridge, dating from the 1950s. It used valve (vacuum tube) technology. The following description is simplified. The bridge is based on two transformers (Fig. 4): T1 is described as the voltage transformer, and is driven by the signal source in the usual way. T2, the "current transformer", compares the two arms of the circuit – for the unknown formula_0 and the various standards – and drives the null detector, which takes the form of a phase-sensitive detector with adjustable sensitivity, feeding two magic eyes. (Later versions of the instrument, with transistorised circuitry, used a moving-coil meter as the display for the null detector.) Taps at 1, 10, 100 and 1000 turns are shown on the T1 secondary and on T2 primary P2a. Four-way selector switches are shown, but the tap selections are actually combined on a single switch to give seven measuring ranges. Full-scale limits at full accuracy (specified as ±0.1%) are 100 MΩ, 11.1 pF and 10 kH for the least sensitive range, and 100 Ω, 11.1 μF and 10 mH for the most sensitive range. Each range can be extended in the direction of higher resistance, higher inductance or lower capacitance at reduced accuracy. The voltage applied by T1 to formula_0 is about 30 V r.m.s. on the least sensitive range, 30 mV on the two most sensitive. The most significant figures of the major and minor components of formula_0 are obtained by switching the resistance standard Rs1 and the capacitance standard Cs1 to one of taps 0 to 10 on the secondary of T1. The second significant figures are obtained by switching Rs2 and Cs2 in the same way. Continuous ("vernier") fine adjustment to give third and fourth significant figures is provided by Rs3 and Cs3. Rs3 and Cs3 are shown connected to tap 10 on T1, but in practice these two standards may be connected to any convenient tap, as appropriate to their values. Primary P2b on T2 provides 100-turn taps of both polarities. Switching the capacitance standards between the positive and negative taps selects between capacitance measurements and inductance measurements. Similarly the polarity of the resistance standard can be reversed, so that measurements can be made in all four quadrants. Besides the main balance controls described above, the front panel of the instrument has zero adjustments for both resistance and capacitance. The inductive elements of the wire-wound resistance standards are compensated by trimming capacitors. All these and other trimming components are omitted in Fig. 4. This bridge measures conductance and susceptance in parallel. The susceptance reading is displayed as capacitance, and inductance must be calculated as a reciprocal using formula_6 To simplify the arithmetic, the bridge operates at 1592 Hz so that ω2 is 108 s−2. The readings can be converted to resistance and capacitance in series. 
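The arithmetic behind the 1592 Hz operating frequency and the capacitance-to-inductance conversion can be sketched in a few lines of Python; the numerical values below are illustrative and are not taken from the instrument's documentation.

```python
# Sketch: why a 1592 Hz test frequency simplifies the inductance arithmetic.
import math

f = 1592.0                         # test frequency in Hz
omega = 2 * math.pi * f
print(omega**2)                    # ~1.0006e8, i.e. omega^2 is very close to 1e8 s^-2

def inductance_from_reading(C_reading):
    """Convert a capacitance reading taken on the inductive ranges into an
    inductance, using L = 1 / (omega^2 * C)."""
    return 1.0 / (omega**2 * C_reading)

# Example: a reading equivalent to 10 nF corresponds to about 1 H.
print(inductance_from_reading(10e-9))      # ~0.999 H

# At balance the bridge relates the unknown admittance to the standard and the
# tap ratios: Y_x = Y_s * (e_s * n_s) / (e_x * n_x).
def unknown_admittance(Y_s, e_s, n_s, e_x, n_x):
    return Y_s * (e_s * n_s) / (e_x * n_x)
```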
On the most sensitive ranges, readings must be adjusted to take account of lead resistance and inductance. The external link allows two-, three- or four-terminal measurements to be made. Besides conventional component measurements, the bridge can also be used to measure attenuator performance, transformer turns ratio and the effectiveness of transformer screening. Subject to conditions, in-situ (in-circuit) measurement of a component is possible. With additional external components, capacitors with a polarising voltage or inductors with a standing direct current can be measured. An optional low-impedance adaptor extends the measuring range downwards by another four orders of magnitude, giving full-scale readings down to 10 mΩ, 5 F and 1 μH at ±1% basic accuracy. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. Henry P. Hall, "A History of Impedance Measurements", based on a draft for an unpublished book.
[ { "math_id": 0, "text": "Z_x" }, { "math_id": 1, "text": "Z_s" }, { "math_id": 2, "text": "G" }, { "math_id": 3, "text": "B" }, { "math_id": 4, "text": "Y_x" }, { "math_id": 5, "text": "Y_x=Y_s\\frac{e_sn_s}{e_xn_x}" }, { "math_id": 6, "text": "L_x=\\frac{1}{\\omega ^2C_x}" } ]
https://en.wikipedia.org/wiki?curid=72211204
7221237
Tannakian formalism
Monoidal category In mathematics, a Tannakian category is a particular kind of monoidal category "C", equipped with some extra structure relative to a given field "K". The role of such categories "C" is to generalise the category of linear representations of an algebraic group "G" defined over "K". A number of major applications of the theory have been made, or might be made in pursuit of some of the central conjectures of contemporary algebraic geometry and number theory. The name is taken from Tadao Tannaka and Tannaka–Krein duality, a theory about compact groups "G" and their representation theory. The theory was developed first in the school of Alexander Grothendieck. It was later reconsidered by Pierre Deligne, and some simplifications made. The pattern of the theory is that of Grothendieck's Galois theory, which is a theory about finite permutation representations of groups "G" which are profinite groups. The gist of the theory is that the fiber functor Φ of the Galois theory is replaced by an exact and faithful tensor functor "F" from "C" to the category of finite-dimensional vector spaces over "K". The group of natural transformations of Φ to itself, which turns out to be a profinite group in the Galois theory, is replaced by the group "G" of natural transformations of "F" into itself that respect the tensor structure. This is in general not an algebraic group but a more general group scheme that is an inverse limit of algebraic groups (pro-algebraic group), and "C" is then found to be equivalent to the category of finite-dimensional linear representations of "G". More generally, it may be that fiber functors "F" as above exist only to categories of finite-dimensional vector spaces over non-trivial extension fields "L/K". In such cases the group scheme "G" is replaced by a gerbe formula_0 on the fpqc site of Spec("K"), and "C" is then equivalent to the category of (finite-dimensional) representations of formula_0. Formal definition of Tannakian categories. Let "K" be a field and "C" a "K"-linear abelian rigid tensor (i.e., a symmetric monoidal) category such that formula_1. Then "C" is a Tannakian category (over "K") if there is an extension field "L" of "K" such that there exists a "K"-linear exact and faithful tensor functor (i.e., a strong monoidal functor) "F" from "C" to the category of finite-dimensional "L"-vector spaces. A Tannakian category over "K" is neutral if such an exact faithful tensor functor "F" exists with "L=K". Applications. The Tannakian construction is used in relating Hodge structures and l-adic representations. Morally, the philosophy of motives tells us that the Hodge structure and the Galois representation associated to an algebraic variety are related to each other. The closely related algebraic groups, the Mumford–Tate group and the motivic Galois group, arise from the categories of Hodge structures, of Galois representations and of motives through Tannakian categories. The Mumford–Tate conjecture proposes that the algebraic groups arising from the Hodge structure and the Galois representation by means of Tannakian categories are isomorphic to one another up to connected components. Those areas of application are closely connected to the theory of motives. Another place in which Tannakian categories have been used is in connection with the Grothendieck–Katz p-curvature conjecture; in other words, in bounding monodromy groups.
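The motivating example mentioned in the lead can be checked against this definition; the following summary is a standard illustration rather than a statement taken from the cited literature.

```latex
% Example: a neutral Tannakian category.
% For an affine group scheme G over a field K, the category
%   C = Rep_K(G)   (finite-dimensional K-linear representations of G)
% is K-linear, abelian and rigid symmetric monoidal with End(1) = K, and the
% forgetful functor
%   F : Rep_K(G) --> Vect_K^{fd},   (V, \rho) |--> V
% is an exact, faithful, K-linear tensor functor, so C is neutral Tannakian over K.
% Tannaka duality recovers G from the pair (C, F) as the group scheme of
% tensor-preserving natural automorphisms of F:
\[
  G \;\cong\; \underline{\mathrm{Aut}}^{\otimes}(F).
\]
```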
The Geometric Satake equivalence establishes an equivalence between representations of the Langlands dual group formula_2 of a reductive group "G" and certain equivariant perverse sheaves on the affine Grassmannian associated to "G". This equivalence provides a non-combinatorial construction of the Langlands dual group. It is proved by showing that the mentioned category of perverse sheaves is a Tannakian category and identifying its Tannaka dual group with formula_2. Extensions. Partial Tannaka duality results have been established in the situation where the category is "R"-linear, where "R" is no longer a field (as in classical Tannakian duality) but a certain valuation ring. Tannaka duality has also been initiated and developed in the context of infinity-categories. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal G" }, { "math_id": 1, "text": "\\mathrm{End}(\\mathbf{1})\\cong K" }, { "math_id": 2, "text": "{}^L G" } ]
https://en.wikipedia.org/wiki?curid=7221237
722235
Hill sphere
Region in which an astronomical body dominates the attraction of satellites The Hill sphere is a common model for the calculation of a gravitational sphere of influence. It is the most commonly used model to calculate the spatial extent of gravitational influence of an astronomical body ("m") in which it dominates over the gravitational influence of other bodies, particularly a primary ("M"). It is sometimes confused with other models of gravitational influence, such as the Laplace sphere or being called the Roche sphere, the latter causing confusion with the Roche limit. It was defined by the American astronomer George William Hill, based on the work of the French astronomer Édouard Roche. To be retained by a more gravitationally attracting astrophysical object—a planet by a more massive star, a moon by a more massive planet—the less massive body must have an orbit that lies within the gravitational potential represented by the more massive body's Hill sphere. That moon would, in turn, have a Hill sphere of its own, and any object within that distance would tend to become a satellite of the moon, rather than of the planet itself. One simple view of the extent of the Solar System is that it is bounded by the Hill sphere of the Sun (engendered by the Sun's interaction with the galactic nucleus or other more massive stars). A more complex example is the one at right, the Earth's Hill sphere, which extends between the Lagrange points L1 and L2, which lie along the line of centers of the Earth and the more massive Sun. The gravitational influence of the less massive body is least in that direction, and so it acts as the limiting factor for the size of the Hill sphere; beyond that distance, a third object in orbit around the Earth would spend at least part of its orbit outside the Hill sphere, and would be progressively perturbed by the tidal forces of the more massive body, the Sun, eventually ending up orbiting the latter. For two massive bodies with gravitational potentials and any given energy of a third object of negligible mass interacting with them, one can define a zero-velocity surface in space which cannot be passed, the contour of the Jacobi integral. When the object's energy is low, the zero-velocity surface completely surrounds the less massive body (of this restricted three-body system), which means the third object cannot escape; at higher energy, there will be one or more gaps or bottlenecks by which the third object may escape the less massive body and go into orbit around the more massive one. If the energy is at the border between these two cases, then the third object cannot escape, but the zero-velocity surface confining it touches a larger zero-velocity surface around the less massive body at one of the nearby Lagrange points, forming a cone-like point there. At the opposite side of the less massive body, the zero-velocity surface gets close to the other Lagrange point. This limiting zero-velocity surface around the less massive body is its Hill "sphere". Definition. The Hill radius or sphere (the latter defined by the former radius) has been described as "the region around a planetary body where its own gravity (compared to that of the Sun or other nearby bodies) is the dominant force in attracting satellites," both natural and artificial. 
As described by de Pater and Lissauer, all bodies within a system such as the Sun's Solar System "feel the gravitational force of one another", and while the motions of just two gravitationally interacting bodies—constituting a "two-body problem"—are "completely integrable ([meaning]...there exists one independent integral or constraint per degree of freedom)" and thus an exact, analytic solution, the interactions of "three" (or more) such bodies "cannot be deduced analytically", requiring instead solutions by numerical integration, when possible. This is the case, unless the negligible mass of one of the three bodies allows approximation of the system as a two-body problem, known formally as a "restricted three-body problem". For such two- or restricted three-body problems as its simplest examples—e.g., one more massive primary astrophysical body, mass of m1, and a less massive secondary body, mass of m2—the concept of a Hill radius or sphere is of the approximate limit to the secondary mass's "gravitational dominance", a limit defined by "the extent" of its Hill sphere, which is represented mathematically as follows: formula_0, where, in this representation, major axis "a" can be understood as the "instantaneous heliocentric distance" between the two masses (elsewhere abbreviated "r"). More generally, if the less massive body, formula_1, orbits a more massive body (m1, e.g., as a planet orbiting around the Sun) and has a semi-major axis formula_2, and an eccentricity of formula_3, then the Hill radius or sphere, formula_4 of the less massive body, calculated at the pericenter, is approximately: formula_5 When eccentricity is negligible (the most favourable case for orbital stability), this expression reduces to the one presented above. Example and derivation. In the Earth-Sun example, the Earth () orbits the Sun () at a distance of 149.6 million km, or one astronomical unit (AU). The Hill sphere for Earth thus extends out to about 1.5 million km (0.01 AU). The Moon's orbit, at a distance of 0.384 million km from Earth, is comfortably within the gravitational sphere of influence of Earth and it is therefore not at risk of being pulled into an independent orbit around the Sun. The earlier eccentricity-ignoring formula can be re-stated as follows: formula_6, or formula_7, where M is the sum of the interacting masses. Derivation. The expression for the Hill radius can be found by equating gravitational and centrifugal forces acting on a test particle (of mass much smaller than formula_8) orbiting the secondary body. Assume that the distance between masses formula_9 and formula_8 is formula_10, and that the test particle is orbiting at a distance formula_11 from the secondary. When the test particle is on the line connecting the primary and the secondary body, the force balance requires that formula_12 where formula_13 is the gravitational constant and formula_14 is the (Keplerian) angular velocity of the secondary about the primary (assuming that formula_15). The above equation can also be written as formula_16 which, through a binomial expansion to leading order in formula_17, can be written as formula_18 Hence, the relation stated above formula_19 If the orbit of the secondary about the primary is elliptical, the Hill radius is maximum at the apocenter, where formula_10 is largest, and minimum at the pericenter of the orbit. Therefore, for purposes of stability of test particles (for example, of small satellites), the Hill radius at the pericenter distance needs to be considered. 
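The Earth figures quoted above can be reproduced directly from the restated formula. The short sketch below is a plain numerical check with rounded constants; the variable names are arbitrary.

```python
# Sketch: Earth's Hill radius from R_H ~ a * (m2 / (3 * (m1 + m2)))**(1/3).
m_sun   = 1.989e30        # kg
m_earth = 5.972e24        # kg
a       = 1.496e8         # Earth-Sun semi-major axis in km (1 au)

R_H = a * (m_earth / (3.0 * (m_sun + m_earth))) ** (1.0 / 3.0)
print(round(R_H))          # ~1.5e6 km, i.e. about 0.01 au

# The Moon's orbital radius is comfortably inside this sphere of influence.
a_moon = 3.84e5            # km
print(a_moon < R_H)        # True
```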
To leading order in formula_17, the Hill radius above also represents the distance of the Lagrangian point L1 from the secondary. Regions of stability. The Hill sphere is only an approximation, and other forces (such as radiation pressure or the Yarkovsky effect) can eventually perturb an object out of the sphere. As stated, the satellite (third mass) should be small enough that its gravity contributes negligibly. Detailed numerical calculations show that orbits at or just within the Hill sphere are not stable in the long term; it appears that stable satellite orbits exist only inside 1/2 to 1/3 of the Hill radius. The region of stability for retrograde orbits at a large distance from the primary is larger than the region for prograde orbits at a large distance from the primary. This was thought to explain the preponderance of retrograde moons around Jupiter; however, Saturn has a more even mix of retrograde/prograde moons so the reasons are more complicated. Further examples. It is possible for a Hill sphere to be so small that it is impossible to maintain an orbit around a body. For example, an astronaut could not have orbited the 104 ton Space Shuttle at an orbit 300 km above the Earth, because a 104-ton object at that altitude has a Hill sphere of only 120 cm in radius, much smaller than a Space Shuttle. A sphere of this size and mass would be denser than lead, and indeed, in low Earth orbit, a spherical body must be more dense than lead in order to fit inside its own Hill sphere, or else it will be incapable of supporting an orbit. Satellites further out in geostationary orbit, however, would only need to be more than 6% of the density of water to fit inside their own Hill sphere. Within the Solar System, the planet with the largest Hill radius is Neptune, with 116 million km, or 0.775 au; its great distance from the Sun amply compensates for its small mass relative to Jupiter (whose own Hill radius measures 53 million km). An asteroid from the asteroid belt will have a Hill sphere that can reach 220,000 km (for 1 Ceres), diminishing rapidly with decreasing mass. The Hill sphere of 66391 Moshup, a Mercury-crossing asteroid that has a moon (named Squannit), measures 22 km in radius. A typical extrasolar "hot Jupiter", HD 209458 b, has a Hill sphere radius of 593,000 km, about eight times its physical radius of approx 71,000 km. Even the smallest close-in extrasolar planet, CoRoT-7b, still has a Hill sphere radius (61,000 km), six times its physical radius (approx 10,000 km). Therefore, these planets could have small moons close in, although not within their respective Roche limits. Hill spheres for the solar system. The following table and logarithmic plot show the radius of the Hill spheres of some bodies of the Solar System calculated with the first formula stated above (including orbital eccentricity), using values obtained from the JPL DE405 ephemeris and from the NASA Solar System Exploration website. Explanatory notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
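The density thresholds quoted above follow from asking when a spherical body of radius R and density ρ fits inside its own Hill sphere: with m = (4/3)πR³ρ, the radius R cancels and the condition becomes ρ ≥ 9M/(4πr³), where M is the Earth's mass and r the orbital radius. The sketch below is an illustrative check of the low-Earth-orbit and geostationary figures using rounded constants.

```python
# Sketch: minimum density for a spherical body to contain its own Hill sphere,
# rho_min = 9 * M_earth / (4 * pi * r**3), where r is the orbital radius.
import math

M_earth = 5.972e24                      # kg
R_earth = 6.371e6                       # m

def critical_density(orbit_radius_m):
    return 9.0 * M_earth / (4.0 * math.pi * orbit_radius_m ** 3)

rho_leo = critical_density(R_earth + 300e3)       # 300 km altitude
rho_geo = critical_density(4.216e7)               # geostationary orbital radius

print(rho_leo)                 # ~1.44e4 kg/m^3, denser than lead (~1.13e4 kg/m^3)
print(rho_geo / 1000.0)        # ~0.057, i.e. roughly 6% of the density of water
```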
[ { "math_id": 0, "text": "R_{\\mathrm{H}} \\approx a \\sqrt[3]{\\frac{m_2}{3(m_1+m_2)}}" }, { "math_id": 1, "text": "m2" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "e" }, { "math_id": 4, "text": "R_{\\mathrm{H}}" }, { "math_id": 5, "text": "R_{\\mathrm{H}} \\approx a (1-e) \\sqrt[3]{\\frac{m_2}{3(m_1+m_2)}}." }, { "math_id": 6, "text": "\\frac{R^3_{\\mathrm{H}}}{a^3} \\approx 1/3 \\frac{m2}{M}" }, { "math_id": 7, "text": "3\\frac{R^3_{\\mathrm{H}}}{a^3} \\approx \\frac{m2}{M}" }, { "math_id": 8, "text": "m" }, { "math_id": 9, "text": "M" }, { "math_id": 10, "text": "r" }, { "math_id": 11, "text": "r_{\\mathrm{H}}" }, { "math_id": 12, "text": "\\frac{Gm}{r^2_{\\mathrm{H}}}-\\frac{GM}{(r-r_{\\mathrm{H}})^2}+\\Omega^2(r-r_{\\mathrm{H}})=0," }, { "math_id": 13, "text": "G" }, { "math_id": 14, "text": "\\Omega=\\sqrt{\\frac{GM}{r^3}}" }, { "math_id": 15, "text": "m\\ll M" }, { "math_id": 16, "text": "\\frac{m}{r^2_{\\mathrm{H}}}-\\frac{M}{r^2}\\left(1-\\frac{r_{\\mathrm{H}}}{r}\\right)^{-2}+\\frac{M}{r^2}\\left(1-\\frac{r_{\\mathrm{H}}}{r}\\right)=0," }, { "math_id": 17, "text": "r_{\\mathrm{H}}/r" }, { "math_id": 18, "text": "\\frac{m}{r^2_{\\mathrm{H}}}-\\frac{M}{r^2}\\left(1+2\\frac{r_{\\mathrm{H}}}{r}\\right)+\\frac{M}{r^2}\\left(1-\\frac{r_{\\mathrm{H}}}{r}\\right) = \\frac{m}{r^2_{\\mathrm{H}}}-\\frac{M}{r^2}\\left(3\\frac{r_{\\mathrm{H}}}{r}\\right)\\approx 0." }, { "math_id": 19, "text": "\\frac{r_{\\mathrm{H}}}{r}\\approx \\sqrt[3]{\\frac{m}{3 M}}." } ]
https://en.wikipedia.org/wiki?curid=722235
72235151
Phylogenetic reconciliation
Technique in evolutionary study In phylogenetics, reconciliation is an approach to connect the history of two or more coevolving biological entities. The general idea of reconciliation is that a phylogenetic tree representing the evolution of an entity (e.g. homologous genes or symbionts) can be drawn within another phylogenetic tree representing an encompassing entity (respectively, species, hosts) to reveal their interdependence and the evolutionary events that have marked their shared history. The development of reconciliation approaches started in the 1980s, mainly to depict the coevolution of a gene and a genome, and of a host and a symbiont, which can be mutualist, commensalist or parasitic. It has also been used, for example, to detect horizontal gene transfer or to understand the dynamics of genome evolution. Phylogenetic reconciliation can account for a diversity of the evolutionary trajectories that make up life's history, intertwined with each other at all scales that can be considered, from molecules to populations or cultures. A recent avatar of the importance of interactions between levels of organization is the holobiont concept, where a macro-organism is seen as a complex partnership of diverse species. Modeling the evolution of such complex entities is one of the challenging and exciting directions of current research on reconciliation. Phylogenetic trees as nested structures. Phylogenetic trees are intertwined at all levels of organization, integrating conflicts and dependencies within and between levels. Macro-organism populations migrate between continents, their microbe symbionts switch between populations, the genes of their symbionts transfer between microbe species, and domains are exchanged between genes. This list of organization levels is not exhaustive, but gives a view of the levels where reconciliation methods have been used. As a generic method, reconciliation could take into account numerous other levels. For instance, it could consider the syntenic organization of genes, the interacting history of transposable elements and species, or the evolution of a protein complex across species. The scale of the evolutionary events considered can go from population-level events such as geographical diversification down to nucleotide-level ones inside genes, including for instance chromosome-level events inside genomes such as whole-genome duplication. Phylogenies have been used to represent the diversification of life at many levels of organization: macro-organisms, their cells throughout development, micro-organisms through marker genes, chromosomes, proteins and protein domains; they can also be helpful for understanding the evolution of elements of human culture such as languages or fairy tales. At each of these levels, phylogenetic trees describe different stories made of specific diversification events, which may or may not be shared among levels. Yet because they are structurally nested (similar to matryoshka dolls) or functionally dependent, the evolution at a particular level is bound to those at other levels. Phylogenetic reconciliation is the identification of the links between levels through the comparison of at least two associated trees. Reconciliation was originally developed for two trees, but reconciliations for more than two levels have recently been constructed (see section Explicit modeling of three or more levels). As such, reconciliation provides evolutionary scenarios that reveal conflict and cooperation among evolving entities.
These links may be unintuitive, for instance, genes present in the same genome may show uncorrelated evolutionary histories while some genes present in the genome of a symbiont may show a strong coevolution signal with the host phylogeny. Hence, reconciliation can be a useful tool to understand the constraints and evolutionary strategies underlying the assemblage that forms a holobiont. Because all levels essentially deal with the same object, a phylogenetic tree, the same models of reconciliation—in particular those based on duplication-transfer-loss events, which are central to this article—can be transposed, with slight modifications, to any pair of connected levels: an "inner", "lower", or "associate" entity (e.g. gene, symbiont species, population) evolves inside an "upper", or "host" one (respectively species, host, or geographical area). The upper and lower entities are partially bound to the same history, leading to similarities in their phylogenetic trees, but the associations can change over time, become more or less strict or switch to other partners. History. The principle of phylogenetic reconciliation was introduced in 1979 to account for differences between genes and species-level phylogenies. In a parsimonious setting, two evolutionary events, gene duplication and gene loss were invoked to explain the discrepancies between a gene tree and a species tree. It also described a score on gene trees knowing the species tree and an aligned sequence by using the number of gene duplication, loss, and nucleotide replacement for the evolution of the aligned sequence, an approach still central today with new models of reconciliation and phylogeny inference. The term "reconciliation" has been used by Wayne Maddison in 1997, as a reverse concept of "phylogenetic discord" resulting from gene level evolutionary events. Reconciliation was then developed jointly for the coevolution of host and symbiont and the geographic diversification of species. In both settings, it was important to model a horizontal event that implied parallel branches of the host tree: host switch for host and symbiont and species dispersion from one area to another in biogeography. Unlike for genes and genomes, the coevolution of host and symbiont and the explanation of species diversification by geography are not always the null hypothesis. A visual depiction of the two phylogenies in a tanglegram can help assess such coevolution, although it has no statistical obvious interpretation. Character methods, such as Brooks Parsimony Analysis, were proposed to test coevolution and reconstruct scenarios of coevolution. In these methods, one of the trees is forgotten except for its leaves, which are then used as a character evolving on the second tree. First models for reconciliation, taking explicitly into account the two topologies and using a mechanistic event-based approach, were proposed for host and symbiont and biogeography. Debates followed, as the methods were not yet completely sound but integrated useful information in a new framework. Costs for each event and a dynamic programming technique considering all pairs of host and symbiont nodes were then introduced into a host and symbiont approach, both of which still underlie most of the current reconciliation methods for host and symbiont as well as for species and genes. Reconciliation returned to the framework it was introduced in, gene and species. 
After character models were considered for horizontal gene transfer, a new reconciliation model, following and improving the dynamic programming approach presented for host and symbiont, effectively introduced horizontal gene transfer to gene and species reconciliation on top of the duplication and loss model. The progressive development of phylogenetic reconciliation was thus possible through exchanges between multiple research communities studying phylogenies at the host and symbiont, gene and species, or biogeography levels. This story and its modern developments have been reviewed several times, generally focusing on specific pairs of levels, with a few exceptions. New developments start to bring the different frameworks together with new integrative models. Pocket Gophers and their chewing lices: a classical example. Pocket gophers (Geomyidae) and their chewing lice (Trichodectidae) form a well studied system of host and symbiont coevolution. The phylogeny of host and symbiont and the matching of the leaves of their trees are depicted on the left. For the host, O. stands for "Orthogeomys", G. for "Geomys" and T. for "Thomomys"; for the symbiont, G. stands for "Geomydoecus" and T. for "Thomoydoecus". Reconciling the two trees means giving a scenario with evolutionary events and matching on the ancestral nodes depicting the coevolution of the two trees. The events considered in this system are the events of the DTL model: duplication, transfer (or host switch), loss, and cospeciation, the null event of coevolution. Two scenarios were proposed in two studies, using two different frameworks which could be deemed as pre-dynamic programming DTL reconciliation. In modern DTL reconciliation frameworks, costs are assigned to events. The two scenarios were then shown to correspond to maximum parsimonious reconciliation with different cost assignments. The scenario A uses 6 cospeciations, 2 duplications, 3 losses and 2 host switches to reconcile the two trees, while scenario B uses 5 cospeciations, 3 duplications, 3 losses and 2 host switches. The cost of a scenario is the sum of the cost of its events. For instance, with a cost of 0 for cospeciation, 2 for duplication, 1 for loss and 3 for host switch, scenario A has a cost of formula_0 and scenario B of formula_1, and so according to a parsimonious principle, scenario A would be deemed more likely (scenario A stays more likely as long as the cost of cospeciation is less than the cost of duplication). Development of Phylogenetic Reconciliation Models. Models and methods used today in phylogeny are the result of several decades of research, made progressively complex, driven by the nature of the data and the quest for biological realism on one side, and the limits and progresses of mathematical and algorithmic methods on the other. Pre-reconciliation models: characters on trees. Character methods can be used when there is no tree available for one of the levels, but only values for a character at the leaves of a phylogenetic tree for the other level. A model defines the events of character value change, their rate, probabilities or costs. For instance, the character can be the presence of a host on a symbiont tree, the geographical region on a species tree, the number of genes on a genome tree, or nucleotides in a sequence. Such methods thus aim at reconstructing ancestral characters at internal nodes of the tree. Although these methods have produced results on genome evolution, the utility of a second tree appears with very simple examples. 
If a symbiont has recently acquired the ability to spread in a group of species and thus it is present in most of them, character methods will wrongly indicate that the common ancestor of the hosts already had the symbiont. In contrast, a comparison of the symbiont and host trees would show discrepancies revealing horizontal transfers. The origins of reconciliation: the Duplication Loss model and the Lowest Common Ancestor mapping. Duplication and loss were invoked first to explain the presence of multiple copies of a gene in a genome or its absence in certain species. It is possible with those two events to reconcile any two trees, i.e. to map the nodes and branches of the lower and upper trees, or equivalently to give a list of evolutionary events explaining the discrepancies between the upper tree and the lower tree. A most parsimonious Duplication and Loss (DL) reconciliation is computed through the Lowest Common Ancestor (LCA) mapping: proceeding from the leaves to the root, each internal node is mapped to the lowest common ancestor of the mapping of its two children. A Markovian model for reconciliation. The LCA mapping in the DL model follows a parsimony principle: no event should be invoked if it is not necessary. However the use of this principle is debated, and it is commonly admitted that it is more accurate in molecular evolution to fit a probabilistic model as a random walk, which does not necessarily produce parsimonious scenarios. A birth and death Markovian model is such a model that can generate a lower tree "inside" a fixed upper one from root to leaves. Statistical inference provides a framework to find most likely scenarios, and in that case, a maximum likelihood reconciliation of two trees is also a parsimonious one. In addition, it is possible with such a framework to sample scenarios, or integrate over several possible scenarios in order to test different hypotheses, for example to explore the space of lower trees. Moreover, probabilistic models can be integrated into larger models, as probabilities simply multiply when assuming independence, for instance combining sequence evolution and DL reconciliation. Introducing horizontal transfer. Host switch, i.e. inheritance of a symbiont from a kin lineage, is a crucial event in the evolution of parasitic or symbiotic relationships between species. This horizontal transfer also models migration events in biogeography and became of interest for the reconciliation of gene and species trees when it appeared that many discrepancies could not simply be explained by duplication and loss and that horizontal gene transfer (HGT) was a major evolutionary process in micro-organisms evolution. This switching, or horizontal transfer, pattern can also model admixture or introgression. It is considered in character methods, without information from the symbiont phylogeny. On top of the DL model, horizontal transfer enables new and very different reconciliation scenarios. The simple yet powerful dynamic programming approach. The LCA reconciliation method yields a unique solution, which has been shown to be optimal for the problem of minimizing the weighted number of events, whatever the relative weights of duplication and loss. In contrast, with Duplication, horizontal Transfer and Loss (DTL), there can be several equally parsimonious reconciliations. For instance, a succession of duplications and losses can be replaced by a single transfer. 
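The LCA mapping described above can be implemented in a few dozen lines. The sketch below uses a hand-rolled tuple encoding of trees and illustrative helper names (none of this is taken from the published reconciliation programs), and it counts losses with one common convention: each species-tree edge skipped along a gene branch contributes a loss, minus one at speciation nodes. That loss-counting rule is an assumption of the sketch rather than something specified in the text.

```python
# Sketch: most parsimonious duplication-loss (DL) reconciliation via LCA mapping.
# Trees are encoded as nested tuples; leaves are strings naming the species.

species_tree = (("A", "B"), "C")                 # ((A,B),C)
gene_tree = (("a1", "b1"), ("a2", "c1"))         # ((a1,b1),(a2,c1))
leaf_species = {"a1": "A", "b1": "B", "a2": "A", "c1": "C"}

def ancestors(tree, target, path=()):
    """Path of species-tree nodes from the root down to `target` (inclusive)."""
    path = path + (tree,)
    if tree == target:
        return path
    if isinstance(tree, tuple):
        for child in tree:
            found = ancestors(child, target, path)
            if found:
                return found
    return None

def lca(tree, x, y):
    px, py = ancestors(tree, x), ancestors(tree, y)
    return [u for u, v in zip(px, py) if u == v][-1]

def depth(tree, node):
    return len(ancestors(tree, node)) - 1

def reconcile(g):
    """Return (mapping, events, losses) for the gene subtree g under LCA mapping."""
    if isinstance(g, str):
        return leaf_species[g], [], 0
    m_left, ev_left, loss_left = reconcile(g[0])
    m_right, ev_right, loss_right = reconcile(g[1])
    m = lca(species_tree, m_left, m_right)
    duplication = m in (m_left, m_right)
    events = ev_left + ev_right + [("duplication" if duplication else "speciation", m)]
    losses = loss_left + loss_right
    for m_child in (m_left, m_right):
        d = depth(species_tree, m_child) - depth(species_tree, m)
        losses += d if duplication else max(d - 1, 0)
    return m, events, losses

mapping, events, losses = reconcile(gene_tree)
print(mapping)   # (('A', 'B'), 'C')  -> the root of the species tree
print(events)    # two speciations and one duplication (at the root)
print(losses)    # 2
```

On this toy input the most parsimonious DL scenario is one duplication at the root of the species tree and two losses, matching what one would reconstruct by hand.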
One of the first ideas to define a computational problem and approach a resolution was, in a host/symbiont framework, to maximize the number of co-speciations with a heuristic algorithm. Another solution is to give relative costs to the events and find a scenario that minimizes the sum of the costs of its events. In the probabilistic model frameworks, the equivalent task consists of assigning rates or probabilities to events and search for maximum likelihood scenarios, or sample scenarios according to their likelihood. All these problems are solved with a dynamic programming approach. This dynamic programming method involves traversing the two trees in a postorder. Proceeding from the leaves and then going up in the two trees, for each couple of internal nodes (one for each tree), the cost of a most parsimonious DTL reconciliation is computed. In a parsimony framework, costs of reconciling a lower subtree rooted at formula_2 with an upper subtree rooted at formula_3 is initialized for the leaves with their matching: formula_4 And then inductively, denoting formula_5 the children of formula_6 the children of formula_7 the costs associated with speciation, duplication, horizontal transfer and loss, respectively (with formula_8 often fixed to 0), formula_9 The costs formula_10 and formula_11, because they do not depend on formula_3, can be computed once for all formula_3, hence achieving quadratic complexity to compute formula_12 for all couples of formula_3 and formula_2. The cost of losses only appears in association with other events because in parsimony, a loss can always be associated with the preceding event in the tree. The induction behind the use of dynamic programming is based on always progressing in the trees toward the roots. However some combinations of events that can happen consecutively can make this induction ill-defined. One such combination consists of a transfer followed immediately by a loss in the donor lineage (TL). Restricting the use of this TL event repairs the induction. With an unlimited use, it is necessary to use or add other known methods to solve systems of equations like fixed point methods, or numerical solving of differential equations. In 2016, only two out of seven of the most commonly used parsimony reconciliation programs did handle TL events, although their consideration can drastically change the result of a reconciliation. Unlike LCA mapping, DTL reconciliation typically yields several scenarios of minimal cost, in some cases an exponential number. The strength of the dynamic programming approach is that it enables to compute a minimum cost of coevolution of the input upper and lower tree in quadratic time, and to get a most parsimonious scenario through backtracking. It can also be transposed to a probabilistic framework to compute the likelihood of coevolution and get a most likely reconciliation, replacing costs with rates, minimums by sums and sums by products. Moreover, through multiple backtracks, the approach is suitable for enumerating all parsimonious solutions or to sample scenarios, optimal and sub-optimal, according to their likelihood. Estimation of event costs and rates. Dynamic programming per se is only a partial solution and does not solve several problems raised by reconciliation. Defining a most parsimonious DTL reconciliation requires assigning costs to the different kinds of events (D, T and L). Different cost assignments can yield different reconciliation scenarios, so there is a need for a way to choose those costs. 
There is a diversity of approaches to do so. CoRe-PA explores in a recursive manner the space of cost vectors, searching for a good matching with the event frequencies in reconciliations. ALE uses the same idea in a probabilistic framework to estimate the event rates by maximum likelihood. Alternatively, COALA is a preprocess using approximate Bayesian computation with sequential Monte Carlo: simulation and statistic rejection or acceptance of parameters with successive refinement. In the parsimony framework, it is also possible to divide the space of possible event costs into areas of costs which lead to the same Pareto optimal solution. Pareto optimal reconciliations are such that no other reconciliation has a strictly inferior cost for one type of event (duplication, transfer or loss), and less or equal for the others. It is possible as well to rely on external considerations in order to choose the event costs. For example, the software Angst chooses the costs that minimize the variation of genome size, in number of genes, between parent and children species. The problem of temporal feasibility. The dynamic programming method works for dated (internal nodes are totally ordered) or undated upper trees. However, with undated trees, there is a temporal feasibility issue. Indeed, a horizontal transfer implies that the donor and the receiver are contemporaneous, therefore implying a time constraint on the tree. In consequence, two horizontal transfers may be incompatible, because they imply contradicting time constraints. The dynamic programming approach can not easily check for such incompatibilities. If the upper tree is undated, finding a temporally feasible most parsimonious reconciliation is NP-hard. It is fixed parameter tractable, which means that there are algorithms running in time bounded by an exponential of the number of transfers in the output scenarios. Some solutions imply integer linear programming or branch and bound exploration. If the upper tree is dated, then there is no incompatibility issue because horizontal transfers can be constrained to never go backward in time. Finding a coherent optimal reconciliation is then solved in polynomial time or with a speed-up in RASCAL, by testing only a fraction of node mappings. Most of the software taking undated trees does not look for temporal feasibility, except Jane, which explores the space of total orders via a genetic algorithm, or, in a post process, Notung, and Eucalypt, which searches inside the set of optimal solutions for time consistent ones. Other methods work as supplementary layers to reconciliations, correcting reconciliations or returning a subset of feasible transfers, which can be used to date a species tree. Expanding phylogenies: Transfers from the dead. In phylogenetics in general, it is important to keep in mind that the extant and ancestral species that are represented in any phylogeny are only a sparse sample of the species that currently exist or ever have existed. This is why one can safely assume that all transfers that can be detected using phylogenetic methods have originated in lineages that are, strictly speaking, absent from a studied phylogeny. Accounting for extinct or unsampled biodiversity in phylogenetic studies can give a better understanding of these processes. Originally, DTL reconciliation methods did not recognize this phenomenon and only allowed for transfer between contemporaneous branches of the tree, hence ignoring most plausible solutions. 
Expanding phylogenies: Transfers from the dead. In phylogenetics in general, it is important to keep in mind that the extant and ancestral species represented in any phylogeny are only a sparse sample of the species that currently exist or have ever existed. This is why one can safely assume that all transfers detected using phylogenetic methods have originated in lineages that are, strictly speaking, absent from the studied phylogeny. Accounting for extinct or unsampled biodiversity in phylogenetic studies can give a better understanding of these processes. Originally, DTL reconciliation methods did not recognize this phenomenon and only allowed transfers between contemporaneous branches of the tree, hence ignoring most plausible solutions. However, methods working on undated upper trees can be seen as implicitly handling the unknown diversity by allowing transfers "to the future" from the point of view of one phylogeny, that is, transfers whose donor is more ancient than their recipient. A transfer to the future can be translated into a speciation to an unknown species, followed by a transfer from that unknown species. ALE in its dated version explicitly takes the unknown diversity into account by adding a Moran process of speciation/extinction of species to the dated birth/death model of gene evolution. Transfers from the dead are also handled in a parsimonious setting by Tera and ecceTERA, showing that considering these transfers improves the capacity to reconstruct gene trees using reconciliation, and, with a more explicit model and in a probabilistic setting, in ALE undated. The specificity of biogeography: a tree-like structure for the "evolution" of areas. In biogeography, some applications of reconciliation approaches consider as an upper tree an area cladogram with defined ancestral nodes. For instance, the root can be Pangaea and the nodes contemporary continents. Sometimes, internal nodes are not ancestral areas but the unions of the areas of their children, to account for the possibility of species evolving along the lower tree to inhabit one or several areas. In this case, the evolutionary events are migration, where one species colonizes a new area, allopatric speciation, or vicariance, equivalent to co-speciation in host/symbiont comparisons. Even though this approach does not always give a tree (if the unions AB and BC of leaves A, B, C exist, a child can have several parents), and this structure is not associated with time (it is possible for a species to go from A to AB by migration, as well as from AB to A by extinction), reconciliation methods, with events and dynamic programming, can infer evolutionary scenarios between the upper geographical structure and the lower species tree. DIVA and Lagrange are two reconciliation models constructing such a tree-like structure and then applying reconciliation, the first with a parsimony principle, the second in a probabilistic framework. Additionally, BioGeoBEARS is a biogeography inference package that reimplemented the DIVA and Lagrange models and allows for new options, such as distance-dependent transfers and statistical model selection. Graphical output. With two trees and multiple evolutionary events linking them to represent, viewing reconciled trees is a challenging but necessary task in order to make reconciliation studies more accessible. Some reconciliation software annotates the evolutionary events on the lower trees, while other programs and dedicated packages, in DL or DTL, draw the lower tree embedded in the upper one. One difficulty in this regard is the variety of output formats of the different reconciliation programs. A common standard, recphyloxml, has been established and endorsed by part of the community, and a viewer is available that can display reconciliations in multi-level systems. Addressing additional practical considerations. Applying DTL reconciliation to biological data raises several problems related to uncertainty and confidence levels of input and output. Concerning the output, the uncertainty of the answer calls for an exploration of the whole solution space.
Concerning the input, phylogenetic reconciliation has to handle uncertainties in the resolution or rooting of the upper or lower trees, or even to propose roots or resolutions according to their confidence. Exploring the space of reconciliations. Dynamic programming makes it possible to sample reconciliations, uniformly among optimal ones or according to their likelihood. It is also possible to enumerate them in time proportional to the number of solutions, a number which can quickly become intractable (even only for optimal ones). Finding and presenting structure among the multitude of possible reconciliations has been at the center of recent methodological developments, especially for methods aimed at hosts and symbionts. Several works have focused on representing a set of reconciliations in a compact way, from a uniform sample of optimal ones or by constructing a graph summarizing the optimal solutions. This can be achieved by giving support values to specific events based on all optimal (or suboptimal) reconciliations, or with the use of a consensus reconciled tree. In a DL model, it is possible to define a median reconciliation, based on shared events, and to compute it in polynomial time. EMPRess can group similar reconciliations through clustering, with all pairwise distances between reconciliations computable in polynomial time (independently of the number of most parsimonious reconciliations). With the same aim, Capybara defines equivalence classes among reconciliations, efficiently computing representatives for all classes, and outputs a given number of reconciliations with linear delay (first optimal ones, then suboptimal). The space of most parsimonious reconciliations can be expanded or reduced by increasing or decreasing the allowed horizontal transfer distance, which is easily done by dynamic programming. Inferring phylogenetic trees with reconciliation. Reconciliation and input uncertainty. Reconciliation works with two fixed trees, a lower and an upper, both assumed correct and rooted. However, those trees are not first-hand data. The most frequently used data for phylogenetics consists of aligned nucleotide or protein sequences. Extracting DNA, sequencing, assembling and annotating genomes, recognizing homology relationships among genes and producing multiple alignments for phylogenetic reconstruction are all complex processes where errors can ultimately affect the reconstructed tree. Any topology or rooting error can be misinterpreted and cause systematic bias. For instance, in DL reconciliations, errors on the lower tree bias the reconciliation toward more duplication events closer to the root and more losses closer to the leaves. On the other hand, reconciliation, as a macro-evolutionary model, can work as a supplementary layer to the micro-evolutionary model of sequence evolution, resolving polytomies (nodes with more than two children) or rooting trees, or be intertwined with it through integrative models in order to get better phylogenies. Most of the works in this direction focus on gene/species reconciliations; nevertheless, some first steps have been made in host/symbiont settings, such as considering unrooted symbiont trees or dealing with polytomies in Jane. Exploring the space of lower trees with reconciliation. Reconciliation can easily take unrooted lower trees as input, which is a frequently used feature because trees inferred from molecular data are typically unrooted.
It is possible to test all possible roots, or a thoughtful triple traversal of the unrooted tree makes it possible to do so without additional time complexity. In a duplication-loss model, the set of roots minimizing the costs is found close together, forming a "plateau", a property which does not generalize to DTL. Reconciliation can also take as input non-binary trees, that is, trees with internal nodes that have more than two children. Such trees can be obtained for example by contracting branches with low statistical support. Inferring a binary tree from a non-binary tree according to reconciliation scores is solved in DL with efficient methods. In DTL, the problem is NP-hard. Heuristics and exact fixed-parameter tractable algorithms are possible solutions. Another way to handle uncertainty in lower trees is to take as input a sample of alternative lower trees instead of a single one. For example, in the paper that gave reconciliation its name, it was proposed to consider all most likely lower trees, and choose from these trees the best one according to their DL costs, a principle also used by TreeFix-DTL. The sample of lower trees can similarly reflect their likelihood according to the aligned sequences, as obtained from Bayesian Markov chain Monte Carlo methods implemented for example in PhyloBayes. AngST, ALE and ecceTERA use "amalgamation", an extension of the DTL dynamic programming that is able to efficiently traverse a set of alternative lower trees instead of a single tree. A local search in the space of lower trees guided by a joint likelihood, on the one hand from multiple sequence alignments and on the other hand from reconciliation with the upper tree, is achieved in Phyldog with a DL model and in GeneRax with DTL. In a DL model with sequence evolution and a relaxed molecular clock, the lower tree space can be explored with an MCMC. MowgliNNI can modify the input gene tree at poorly supported nodes to increase the DTL score, while TreeSolve resolves the multifurcations added by collapsing poorly supported nodes. Finally, integrative models, mixing sequence evolution and reconciliation, can compute a joint likelihood via dynamic programming (for both reconciliation and gene sequence evolution), and use Markov chain Monte Carlo with a molecular clock to estimate branch lengths, in a DL model or with a relaxed molecular clock, and in a DTL model. These models have been applied in gene/species frameworks, but not yet in host/symbiont or biogeography contexts. Inferring upper trees using reconciliation. Inferring an upper tree from a set of lower trees is a long-standing question related to the supertree problem. It is particularly interesting in the case of gene/species reconciliation, where many (typically thousands of) gene trees are available from complete genome sequences. Supertree methods attempt to assemble a species tree based on sets of trees which may differ in terms of contemporary species sets and topology, but usually without consideration for the biological processes explaining these differences. However, some supertree approaches are statistically consistent for the reconstruction of the species tree if the gene trees are simulated under a DL model. This means that if the number of input lower trees generated from the true upper tree via the DL model grows toward infinity, given that there are no additional errors, the output upper tree converges almost surely to the true one.
This has been shown in the case of a quartet distance, and with a generalised Robinson-Foulds multicopy distance, with better running time but assuming that gene trees do not contain bipartitions contradicting the species tree, which seems rare under a DL model. Reconciliation can also be used for the inference of upper trees. This is a computationally hard problem: already resolving polytomies in a non-binary upper tree with a binary lower one, minimizing a DL reconciliation score, is NP-hard. In particular, reconstructing the species tree giving the best DL cost for several gene trees is NP-hard and 2-approximable. It is called the Gene Duplication problem or, more generally, Gene Tree parsimony. The problem was seen as a way to detect paralogy to get better species tree reconstructions, and there are interesting results on its complexity and on the behaviour of the model with different input sizes, structures and ILS presence. Multiple solutions exist, with ILP or heuristics, and with the possibility of a deep coalescence score. ODTL takes as input gene trees and searches for a maximum likelihood species tree according to a DTL model, with a hill-climbing search. The approach produces a species tree with internal nodes ordered in time, ensuring time compatibility for the scenarios of transfer among lower trees (see the section on temporal feasibility above). Addressing a more general problem, Phyldog searches for the maximum likelihood species tree, gene trees and DL parameters from multiple family alignments via multiple rounds of local search. It thus performs the exploration of both upper and lower trees at the same time. MixTreEM presents a faster solution. Limits of the two-level DTL model. A limit to dynamic programming: non-independent evolution of children lineages. The dynamic programming framework, like usual birth and death models, works under the hypothesis of independent evolution of children lineages in the lower tree. However, this hypothesis does not hold if the model is complemented with several other documented evolutionary events, such as horizontal transfer with replacement of a homologous gene in the recipient lineage, or gene conversion. Horizontal transfer with replacement is usually modeled by a rearrangement of the upper tree, called Subtree Prune and Regraft (SPR). Reconciling under SPR is NP-hard, even in dated trees, and fixed-parameter tractable regarding the output size. Another way to model and infer replacing horizontal transfers is through a maximum agreement forest, where branches are cut in the lower and upper trees in order to get two identical (or statistically indistinguishable) upper and lower forests. The problem is NP-hard, but several approximations have been proposed. Replacing transfers can be considered on top of the DL model. In the same vein, gene conversion can be seen as a "replacing duplication". In this latter case, a polynomial algorithm which does not use dynamic programming and is an extension of the LCA method can find all optimal solutions, including gene conversions. Integrating population levels: failure to diverge and Incomplete Lineage Sorting. In host/symbiont frameworks, a single symbiont species is sometimes associated with several host species. This means that while a speciation or diversification has been observed in the host, the populations are indistinguishable in the symbiont.
This is handled for example by additional polytomies in the symbiont tree, possibly leading to intractable inference problems, because polytomies need to be resolved. It is also modeled by an additional evolutionary event, "failure to diverge" (Jane, Amocoala). Failure to diverge can be a way to allow "free" host switch in a population, a flow of symbionts between closely related hosts. Following this idea, host switches allowed only between close hosts are considered in Eucalypt. This idea of horizontal flow between close populations can also be applied to gene/species frameworks, with a definition of species based on a gradient of gene flow between populations. Failure to diverge is one way of introducing population dynamics in reconciliation, a framework mainly adapted to the multi-species level, where populations are supposed to be well differentiated. There are other population phenomena that limit this framework, one of them being deep coalescence of lineages, leading to Incomplete Lineage Sorting (ILS), which is not handled by the DTL model. The multispecies coalescent is a classical model of allele evolution along a species tree, with birth of alleles and sorting of alleles at speciations, that takes into account population sizes and naturally encompasses ILS. In a reconciliation context, several attempts have been made in order to account for ILS without the complex integration of a population model. For example, ILS can be seen as a possible evolutionary pattern for the gene tree. In that case, children lineages are not independent of one another, leading to intractability results. ILS alone can be handled with LCA, but ILS + DL reconciliation is NP-hard, even without transfers. Notung handles ILS by collapsing short branches of the species tree into polytomies and allowing ILS as a free diversification of gene trees on those polytomies. ecceTERA bounds the maximum size of the connected parts of the species tree where ILS can happen, giving a fixed-parameter tractable algorithm in that parameter. ILS and DL can be considered on an upper network instead of a tree. This models in particular introgression, with the possibility of estimating model parameters. More integrative reconciliation models accounting for ILS have been proposed, including both DL and the multispecies coalescent, with DLCoal. It is a probabilistic model with a parsimony translation, proposing two sequential LCA-type heuristics handled via an intermediate locus tree between the gene and species trees. However, outside of the gene/species reconciliation framework, ILS does not appear to have been considered in host/symbiont or biogeography settings. Cophylogeny with more than two levels. A striking aspect of reconciliation is the common methodology handling different levels of organization: it is used for comparing domain and protein trees, gene and species trees, host and symbiont trees, population and geographic trees. However, now that scientists tend to consider that multi-level models of biological functioning bring a novel and game-changing view of organisms and their environment, the question is how to use reconciliation to bring phylogenetics to this holobiont era. Coevolution of entities at different scales of evolution is at the basis of the holobiont idea: macro-organisms, micro-organisms and their genes all have a different history bound to a common functioning in a single ecosystem.
Biological systems like the entanglement of host, symbionts and their genes imply functional and evolutionary dependencies between more than two levels. Examples of multi-level systems with complex evolutionary inter-dependencies. Genes coevolving beyond genome boundaries. The holobiont concept stresses the possibility that genes from different genomes cooperate and coevolve. For instance, certain genes in a symbiont genome may provide a function to its host, like the production of a vital compound absent from available feeding sources. An iconic example is that of blood-feeding or sap-feeding insects, which often depend on one or several bacterial symbionts to thrive on a resource that is abundant in sugar but lacks essential amino acids or vitamins. Another example is the association of Fabaceae with nitrogen-fixing bacteria. The compound beneficial to the host is typically produced by a set of genes encoded in the symbiont genome, which, throughout evolution, may be transferred to other symbionts and/or in and out of the host genome. Reconciliation methods have the potential to reveal evolutionary links between portions of genomes from different species. A search for coevolving genes beyond the boundaries of the genomes in which they are encoded would highlight the basis for the association of organisms in the holobiont. Horizontal gene transfer routes depend on multiple levels. In insect systems with intracellular mutualistic symbionts, multiple occurrences of horizontal gene transfer have been identified, whether from host to symbiont, symbiont to host or symbiont to symbiont. Transfers of endosymbiont genes involved in nutrition pathways beneficial to the insect host have been shown to occur preferentially if the donor and recipient lineages share the same host. This is also the case in insects with bacterial symbionts providing defensive proteins, or in obligate leaf nodule bacterial symbionts associated with plants. In the human host, gene transfer has been shown to occur preferentially among symbionts hosted in the same organs. A review of horizontal gene transfers in host/symbiont systems stresses the importance of supporting HGTs with multiple lines of evidence. Notably, it is argued that transfers should be considered better supported when they involve symbionts sharing a habitat, a geographical area, or the same host. One should, however, keep in mind that most of the diversity of hosts and symbionts is unknown and that transfers may have occurred in unsampled closely related species, hosts or symbionts. The idea that gene transfer in symbionts is constrained by the host can also be used to investigate the host's phylogenetic history. For instance, based on phylogeographical studies, it is now accepted that the bacterium "Helicobacter pylori" has been associated with human populations since the origins of the human species. An analysis of the genomes of "Helicobacter pylori" in Europe suggests that they derive from a recombination between African and Asian "Helicobacter pylori". This strongly implies early contacts between the corresponding human populations. Similarly, an analysis of HGTs in coronaviruses from different mammalian species using reconciliation methods has revealed frequent contacts between viral lineages, which can be interpreted as frequent host switches. Cultural evolution. The evolution of elements of human culture, for instance languages and folktales, in association with human population genetics, has been studied using concepts from phylogenetics.
Although reconciliation has never been used in this framework, some of these studies encompass multiple levels of organization, each represented by a tree or the evolution of a character, with a focus on the coevolution of these levels. Language trees can be compared with population trees in order to reveal vertically transmitted folktales, via a character model on the language tree. Variants in each folktale family, languages, genetic diversity, populations and geography can be compared pairwise, to link folktale diversification with languages on one side and with geography on the other. As in genetics, where symbionts sharing a host promote HGTs, linguistic barriers can hinder the transmission of folktales or language elements. Investigating three-level systems using two-level reconciliation. Multi-level reconciliation is not as developed as two-level reconciliation. One way to approach the evolutionary dependencies between more than two levels of organization is to try to use available standard two-level methods to give a first insight into a biological system's complexity. Multi-gene events: implicit consideration of an intermediate level. At the gene/species tree level, one typically deals with many different gene trees. In this case, the hypothesis that different gene families evolve independently is made implicitly. However, this need not be the case. For instance, duplication, transfer and loss can occur for segments of a genome spanning an arbitrary number of contiguous genes. It is possible to consider such multi-gene events using an intermediate guide for lower trees inside the upper one. For instance, one can compute the joint likelihood of multiple gene tree reconciliations with a dated species tree under duplication, loss and whole-genome duplication, or work in a parsimony setting; one formulation of the problem is NP-hard. Similarly, the DL framework can be enriched with duplication and loss of chromosome segments instead of single genes. However, DL reconciliation becomes intractable with that new possibility. The link between two consecutive genes can also be modeled as an evolving character, subject to gain, loss, origination, breakage, duplication and transfer. The evolution of this link appears as an additional level to species and gene trees, partly constrained by the gene/species tree reconciliation, partly evolving on its own, according to genome organization. It thus models synteny, or proximity between genes. At another scale, it can also model the evolution of two domains belonging to a protein. The detection of "highways of transfers", the preferential acquisition of groups of genes from a specific donor, is another example of non-independence of gene histories. Similarly, multi-gene transfers can be detected. It has also led to methodological developments such as reconciliations using phylogenetic networks, seen as trees augmented with transfer edges, which can be used to constrain transfers in a DTL model. Networks can also be used to model introgression and incomplete lineage sorting. Detecting coevolution in multiple pairs of levels. A central question in understanding the evolution of a holobiont is to know which levels coevolve with each other, for instance host species, host genes, symbionts and symbiont genes. It is possible to approach the multiple inter-dependencies between all levels of evolution by multiple pairwise comparisons of two evolving entities.
Reconciliation of host and symbiont on one side and geography and symbiont on the other can also help to identify patterns of diversification of host and symbiont that reflect either coevolution or a common geographical diversification. Similarly, a study used reconciliation methods to differentiate the effect of diet evolution and phylogenetic inertia on the composition of mammalian gut microbiomes. By reconstructing ancestral diets and microbiome compositions onto a mammalian phylogeny, the study revealed that both effects contribute, but at different time scales. Explicit modeling of three or more levels. In a model of a multi-level system such as host/symbiont/genes, horizontal gene transfers should be more likely between two symbionts of the same host. This is invisible to a two-level gene tree/species tree or host/symbiont reconciliation: in some cases, looking at any combination of two levels can lead to missing an evolutionary scenario which can only be identified as the most likely one if the information from the three trees is considered together. To face the limitations of these uses of standard two-level reconciliations on systems involving inter-dependencies at multiple levels, a methodological effort has been undertaken in the last decade to construct and use multi-level models. This requires the identification of at least one "intermediate" level between the upper and the lower one. Pre-reconciliation: characters onto reconciled trees. A first step towards integrated three-level models is to consider phylogenetic trees at two levels and another level represented only by characters at the leaves of one of the trees. For instance, a reconciliation of host and symbiont phylogenies can be informed by geographic data. Ancestral geographic locations of host and symbiont species obtained through a character inference method can then be used to constrain the host/symbiont reconciliation: ancestral hosts and symbionts can only be associated if they belong to the same geographical location. At another scale, evolution at the sub-gene level can be approached with a character method. Here, parts of genes (e.g. the sequences coding for protein domains) are reconciled according to a DL model with a species tree, and the genes they belong to are recorded as characters of these parts. Ancestral genes are then reconstructed a posteriori via merges and splits of gene parts. Two-level reconciliations informed by a third level. As pointed out by several studies mentioned above, an upper level can inform a reconciliation between an intermediate and a lower one, notably for horizontal transfers. Three-level models can take these assumptions into account to guide reconciliations between an intermediate tree and lower levels with the knowledge of an upper tree. The model can for example give higher likelihoods to reconciliation scenarios where horizontal gene transfers happen between entities sharing the same habitat. This has been achieved for the first time with a DTL gene/species reconciliation nested with a DTL domain/gene reconciliation. Different costs are used for inter and intra transfers, depending on whether or not the transfer happens between genes of the same genome. Note that this model explicitly considers three levels and three trees, but does not yet define a real three-level reconciliation with an associated likelihood or score. It relies on a sequential operation, where the second reconciliation is informed by the result of the first one.
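As a toy illustration of this kind of guidance (it is not the scoring scheme of any published method; the host assignments and the two cost values below are made up for the example), a transfer can be penalized less when the donor and recipient genomes belong to symbionts sharing the same host:

# Hypothetical assignment of symbiont genomes to the hosts they live in.
host_of = {"symbiont1": "hostA", "symbiont2": "hostA", "symbiont3": "hostB"}

def transfer_cost(donor_genome, recipient_genome, intra_host_cost=1.0, inter_host_cost=3.0):
    # Transfers between symbionts of the same host are favoured by a lower cost;
    # both cost values are arbitrary and would be parameters of a real model.
    if host_of[donor_genome] == host_of[recipient_genome]:
        return intra_host_cost
    return inter_host_cost

print(transfer_cost("symbiont1", "symbiont2"))  # 1.0: same host, favoured
print(transfer_cost("symbiont1", "symbiont3"))  # 3.0: different hosts, penalized

A cost function of this form could replace the constant transfer cost in a dynamic programming recursion like the one sketched earlier, which is essentially how knowledge of the upper tree can guide the reconciliation between the intermediate and lower levels.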
The reconciliation problem in multi-level models. The next step is to define the score of a reconciliation consisting of three nested trees and to compute, given the three trees, three-level reconciliations according to their score. This has been achieved with a species/gene/domain system, where genes evolve within the species tree with a DL model and domains evolve within the gene/species system with a DTL model, forbidding domain transfers between genes of two different species. Inference involves candidate scenarios with joint scores. Computing the minimum score scenario is NP-hard, but dynamic programming or integer linear programming can offer heuristics. Variations of the problem considering multiple domains are available, and so is a simulation framework. Inferring the intermediate tree using models of 3-level lower/intermediate/upper reconciliation. Just as two-level reconciliation can be used to improve lower or upper phylogenies, or to help construct them from aligned sequences, joint reconciliation models can be used in the same manner. In this vein, a coupled gene/species DL, domain/gene DL and gene sequence evolution model in a Bayesian framework improves the reconstruction of gene trees. Software. Multiple pieces of software have been developed to implement the various models of reconciliation. The following table does not aim for exhaustiveness but presents a number of software tools aimed at reconciling trees to infer reconciliation scenarios, or for related usage, such as correcting or inferring trees, or testing coevolution. The "levels of interest" column details the levels for which the software was implemented, even though it is entirely possible, for instance, to use software made for species and gene reconciliation to reconcile hosts and symbionts. Parsimony or probability is the underlying model that is used for the reconciliation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "6 \\times 0 + 1\\times 2 + 3\\times 1+ 1 \\times 3 = 8" }, { "math_id": 1, "text": "5\\times 0 + 1 \\times 2 + 3 \\times 1 + 2 \\times 3=11" }, { "math_id": 2, "text": "l" }, { "math_id": 3, "text": "U" }, { "math_id": 4, "text": "\n\\begin{align}\n c(U,l) = 0 \\mbox{ if } l \\in U \\mbox{ else } c(U,l) = \\infty\\\\\n\\end{align}\n" }, { "math_id": 5, "text": "l^{\\prime}, l^{\\prime \\prime}" }, { "math_id": 6, "text": "l, U^{\\prime},U^{\\prime \\prime}" }, { "math_id": 7, "text": "U, c^S,c^D,c^T,c^L " }, { "math_id": 8, "text": "c^S" }, { "math_id": 9, "text": "\n\n c(U,l) = \\min \\begin{cases}\n\n c^S + \\min( c(U^{\\prime},l^{\\prime})+c(U^{\\prime \\prime},l^{\\prime \\prime}), c(U^{\\prime \\prime},l^{\\prime}) + c(U^{\\prime},l^{\\prime \\prime}), c(U^{\\prime},l) + c^L, c(U^{\\prime \\prime},l) + c^L )\\\\\n c^D + c(U,l^{\\prime}) + c(U,l^{\\prime \\prime})\\\\\n c^T + \\min( \\min_{V}(c(V,l^{\\prime})) +c(U,l^{\\prime \\prime}), \\min_{V}(c(V,l^{\\prime \\prime}))+c(U,l^{\\prime})) \n\\end{cases}\n" }, { "math_id": 10, "text": "\\min_V(c(V,l^{\\prime}))" }, { "math_id": 11, "text": "\\min_V(c(V,l^{\\prime \\prime}))" }, { "math_id": 12, "text": "c" } ]
https://en.wikipedia.org/wiki?curid=72235151
72235948
Standard deviation line
In statistics, the standard deviation line (or SD line) marks points on a scatter plot that are an equal number of standard deviations away from the average in each dimension, with the direction determined by the sign of the correlation. For example, in a 2-dimensional scatter diagram with positively correlated variables formula_0 and formula_1, a point that is 1 standard deviation above the mean of formula_0 and also 1 standard deviation above the mean of formula_1 is on the SD line. The SD line is a useful visual tool since points in a scatter diagram tend to cluster around it, more or less tightly depending on their correlation. Properties. Relation to regression line. The SD line goes through the point of averages and has a slope of formula_2 when the correlation between formula_0 and formula_1 is positive, and formula_3 when the correlation is negative. Unlike the regression line, the SD line does not take into account the strength of the relationship between formula_0 and formula_1. The slope of the SD line is related to that of the regression line by formula_4 where formula_5 is the slope of the regression line, formula_6 is the correlation coefficient, and formula_7 is the magnitude of the slope of the SD line. Typical distance of points to SD line. The root mean square vertical distance of points from the SD line is formula_8. This gives an idea of the spread of points around the SD line.
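These relationships are easy to check numerically. The following Python sketch is an illustration only (not part of any statistics package); the simulated data and the random seed are arbitrary. It compares the SD-line slope with the regression slope and checks the root mean square vertical distance formula empirically.

import numpy as np

rng = np.random.default_rng(0)

# Simulate positively correlated data.
n = 100_000
x = rng.normal(loc=10.0, scale=2.0, size=n)
y = 0.5 * x + rng.normal(loc=0.0, scale=1.5, size=n)

sx, sy = x.std(), y.std()
r = np.corrcoef(x, y)[0, 1]

sd_slope = np.sign(r) * sy / sx   # slope of the SD line
reg_slope = r * sy / sx           # slope of the regression line

# Vertical deviations of the points from the SD line through the point of averages.
sd_line_at_x = y.mean() + sd_slope * (x - x.mean())
rms_vertical = np.sqrt(np.mean((y - sd_line_at_x) ** 2))

print(sd_slope, reg_slope)                           # the regression slope is shrunk by the factor r
print(rms_vertical, np.sqrt(2 * (1 - abs(r))) * sy)  # these two values agree closely

Because the regression slope equals the SD-line slope multiplied by the correlation coefficient, the regression line is always the flatter of the two whenever the correlation is not perfect.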
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "y" }, { "math_id": 2, "text": "\\frac{\\sigma_y}{\\sigma_x} " }, { "math_id": 3, "text": "-\\frac{\\sigma_y}{\\sigma_x}" }, { "math_id": 4, "text": "a = r \\frac{\\sigma_y}{\\sigma_x}" }, { "math_id": 5, "text": "a" }, { "math_id": 6, "text": "r" }, { "math_id": 7, "text": "\\frac{\\sigma_y}{\\sigma_x}" }, { "math_id": 8, "text": "\\sqrt{2(1 - |r|)} \\times\\sigma_y" } ]
https://en.wikipedia.org/wiki?curid=72235948
72244919
Lexicographic optimization
Lexicographic optimization is a kind of multi-objective optimization. In general, multi-objective optimization deals with optimization problems with two or more objective functions to be optimized simultaneously. Often, the different objectives can be ranked in order of importance to the decision-maker, so that objective formula_0 is the most important, objective formula_1 is the next most important, and so on. Lexicographic optimization presumes that the decision-maker prefers even a very small increase in formula_0 to even a very large increase in formula_2 etc. Similarly, the decision-maker prefers even a very small increase in formula_1 to even a very large increase in formula_3 etc. In other words, the decision-maker has lexicographic preferences, ranking the possible solutions according to a lexicographic order of their objective function values. Lexicographic optimization is sometimes called preemptive optimization, since a small increase in one objective value preempts a much larger increase in less important objective values. As an example, consider a firm which puts safety above all. It wants to maximize the safety of its workers and customers. Subject to attaining the maximum possible safety, it wants to maximize profits. This firm performs lexicographic optimization, where formula_0 denotes safety and formula_1 denotes profits. As another example, in project management, when analyzing PERT networks, one often wants to minimize the mean completion time and, subject to this, minimize the variance of the completion time. Notation. A lexicographic maximization problem is often written as: formula_4 where formula_5 are the functions to maximize, ordered from the most to the least important; formula_6 is the vector of decision variables; and formula_7 is the "feasible set", the set of possible values of formula_6. A lexicographic minimization problem can be defined analogously. Algorithms. There are several algorithms for solving lexicographic optimization problems. Sequential algorithm for general objectives. A lexicographic optimization problem with "n" objectives can be solved using a sequence of "n" single-objective optimization problems, as follows:Alg.1 In iteration "t" (for "t" = 1, ..., "n"), solve the single-objective problem formula_8 and denote its optimal value by formula_9. So, in the first iteration, we find the maximum feasible value of the most important objective formula_10, and put this maximum value in formula_11. In the second iteration, we find the maximum feasible value of the second-most important objective formula_12, with the additional constraint that the most important objective must keep its maximum value of formula_11; and so on. The sequential algorithm is general: it can be applied whenever we have a solver for the single-objective problems (a small code sketch of this sequential approach on a linear example is given below). Lexicographic simplex algorithm for linear objectives. Linear lexicographic optimization is a special case of lexicographic optimization in which the objectives are linear, and the feasible set is described by linear inequalities. It can be written as: formula_13 where formula_14 are vectors representing the linear objectives to maximize, ordered from the most to the least important; formula_6 is the vector of decision variables; and the feasible set is determined by the matrix formula_15 and the vector formula_16. Isermann extended the theory of linear programming duality to lexicographic linear programs, and developed a lexicographic simplex algorithm. In contrast to the sequential algorithm, this simplex algorithm considers all objective functions simultaneously.
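The following Python sketch illustrates the sequential approach on a small linear example, using scipy.optimize.linprog (assuming SciPy is available). The problem data, the tolerance value and the function name are chosen only for the illustration and are not part of any of the algorithms cited above.

import numpy as np
from scipy.optimize import linprog

def lex_maximize(objectives, A_ub, b_ub, tol=1e-9):
    # Sequentially maximize the linear objectives over {x >= 0 : A_ub x <= b_ub},
    # freezing each objective at its optimum (up to a numerical tolerance) before moving on.
    A_ub = [list(row) for row in A_ub]
    b_ub = list(b_ub)
    x = None
    for c in objectives:
        # linprog minimizes, so negate the objective to maximize it.
        res = linprog(c=[-ci for ci in c], A_ub=A_ub, b_ub=b_ub)
        if not res.success:
            raise ValueError(res.message)
        x = res.x
        best = float(np.dot(c, x))
        # Keep this objective at its maximum: c . x >= best, written as -c . x <= -best.
        A_ub.append([-ci for ci in c])
        b_ub.append(-(best - tol))
    return x

# Two objectives over x1, x2 >= 0 with x1 + x2 <= 10:
# first maximize x1 + x2, then, among those optima, maximize x2.
print(lex_maximize(objectives=[[1, 1], [0, 1]], A_ub=[[1, 1]], b_ub=[10]))  # approximately [0, 10]

The small tolerance keeps the added constraints numerically feasible; the weighted-average approach described in the next section avoids these hard constraints altogether by folding all objectives into a single weighted sum.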
Weighted average for linear objectives. Sherali and Soyster prove that, for any linear lexicographic optimization problem, there exists a set of weights formula_17 such that the set of lexicographically-optimal solutions is identical to the set of solutions to the following single-objective problem: formula_18 One way to compute the weights is given by Yager. He assumes that all objective values are real numbers between 0 and 1, and that the smallest difference between any two possible values is some constant "d" &lt; 1 (so that values with a difference smaller than "d" are considered equal). Then, the weight formula_19 of formula_20 is set to approximately formula_21. This guarantees that maximizing the weighted sum formula_22 is equivalent to lexicographic maximization. Cococcioni, Pappalardo and Sergeyev show that, given a computer that can make numeric computations with infinitesimals, it is possible to choose weights that are infinitesimals (specifically: formula_23; formula_24 is infinitesimal; formula_25 is infinitesimal-squared; etc.), and thus reduce linear lexicographic optimization to single-objective linear programming with infinitesimals. They present an adaptation of the simplex algorithm to infinitesimals and give some running examples. Properties. (1) Uniqueness. In general, a lexicographic optimization problem may have more than one optimal solution. However, if formula_26 and formula_27 are two optimal solutions, then their objective values must be the same, that is, formula_28 for all formula_29.Thm.2 Moreover, if the feasible domain is a convex set and the objective functions are strictly concave, then the problem has at most one optimal solution, since if there were two different optimal solutions, their mean would be another feasible solution in which the objective functions attain a higher value, contradicting the optimality of the original solutions. (2) Partial sums. Given a vector formula_5 of functions to optimize, for all "t" in 1, ..., "n", define formula_30 = the sum of all functions from the most important to the "t"-th most important one. Then, the original lexicographic optimization problem is equivalent to the following:Thm.4 formula_31 In some cases, the second problem may be easier to solve. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f_1" }, { "math_id": 1, "text": "f_2" }, { "math_id": 2, "text": "f_2, f_3, f_4, " }, { "math_id": 3, "text": "f_3, f_4, " }, { "math_id": 4, "text": "\n\\begin{align}\n\\operatorname{lex}\n\\max &&\nf_1(x), f_2(x), \\ldots, f_n(x)\n\\\\\n\\text{subject to} && x\\in X\n\\end{align}\n" }, { "math_id": 5, "text": "f_1,\\ldots, f_n " }, { "math_id": 6, "text": "x " }, { "math_id": 7, "text": "X " }, { "math_id": 8, "text": "\n\\begin{align}\n\\max ~~~\nf_t(x)\n\\\\\n\\text{subject to} ~~~ &x\\in X,\n\\\\ &f_k(x) \\geq z_k \\text{ for all } k \\text{ in } 1, \\ldots, t-1.\n\\end{align}\n" }, { "math_id": 9, "text": "\nz_t\n" }, { "math_id": 10, "text": "\nf_1(x)\n" }, { "math_id": 11, "text": "\nz_1\n" }, { "math_id": 12, "text": "\nf_2(x)\n" }, { "math_id": 13, "text": "\n\\begin{align}\n\\operatorname{lex}\n\\max &&\nc_1\\cdot x, c_2\\cdot x, \\ldots, c_n \\cdot x\n\\\\\n\\text{subject to} && A\\cdot x \\leq b, x\\geq 0\n\\end{align}\n" }, { "math_id": 14, "text": "c_1,\\ldots, c_n " }, { "math_id": 15, "text": "A " }, { "math_id": 16, "text": "b " }, { "math_id": 17, "text": "w_1 > w_2 > \\cdots > w_n" }, { "math_id": 18, "text": "\n\\begin{align}\n\\max &&\nw_1 f_1(x) + \\cdots + w_n f_n(x)\n\\\\\n\\text{subject to} && x\\in X\n\\end{align}\n" }, { "math_id": 19, "text": "w_t " }, { "math_id": 20, "text": "f_t(x) " }, { "math_id": 21, "text": "d^t " }, { "math_id": 22, "text": "\\sum_t w_t f_t(x) " }, { "math_id": 23, "text": "w_1=1 " }, { "math_id": 24, "text": "w_2 " }, { "math_id": 25, "text": "w_3 " }, { "math_id": 26, "text": "x^1 " }, { "math_id": 27, "text": "x^2 " }, { "math_id": 28, "text": "f_i(x^1) = f_i(x^2) " }, { "math_id": 29, "text": "i\\in[n] " }, { "math_id": 30, "text": "f_{1..t} := \\sum_{i=1}^t f_i " }, { "math_id": 31, "text": "\n\\begin{align}\n\\operatorname{lex}\n\\max &&\nf_{1...1}(x), f_{1..2}(x), \\ldots, f_{1..n}(x)\n\\\\\n\\text{subject to} && x\\in X\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=72244919
72246040
Fast probability integration
Fast probability integration (FPI) is a method of determining the probability of a class of events, particularly a failure event, that is faster to execute than Monte Carlo analysis. It is used where large numbers of time-variant variables contribute to the reliability of a system. The method was proposed by Wen and Chen in 1987. For a simple failure analysis with one stress variable, there will be a time-variant failure barrier, formula_0, beyond which the system will fail. This simple case may have a deterministic solution, but for more complex systems, such as crack analysis of a large structure, there can be a very large number of variables, for instance because of the large number of ways a crack can propagate. In many cases, it is infeasible to produce a deterministic solution even when the individual variables are themselves deterministic. In this case, one defines a probabilistic failure barrier surface, formula_1, over the vector space of the stress variables. If failure barrier crossings are assumed to follow a Poisson counting process, an expression for the maximum probable failure can be developed for each stress variable. The overall probability of failure is obtained by averaging (that is, integrating) over the entire variable vector space. FPI is a method of approximating this integral. The input to FPI is a time-variant expression, but the output is time-invariant, allowing it to be solved by the first-order reliability method (FORM) or the second-order reliability method (SORM). An FPI package is included as part of the core modules of the NASA-designed NESSUS software. It was initially used to analyse risks and uncertainties concerning the Space Shuttle main engine, but is now used much more widely in a variety of industries.
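As a rough illustration of why reliability approximations such as FORM can be far cheaper than Monte Carlo sampling, the following Python sketch compares the two on a deliberately simple, time-invariant limit state g = R - S with independent normal capacity R and load S. The example is generic: it is not taken from FPI, NESSUS or the Wen and Chen formulation, and all numbers are arbitrary.

import numpy as np
from scipy.stats import norm

# Arbitrary example: capacity R ~ N(10, 1.5^2), load S ~ N(6, 1.0^2); failure when g = R - S < 0.
mu_r, sd_r = 10.0, 1.5
mu_s, sd_s = 6.0, 1.0

# FORM-style closed form for a linear limit state with normal variables:
# reliability index beta = mean(g) / std(g), failure probability = Phi(-beta).
beta = (mu_r - mu_s) / np.sqrt(sd_r**2 + sd_s**2)
p_form = norm.cdf(-beta)

# Crude Monte Carlo estimate of the same probability, for comparison.
rng = np.random.default_rng(1)
n = 1_000_000
g = rng.normal(mu_r, sd_r, n) - rng.normal(mu_s, sd_s, n)
p_mc = np.mean(g < 0)

print(beta, p_form, p_mc)  # the two probability estimates should be close

For a linear limit state with normal inputs the FORM result is exact, while the Monte Carlo estimate needs a large number of samples to resolve a small failure probability; the role of FPI is to reduce a time-variant, many-variable problem to time-invariant expressions of this kind that FORM or SORM can then handle.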
[ { "math_id": 0, "text": "r(t)" }, { "math_id": 1, "text": " \\mathbf R (t)" } ]
https://en.wikipedia.org/wiki?curid=72246040