68092250
Variational quantum eigensolver
Quantum algorithm In quantum computing, the variational quantum eigensolver (VQE) is a quantum algorithm for quantum chemistry, quantum simulations and optimization problems. It is a hybrid algorithm that uses both classical computers and quantum computers to find the ground state of a given physical system. Given a guess or ansatz, the quantum processor calculates the expectation value of the system with respect to an observable, often the Hamiltonian, and a classical optimizer is used to improve the guess. The algorithm is based on the variational method of quantum mechanics. It was originally proposed in 2014, with corresponding authors Alberto Peruzzo, Alán Aspuru-Guzik and Jeremy O'Brien. The algorithm has also found applications in quantum machine learning and has been further substantiated by general hybrid algorithms between quantum and classical computers. It is an example of a noisy intermediate-scale quantum (NISQ) algorithm. Description. Pauli encoding. The objective of the VQE is to find a set of quantum operations that prepares the lowest energy state (or minimum) of a close approximation to some target quantity or observable. While the only strict requirement for the representation of an observable is that it is efficient to estimate its expectation values, it is often simplest if that operator has a compact or simple expression in terms of Pauli operators or tensor products of Pauli operators. For a fermionic system, it is often most convenient to qubitize: that is, to write the many-body Hamiltonian of the system using second quantization, and then use a mapping to write the creation-annihilation operators in terms of Pauli operators. Common schemes for fermions include the Jordan–Wigner transformation, the Bravyi-Kitaev transformation, and the parity transformation. Once the Hamiltonian formula_0 is written in terms of Pauli operators and irrelevant states are discarded (finite-dimensional space), it consists of a linear combination of Pauli strings formula_1 consisting of tensor products of Pauli operators (for example formula_2), such that formula_3, where formula_4 are numerical coefficients. Based on the coefficients, the number of Pauli strings can be reduced in order to optimize the calculation. The VQE can be adapted to other optimization problems by adapting the Hamiltonian to be a cost function. Ansatz and initial trial function. The choice of ansatz state depends on the system of interest. In gate-based quantum computing, the ansatz is given by a parametrized quantum circuit, whose parameters can be updated after each run. The ansatz has to be adaptable enough not to miss the desired state. A common method to obtain a valid ansatz is given by the unitary coupled cluster (UCC) framework and its extensions. If the ansatz is not chosen adequately, the procedure may halt at suboptimal parameters that do not correspond to a minimum. In this situation, the algorithm is said to have reached a 'barren plateau'. The ansatz can be set to an initial trial function to start the algorithm. For example, for a molecular system, one can use the Hartree–Fock method to provide a starting state that is close to the real ground state. Another variant of the ansatz circuit is the hardware efficient ansatz, which consists of a sequence of 1-qubit rotational gates and 2-qubit entangling gates. The number of repetitions of 1-qubit rotational gates and 2-qubit entangling gates is called the depth of the circuit. Measurement. 
A given state formula_5 with parameters formula_6 has an expectation value of the energy, or cost function, given by formula_7 so in order to obtain the expectation value of the energy, one can measure the expectation value of each Pauli string (number of counts for a given value over the total number of counts). This step corresponds to measuring each qubit in the axis provided by the Pauli string. For example, for the string formula_8, the first qubit is to be measured in the "x"-axis, while the last two are to be measured in the "y"-axis of the Bloch sphere. If measurement is only possible in the "z"-axis, then Clifford gates can be used to transform between axes. If two Pauli strings commute, then they can both be measured simultaneously using the same circuit, with the result interpreted according to the Pauli algebra. Variational method and optimization. Given a parametrized ansatz for the ground state eigenstate, with parameters that can be modified, one is sure to find the parametrized state that is closest to the ground state, based on the variational method of quantum mechanics. Using classical algorithms in a digital computer, the parameters of the ansatz can be optimized. For this minimization, it is necessary to find the minimum of a multivariable function. Classical optimizers using gradient descent can be used for this purpose. Formulation. For a given Hamiltonian (H) and a state vector formula_9, if we can vary formula_9 arbitrarily, then formula_10 will be the ground state energy and formula_11 would be a ground state (assuming no degeneracy). But the above minimization problem over all possible states formula_9, where state formula_9 is formula_12-dimensional, is impractical. Thus, to restrict the search space to a more practical size (e.g. poly(n)), we need to restrict formula_9 to a subset of possible n-qubit states, chosen based on conventional physics, chemistry and quantum mechanics knowledge. Algorithm. The adjoining figure illustrates the high-level steps in the VQE algorithm. The circuit formula_13 controls the subset of possible states that can be created, and the parameter formula_14 contains the variational parameters, formula_15 where the number of parameters is chosen to be large enough to lend the algorithm the expressive power to compute the ground state of the system, but not so large that it increases the computational cost of the optimization step. By running the circuit many times and constantly updating the parameters to find the global minimum of the expectation value of the desired observable, one can approach the ground state of the given system and store it in a quantum processor as a series of quantum gate instructions. In the case of gradient descent, it is required to minimize a cost function formula_16 where for the VQE case formula_17. The update rule is: formula_18 where "r" is the learning rate (step size) and formula_19 In order to compute the gradients, the parameter shift rule is used. Example. Consider the example of a single Pauli gate: formula_20 where "P" = "X", "Y" or "Z". Then formula_21 Since formula_22, it follows that formula_23 formula_24 formula_25 The above result has an interesting property: the gradient formula_27 of the cost function formula_26 is obtained exactly by evaluating the same function formula_28 at parameter values shifted by formula_29, rather than by a finite-difference approximation. Use. In chemistry. As of 2022, the variational quantum eigensolver can only simulate small molecules like the helium hydride ion or the beryllium hydride molecule. Larger molecules can be simulated by taking into account symmetry considerations. 
In 2020, a 12-qubit simulation of a hydrogen chain (H12) was demonstrated using Google's Sycamore quantum processor. Notes. <templatestyles src="Reflist/styles.css" />
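As an informal check of the parameter-shift identity above, the following Python sketch (an illustration added here, not part of the original algorithm description; the choices "P" = "X", observable "A" = "Z" and initial state |0⟩ are assumptions made for the example) compares the parameter-shift estimate with the analytic derivative of the resulting cost function f(θ) = cos θ.

```python
import numpy as np

# Pauli matrices and the |0> state (toy single-qubit example, chosen here for illustration)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
phi = np.array([1, 0], dtype=complex)

def U(theta, P=X):
    """Single-Pauli rotation U(theta) = exp(-i*theta/2 * P), using P**2 = I."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * P

def f(theta, A=Z):
    """Cost function f(theta) = <phi| U^dagger(theta) A U(theta) |phi>."""
    psi = U(theta) @ phi
    return np.real(np.conj(psi) @ (A @ psi))

theta = 0.7
shift = np.pi / 2
grad_parameter_shift = 0.5 * (f(theta + shift) - f(theta - shift))
grad_analytic = -np.sin(theta)   # since f(theta) = cos(theta) for this choice

print(grad_parameter_shift, grad_analytic)   # both approximately -0.6442
```

The two numbers agree because, for a single-Pauli generator, the shift rule gives the exact gradient rather than a finite-difference approximation.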
[ { "math_id": 0, "text": "\\hat{H}" }, { "math_id": 1, "text": "\\hat{P}_i" }, { "math_id": 2, "text": "X\\otimes I\\otimes Z\\otimes X" }, { "math_id": 3, "text": "\\hat{H}=\\sum_{i}\\alpha_i \\hat{P}_i" }, { "math_id": 4, "text": "\\alpha_i" }, { "math_id": 5, "text": "|\\psi(\\theta_1,\\cdots,\\theta_N)\\rangle" }, { "math_id": 6, "text": "\\{\\theta_i\\}_{i=1}^N" }, { "math_id": 7, "text": "E(\\theta_1,\\cdots,\\theta_n)=\\langle\\hat{H}\\rangle=\\sum_{i}\\alpha_i \\langle\\psi(\\theta_1,\\cdots,\\theta_N)|\\hat{P}_i|\\psi(\\theta_1,\\cdots,\\theta_N)\\rangle" }, { "math_id": 8, "text": "X\\otimes Y\\otimes Y" }, { "math_id": 9, "text": "|\\psi\\rangle\n" }, { "math_id": 10, "text": "\\min_{|\\psi\\rangle} \\langle \\psi | H | \\psi \\rangle\n" }, { "math_id": 11, "text": "\\operatorname{argmin}_{|\\psi\\rangle} \\langle \\psi | H | \\psi \\rangle\n" }, { "math_id": 12, "text": "2^n" }, { "math_id": 13, "text": "U(\\vec{\\theta})\n\n" }, { "math_id": 14, "text": "\\vec{\\theta}\n\n" }, { "math_id": 15, "text": "\\vec{\\theta} = \\begin{pmatrix} \\theta_1 \\\\ \\theta_2 \\\\ \\vdots \\\\ \\theta_p \\end{pmatrix}\n\n" }, { "math_id": 16, "text": "f(\\vec{\\theta})\n" }, { "math_id": 17, "text": "f(\\vec{\\theta}) = \\langle \\psi(\\vec{\\theta}) | H | \\psi(\\vec{\\theta}) \\rangle\n" }, { "math_id": 18, "text": "\\vec{\\theta}^{(\\text{new})} = \\vec{\\theta}^{(\\text{old})} - r \\nabla f(\\vec{\\theta}^{(\\text{old})})\n" }, { "math_id": 19, "text": "\\Delta f(\\vec{\\theta}^{(\\text{old})}) = \\left( \\frac{\\partial f(\\vec{\\theta}^{(\\text{old})})}{\\partial \\theta_1}, \\frac{\\partial f(\\vec{\\theta}^{(\\text{old})})}{\\partial \\theta_2}, \\ldots \\right)^\\top\n\n" }, { "math_id": 20, "text": "U(\\theta) = e^{-i\\frac{\\theta}{2}P},\n" }, { "math_id": 21, "text": "\\nabla_{\\theta} U = \\frac{\\partial U}{\\partial \\theta} = -\\frac{i}{2}P e^{-i\\frac{\\theta}{2}P} = -\\frac{i}{2}PU = -\\frac{i}{2}UP\n\n" }, { "math_id": 22, "text": "f(\\theta) = \\langle \\phi | U^{\\dagger} A U | \\phi \\rangle\n\n" }, { "math_id": 23, "text": "\\nabla_{\\theta} f(\\theta) = \\frac{\\partial}{\\partial \\theta} \\langle \\phi | U^{\\dagger} A U | \\phi \\rangle = \\langle \\phi | \\left( \\frac{i}{2}P \\right) U^{\\dagger} A U | \\phi \\rangle + \\langle \\phi | U^{\\dagger} A \\left( -\\frac{i}{2}P \\right) U | \\phi \\rangle\n\n" }, { "math_id": 24, "text": "= \\frac{1}{2} \\langle \\phi | U^{\\dagger}(\\theta + \\frac{\\pi}{2}) A U(\\theta + \\frac{\\pi}{2}) | \\phi \\rangle - \\frac{1}{2} \\langle \\phi | U^{\\dagger}(\\theta - \\frac{\\pi}{2}) A U(\\theta - \\frac{\\pi}{2}) | \\phi \\rangle\n\n" }, { "math_id": 25, "text": "= \\frac{1}{2} \\left( f(\\theta + \\frac{\\pi}{2}) - f(\\theta - \\frac{\\pi}{2}) \\right)\n\n" }, { "math_id": 26, "text": "f(\\theta)\n\n" }, { "math_id": 27, "text": "\\nabla_{\\theta} f(\\theta)\n\n" }, { "math_id": 28, "text": "f(\\cdot)\n\n\n" }, { "math_id": 29, "text": "\\pm \\frac{\\pi}{2}\n\n\n" } ]
https://en.wikipedia.org/wiki?curid=68092250
68093473
1 Kings 19
1 Kings, chapter 19 1 Kings 19 is the nineteenth chapter of the Books of Kings in the Hebrew Bible or the First Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. This chapter belongs to the section comprising 1 Kings 16:15 to 2 Kings 8:29 which documents the period of the Omrides. The focus of this chapter is the activity of prophet Elijah during the reign of king Ahab in the northern kingdom. Text. This chapter was originally written in the Hebrew language and since the 16th century is divided into 21 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Elijah's flight to Horeb (19:1–8). The strife for the exclusive worship of Yahweh and against Baalism in Israel took longer and was less straightforward than expected from 1 Kings 18—a fact reflected in Elijah's sudden flight to Horeb, the name used in the Book of Deuteronomy and Chronicles for Mount Sinai, where the Israelites received the Ten Commandments. The dispirited Elijah miraculously received food and water as well as encouragement twice before reaching the mountain of God (cf. 1 Kings 18:46). "And when he saw that, he arose and ran for his life, and went to Beersheba, which belongs to Judah, and left his servant there." Elijah's meeting with God on Horeb (19:9–18). Patterning himself after Moses, who met God on Mount Horeb (Exodus 24; 33), Elijah hoped to have a similar meeting. However, instead of encountering God in impressive natural phenomena (which would have been connected with the weather god Baal) or in violent power (such as in 1 Kings 18:40), Elijah met a completely different God whose approach was 'extremely powerful and quietly beautiful', a clear contrast to that of 1 Kings 18 and especially 2 Kings 10. The prophet was twice asked the reason for his presence, and twice he replied with the same frustration, as if God had not appeared to him in the meantime. God spoke of the 7,000 Israelites who did not kneel before Baal to redress the balance of Elijah's complaint about his apparent solitude. During that meeting Elijah was charged to enlist three warriors for Yahweh's cause, two of whom would 'draw a line of blood through history': Hazael of Aram and Jehu of Israel. The third one is the prophet Elisha, who would actually anoint the other two to carry out Elijah's mission after Elijah was taken up to heaven (cf. 2 Kings 8:7–15; 9:1–10). "11 And he said, Go forth, and stand upon the mount before the Lord. And, behold, the Lord passed by," "and a great and strong wind rent the mountains, and brake in pieces the rocks before the Lord; but the Lord was not in the wind:" "and after the wind an earthquake; but the Lord was not in the earthquake:" "12 And after the earthquake a fire; but the Lord was not in the fire:" "and after the fire a still small voice." Elijah charges Elisha (19:19–21). 
In his lifetime, Elijah only fulfilled one of the three appointments required in 19:15-16, namely that of Elisha, who would fulfill the other two appointments when he later took over Elijah's staff (or his mantle, which was apparently his hallmark; cf. 2 Kings 2:8,14; in 1 Kings 1:8 a different Hebrew word is used). Becoming Elijah's disciple ("servant") required Elisha, who appeared to be a rich farmer, to relinquish his property and family and only to 'follow' Elijah (cf. Matthew 4:19; 8:18–22). After Elijah was taken up to heaven, Elisha would anoint Hazael (2 Kings 8:7–15) and Jehu (2 Kings 9:1–13). "So he departed thence, and found Elisha the son of Shaphat, who was plowing with twelve yoke of oxen before him, and he with the twelfth: and Elijah passed by him, and cast his mantle upon him." "And he returned back from him, and took a yoke of oxen, and slew them, and boiled their flesh with the instruments of the oxen, and gave unto the people, and they did eat. Then he arose, and went after Elijah, and ministered unto him." Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=68093473
68101022
1 Kings 20
1 Kings, chapter 20 1 Kings 20 is the 20th chapter of the Books of Kings in the Hebrew Bible or the First Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. This chapter belongs to the section comprising 1 Kings 16:15 to 2 Kings 8:29 which documents the period of the Omrides. The focus of this chapter is the reign of king Ahab in the northern kingdom. Text. This chapter was originally written in the Hebrew language and since the 16th century is divided into 43 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). The extant palimpsest AqBurkitt contains verses 7–17 in Koine Greek, translated by Aquila of Sinope approximately in the early or mid-second century CE. Ahab's victory over the Arameans (20:1–34). 1 Kings 20 and 22 record a series of wars between an Aramean king, Ben-Hadad, and King Ahab of Israel. With the help of prophetic oracles, the Israelite king managed to repeatedly defeat an aggressive, arrogant and stronger enemy. The Arameans initially regarded YHWH as 'a mountain god who had no power on the plains' (verse 23), based on the religious and social history that Yahweh's home was originally the mountains of southern Sinai and Edom (Exodus 3; Judges 5:4) and that Israel developed into an ethnic and political power on the mountains of Israel/Palestine (Judges 1:27-35; 1 Samuel 13–14; 2 Samuel 2:9). However, in the end it was shown that the entire country belonged to Yahweh (and his people); Ahab even managed to force Ben-Hadad to accept the establishment of an Israelite trading office in Damascus (verse 34). This period may fit the record from Assyrian sources that Ahab and the Aramean king, "Adad-idri" (Aramaic: "Hadadezer"), were closely allied with each other to fight the Assyrian army ("ANET" 276–277). "So Ben-Hadad said to him, "The cities which my father took from your father I will restore; and you may set up marketplaces for yourself in Damascus, as my father did in Samaria."" "Then Ahab said, "I will send you away with this treaty." So he made a treaty with him and sent him away." Chiastic structure. Biblical scholar Burke O. Long (1985) pointed out that these verses have a chiastic structure. A prophetic warning to Ahab (20:35–43). The positive outcome of the war against Aram was tarnished by Ahab's decision to make business contracts with Ben-Hadad instead of killing him ("devoting him to destruction", which was an 'underlying principle of Deuteronomistic theory and historical writing'; cf. Deuteronomy 13:12–18; 20:16–18; Joshua 6–7; 11:10–15, etc.). The prophetic rebuke was given through a prophet's ingenious scheming, which forced the king to call out his own error and 'bring judgement upon himself' (cf. as Nathan did to David in 2 Samuel 12). 
"And he said to him, "Thus says the Lord, ‘Because you have let go out of your hand the man whom I had devoted to destruction,[a] therefore your life shall be for his life, and your people for his people."" Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=68101022
68102162
Malgrange–Zerner theorem
Theorem about holomorphic functions of several complex variables In mathematics, the Malgrange–Zerner theorem (named for Bernard Malgrange and Martin Zerner) shows that a function on formula_0 allowing holomorphic extension in each variable separately can be extended, under certain conditions, to a function holomorphic in all variables jointly. This theorem can be seen as a generalization of Bochner's tube theorem to functions defined on tube-like domains whose base is not an open set. Theorem. Let formula_1 and let formula_2 convex hull of formula_3. Let formula_4 be a locally bounded function such that formula_5 and that for any fixed point formula_6 the function formula_7 is holomorphic in formula_8 in the interior of formula_9 for each formula_10. Then the function formula_11 can be uniquely extended to a function holomorphic in the interior of formula_12. History. According to Henry Epstein, this theorem was proved first by Malgrange in 1961 (unpublished), then by Zerner (as cited in ), and communicated to him privately. Epstein's lectures contain the first published proof (attributed there to Bros, Epstein and Glaser). The assumption formula_5 was later relaxed to formula_13 (see Ref.[1] in ) and finally to formula_14. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathbb{R}^n" }, { "math_id": 1, "text": "X=\\bigcup_{k=1}^n \\mathbb{R}^{k-1}\\times P \\times \\mathbb{R}^{n-k},\n\\text{ where }P=\\mathbb{R}+i [0,1), " }, { "math_id": 2, "text": "W=" }, { "math_id": 3, "text": "X" }, { "math_id": 4, "text": " f: X\\to \\mathbb{C}" }, { "math_id": 5, "text": "f \\in C^\\infty(X) " }, { "math_id": 6, "text": " (x_1,\\ldots, x_{k-1},x_{k+1},\\ldots,x_n)\\in \\mathbb{R}^{n-1}" }, { "math_id": 7, "text": " f(x_1,\\ldots, x_{k-1},z,x_{k+1},\\ldots,x_n)" }, { "math_id": 8, "text": " z" }, { "math_id": 9, "text": " P" }, { "math_id": 10, "text": "k=1,\\ldots,n" }, { "math_id": 11, "text": "f" }, { "math_id": 12, "text": "W" }, { "math_id": 13, "text": "f|_{\\mathbb{R}^n}\\in C^3 " }, { "math_id": 14, "text": "f|_{\\mathbb{R}^n}\\in C " } ]
https://en.wikipedia.org/wiki?curid=68102162
6810363
Classical Wiener space
In mathematics, classical Wiener space is the collection of all continuous functions on a given domain (usually a subinterval of the real line), taking values in a metric space (usually "n"-dimensional Euclidean space). Classical Wiener space is useful in the study of stochastic processes whose sample paths are continuous functions. It is named after the American mathematician Norbert Wiener. Definition. Consider "E" ⊆ R"n" and a metric space ("M", "d"). The classical Wiener space "C"("E"; "M") is the space of all continuous functions "f" : "E" → "M". That is, for every fixed "t" in "E", formula_0 as formula_1 In almost all applications, one takes "E" = [0, "T" ] or [0, +∞) and "M" = R"n" for some "n" in N. For brevity, write "C" for "C"([0, "T" ]; R"n"); this is a vector space. Write "C"0 for the linear subspace consisting only of those functions that take the value zero at the infimum of the set "E". Many authors refer to "C"0 as "classical Wiener space". For a stochastic process formula_2 and the space formula_3 of all functions from formula_4 to formula_5, one looks at the map formula_6. One can then define the "coordinate maps" or "canonical versions" formula_7 defined by formula_8. The formula_9 form another process. The "Wiener measure" is then the unique measure on formula_10 such that the coordinate process is a Brownian motion. Properties of classical Wiener space. Uniform topology. The vector space "C" can be equipped with the uniform norm formula_11 turning it into a normed vector space (in fact a Banach space). This norm induces a metric on "C" in the usual way: formula_12. The topology generated by the open sets in this metric is the topology of uniform convergence on [0, "T" ], or the uniform topology. Thinking of the domain [0, "T" ] as "time" and the range R"n" as "space", an intuitive view of the uniform topology is that two functions are "close" if we can "wiggle space slightly" and get the graph of "f" to lie on top of the graph of "g", while leaving time fixed. Contrast this with the Skorokhod topology, which allows us to "wiggle" both space and time. Separability and completeness. With respect to the uniform metric, "C" is both a separable and a complete space. Since it is both separable and complete, "C" is a Polish space. Tightness in classical Wiener space. Recall that the modulus of continuity for a function "f" : [0, "T" ] → R"n" is defined by formula_13 This definition makes sense even if "f" is not continuous, and it can be shown that "f" is continuous if and only if its modulus of continuity tends to zero as δ → 0: formula_14. By an application of the Arzelà-Ascoli theorem, one can show that a sequence formula_15 of probability measures on classical Wiener space "C" is tight if and only if both the following conditions are met: formula_16 and formula_17 for all ε > 0. Classical Wiener measure. There is a "standard" measure on "C"0, known as classical Wiener measure (or simply Wiener measure). Wiener measure has (at least) two equivalent characterizations: If one defines Brownian motion to be a Markov stochastic process "B" : [0, "T" ] × Ω → R"n", starting at the origin, with almost surely continuous paths and independent increments formula_18 then classical Wiener measure γ is the law of the process "B". Alternatively, one may use the abstract Wiener space construction, in which classical Wiener measure γ is the radonification of the canonical Gaussian cylinder set measure on the Cameron-Martin Hilbert space corresponding to "C"0. 
Classical Wiener measure is a Gaussian measure: in particular, it is a strictly positive probability measure. Given classical Wiener measure γ on "C"0, the product measure γ "n" × γ is a probability measure on "C", where γ "n" denotes the standard Gaussian measure on R"n".
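As a rough numerical illustration of the definitions above, the following Python sketch (an informal example; the random-walk approximation of Brownian motion, the horizon T = 1 and the step count are choices made here, not part of the article) samples an approximate element of "C"0 and evaluates the uniform norm and the modulus of continuity used in the tightness criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_path(T=1.0, n_steps=1000):
    """Approximate a Brownian path on [0, T] by a scaled Gaussian random walk.

    The path starts at 0, so it is a discretised element of C_0.
    """
    dt = T / n_steps
    increments = rng.normal(0.0, np.sqrt(dt), size=n_steps)
    return np.concatenate(([0.0], np.cumsum(increments)))

def uniform_norm(path):
    """sup_t |f(t)|, evaluated on the sampled grid."""
    return np.max(np.abs(path))

def modulus_of_continuity(path, delta, dt):
    """omega_f(delta) = sup { |f(s) - f(t)| : |s - t| <= delta }, on the grid."""
    k = max(1, int(round(delta / dt)))
    return max(np.max(np.abs(path[j:] - path[:-j])) for j in range(1, k + 1))

path = sample_path()
dt = 1.0 / 1000
print("uniform norm:", uniform_norm(path))
for delta in (0.1, 0.01, 0.001):
    print(delta, modulus_of_continuity(path, delta, dt))  # shrinks as delta -> 0
```

For a continuous (sampled) path the printed modulus of continuity decreases as δ decreases, in line with the characterization of continuity given above.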
[ { "math_id": 0, "text": "d(f(s), f(t)) \\to 0" }, { "math_id": 1, "text": "| s - t | \\to 0." }, { "math_id": 2, "text": "\\{X_t,t\\in T\\}:(\\Omega,\\mathcal{F},P)\\to (E,\\mathcal{B})" }, { "math_id": 3, "text": "\\mathcal{F}(T,E)" }, { "math_id": 4, "text": "T" }, { "math_id": 5, "text": "E" }, { "math_id": 6, "text": "\\varphi:\\Omega\\to\\mathcal{F}(T,E)\\cong E^T" }, { "math_id": 7, "text": "Y_t:E^T\\to E" }, { "math_id": 8, "text": "Y_t(\\omega)=\\omega(t)" }, { "math_id": 9, "text": "\\{Y_t,t\\in T\\}" }, { "math_id": 10, "text": "C_0(\\R_{+},\\R)" }, { "math_id": 11, "text": "\\| f \\| := \\sup_{t \\in [0,\\,T]} |f(t)|" }, { "math_id": 12, "text": "d (f, g) := \\| f-g \\|" }, { "math_id": 13, "text": "\\omega_{f} (\\delta) := \\sup \\left\\{ |f(s) - f(t)| : s, t \\in [0, T],\\, |s - t| \\leq \\delta \\right\\}." }, { "math_id": 14, "text": "f \\in C \\iff \\omega_{f} (\\delta) \\to 0 \\text{ as } \\delta \\to 0" }, { "math_id": 15, "text": "(\\mu_{n})_{n = 1}^{\\infty}" }, { "math_id": 16, "text": "\\lim_{a \\to \\infty} \\limsup_{n \\to \\infty} \\mu_{n} \\{ f \\in C \\mid | f(0) | \\geq a \\} = 0," }, { "math_id": 17, "text": "\\lim_{\\delta \\to 0} \\limsup_{n \\to \\infty} \\mu_{n} \\{ f \\in C \\mid \\omega_{f} (\\delta) \\geq \\varepsilon \\} = 0" }, { "math_id": 18, "text": "B_{t} - B_{s} \\sim\\, \\mathrm{Normal} \\left( 0, |t - s| \\right)," } ]
https://en.wikipedia.org/wiki?curid=6810363
681049
H-cobordism
Concept in topology In geometric topology and differential topology, an ("n" + 1)-dimensional cobordism "W" between "n"-dimensional manifolds "M" and "N" is an "h"-cobordism (the "h" stands for homotopy equivalence) if the inclusion maps formula_0 are homotopy equivalences. The "h"-cobordism theorem gives sufficient conditions for an "h"-cobordism to be trivial, i.e., to be C-isomorphic to the cylinder "M" × [0, 1]. Here C refers to any of the categories of smooth, piecewise linear, or topological manifolds. The theorem was first proved by Stephen Smale, for which he received the Fields Medal, and is a fundamental result in the theory of high-dimensional manifolds. For a start, it almost immediately proves the generalized Poincaré conjecture. Background. Before Smale proved this theorem, mathematicians became stuck while trying to understand manifolds of dimension 3 or 4, and assumed that the higher-dimensional cases were even harder. The "h"-cobordism theorem showed that (simply connected) manifolds of dimension at least 5 are much easier than those of dimension 3 or 4. The proof of the theorem depends on the "Whitney trick" of Hassler Whitney, which geometrically untangles homologically untangled spheres of complementary dimension in a manifold of dimension >4. An informal reason why manifolds of dimension 3 or 4 are unusually hard is that the trick fails to work in lower dimensions, which have no room for entanglement. Precise statement of the "h"-cobordism theorem. Let "n" be at least 5 and let "W" be a compact ("n" + 1)-dimensional "h"-cobordism between "M" and "N" in the category C=Diff, PL, or Top such that "W", "M" and "N" are simply connected. Then "W" is C-isomorphic to "M" × [0, 1]. The isomorphism can be chosen to be the identity on "M" × {0}. This means that the homotopy equivalence between "M" and "N" (or, between "M" × [0, 1], "W" and "N" × [0, 1]) is homotopic to a C-isomorphism. Lower dimensional versions. For "n" = 4, the "h"-cobordism theorem is false. This can be seen since Wall proved that closed oriented simply-connected topological four-manifolds with equivalent intersection forms are "h"-cobordant. However, if the intersection form is odd there are non-homeomorphic 4-manifolds with the same intersection form (distinguished by the Kirby-Siebenmann class). For example, CP2 and a fake projective plane with the same homotopy type are not homeomorphic but both have intersection form of (1). For "n" = 3, the "h"-cobordism theorem for smooth manifolds has not been proved and, due to the 3-dimensional Poincaré conjecture, is equivalent to the hard open question of whether the 4-sphere has non-standard smooth structures. For "n" = 2, the "h"-cobordism theorem is equivalent to the Poincaré conjecture stated by Poincaré in 1904 (one of the Millennium Problems) and was proved by Grigori Perelman in a series of three papers in 2002 and 2003, where he follows Richard S. Hamilton's program using Ricci flow. For "n" = 1, the "h"-cobordism theorem is vacuously true, since there is no closed simply-connected 1-dimensional manifold. For "n" = 0, the "h"-cobordism theorem is trivially true: the interval is the only connected cobordism between connected 0-manifolds. A proof sketch. A Morse function formula_1 induces a handle decomposition of "W", i.e., if there is a single critical point of index "k" in formula_2, then the ascending cobordism formula_3 is obtained from formula_4 by attaching a "k"-handle. 
The goal of the proof is to find a handle decomposition with no handles at all so that integrating the non-zero gradient vector field of "f" gives the desired diffeomorphism to the trivial cobordism. This is achieved through a series of techniques. 1) Handle rearrangement First, we want to rearrange all handles by order so that lower order handles are attached first. The question is thus when can we slide an "i"-handle off of a "j"-handle? This can be done by a radial isotopy so long as the "i" attaching sphere and the "j" belt sphere do not intersect. We thus want formula_5 which is equivalent to formula_6. We then define the handle chain complex formula_7 by letting formula_8 be the free abelian group on the "k"-handles and defining formula_9 by sending a "k"-handle formula_10 to formula_11, where formula_12 is the intersection number of the "k"-attaching sphere and the ("k" − 1)-belt sphere. 2) Handle cancellation Next, we want to "cancel" handles. The idea is that attaching a "k"-handle formula_13 might create a hole that can be filled in by attaching a ("k" + 1)-handle formula_14. This would imply that formula_15 and so the formula_16 entry in the matrix of formula_17 would be formula_18. However, when is this condition sufficient? That is, when can we geometrically cancel handles if this condition is true? The answer lies in carefully analyzing when the manifold remains simply-connected after removing the attaching and belt spheres in question, and finding an embedded disk using the Whitney trick. This analysis leads to the requirement that "n" must be at least 5. Moreover, during the proof one requires that the cobordism has no 0-,1-,"n"-, or ("n" + 1)-handles which is obtained by the next technique. 3) Handle trading The idea of handle trading is to create a cancelling pair of ("k" + 1)- and ("k" + 2)-handles so that a given "k"-handle cancels with the ("k" + 1)-handle leaving behind the ("k" + 2)-handle. To do this, consider the core of the "k"-handle which is an element in formula_19. This group is trivial since "W" is an "h"-cobordism. Thus, there is a disk formula_20 which we can fatten to a cancelling pair as desired, so long as we can embed this disk into the boundary of "W". This embedding exists if formula_21. Since we are assuming "n" is at least 5 this means that "k" is either 0 or 1. Finally, by considering the negative of the given Morse function, −"f", we can turn the handle decomposition upside down and also remove the "n"- and ("n" + 1)-handles as desired. 4) Handle sliding Finally, we want to make sure that doing row and column operations on formula_22 corresponds to a geometric operation. Indeed, it isn't hard to show (best done by drawing a picture) that sliding a "k"-handle formula_13 over another "k"-handle formula_23 replaces formula_13 by formula_24 in the basis for formula_8. The proof of the theorem now follows: the handle chain complex is exact since formula_25. Thus formula_26 since the formula_8 are free. Then formula_22, which is an integer matrix, restricts to an invertible morphism which can thus be diagonalized via elementary row operations (handle sliding) and must have only formula_18 on the diagonal because it is invertible. Thus, all handles are paired with a single other cancelling handle yielding a decomposition with no handles. The "s"-cobordism theorem. 
If the assumption that "M" and "N" are simply connected is dropped, "h"-cobordisms need not be cylinders; the obstruction is exactly the Whitehead torsion τ ("W", "M") of the inclusion formula_27. Precisely, the "s"-cobordism theorem (the "s" stands for simple-homotopy equivalence), proved independently by Barry Mazur, John Stallings, and Dennis Barden, states (assumptions as above but where "M" and "N" need not be simply connected): An "h"-cobordism is a cylinder if and only if Whitehead torsion τ ("W", "M") vanishes. The torsion vanishes if and only if the inclusion formula_27 is not just a homotopy equivalence, but a simple homotopy equivalence. Note that one need not assume that the other inclusion formula_28 is also a simple homotopy equivalence—that follows from the theorem. Categorically, "h"-cobordisms form a groupoid. Then a finer statement of the "s"-cobordism theorem is that the isomorphism classes of this groupoid (up to C-isomorphism of "h"-cobordisms) are torsors for the respective Whitehead groups Wh(π), where formula_29
[ { "math_id": 0, "text": " M \\hookrightarrow W \\quad\\mbox{and}\\quad N \\hookrightarrow W" }, { "math_id": 1, "text": "f:W\\to[a,b]" }, { "math_id": 2, "text": "f^{-1}([c,c'])" }, { "math_id": 3, "text": "W_{c'}" }, { "math_id": 4, "text": "W_c" }, { "math_id": 5, "text": "(i-1)+(n-j)\\leq\\dim\\partial W-1=n-1" }, { "math_id": 6, "text": "i\\leq j" }, { "math_id": 7, "text": "(C_*,\\partial_*)" }, { "math_id": 8, "text": "C_k" }, { "math_id": 9, "text": "\\partial_k:C_k\\to C_{k-1}" }, { "math_id": 10, "text": "h_{\\alpha}^k" }, { "math_id": 11, "text": "\\sum_\\beta \\langle h_\\alpha^k\\mid h_\\beta^{k-1}\\rangle h_\\beta^{k-1}" }, { "math_id": 12, "text": "\\langle h_\\alpha^k\\mid h_\\beta^{k-1}\\rangle" }, { "math_id": 13, "text": "h_\\alpha^k" }, { "math_id": 14, "text": "h_\\beta^{k+1}" }, { "math_id": 15, "text": "\\partial_{k+1}h_\\beta^{k+1}=\\pm h_\\alpha^k" }, { "math_id": 16, "text": "(\\alpha,\\beta)" }, { "math_id": 17, "text": "\\partial_{k+1}" }, { "math_id": 18, "text": "\\pm 1" }, { "math_id": 19, "text": "\\pi_k(W,M)" }, { "math_id": 20, "text": "D^{k+1}" }, { "math_id": 21, "text": "\\dim\\partial W-1=n-1\\geq 2(k+1)" }, { "math_id": 22, "text": "\\partial_k" }, { "math_id": 23, "text": "h_{\\beta}^k" }, { "math_id": 24, "text": "h_\\alpha^k\\pm h_\\beta^k" }, { "math_id": 25, "text": "H_*(W,M;\\mathbb{Z})=0" }, { "math_id": 26, "text": "C_k\\cong \\operatorname{coker} \\partial_{k+1}\\oplus\\operatorname{im} \\partial_{k+1}" }, { "math_id": 27, "text": "M \\hookrightarrow W" }, { "math_id": 28, "text": "N \\hookrightarrow W" }, { "math_id": 29, "text": "\\pi \\cong \\pi_1(M) \\cong \\pi_1(W) \\cong \\pi_1(N)." } ]
https://en.wikipedia.org/wiki?curid=681049
68109271
1 Kings 21
1 Kings, chapter 21 1 Kings 21 is the 21st chapter of the Books of Kings in the Hebrew Bible or the First Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. This chapter belongs to the section comprising 1 Kings 16:15 to 2 Kings 8:29 which documents the period of the Omrides. The focus of this chapter is the reign of king Ahab in the northern kingdom. Text. This chapter was originally written in the Hebrew language and since the 16th century is divided into 29 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Ahab's and Jezebel's judicial murder of Naboth (21:1–16). In ancient Israel the farming families (comprising over 90% of the population) were legally protected in their landownership, so that they had a secure economic existence and thus firm citizens' rights through the allocation of sufficient land. The farmer Naboth thus had the right as well as the duty to bequeath his land to his family and not to outsiders, to inhibit the alienation of family property (e.g., Leviticus 25:8–34; Numbers 27:9–11). King Ahab was to abide by this rule, but this passage shows 'how unscrupulously the king's power over the civilian rights could still be used and how compliant the lay assessors' court was to his wishes', especially when it was driven by a queen who originated from abroad (Jezebel was from Sidon, Phoenicia) and did not respect (or understand) Israelite ethics. In any case, the scandal of Naboth was still an individual case, whereas the theft of land by the ruling class 100 years later would become an economic principle (Amos 2:6; Micah 2:1-2). "And it came to pass after these things that Naboth the Jezreelite had a vineyard which was in Jezreel, next to the palace of Ahab king of Samaria." Elijah's judgement against Ahab and his court (21:17–29). The evil act towards Naboth required someone to confront the king, and under such circumstances this was normally a prophet, such as Elijah, who suddenly stood before king Ahab in the vineyards of Naboth. After briefly listening to Ahab's surprised question (verse 20: 'Have you found me, O my enemy?'), Elijah firmly threw his accusation (verse 19: 'Have you killed, and also taken possession?'), and immediately announced the judgement (verse 19: 'In the place where dogs licked the blood of Naboth, dogs will also lick up your blood'). Elijah must have scolded Ahab in a lengthy speech (verses 20b–22, 24, closely related to the speeches in 1 Kings 14:7–11 and 2 Kings 9:7–10) repeating the king's religious failings (verses 25–26). Ahab's response as a repentant sinner postponed his judgement to the next generation (verses 27–29). The reference in 2 Kings 9:36–37 ascertains the fulfilment of the prophecy. A parallel rendering of this story can be found in 2 Kings 9:25–26. Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=68109271
6810986
ISO metric screw thread
Hardware threading standard The ISO metric screw thread is the most commonly used type of general-purpose screw thread worldwide. It was one of the first international standards agreed when the International Organization for Standardization (ISO) was set up in 1947. The "M" designation for metric screws indicates the nominal outer diameter of the screw thread, in millimetres. This is also referred to as the "major" diameter in the information below. It indicates the diameter of a smooth-walled hole that a male thread (e.g. on a bolt) will pass through easily to connect to an internally threaded component (e.g. a nut) on the other side. That is, an M6 screw has a nominal outer diameter of 6 millimetres and will therefore be a well-located, co-axial fit in a hole drilled to 6 mm diameter. Basic profile. The design principles of ISO general-purpose metric screw threads ("M" series threads) are defined in international standard ISO 68-1. Each thread is characterized by its major diameter, "D" ("D"maj in the diagram), and its pitch, "P". ISO metric threads consist of a symmetric V-shaped thread. In the plane of the thread axis, the flanks of the V have an angle of 60° to each other. The thread depth is 0.54125 × pitch. The outermost 1⁄8 and the innermost 1⁄4 of the height "H" of the V-shape are cut off from the profile. The relationship between the height "H" and the pitch "P" is found using the following equation, where "θ" is half the included angle of the thread, in this case 30°: formula_0 or formula_1 Because only 5⁄8 of this height is cut, the difference between major and minor diameters is 5⁄4 × 0.8660 × "P" = 1.0825 × "P", so the tap drill size can be approximated by subtracting the thread pitch from the major diameter. In an external (male) thread (e.g. on a bolt), the major diameter "D"maj and the minor diameter "D"min define "maximum" dimensions of the thread. This means that the external thread must end flat at "D"maj, but can be rounded out below the minor diameter "D"min. Conversely, in an internal (female) thread (e.g. in a nut), the major and minor diameters are "minimum" dimensions; therefore the thread profile must end flat at "D"min but may be rounded out beyond "D"maj. In practice this means that one can measure the diameter over the threads of a bolt to find the nominal diameter "D"maj, and the inner diameter of a nut is "D"min. The minor diameter "D"min and effective pitch diameter "D"p are derived from the major diameter and pitch as formula_2 Tables of the derived dimensions for screw diameters and pitches defined in ISO 261 are given in ISO 724. Designation. A metric ISO screw thread is designated by the letter M followed by the value of the nominal diameter "D" (the maximum thread diameter) and the pitch "P", both expressed in millimetres and separated by a dash or sometimes the multiplication sign, "×" (e.g. M8-1.25 or M8×1.25). If the pitch is the normally used "coarse" pitch listed in ISO 261 or ISO 262, it can be omitted (e.g. M8). The length of a machine screw or bolt is indicated by an "×" and the length expressed in millimetres (e.g. M8-1.25×30 or M8×30). Tolerance classes defined in ISO 965-1 can be appended to these designations, if required (e.g. M500–6g in external threads). External threads are designated by lowercase letters, g or h. 
Internal threads are designated by uppercase letters, G or H. Preferred sizes. ISO 261 specifies a detailed list of preferred combinations of outer diameter "D" and pitch "P" for ISO metric screw threads. ISO 262 specifies a shorter list of thread dimensions – a subset of ISO 261. <templatestyles src="Reflist/styles.css" /> The thread values are derived from rounded Renard series. They are defined in ISO 3, with "1st choice" sizes being from the R10 series and "2nd choice" and "3rd choice" sizes being the remaining values from the R20 series. The "coarse" pitch is the commonly used default pitch for a given diameter. In addition, one or two smaller "fine" pitches are defined, for use in applications where the height of the normal "coarse" pitch would be unsuitable (e.g. threads in thin-walled pipes). The terms "coarse" and "fine" have (in this context) no relation to the manufacturing quality of the thread. In addition to coarse and fine threads, there is another division of extra fine, or "superfine", threads, with a very fine pitch. Superfine pitch metric threads are occasionally used in automotive components, such as suspension struts, and are commonly used in the aviation manufacturing industry. This is because extra fine threads are more resistant to coming loose from vibrations. Fine and superfine threads also have a greater minor diameter than coarse threads, which means the bolt or stud has a greater cross-sectional area (and therefore greater load-carrying capability) for the same nominal diameter. Spanner (wrench) sizes. Below are some common spanner (wrench) sizes for metric screw threads. Hexagonal (generally abbreviated to "hex") head widths (width across flats, spanner size) are for DIN 934 hex nuts and hex head bolts. Other (usually smaller) sizes may occur to reduce weight or cost, including the small series flange bolts defined in ISO 4162, which typically have hexagonal head sizes corresponding to the next smaller 1st choice thread size (e.g. M6 small series flange bolts have 8 mm hexagonal heads, as would normally be found on M5 bolts). Standards. National. Derived standards. Japan has a JIS metric screw thread standard that largely follows the ISO, but with some differences in pitch and head sizes. See also. <templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" />
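The derived-dimension equations in the Basic profile section translate directly into code. The following Python sketch (a minimal illustration based on those equations; the M8×1.25 example and its rounded outputs are a check done here rather than values quoted from ISO 724) computes "H", "D"min, "D"p and the approximate tap drill size from the major diameter and pitch.

```python
import math

def iso_metric_thread(d_maj, pitch):
    """Derived dimensions of an ISO metric thread from D_maj and P (both in mm)."""
    H = math.sqrt(3) / 2 * pitch                   # fundamental triangle height, ~0.8660 * P
    d_min = d_maj - 5 * math.sqrt(3) / 8 * pitch   # minor diameter, ~D_maj - 1.0825 * P
    d_p   = d_maj - 3 * math.sqrt(3) / 8 * pitch   # pitch (effective) diameter, ~D_maj - 0.6495 * P
    tap_drill = d_maj - pitch                      # common approximation: major diameter minus pitch
    return {"H": H, "D_min": d_min, "D_p": d_p, "tap_drill": tap_drill}

# Example: an M8 coarse thread (M8x1.25)
dims = iso_metric_thread(8.0, 1.25)
print(dims)
# approximately {'H': 1.0825, 'D_min': 6.6469, 'D_p': 7.1881, 'tap_drill': 6.75}
```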
[ { "math_id": 0, "text": "H = \\frac {1}{\\tan\\theta} \\cdot \\frac {P}{2} = \\frac{\\sqrt 3}{2}\\cdot\nP \\approx 0.8660 \\cdot P" }, { "math_id": 1, "text": "P = 2\\tan\\theta\\cdot H = \\frac {2}{\\sqrt 3} \\cdot H \\approx 1.1547 \\cdot H" }, { "math_id": 2, "text": "\\begin{align}\n D_\\text{min} &= D_\\text{maj} - 2\\cdot\\frac58\\cdot H = D_\\text{maj} - \\frac{ 5 {\\sqrt 3}}{8}\\cdot P \\approx D_\\text{maj} - 1.082532 \\cdot P \\\\[3pt]\n D_\\text{p} &= D_\\text{maj} - 2\\cdot\\frac38\\cdot H = D_\\text{maj} - \\frac{ 3 {\\sqrt 3}}{8}\\cdot P \\approx D_\\text{maj} - 0.649519 \\cdot P\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=6810986
681150
Subbayya Sivasankaranarayana Pillai
Indian mathematician (1901–1950) Subbayya Sivasankaranarayana Pillai (5 April 1901 – 31 August 1950) was an Indian mathematician specialising in number theory. His contribution to Waring's problem was described in 1950 by K. S. Chandrasekharan as "almost certainly his best piece of work and one of the very best achievements in Indian Mathematics since Ramanujan". Biography. Subbayya Sivasankaranarayana Pillai was born to parents Subbayya Pillai and Gomati Ammal. His mother died a year after his birth and his father when Pillai was in his last year at school. Pillai did his intermediate course and B.Sc. in Mathematics at the Scott Christian College at Nagercoil and managed to earn a B.A. degree from Maharaja's College, Trivandrum. In 1927, Pillai was awarded a research fellowship at the University of Madras to work with professors K. Ananda Rau and Ramaswamy S. Vaidyanathaswamy. From 1929 to 1941 he was at Annamalai University, where he worked as a lecturer. It was at Annamalai University that he did his major work on Waring's problem. In 1941 he went to the University of Travancore and a year later to the University of Calcutta as a lecturer (where he was at the invitation of Friedrich Wilhelm Levi). For his achievements he was invited in August 1950 to visit the Institute for Advanced Study, Princeton, United States, for a year. He was also invited to participate in the International Congress of Mathematicians at Harvard University as a delegate of the Madras University, but he died in the crash of TWA Flight 903 in Egypt on the way to the conference. Contributions. He proved Waring's problem for formula_0 in 1935 under the further condition of formula_1, ahead of Leonard Eugene Dickson, who around the same time proved it for formula_2 He showed that formula_3 where formula_4 is the largest natural number formula_5 and hence computed the precise value of formula_6. The Pillai sequence 1, 4, 27, 1354, ..., is a quickly growing integer sequence in which each term is the sum of the previous term and a prime number whose following prime gap is larger than the previous term. It was studied by Pillai in connection with representing numbers as sums of prime numbers. References. <templatestyles src="Reflist/styles.css" />
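Pillai's formula for g(k) can be evaluated directly. The short Python sketch below (an illustrative calculation; it assumes the side condition mentioned above holds, which is the case for the small exponents shown) computes g(k) = 2^k + l − 2, with l the largest natural number not exceeding (3/2)^k, and reproduces g(6) = 73.

```python
from math import floor

def waring_g(k):
    """Ideal-case value g(k) = 2**k + floor((3/2)**k) - 2 (valid when the side condition holds)."""
    l = floor((3 / 2) ** k)   # largest natural number <= (3/2)**k
    return 2 ** k + l - 2

for k in range(2, 11):
    print(k, waring_g(k))
# in particular, waring_g(6) == 73, the value computed by Pillai
```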
[ { "math_id": 0, "text": "k\\ge 6" }, { "math_id": 1, "text": "(3^k + 1)/(2^k - 1)\\le [1.5^k] + 1" }, { "math_id": 2, "text": "k\\ge 7." }, { "math_id": 3, "text": "g(k) = 2^k + l - 2" }, { "math_id": 4, "text": "l" }, { "math_id": 5, "text": "\\le (3/2)^k" }, { "math_id": 6, "text": "g(6) = 73" } ]
https://en.wikipedia.org/wiki?curid=681150
6811610
Equivalence of metrics
In mathematics, two metrics on the same underlying set are said to be equivalent if the resulting metric spaces share certain properties. Equivalence is a weaker notion than isometry; equivalent metrics do not have to be literally the same. Instead, it is one of several ways of generalizing equivalence of norms to general metric spaces. Throughout the article, formula_0 will denote a non-empty set and formula_1 and formula_2 will denote two metrics on formula_0. Topological equivalence. The two metrics formula_1 and formula_2 are said to be topologically equivalent if they generate the same topology on formula_0. The adverb "topologically" is often dropped. There are multiple ways of expressing this condition: The following are sufficient but not necessary conditions for topological equivalence: Strong equivalence. Two metrics formula_1 and formula_2 on X are strongly or bilipschitz equivalent or uniformly equivalent if and only if there exist positive constants formula_11 and formula_12 such that, for every formula_15, formula_16 In contrast to the sufficient condition for topological equivalence listed above, strong equivalence requires that there is a single set of constants that holds for every pair of points in formula_0, rather than potentially different constants associated with each point of formula_0. Strong equivalence of two metrics implies topological equivalence, but not vice versa. For example, the metrics formula_17 and formula_18 on the interval formula_19 are topologically equivalent, but not strongly equivalent. In fact, this interval is bounded under one of these metrics but not the other. On the other hand, strong equivalences always take bounded sets to bounded sets. Relation with equivalence of norms. When X is a vector space and the two metrics formula_1 and formula_2 are those induced by norms formula_20 and formula_21, respectively, then strong equivalence is equivalent to the condition that, for all formula_4, formula_22 For linear operators between normed vector spaces, Lipschitz continuity is equivalent to continuity—an operator satisfying either of these conditions is called bounded. Therefore, in this case, formula_1 and formula_2 are topologically equivalent if and only if they are strongly equivalent; the norms formula_20 and formula_21 are simply said to be equivalent. In finite dimensional vector spaces, all metrics induced by a norm, including the euclidean metric, the taxicab metric, and the Chebyshev distance, are equivalent. Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Refbegin/styles.css" />
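The interval example above can be probed numerically. The following Python sketch (an informal illustration; the sample points approaching the right endpoint are arbitrary choices) shows that the ratio d2/d1 for d1(x, y) = |x − y| and d2(x, y) = |tan(x) − tan(y)| grows without bound near π/2, so no single constant β can satisfy the strong-equivalence inequality, even though the two metrics generate the same topology on the open interval.

```python
import math

def d1(x, y):
    return abs(x - y)

def d2(x, y):
    return abs(math.tan(x) - math.tan(y))

# Pairs of nearby points approaching the right endpoint pi/2 (arbitrary choices)
for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    x = math.pi / 2 - 2 * eps
    y = math.pi / 2 - eps
    print(eps, d2(x, y) / d1(x, y))   # the ratio grows roughly like 1 / (2 * eps**2)
```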
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "d_1" }, { "math_id": 2, "text": "d_2" }, { "math_id": 3, "text": "A \\subseteq X" }, { "math_id": 4, "text": "x \\in X" }, { "math_id": 5, "text": "r > 0" }, { "math_id": 6, "text": "r', r'' > 0" }, { "math_id": 7, "text": "B_{r'} (x; d_1) \\subseteq B_r (x; d_2) \\text{ and } B_{r''} (x; d_2) \\subseteq B_r (x; d_1)." }, { "math_id": 8, "text": "I : (X,d_1) \\to (X,d_2)" }, { "math_id": 9, "text": "f: \\R \\to \\R_+" }, { "math_id": 10, "text": "d_2 = f \\circ d_1 " }, { "math_id": 11, "text": "\\alpha" }, { "math_id": 12, "text": "\\beta" }, { "math_id": 13, "text": "y \\in X" }, { "math_id": 14, "text": "\\alpha d_1 (x, y) \\leq d_2 (x, y) \\leq \\beta d_1 (x, y)." }, { "math_id": 15, "text": "x,y\\in X" }, { "math_id": 16, "text": "\\alpha d_1(x,y) \\leq d_2(x,y) \\leq \\beta d_1 (x, y)." }, { "math_id": 17, "text": "d_1(x,y)=|x-y|" }, { "math_id": 18, "text": "d_2(x,y)=|\\tan(x)-\\tan(y)|" }, { "math_id": 19, "text": "\\left(-\\frac{\\pi}{2},\\frac{\\pi}{2}\\right)" }, { "math_id": 20, "text": "\\|\\cdot \\|_A" }, { "math_id": 21, "text": "\\|\\cdot\\|_B" }, { "math_id": 22, "text": "\\alpha\\|x\\|_A \\leq \\|x\\|_B \\leq \\beta\\|x\\|_A" }, { "math_id": 23, "text": "f:U\\to V" }, { "math_id": 24, "text": "V" }, { "math_id": 25, "text": "U" }, { "math_id": 26, "text": "(0,1)" }, { "math_id": 27, "text": "\\mathbb R" }, { "math_id": 28, "text": "(2^{-n})_{n\\in\\mathbb N}" } ]
https://en.wikipedia.org/wiki?curid=6811610
68116656
Left atrial volume
Volume of the heart's left atrium, an important predictor of major cardiovascular adverse events. The volume of the heart's left atrium (left atrial volume) is an important biomarker for cardiovascular physiology and clinical cardiology. It is usually calculated as the left atrial volume index in terms of body surface area. Measurement. The left atrial volume is commonly measured by echocardiography or magnetic resonance tomography. It is calculated from biplane recordings with the equation: formula_0 where "A"4"c" and "A"2"c" denote LA areas in the 4- and 2-chamber views respectively, and "L" corresponds to the shortest long-axis length measured in either view. Usually, the volume of the left atrium is divided by the body surface area in order to provide a measure that is independent of body size. The resulting index is referred to as the left atrial volume index (LAVI): formula_1 Physiology. According to the 2015 ASE guidelines, a LAVI between 16 and 34 ml/m2 is regarded as normal. Pathophysiology and clinical implications. Enlargement of the left atrium is a form of cardiomegaly. Moderately increased LAVI (63 to 73 mL/m2) is associated with a slightly elevated mortality hazard, and severely increased LAVI (>73 mL/m2) with a significantly higher hazard ratio of mortality. LAVI predicts survival after acute myocardial infarction, postoperative atrial fibrillation in subjects undergoing heart surgery, atrial fibrillation and stroke, as well as hospital admission in ambulatory patients.
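The biplane area–length formula and the indexing step can be written out explicitly. The following Python sketch (a minimal illustration of the two equations above; the numeric inputs are hypothetical example values, and BSA is taken as a given input rather than computed from height and weight) returns the left atrial volume in mL and the corresponding LAVI in mL/m2.

```python
import math

def la_volume(a4c_cm2, a2c_cm2, l_cm):
    """Biplane area-length LA volume (mL): V = (8 / (3*pi)) * A4c * A2c / L."""
    return 8.0 / (3.0 * math.pi) * a4c_cm2 * a2c_cm2 / l_cm

def lavi(volume_ml, bsa_m2):
    """Left atrial volume index: LA volume divided by body surface area (mL/m^2)."""
    return volume_ml / bsa_m2

# Hypothetical measurements: areas in cm^2, length in cm, BSA in m^2
v = la_volume(a4c_cm2=20.0, a2c_cm2=18.0, l_cm=5.0)
print(round(v, 1), "mL")                         # ~61.1 mL
print(round(lavi(v, bsa_m2=1.9), 1), "mL/m2")    # ~32.2 mL/m2, within the 16-34 normal range
```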
[ { "math_id": 0, "text": "{A}_{L}=\\frac{8}{3\\pi }\\frac{A4c \\cdot A2c}{L}" }, { "math_id": 1, "text": "LAVI=\\frac{{A}_{L}}{BSA}" } ]
https://en.wikipedia.org/wiki?curid=68116656
68116797
Assembly theory
Theory that characterizes object complexity Assembly theory is a framework for quantifying selection and evolution. When applied to molecular complexity, its authors present it as the first technique that is experimentally verifiable, unlike other molecular complexity algorithms that lack experimental measures. Background. The hypothesis was proposed by chemist Leroy Cronin in 2017 and developed by the team he leads at the University of Glasgow, then extended in collaboration with a team at Arizona State University led by astrobiologist Sara Imari Walker, in a paper released in 2021. Assembly theory conceptualizes objects not as point particles, but as entities defined by their possible formation histories. This allows objects to show evidence of selection, within well-defined boundaries of individuals or selected units. Combinatorial objects are important in chemistry, biology and technology, in which most objects of interest (if not all) are hierarchical modular structures. For any object an 'assembly space' can be defined as all recursively assembled pathways that produce this object. The 'assembly index' is the number of steps on a shortest path producing the object. For such a shortest path, the assembly space captures the minimal memory, in terms of the minimal number of operations necessary to construct an object based on objects that could have existed in its past. The assembly is defined as "the total amount of selection necessary to produce an ensemble of observed objects"; for an ensemble containing formula_0 objects in total, formula_1 of which are unique, the assembly formula_2 is defined to be formula_3, where formula_4 denotes 'copy number', the number of occurrences of objects of type formula_5 having assembly index formula_6. For example, the word 'abracadabra' contains 5 unique letters (a, b, c, d and r) and is 11 symbols long. It can be assembled from its constituents as a + b --> ab + r --> abr + a --> abra + c --> abrac + a --> abraca + d --> abracad + abra --> abracadabra, because 'abra' was already constructed at an earlier stage. Because this requires at least 7 steps, the assembly index is 7. The word ‘abracadrbaa’, of the same length, for example, has no repeats and so has an assembly index of 10. Take two binary strings formula_7 and formula_8 as another example. Both have the same length formula_9 bits, both have the same Hamming weight formula_10. However, the assembly index of the first string is formula_11 ("01" is assembled, joined with itself into "0101", and joined again with "0101" taken from the assembly pool), while the assembly index of the second string is formula_12, since in this case only "01" can be taken from the assembly pool. In general, for "K" subunits of an object "O" the assembly index is bounded by formula_13. Once a pathway to assemble an object is discovered, the object can be reproduced. The rate of discovery of new objects can be defined by the expansion rate formula_14, introducing a discovery timescale formula_15. To include copy number formula_4 in the dynamics of assembly theory, a production timescale formula_16 is defined, where formula_17 is the production rate of a specific object formula_18. Defining these two distinct timescales formula_19, for the initial discovery of an object, and formula_20, for making copies of existing objects, allows one to determine the regimes in which selection is possible. 
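The 'abracadabra' pathway described above can be checked mechanically. The following Python sketch (an illustration only; it replays one given pathway rather than searching for a true shortest path, so it certifies an upper bound on the assembly index rather than computing it) counts the join steps against an assembly pool that starts from the basic building blocks.

```python
def replay_pathway(target, joins):
    """Count the join steps of a given assembly pathway.

    The pool starts with the unique basic units (single characters); each step
    joins two strings already in the pool and adds the result back to the pool.
    Returns the number of steps, an upper bound on the assembly index of `target`.
    """
    pool = set(target)  # basic building blocks
    for left, right in joins:
        assert left in pool and right in pool, (left, right)
        pool.add(left + right)
    assert target in pool
    return len(joins)

# The pathway described in the text: 'abra' is reused once it has been built.
abracadabra_joins = [
    ("a", "b"), ("ab", "r"), ("abr", "a"), ("abra", "c"),
    ("abrac", "a"), ("abraca", "d"), ("abracad", "abra"),
]
print(replay_pathway("abracadabra", abracadabra_joins))  # 7 steps
```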
While other approaches can provide a measure of complexity, the researchers claim that assembly theory's molecular assembly number is the first to be measurable experimentally. Molecules with a high assembly index are very unlikely to form abiotically, and the probability of abiotic formation goes down as the value of the assembly index increases. The assembly index of a molecule can be obtained directly via spectroscopic methods. This method could be implemented in a fragmentation tandem mass spectrometry instrument to search for biosignatures. The theory was extended to map chemical space with molecular assembly trees, demonstrating the application of this approach in drug discovery, in particular in research on new opiate-like molecules by connecting the "assembly pool elements through the same pattern in which they were disconnected from their parent compound(s)". It is difficult to identify chemical signatures that are unique to life. For example, the Viking lander biological experiments detected molecules that could be explained by either living or natural non-living processes. It appears that only living samples can produce assembly index measurements above ~15. However, in 2021, Cronin first explained how polyoxometalates could in theory have large assembly indexes (>15) due to autocatalysis. Critical views. Chemist Steven A. Benner has publicly criticized various aspects of Assembly Theory. Benner argues that it is transparently false that non-living systems, without any intervention of life, cannot contain complex molecules, but that people would be misled into thinking that, because the work was published in Nature journals after peer review, these papers must be right. A paper published in the Journal of Molecular Evolution refers to Hector Zenil's blog post "that identifies no less than eight fallacies of assembly theory". The paper also refers to the video essay by the same author, saying "that summarizes these fallacies, and highlights conceptual/methodological limitations, and the pervasive failure by the proponents of assembly theory to acknowledge relevant previous work in the field of complexity science". The paper concludes that "the hype around Assembly Theory reflects rather unfavorably both on the authors and the scientific publication system in general". The author concludes that what "assembly theory really does is to detect and quantify bias caused by higher-level constraints in some well-defined rule-based worlds"; one "can use assembly theory to check whether something unexpected is going on in a very broad range of computational model "worlds" or "universes"". The group led by Hector Zenil, a former senior researcher and faculty member at Oxford and Cambridge and currently an Associate Professor in Biomedical Engineering at King's College London, is cited as having reproduced the results of Assembly Theory with traditional statistical algorithms. Another paper, authored by a group of chemists and planetary scientists, including an author affiliated with NASA, and published in the Journal of the Royal Society Interface, demonstrated that abiotic chemical processes have the potential to form crystal structures of great complexity, with values exceeding the proposed abiotic/biotic divide of MA index = 15. They conclude that "while the proposal of a biosignature based on a molecular assembly index of 15 is an intriguing and testable concept, the contention that only life can generate molecular structures with MA index ≥ 15 is in error". 
The paper also cites the papers and posts of Hector Zenil as questioning whether a single scalar value like the assembly index can be employed to adequately discriminate between living and nonliving systems, and pointing out the noticeable similarities of the Assembly Theory approach to uncited prior efforts to distinguish biotic from abiotic molecular compounds. In particular, the paper mentions that Zenil and colleagues "may also have anticipated key conclusions of Assembly Theory by exploring connections among causal memory, selection, and evolution". References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "N_T" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "A=\\mathop{\\sum }\\limits_{i=1}^{N}{e}^{{a}_{i}}\\left(\\frac{{n}_{i}-1}{{N}_{{\\rm{T}}}}\\right)" }, { "math_id": 4, "text": "n_i" }, { "math_id": 5, "text": "i=\\{1,2,\\dots,N\\}" }, { "math_id": 6, "text": "a_i" }, { "math_id": 7, "text": "C=[01010101]" }, { "math_id": 8, "text": "D=[00010111]" }, { "math_id": 9, "text": "N=8" }, { "math_id": 10, "text": "N_1=N/2=4" }, { "math_id": 11, "text": "a(C)=3" }, { "math_id": 12, "text": "a(D)=6" }, { "math_id": 13, "text": "\\log_2(K) \\le a_O \\le K-1" }, { "math_id": 14, "text": "k_{\\text{d}}" }, { "math_id": 15, "text": "\\tau_{\\text{d}} \\approx 1/k_{\\text{d}}" }, { "math_id": 16, "text": "\\tau_{\\text{p}} \\approx 1/k_{\\text{p}}" }, { "math_id": 17, "text": "k_{\\text{p}}" }, { "math_id": 18, "text": "i" }, { "math_id": 19, "text": "\\tau_{\\text{d}}" }, { "math_id": 20, "text": "\\tau_{\\text{p}}" } ]
https://en.wikipedia.org/wiki?curid=68116797
6811795
Convergence of measures
Mathematical concept In mathematics, more specifically measure theory, there are various notions of the convergence of measures. For an intuitive general sense of what is meant by "convergence of measures", consider a sequence of measures "μn" on a space, sharing a common collection of measurable sets. Such a sequence might represent an attempt to construct 'better and better' approximations to a desired measure μ that is difficult to obtain directly. The meaning of 'better and better' is subject to all the usual caveats for taking limits; for any error tolerance "ε" > 0 we require there be N sufficiently large for "n" ≥ "N" to ensure the 'difference' between "μn" and μ is smaller than ε. Various notions of convergence specify precisely what the word 'difference' should mean in that description; these notions are not equivalent to one another, and vary in strength. Three of the most common notions of convergence are described below. Informal descriptions. This section attempts to provide a rough intuitive description of three notions of convergence, using terminology developed in calculus courses; this section is necessarily imprecise as well as inexact, and the reader should refer to the formal clarifications in subsequent sections. In particular, the descriptions here do not address the possibility that the measure of some sets could be infinite, or that the underlying space could exhibit pathological behavior, and additional technical assumptions are needed for some of the statements. The statements in this section are however all correct if "μn" is a sequence of probability measures on a Polish space. The various notions of convergence formalize the assertion that the 'average value' of each 'sufficiently nice' function should converge: formula_0 To formalize this requires a careful specification of the set of functions under consideration and how uniform the convergence should be. The notion of "weak convergence" requires this convergence to take place for every continuous bounded function f. This notion treats convergence for different functions f independently of one another, i.e., different functions f may require different values of "N" ≤ "n" to be approximated equally well (thus, convergence is non-uniform in f). The notion of "setwise convergence" formalizes the assertion that the measure of each measurable set should converge: formula_1 Again, no uniformity over the set A is required. Intuitively, considering integrals of 'nice' functions, this notion provides more uniformity than weak convergence. As a matter of fact, when considering sequences of measures with uniformly bounded variation on a Polish space, setwise convergence implies the convergence formula_2 for any bounded measurable function f. As before, this convergence is non-uniform in f. The notion of "total variation convergence" formalizes the assertion that the measure of all measurable sets should converge "uniformly", i.e. for every "ε" > 0 there exists N such that formula_3 for every "n" > "N" and for every measurable set A. As before, this implies convergence of integrals against bounded measurable functions, but this time convergence is uniform over all functions bounded by any fixed constant. Total variation convergence of measures. This is the strongest notion of convergence shown on this page and is defined as follows. Let formula_4 be a measurable space. 
The total variation distance between two (positive) measures μ and ν is then given by formula_5 Here the supremum is taken over f ranging over the set of all measurable functions from X to [−1, 1]. This is in contrast, for example, to the Wasserstein metric, where the definition is of the same form, but the supremum is taken over f ranging over the set of measurable functions from X to [−1, 1] which have Lipschitz constant at most 1; and also in contrast to the Radon metric, where the supremum is taken over f ranging over the set of continuous functions from X to [−1, 1]. In the case where X is a Polish space, the total variation metric coincides with the Radon metric. If μ and ν are both probability measures, then the total variation distance is also given by formula_6 The equivalence between these two definitions can be seen as a particular case of the Monge–Kantorovich duality. From the two definitions above, it is clear that the total variation distance between probability measures is always between 0 and 2. To illustrate the meaning of the total variation distance, consider the following thought experiment. Assume that we are given two probability measures μ and ν, as well as a random variable X. We know that X has law either μ or ν but we do not know which one of the two. Assume that these two measures have prior probabilities 0.5 each of being the true law of X. Assume now that we are given "one" single sample distributed according to the law of X and that we are then asked to guess which one of the two distributions describes that law. The quantity formula_7 then provides a sharp upper bound on the prior probability that our guess will be correct. Given the above definition of total variation distance, a sequence "μn" of measures defined on the same measure space is said to converge to a measure μ in total variation distance if for every "ε" > 0, there exists an N such that for all "n" > "N", one has that formula_8 Setwise convergence of measures. For formula_4 a measurable space, a sequence "μn" is said to converge setwise to a limit μ if formula_9 for every set formula_10. Typical arrow notations are formula_11 and formula_12. For example, as a consequence of the Riemann–Lebesgue lemma, the sequence "μn" of measures on the interval [−1, 1] given by "μn"("dx") = (1 + sin("nx"))"dx" converges setwise to Lebesgue measure, but it does not converge in total variation. In a measure theoretical or probabilistic context setwise convergence is often referred to as strong convergence (as opposed to weak convergence). This can lead to some ambiguity because in functional analysis, strong convergence usually refers to convergence with respect to a norm. Weak convergence of measures. In mathematics and statistics, weak convergence is one of many types of convergence relating to the convergence of measures. It depends on a topology on the underlying space and thus is not a purely measure-theoretic notion. There are several equivalent definitions of weak convergence of a sequence of measures, some of which are (apparently) more general than others. The equivalence of these conditions is sometimes known as the Portmanteau theorem. Definition. Let formula_13 be a metric space with its Borel formula_14-algebra formula_15. 
A bounded sequence of positive probability measures formula_16 on formula_17 is said to converge weakly to a probability measure formula_18 (denoted formula_19) if any of the following equivalent conditions is true (here formula_20 denotes expectation or the formula_21 norm with respect to formula_22, while formula_23 denotes expectation or the formula_21 norm with respect to formula_18): formula_24 for all bounded, continuous functions formula_25; formula_26 for every upper semi-continuous function formula_25 that is bounded from above; formula_27 for every lower semi-continuous function formula_25 that is bounded from below; formula_28 for all closed sets formula_29; formula_30 for all open sets formula_31; and formula_32 for all continuity sets formula_33 of the measure formula_18 (Borel sets whose boundary has formula_18-measure zero). In the case formula_34 with its usual topology, if formula_35 and formula_36 denote the cumulative distribution functions of the measures formula_22 and formula_18, respectively, then formula_22 converges weakly to formula_18 if and only if formula_37 for all points formula_38 at which formula_36 is continuous. For example, the sequence where formula_22 is the Dirac measure located at formula_39 converges weakly to the Dirac measure located at 0 (if we view these as measures on formula_40 with the usual topology), but it does not converge setwise. This is intuitively clear: we only know that formula_39 is "close" to formula_41 because of the topology of formula_40. This definition of weak convergence can be extended for formula_13 any metrizable topological space. It also defines a weak topology on formula_42, the set of all probability measures defined on formula_43. The weak topology is generated by the following basis of open sets: formula_44 where formula_45 If formula_13 is also separable, then formula_42 is metrizable and separable, for example by the Lévy–Prokhorov metric. If formula_13 is also compact or Polish, so is formula_42. If formula_13 is separable, it naturally embeds into formula_42 as the (closed) set of Dirac measures, and its convex hull is dense. There are many "arrow notations" for this kind of convergence: the most frequently used are formula_46, formula_47, formula_48 and formula_49. Weak convergence of random variables. Let formula_50 be a probability space and X be a metric space. If "Xn": Ω → X is a sequence of random variables then "Xn" is said to converge weakly (or in distribution or in law) to the random variable "X": Ω → X as "n" → ∞ if the sequence of pushforward measures ("Xn")∗(P) converges weakly to "X"∗(P) in the sense of weak convergence of measures on X, as defined above. Comparison with vague convergence. Let formula_51 be a metric space (for example formula_52 or formula_53). The following spaces of test functions are commonly used in the convergence of probability measures: formula_54, the space of continuous functions with compact support; formula_55, the space of continuous functions that vanish at infinity, i.e. that satisfy formula_56; and formula_57, the space of bounded continuous functions. We have formula_58. Moreover, formula_59 is the closure of formula_60 with respect to uniform convergence. Vague Convergence. A sequence of measures formula_61 converges vaguely to a measure formula_62 if for all formula_63, formula_64. Weak Convergence. A sequence of measures formula_61 converges weakly to a measure formula_62 if for all formula_65, formula_64. In general, these two convergence notions are not equivalent. In a probability setting, vague convergence and weak convergence of probability measures are equivalent assuming tightness. That is, a tight sequence of probability measures formula_66 converges vaguely to a probability measure formula_62 if and only if formula_67 converges weakly to formula_62. The weak limit of a sequence of probability measures, provided it exists, is a probability measure. In general, if tightness is not assumed, a sequence of probability (or sub-probability) measures may not necessarily converge "vaguely" to a true probability measure, but rather to a sub-probability measure (a measure such that formula_68). 
Thus, vague convergence formula_69 of a sequence of probability measures formula_66, where the limit formula_62 is not required to be a probability measure, does not by itself guarantee weak convergence. Weak convergence of measures as an example of weak-* convergence. Despite having the same name as weak convergence in the context of functional analysis, weak convergence of measures is actually an example of weak-* convergence. The definitions of weak and weak-* convergences used in functional analysis are as follows: Let formula_70 be a topological vector space or Banach space. A sequence formula_71 in formula_70 converges weakly to formula_72 if formula_73 as formula_74 for all formula_75; this is written formula_76. A sequence of functionals formula_77 is said to converge in the weak-* topology to formula_78 if formula_79 for all formula_80; this is written formula_81. To illustrate how weak convergence of measures is an example of weak-* convergence, we give an example in terms of vague convergence (see above). Let formula_51 be a locally compact Hausdorff space. By the Riesz representation theorem, the space formula_82 of Radon measures is isomorphic to a subspace of the space of continuous linear functionals on formula_55. Therefore, for each Radon measure formula_83, there is a linear functional formula_84 such that formula_85 for all formula_86. Applying the definition of weak-* convergence in terms of linear functionals, the characterization of vague convergence of measures is obtained. For compact formula_87, formula_88, so in this case weak convergence of measures is a special case of weak-* convergence. Notes and references. <templatestyles src="Reflist/styles.css" />
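As a rough numerical companion to the setwise-convergence example discussed above (the measures with density 1 + sin(nx) on [−1, 1], which converge setwise to Lebesgue measure but not in total variation), here is a Python sketch. It is illustrative only: the grid resolution, the test interval and the printed values are arbitrary choices made for the demonstration.

```python
import numpy as np

# mu_n(dx) = (1 + sin(n x)) dx on [-1, 1], compared with Lebesgue measure.
def mu_n_of_interval(n, a, b):
    # Exact antiderivative: mu_n([a, b]) = (b - a) + (cos(n a) - cos(n b)) / n
    return (b - a) + (np.cos(n * a) - np.cos(n * b)) / n

x = np.linspace(-1.0, 1.0, 200001)
for n in (1, 10, 100, 1000):
    setwise_gap = abs(mu_n_of_interval(n, 0.0, 1.0) - 1.0)   # tends to 0
    # Total variation distance = integral of |density - 1| = integral of |sin(n x)|
    # over [-1, 1], which approaches 4/pi (about 1.27) rather than 0.
    tv = 2.0 * np.mean(np.abs(np.sin(n * x)))
    print(n, round(setwise_gap, 6), round(tv, 3))
```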
[ { "math_id": 0, "text": "\\int f\\, d\\mu_n \\to \\int f\\, d\\mu" }, { "math_id": 1, "text": "\\mu_n(A) \\to \\mu(A)" }, { "math_id": 2, "text": "\\int f\\, d\\mu_n \\to \\int f\\, d\\mu" }, { "math_id": 3, "text": "|\\mu_n(A) - \\mu(A)| < \\varepsilon" }, { "math_id": 4, "text": "(X, \\mathcal{F})" }, { "math_id": 5, "text": " \\left \\|\\mu- \\nu \\right \\|_\\text{TV} = \\sup_f \\left \\{ \\int_X f \\, d\\mu - \\int_X f \\, d\\nu \\right \\}." }, { "math_id": 6, "text": "\\left \\|\\mu- \\nu \\right \\|_{\\text{TV}} = 2\\cdot\\sup_{A\\in \\mathcal{F}} | \\mu (A) - \\nu (A) |." }, { "math_id": 7, "text": "{2+\\|\\mu-\\nu\\|_\\text{TV} \\over 4}" }, { "math_id": 8, "text": "\\|\\mu_n - \\mu\\|_\\text{TV} < \\varepsilon." }, { "math_id": 9, "text": " \\lim_{n \\to \\infty} \\mu_n(A) = \\mu(A)" }, { "math_id": 10, "text": "A\\in\\mathcal{F}" }, { "math_id": 11, "text": "\\mu_n \\xrightarrow{sw} \\mu" }, { "math_id": 12, "text": " \\mu_n \\xrightarrow{s} \\mu" }, { "math_id": 13, "text": "S" }, { "math_id": 14, "text": "\\sigma" }, { "math_id": 15, "text": "\\Sigma" }, { "math_id": 16, "text": "P_n\\, (n = 1, 2, \\dots)" }, { "math_id": 17, "text": "(S, \\Sigma)" }, { "math_id": 18, "text": "P" }, { "math_id": 19, "text": "P_n\\Rightarrow P" }, { "math_id": 20, "text": "\\operatorname{E}_n" }, { "math_id": 21, "text": "L^1" }, { "math_id": 22, "text": "P_n" }, { "math_id": 23, "text": "\\operatorname{E}" }, { "math_id": 24, "text": "\\operatorname{E}_n[f] \\to \\operatorname{E}[f]" }, { "math_id": 25, "text": "f" }, { "math_id": 26, "text": "\\limsup \\operatorname{E}_n[f] \\le \\operatorname{E}[f]" }, { "math_id": 27, "text": "\\liminf \\operatorname{E}_n[f] \\ge \\operatorname{E}[f]" }, { "math_id": 28, "text": "\\limsup P_n(C) \\le P(C)" }, { "math_id": 29, "text": "C" }, { "math_id": 30, "text": "\\liminf P_n(U) \\ge P(U)" }, { "math_id": 31, "text": "U" }, { "math_id": 32, "text": "\\lim P_n(A) = P(A)" }, { "math_id": 33, "text": "A" }, { "math_id": 34, "text": "S \\equiv \\mathbf{R}" }, { "math_id": 35, "text": "F_n" }, { "math_id": 36, "text": "F" }, { "math_id": 37, "text": "\\lim_{n \\to \\infty} F_n(x) = F(x)" }, { "math_id": 38, "text": "x \\in \\mathbf{R}" }, { "math_id": 39, "text": "1/n" }, { "math_id": 40, "text": "\\mathbf{R}" }, { "math_id": 41, "text": "0" }, { "math_id": 42, "text": "\\mathcal{P}(S)" }, { "math_id": 43, "text": "(S,\\Sigma)" }, { "math_id": 44, "text": "\\left\\{ \\ U_{\\varphi, x, \\delta} \\ \\left| \\quad \\varphi : S \\to \\mathbf{R} \\text{ is bounded and continuous, } x \\in \\mathbf{R} \\text{ and } \\delta > 0 \\ \\right. \\right\\}," }, { "math_id": 45, "text": "U_{\\varphi, x, \\delta} := \\left\\{ \\ \\mu \\in \\mathcal{P}(S) \\ \\left| \\quad \\left| \\int_S \\varphi \\, \\mathrm{d} \\mu - x \\right| < \\delta \\ \\right. \\right\\}." 
}, { "math_id": 46, "text": "P_{n} \\Rightarrow P" }, { "math_id": 47, "text": "P_{n} \\rightharpoonup P" }, { "math_id": 48, "text": "P_{n} \\xrightarrow{w} P" }, { "math_id": 49, "text": "P_{n} \\xrightarrow{\\mathcal{D}} P" }, { "math_id": 50, "text": "(\\Omega, \\mathcal{F}, \\mathbb{P})" }, { "math_id": 51, "text": "X" }, { "math_id": 52, "text": "\\mathbb{R}" }, { "math_id": 53, "text": "[0,1]" }, { "math_id": 54, "text": "C_c(X)" }, { "math_id": 55, "text": "C_0(X)" }, { "math_id": 56, "text": "\\lim _{|x| \\rightarrow \\infty} f(x)=0" }, { "math_id": 57, "text": "C_B(X)" }, { "math_id": 58, "text": "C_c \\subset C_0 \\subset C_B \\subset C" }, { "math_id": 59, "text": "C_0" }, { "math_id": 60, "text": "C_c" }, { "math_id": 61, "text": "\\left(\\mu_n\\right)_{n \\in \\mathbb{N}}" }, { "math_id": 62, "text": "\\mu" }, { "math_id": 63, "text": "f \\in C_c(X)" }, { "math_id": 64, "text": "\\int_X f \\, d \\mu_n \\rightarrow \\int_X f \\, d \\mu" }, { "math_id": 65, "text": "f \\in C_B(X)" }, { "math_id": 66, "text": "(\\mu_n)_{n\\in \\mathbb{N}}" }, { "math_id": 67, "text": "(\\mu_n)_{n \\in \\mathbb{N}} " }, { "math_id": 68, "text": "\\mu(X)\\leq 1" }, { "math_id": 69, "text": "\\mu_n \\overset{v}{\\to} \\mu " }, { "math_id": 70, "text": "V" }, { "math_id": 71, "text": "x_n" }, { "math_id": 72, "text": "x" }, { "math_id": 73, "text": "\\varphi\\left(x_n\\right) \\rightarrow \\varphi(x)" }, { "math_id": 74, "text": "n \\to \\infty" }, { "math_id": 75, "text": "\\varphi \\in V^*" }, { "math_id": 76, "text": "x_n \\stackrel{w}{\\rightarrow} x" }, { "math_id": 77, "text": "\\varphi_n \\in V^*" }, { "math_id": 78, "text": "\\varphi" }, { "math_id": 79, "text": "\\varphi_n(x) \\rightarrow \\varphi(x)" }, { "math_id": 80, "text": "x \\in V" }, { "math_id": 81, "text": "\\varphi_n \\stackrel{w^*}{\\rightarrow} \\varphi" }, { "math_id": 82, "text": "M(X)" }, { "math_id": 83, "text": "\\mu_n \\in M(X)" }, { "math_id": 84, "text": "\\varphi_n \\in C_0(X)^*" }, { "math_id": 85, "text": "\\varphi_n(f)=\\int_X f \\, d \\mu_n" }, { "math_id": 86, "text": "f \\in C_0(X)" }, { "math_id": 87, "text": " X " }, { "math_id": 88, "text": " C_0(X)=C_B(X) " } ]
https://en.wikipedia.org/wiki?curid=6811795
681185
Stress concentration
Location in an object where stress is far greater than the surrounding region In solid mechanics, a stress concentration (also called a stress raiser or a stress riser or notch sensitivity) is a location in an object where the stress is significantly greater than the surrounding region. Stress concentrations occur when there are irregularities in the geometry or material of a structural component that cause an interruption to the flow of stress. This arises from such details as holes, grooves, notches and fillets. Stress concentrations may also occur from accidental damage such as nicks and scratches. The degree of concentration of a discontinuity under typically tensile loads can be expressed as a non-dimensional stress concentration factor formula_0, which is the ratio of the highest stress to the nominal far field stress. For a circular hole in an infinite plate, formula_1. The stress concentration factor should not be confused with the stress intensity factor, which is used to define the effect of a crack on the stresses in the region around a crack tip. For ductile materials, large loads can cause localised plastic deformation or yielding that will typically occur first at a stress concentration allowing a redistribution of stress and enabling the component to continue to carry load. Brittle materials will typically fail at the stress concentration. However, repeated low level loading may cause a fatigue crack to initiate and slowly grow at a stress concentration leading to the failure of even ductile materials. Fatigue cracks always start at stress raisers, so removing such defects increases the fatigue strength. Description. Stress concentrations occur when there are irregularities in the geometry or material of a structural component that cause an interruption to the flow of stress. Geometric discontinuities cause an object to experience a localised increase in stress. Examples of shapes that cause stress concentrations are sharp internal corners, holes, and sudden changes in the cross-sectional area of the object as well as unintentional damage such as nicks, scratches and cracks. High local stresses can cause objects to fail more quickly, so engineers typically design the geometry to minimize stress concentrations. Material discontinuities, such as inclusions in metals, may also concentrate the stress. Inclusions on the surface of a component may be broken from machining during manufacture leading to microcracks that grow in service from cyclic loading. Internally, the failure of the interfaces around inclusions during loading may lead to static failure by microvoid coalescence. Stress concentration factor. The "stress concentration factor", formula_0, is the ratio of the highest stress formula_2 to a nominal stress formula_3 of the gross cross-section and defined as formula_4 Note that the dimensionless stress concentration factor is a function of the geometry shape and independent of its size. These factors can be found in typical engineering reference materials. E. Kirsch derived the equations for the elastic stress distribution around a hole. The maximum stress felt near a hole or notch occurs in the area of lowest radius of curvature. In an elliptical hole of length formula_5 and width formula_6, under a far-field stress formula_7, the stress at the ends of the major axes is given by Inglis' equation: formula_8 where formula_9 is the radius of curvature of the elliptical hole. 
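Inglis' equation can be evaluated directly. The following Python sketch is illustrative only: the function names and example dimensions are made up, and it uses the geometric relation rho = b^2/a for the radius of curvature at the end of the major axis of an ellipse to show that the two forms of the equation agree.

```python
import math

def kt_elliptical_hole(a, b):
    """Stress concentration factor at the ends of the major axis of an
    elliptical hole with semi-axes a and b (2a transverse to the applied
    stress), in an infinite plate: Kt = 1 + 2*a/b (Inglis)."""
    return 1.0 + 2.0 * a / b

def kt_from_root_radius(a, rho):
    """Equivalent form Kt = 1 + 2*sqrt(a/rho), where rho = b**2/a is the
    radius of curvature at the end of the major axis."""
    return 1.0 + 2.0 * math.sqrt(a / rho)

print(kt_elliptical_hole(1.0, 1.0))              # circular hole: 3.0
print(kt_elliptical_hole(4.0, 1.0))              # slender ellipse: 9.0
print(kt_from_root_radius(4.0, 1.0 ** 2 / 4.0))  # same ellipse via rho: 9.0
```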
For circular holes in an infinite plate where formula_10, the stress concentration factor is formula_11. As the radius of curvature approaches zero, such as at the tip of a sharp crack, the maximum stress approaches infinity and a stress concentration factor cannot therefore be used for a crack. Instead, the stress intensity factor, which defines the scaling of the stress field around a crack tip, is used. Causes of Stress Concentration. Stress concentration can arise due to various factors. The following are the main causes of stress concentration: Material Defects: When designing mechanical components, it is generally presumed that the material used is consistent and homogeneous throughout. In practice, however, material inconsistencies such as internal cracks, blowholes, cavities in welds, air holes in metal parts, and non-metallic or foreign inclusions can occur. These defects act as discontinuities within the component, disrupting the uniform distribution of stress and thereby leading to stress concentration. Contact Stress: Mechanical components are frequently subjected to forces that are concentrated at specific points or small areas. This localized application of force can result in disproportionately high pressures at these points, causing stress concentration. Typical instances include the interactions at the points of contact in meshing gear teeth, the interfaces between cams and followers, and the contact zones in ball bearings. Thermal Stress: Thermal stress occurs when different parts of a structure expand or contract at different rates due to variations in temperature. This differential in thermal expansion and contraction generates internal stresses, which can lead to areas of stress concentration within the structure. Geometric Discontinuities: Features such as steps on a shaft, shoulders, and other abrupt changes in the cross-sectional area of components are often necessary for mounting elements like gears and bearings or for assembly considerations. While these features are essential for the functionality of the device, they introduce sharp transitions in geometry that become hotspots for stress concentration. Additionally, design elements like oil holes, grooves, keyways, splines, and screw threads also introduce discontinuities that further exacerbate stress concentration. Rough Surface: Imperfections on the surface of components, such as machining scratches, stamp marks, or inspection marks, can interrupt the smooth flow of stress across the surface, leading to localized increases in stress. These imperfections, although often small, can significantly impact the durability and performance of mechanical components by initiating stress concentration. Methods for determining factors. There are experimental methods for measuring stress concentration factors, including photoelastic stress analysis, thermoelastic stress analysis, brittle coatings or strain gauges. During the design phase, there are multiple approaches to estimating stress concentration factors. Several catalogs of stress concentration factors have been published. Perhaps most famous is "Stress Concentration Design Factors" by Peterson, first published in 1953. Finite element methods are commonly used in design today. Other methods include the boundary element method and meshfree methods. Limiting the effects of stress concentrations. 
Stress concentrations can be mitigated through techniques that smooth the flow of stress around a discontinuity: Material Removal: Introducing auxiliary holes in the high stress region to create a more gradual transition. The size and position of these holes must be optimized. A counter-intuitive example of reducing one of the worst types of stress concentration, a crack, is to drill a large hole at the end of the crack, a technique known as crack tip blunting. The drilled hole, with its relatively large size, serves to increase the effective crack tip radius and thus reduce the stress concentration. Hole Reinforcement: Adding higher-strength material around the hole, usually in the form of bonded rings or doublers. Composite reinforcements can reduce the stress concentration factor. Shape Optimization: Adjusting the hole shape, often transitioning from circular to elliptical, to minimize stress gradients. This must be checked for feasibility. One example is adding a fillet to internal corners. Another example is in a threaded component, where the force flow line is bent as it passes from the shank portion to the threaded portion; as a result, stress concentration takes place. To reduce this, a small undercut is made between the shank and threaded portions. Functionally Graded Materials: Using materials with properties that vary gradually can reduce the stress concentration factor compared to a sudden change in material. The optimal mitigation technique depends on the specific geometry, loading scenario, and manufacturing constraints. In general, a combination of methods is required for the best result. While there is no universal solution, careful analysis of the stress flow and parameterization of the model can point designers toward an effective stress reduction strategy. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "K_t" }, { "math_id": 1, "text": "K_t = 3" }, { "math_id": 2, "text": "\\sigma_\\max" }, { "math_id": 3, "text": "\\sigma_\\text{nom}" }, { "math_id": 4, "text": "K_t = \\frac{\\sigma_\\max}{\\sigma_\\text{nom}} " }, { "math_id": 5, "text": "2a" }, { "math_id": 6, "text": "2b" }, { "math_id": 7, "text": "\\sigma_0" }, { "math_id": 8, "text": "\\sigma_{\\max} = \\sigma_0\\left(1+2\\cfrac{a}{b}\\right) = \\sigma\\left(1+2\\sqrt{\\cfrac{a}{\\rho}}\\right) " }, { "math_id": 9, "text": "\\rho" }, { "math_id": 10, "text": "a=b" }, { "math_id": 11, "text": "K_t=3" } ]
https://en.wikipedia.org/wiki?curid=681185
681186
Whitehead torsion
In geometric topology, a field within mathematics, the obstruction to a homotopy equivalence formula_0 of finite CW-complexes being a simple homotopy equivalence is its Whitehead torsion formula_1, which is an element in the Whitehead group formula_2. These concepts are named after the mathematician J. H. C. Whitehead. The Whitehead torsion is important in applying surgery theory to non-simply connected manifolds of dimension > 4: for simply-connected manifolds, the Whitehead group vanishes, and thus homotopy equivalences and simple homotopy equivalences are the same. The applications are to differentiable manifolds, PL manifolds and topological manifolds. The proofs were first obtained in the early 1960s by Stephen Smale, for differentiable manifolds. The development of handlebody theory allowed much the same proofs in the differentiable and PL categories. The proofs are much harder in the topological category, requiring the theory of Robion Kirby and Laurent C. Siebenmann. The restriction to manifolds of dimension greater than four is due to the application of the Whitney trick for removing double points. In generalizing the "h"-cobordism theorem, which is a statement about simply connected manifolds, to non-simply connected manifolds, one must distinguish simple homotopy equivalences and non-simple homotopy equivalences. While an "h"-cobordism "W" between simply-connected closed connected manifolds "M" and "N" of dimension "n" > 4 is isomorphic to a cylinder (the corresponding homotopy equivalence can be taken to be a diffeomorphism, PL-isomorphism, or homeomorphism, respectively), the "s"-cobordism theorem states that if the manifolds are not simply-connected, an "h"-cobordism is a cylinder if and only if the Whitehead torsion of the inclusion formula_3 vanishes. Whitehead group. The Whitehead group of a connected CW-complex or a manifold "M" is equal to the Whitehead group formula_4 of the fundamental group formula_5 of "M". If "G" is a group, the Whitehead group formula_6 is defined to be the cokernel of the map formula_7 which sends ("g", ±1) to the invertible (1,1)-matrix (±"g"). Here formula_8 is the group ring of "G". Recall that the K-group K1("A") of a ring "A" is defined as the quotient of GL(A) by the subgroup generated by elementary matrices. The group GL("A") is the direct limit of the finite-dimensional groups GL("n", "A") → GL("n"+1, "A"); concretely, the group of invertible infinite matrices which differ from the identity matrix in only a finite number of coefficients. An elementary matrix here is a transvection: one such that all main diagonal elements are 1 and there is at most one non-zero element not on the diagonal. The subgroup generated by elementary matrices is exactly the derived subgroup, in other words the smallest normal subgroup such that the quotient by it is abelian. In other words, the Whitehead group formula_6 of a group "G" is the quotient of formula_9 by the subgroup generated by elementary matrices, elements of "G" and formula_10. Notice that this is the same as the quotient of the reduced K-group formula_11 by "G". The Whitehead torsion. At first we define the Whitehead torsion formula_16 for a chain homotopy equivalence formula_17 of finite based free "R"-chain complexes. We can assign to the homotopy equivalence its mapping cone C* := cone*(h*) which is a contractible finite based free "R"-chain complex. Let formula_18 be any chain contraction of the mapping cone, i.e., formula_19 for all "n". 
We obtain an isomorphism formula_20 with formula_21 We define formula_22, where "A" is the matrix of formula_23 with respect to the given bases. For a homotopy equivalence formula_24 of connected finite CW-complexes we define the Whitehead torsion formula_25 as follows. Let formula_26 be the lift of formula_27 to the universal covering. It induces formula_28-chain homotopy equivalences formula_29. Now we can apply the definition of the Whitehead torsion for a chain homotopy equivalence and obtain an element in formula_30 which we map to Wh(π1("Y")). This is the Whitehead torsion τ(ƒ) ∈ Wh(π1("Y")). Properties. Homotopy invariance: Let formula_31 be homotopy equivalences of finite connected CW-complexes. If "f" and "g" are homotopic, then formula_32. Topological invariance: If formula_33 is a homeomorphism of finite connected CW-complexes, then formula_34. Composition formula: Let formula_33, formula_35 be homotopy equivalences of finite connected CW-complexes. Then formula_36. Geometric interpretation. The s-cobordism theorem states for a closed connected oriented manifold "M" of dimension "n" > 4 that an h-cobordism "W" between "M" and another manifold "N" is trivial over "M" if and only if the Whitehead torsion of the inclusion formula_37 vanishes. Moreover, for any element in the Whitehead group there exists an h-cobordism "W" over "M" whose Whitehead torsion is the considered element. The proofs use handle decompositions. There exists a homotopy theoretic analogue of the s-cobordism theorem. Given a CW-complex "A", consider the set of all pairs of CW-complexes ("X", "A") such that the inclusion of "A" into "X" is a homotopy equivalence. Two pairs ("X"1, "A") and ("X"2, "A") are said to be equivalent if there is a simple homotopy equivalence between "X"1 and "X"2 relative to "A". The set of such equivalence classes forms a group where the addition is given by taking union of "X"1 and "X"2 with common subspace "A". This group is naturally isomorphic to the Whitehead group Wh("A") of the CW-complex "A". The proof of this fact is similar to the proof of the s-cobordism theorem.
[ { "math_id": 0, "text": "f\\colon X \\to Y" }, { "math_id": 1, "text": "\\tau(f)" }, { "math_id": 2, "text": "\\operatorname{Wh}(\\pi_1(Y))" }, { "math_id": 3, "text": "M \\hookrightarrow W" }, { "math_id": 4, "text": "\\operatorname{Wh}(\\pi_1(M))" }, { "math_id": 5, "text": "\\pi_1(M)" }, { "math_id": 6, "text": "\\operatorname{Wh}(G)" }, { "math_id": 7, "text": "G\\times \\{\\pm1\\} \\to K_1(\\Z[G])" }, { "math_id": 8, "text": "\\Z[G]" }, { "math_id": 9, "text": "\\operatorname{GL}(\\Z[G])" }, { "math_id": 10, "text": "\\pm 1" }, { "math_id": 11, "text": "\\tilde{K}_1(\\Z[G])" }, { "math_id": 12, "text": "\\Z," }, { "math_id": 13, "text": "\\Z" }, { "math_id": 14, "text": "(1-t-t^4)(1-t^2-t^3)=1," }, { "math_id": 15, "text": "K_1(\\Z[G])" }, { "math_id": 16, "text": "\\tau(h_*) \\in {\\tilde K}_1(R)" }, { "math_id": 17, "text": "h_*: D_* \\to E_*" }, { "math_id": 18, "text": "\\gamma_*: C_* \\to C_{*+1}" }, { "math_id": 19, "text": "c_{n+1} \\circ \\gamma_n + \\gamma_{n-1} \\circ c_n = \\operatorname{id}_{C_n}" }, { "math_id": 20, "text": "(c_* + \\gamma_*)_\\mathrm{odd}: C_\\mathrm{odd} \\to C_\\mathrm{even}" }, { "math_id": 21, "text": "C_\\mathrm{odd} := \\bigoplus_{n \\text{ odd}} C_n,\\qquad C_\\mathrm{even} := \\bigoplus_{n \\text{ even}} C_n." }, { "math_id": 22, "text": "\\tau(h_*) := [A] \\in {\\tilde K}_1(R)" }, { "math_id": 23, "text": "(c_* + \\gamma_*)_{\\rm odd}" }, { "math_id": 24, "text": "f: X\\to Y" }, { "math_id": 25, "text": "\\tau(f) \\in \\operatorname{Wh}(\\pi_1(Y))" }, { "math_id": 26, "text": "{\\tilde f}: {\\tilde X} \\to {\\tilde Y}" }, { "math_id": 27, "text": "f: X \\to Y" }, { "math_id": 28, "text": "\\Z[\\pi_1(Y)]" }, { "math_id": 29, "text": "C_*({\\tilde f}): C_*({\\tilde X}) \\to C_*({\\tilde Y})" }, { "math_id": 30, "text": "{\\tilde K}_1(\\Z[\\pi_1(Y)])" }, { "math_id": 31, "text": "f,g\\colon X\\to Y" }, { "math_id": 32, "text": "\\tau(f) = \\tau(g)" }, { "math_id": 33, "text": "f\\colon X\\to Y" }, { "math_id": 34, "text": "\\tau(f) = 0" }, { "math_id": 35, "text": "g\\colon Y\\to Z" }, { "math_id": 36, "text": "\\tau(g \\circ f) = g_* \\tau(f) + \\tau(g)" }, { "math_id": 37, "text": "M\\hookrightarrow W" } ]
https://en.wikipedia.org/wiki?curid=681186
681190
Compression (functional analysis)
In functional analysis, the compression of a linear operator "T" on a Hilbert space to a subspace "K" is the operator formula_0, where formula_1 is the orthogonal projection onto "K". This is a natural way to obtain an operator on "K" from an operator on the whole Hilbert space. If "K" is an invariant subspace for "T", then the compression of "T" to "K" is the restricted operator "K→K" sending "k" to "Tk". More generally, for a linear operator "T" on a Hilbert space formula_2 and an isometry "V" on a subspace formula_3 of formula_2, define the compression of "T" to formula_3 by formula_4, where formula_5 is the adjoint of "V". If "T" is a self-adjoint operator, then the compression formula_6 is also self-adjoint. When "V" is replaced by the inclusion map formula_7, we have formula_8, and we recover the special definition above.
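A small numerical sketch may make the finite-dimensional picture concrete. This is an illustration only: the matrix, the choice of subspace and the variable names are arbitrary, and NumPy is used simply as a convenient way to multiply matrices.

```python
import numpy as np

# A self-adjoint operator T on C^3, represented as a (real symmetric) matrix.
T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# K = span{e1, e2}; the columns of V form an orthonormal basis of K,
# so V plays the role of the isometry (inclusion) of K into the whole space.
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

T_K = V.conj().T @ T @ V   # the compression V* T V of T to K
print(T_K)                 # here: the top-left 2x2 block of T

# The compression of a self-adjoint operator is again self-adjoint.
print(np.allclose(T_K, T_K.conj().T))
```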
[ { "math_id": 0, "text": "P_K T \\vert_K : K \\rightarrow K " }, { "math_id": 1, "text": "P_K : H \\rightarrow K" }, { "math_id": 2, "text": "H" }, { "math_id": 3, "text": "W" }, { "math_id": 4, "text": "T_W = V^*TV : W \\rightarrow W" }, { "math_id": 5, "text": "V^*" }, { "math_id": 6, "text": "T_W" }, { "math_id": 7, "text": "I: W \\to H" }, { "math_id": 8, "text": "V^* = I^*=P_K : H \\to W" } ]
https://en.wikipedia.org/wiki?curid=681190
68121
Kolmogorov space
Concept in topology In topology and related branches of mathematics, a topological space "X" is a T0 space or Kolmogorov space (named after Andrey Kolmogorov) if for every pair of distinct points of "X", at least one of them has a neighborhood not containing the other. In a T0 space, all points are topologically distinguishable. This condition, called the T0 condition, is the weakest of the separation axioms. Nearly all topological spaces normally studied in mathematics are T0 spaces. In particular, all T1 spaces, i.e., all spaces in which for every pair of distinct points, each has a neighborhood not containing the other, are T0 spaces. This includes all T2 (or Hausdorff) spaces, i.e., all topological spaces in which distinct points have disjoint neighborhoods. In another direction, every sober space (which may not be T1) is T0; this includes the underlying topological space of any scheme. Given any topological space one can construct a T0 space by identifying topologically indistinguishable points. T0 spaces that are not T1 spaces are exactly those spaces for which the specialization preorder is a nontrivial partial order. Such spaces naturally occur in computer science, specifically in denotational semantics. Definition. A T0 space is a topological space in which every pair of distinct points is topologically distinguishable. That is, for any two different points "x" and "y" there is an open set that contains one of these points and not the other. More precisely, the topological space "X" is Kolmogorov or formula_0 if and only if: If formula_1 and formula_2, there exists an open set "O" such that either formula_3 or formula_4. Note that topologically distinguishable points are automatically distinct. On the other hand, if the singleton sets {"x"} and {"y"} are separated, then the points "x" and "y" must be topologically distinguishable. That is, "separated" ⇒ "topologically distinguishable" ⇒ "distinct" The property of being topologically distinguishable is, in general, stronger than being distinct but weaker than being separated. In a T0 space, the second arrow above also reverses; points are distinct if and only if they are distinguishable. This is how the T0 axiom fits in with the rest of the separation axioms. Examples and counter examples. Nearly all topological spaces normally studied in mathematics are T0. In particular, all Hausdorff (T2) spaces, T1 spaces and sober spaces are T0. Operating with T0 spaces. Commonly studied topological spaces are all T0. Indeed, when mathematicians in many fields, notably analysis, naturally run across non-T0 spaces, they usually replace them with T0 spaces, in a manner to be described below. To motivate the ideas involved, consider a well-known example. The space L2(R) is meant to be the space of all measurable functions "f" from the real line R to the complex plane C such that the Lebesgue integral of |"f"("x")|^2 over the entire real line is finite, that is, formula_5. This space should become a normed vector space by defining the norm ||"f"|| to be the square root of that integral. The problem is that this is not really a norm, only a seminorm, because there are functions other than the zero function whose (semi)norms are zero. The standard solution is to define L2(R) to be a set of equivalence classes of functions instead of a set of functions directly. This constructs a quotient space of the original seminormed vector space, and this quotient is a normed vector space. It inherits several convenient properties from the seminormed space; see below. 
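For a finite space, the T0 condition can be checked by brute force. The following Python sketch is purely illustrative (the function name and the two example topologies are chosen only for the demonstration); it tests whether every pair of distinct points is topologically distinguishable.

```python
from itertools import combinations

def is_T0(points, open_sets):
    """Check the T0 axiom for a finite topological space: every pair of
    distinct points must be topologically distinguishable, i.e. some open
    set contains exactly one of the two points."""
    for x, y in combinations(points, 2):
        if not any((x in U) != (y in U) for U in open_sets):
            return False
    return True

points = {"a", "b"}
sierpinski = [set(), {"a"}, {"a", "b"}]   # Sierpinski space: T0 (but not T1)
indiscrete = [set(), {"a", "b"}]          # indiscrete two-point space: not T0
print(is_T0(points, sierpinski))   # True
print(is_T0(points, indiscrete))   # False
```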
In general, when dealing with a fixed topology T on a set "X", it is helpful if that topology is T0. On the other hand, when "X" is fixed but T is allowed to vary within certain boundaries, to force T to be T0 may be inconvenient, since non-T0 topologies are often important special cases. Thus, it can be important to understand both T0 and non-T0 versions of the various conditions that can be placed on a topological space. The Kolmogorov quotient. Topological indistinguishability of points is an equivalence relation. No matter what topological space "X" might be to begin with, the quotient space under this equivalence relation is always T0. This quotient space is called the Kolmogorov quotient of "X", which we will denote KQ("X"). Of course, if "X" was T0 to begin with, then KQ("X") and "X" are naturally homeomorphic. Categorically, Kolmogorov spaces are a reflective subcategory of topological spaces, and the Kolmogorov quotient is the reflector. Topological spaces "X" and "Y" are Kolmogorov equivalent when their Kolmogorov quotients are homeomorphic. Many properties of topological spaces are preserved by this equivalence; that is, if "X" and "Y" are Kolmogorov equivalent, then "X" has such a property if and only if "Y" does. On the other hand, most of the "other" properties of topological spaces "imply" T0-ness; that is, if "X" has such a property, then "X" must be T0. Only a few properties, such as being an indiscrete space, are exceptions to this rule of thumb. Even better, many structures defined on topological spaces can be transferred between "X" and KQ("X"). The result is that, if you have a non-T0 topological space with a certain structure or property, then you can usually form a T0 space with the same structures and properties by taking the Kolmogorov quotient. The example of L2(R) displays these features. From the point of view of topology, the seminormed vector space that we started with has a lot of extra structure; for example, it is a vector space, and it has a seminorm, and these define a pseudometric and a uniform structure that are compatible with the topology. Also, there are several properties of these structures; for example, the seminorm satisfies the parallelogram identity and the uniform structure is complete. The space is not T0 since any two functions in L2(R) that are equal almost everywhere are indistinguishable with this topology. When we form the Kolmogorov quotient, the actual L2(R), these structures and properties are preserved. Thus, L2(R) is also a complete seminormed vector space satisfying the parallelogram identity. But we actually get a bit more, since the space is now T0. A seminorm is a norm if and only if the underlying topology is T0, so L2(R) is actually a complete normed vector space satisfying the parallelogram identity—otherwise known as a Hilbert space. And it is a Hilbert space that mathematicians (and physicists, in quantum mechanics) generally want to study. Note that the notation L2(R) usually denotes the Kolmogorov quotient, the set of equivalence classes of square integrable functions that differ on sets of measure zero, rather than simply the vector space of square integrable functions that the notation suggests. Removing T0. Although norms were historically defined first, people came up with the definition of seminorm as well, which is a sort of non-T0 version of a norm. In general, it is possible to define non-T0 versions of both properties and structures of topological spaces. 
First, consider a property of topological spaces, such as being Hausdorff. One can then define another property of topological spaces by defining the space "X" to satisfy the property if and only if the Kolmogorov quotient KQ("X") is Hausdorff. This is a sensible, albeit less famous, property; in this case, such a space "X" is called "preregular". (There even turns out to be a more direct definition of preregularity.) Now consider a structure that can be placed on topological spaces, such as a metric. We can define a new structure on topological spaces by letting an example of the structure on "X" be simply a metric on KQ("X"). This is a sensible structure on "X"; it is a pseudometric. (Again, there is a more direct definition of pseudometric.) In this way, there is a natural way to remove T0-ness from the requirements for a property or structure. It is generally easier to study spaces that are T0, but it may also be easier to allow structures that aren't T0 to get a fuller picture. The T0 requirement can be added or removed arbitrarily using the concept of Kolmogorov quotient. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathbf T_0" }, { "math_id": 1, "text": "a,b\\in X" }, { "math_id": 2, "text": "a\\neq b" }, { "math_id": 3, "text": "(a\\in O) \\wedge (b\\notin O)" }, { "math_id": 4, "text": "(a\\notin O) \\wedge (b\\in O)" }, { "math_id": 5, "text": "\\left(\\int_{\\mathbb{R}} |f(x)|^2 \\,dx\\right)^{\\frac{1}{2}} < \\infty " } ]
https://en.wikipedia.org/wiki?curid=68121
681230
Linkage disequilibrium
Allele association in population genetics In population genetics, linkage disequilibrium (LD) is a measure of non-random association between segments of DNA (alleles) at different positions on the chromosome (loci) in a given population, based on a comparison between the frequency at which two alleles are detected together at the same loci and the frequencies at which each allele is simply detected (alone or with the second allele) at those same loci. Loci are said to be in linkage disequilibrium when the frequency of being detected together (the frequency of association of their different alleles) is higher or lower than would be expected if the loci were independent and associated randomly. While the pattern of linkage disequilibrium in a genome is a powerful signal of the population genetic processes that are structuring it, it does not by itself indicate why the pattern emerges. Linkage disequilibrium is influenced by many factors, including selection, the rate of genetic recombination, mutation rate, genetic drift, the system of mating, population structure, and genetic linkage. In spite of its name, linkage disequilibrium may exist between alleles at different loci without any genetic linkage between them and independently of whether or not allele frequencies are in equilibrium (not changing with time). Furthermore, linkage disequilibrium is sometimes referred to as gametic phase disequilibrium; however, the concept also applies to asexual organisms and therefore does not depend on the presence of gametes. Formal definition. Suppose that among the gametes that are formed in a sexually reproducing population, allele "A" occurs with frequency formula_0 at one locus (i.e. formula_0 is the proportion of gametes with "A" at that locus), while at a different locus allele "B" occurs with frequency formula_1. Similarly, let formula_2 be the frequency with which both "A" and "B" occur together in the same gamete (i.e. formula_2 is the frequency of the "AB" haplotype). The association between the alleles "A" and "B" can be regarded as completely random—which is known in statistics as "independence"—when the occurrence of one does not affect the occurrence of the other, in which case the probability that both "A" and "B" occur together is given by the product formula_3 of the probabilities. There is said to be a linkage disequilibrium between the two alleles whenever formula_2 differs from formula_4 for any reason. The level of linkage disequilibrium between "A" and "B" can be quantified by the "coefficient of linkage disequilibrium" formula_5, which is defined as formula_6 Linkage disequilibrium corresponds to formula_7. In the case formula_8, we have formula_9 and the alleles "A" and "B" are said to be in "linkage equilibrium". The subscript "AB" on formula_10 emphasizes that linkage disequilibrium is a property of the pair formula_11 of alleles and not of their respective loci. Other pairs of alleles at those same two loci may have different coefficients of linkage disequilibrium. For two biallelic loci, where "a" and "b" are the other alleles at these two loci, the restrictions are so strong that only one value of "D" is sufficient to represent all linkage disequilibrium relationships between these alleles. In this case, formula_12. Their relationships can be characterized as follows. formula_13 formula_14 formula_15 formula_16 The sign of "D" in this case is chosen arbitrarily. 
The magnitude of "D" is more important than the sign of "D" because the magnitude of "D" is representative of the degree of linkage disequilibrium. However, positive "D" value means that the gamete is more frequent than expected while negative means that the combination of these two alleles are less frequent than expected. Linkage disequilibrium in asexual populations can be defined in a similar way in terms of population allele frequencies. Furthermore, it is also possible to define linkage disequilibrium among three or more alleles, however these higher-order associations are not commonly used in practice. Normalization. The linkage disequilibrium formula_17 reflects both changes in the intensity of the linkage correlation and changes in gene frequency. This poses an issue when comparing linkage disequilibrium between alleles with differing frequencies. Normalization of linkage disequilibrium allows these alleles to be compared more easily. D' Method. Lewontin suggested calculating the normalized linkage disequilibrium (also referred to as relative linkage disequilibrium) formula_18 by dividing formula_17 by the theoretical maximum difference between the observed and expected allele frequencies as follows: formula_19 where formula_20 The value of formula_18 will be within the range formula_21. When formula_22, the loci are independent. When formula_23, the alleles are found less often than expected. When formula_24, the alleles are found more often than expected. Note that formula_25 may be used in place of formula_18 when measuring how close two alleles are to linkage equilibrium. r² Method. An alternative to formula_18 is the correlation coefficient between pairs of loci, usually expressed as its square, formula_26. formula_27 The value of formula_26 will be within the range formula_28. When formula_29, there is no correlation between the pair. When formula_30, the correlation is either perfect positive or perfect negative according to the sign of formula_26. d Method. Another alternative normalizes formula_17 by the product of two of the four allele frequencies when the two frequencies represent alleles from the same locus. This allows comparison of asymmetry between a pair of loci. This is often used in case-control studies where formula_31 is the locus containing a disease allele. formula_32 ρ Method. Similar to the d method, this alternative normalizes formula_17 by the product of two of the four allele frequencies when the two frequencies represent alleles from different loci. formula_33 Limits for the ranges of linkage disequilibrium measures. The measures formula_26 and formula_18 have limits to their ranges and do not range over all values of zero to one for all pairs of loci. The maximum of formula_26 depends on the allele frequencies at the two loci being compared and can only range fully from zero to one where either the allele frequencies at both loci are equal, formula_34 where formula_35, or when the allele frequencies have the relationship formula_36 when formula_37. While formula_18 can always take a maximum value of 1, its minimum value for two loci is equal to formula_38 for those loci. Example: Two-loci and two-alleles. Consider the haplotypes for two loci A and B with two alleles each—a two-loci, two-allele model. Then the following table defines the frequencies of each combination: Note that these are relative frequencies. 
One can use the above frequencies to determine the frequency of each of the alleles: the frequency of formula_41 is p1 = x11 + x12, the frequency of A2 is p2 = x21 + x22, the frequency of formula_42 is q1 = x11 + x21, and the frequency of B2 is q2 = x12 + x22. If the two loci and the alleles are independent of each other, then we would expect the frequency of each haplotype to be equal to the product of the frequencies of its corresponding alleles (e.g. formula_43). The deviation of the observed frequency of a haplotype from the expected is a quantity called the linkage disequilibrium and is commonly denoted by a capital "D": formula_44 Thus, if the loci were inherited independently, then formula_43, so formula_45, and there is linkage equilibrium. However, if the observed frequency of haplotype formula_39 were higher than what would be expected based on the individual frequencies of formula_41 and formula_42, then formula_46, so formula_47, and there is positive linkage disequilibrium. Conversely, if the observed frequency were lower, then formula_48, formula_49, and there is negative linkage disequilibrium. The relationship between the haplotype frequencies, the allele frequencies and D can be written as x11 = p1q1 + D, x12 = p1q2 - D, x21 = p2q1 - D, and x22 = p2q2 + D. Additionally, we can normalize our data based on what we are trying to accomplish. For example, if we aim to create an association map in a case-control study, then we may use the d method due to its asymmetry. If we are trying to find the probability that a given haplotype will descend in a population without being recombined by other haplotypes, then it may be better to use the ρ method. But for most scenarios, formula_26 tends to be the most popular method due to the usefulness of the correlation coefficient in statistics. Two examples where formula_26 may be very useful are measuring the recombination rate in an evolving population and detecting disease associations. Role of recombination. In the absence of evolutionary forces other than random mating, Mendelian segregation, random chromosomal assortment, and chromosomal crossover (i.e. in the absence of natural selection, inbreeding, and genetic drift), the linkage disequilibrium measure formula_17 converges to zero along the time axis at a rate depending on the magnitude of the recombination rate formula_52 between the two loci. Using the notation above, formula_53, we can demonstrate this convergence to zero as follows. In the next generation, formula_54, the frequency of the haplotype formula_55, becomes formula_56 This follows because a fraction formula_57 of the haplotypes in the offspring have not recombined, and are thus copies of a random haplotype in their parents. A fraction formula_40 of those are formula_55. A fraction formula_52 have recombined these two loci. If the parents result from random mating, the probability of the copy at locus formula_58 having allele formula_41 is formula_51 and the probability of the copy at locus formula_31 having allele formula_42 is formula_50, and as these copies are initially in the two different gametes that formed the diploid genotype, these are independent events so that the probabilities can be multiplied. This formula can be rewritten as formula_59 so that formula_60 where formula_17 at the formula_61-th generation is designated as formula_62. Thus we have formula_63 If formula_64, then formula_65 so that formula_62 converges to zero. If at some time we observe linkage disequilibrium, it will disappear in the future due to recombination. However, the smaller the distance between the two loci, the smaller will be the rate of convergence of formula_17 to zero. Visualization. 
Once linkage disequilibrium has been calculated for a dataset, a visualization method is often chosen to display the linkage disequilibrium to make it more easily understandable. The most common method is to use a heatmap, where colors are used to indicate which pairs of loci are in linkage disequilibrium and which are in linkage equilibrium. A heatmap may display the full matrix, but because the heatmap is symmetrical across the diagonal (that is, the linkage disequilibrium between loci A and B is the same as between B and A), a triangular heatmap that shows each pair only once is also commonly employed. This method has the advantage of being easy to interpret, but it cannot display information about other variables that may be of interest. More robust visualization options are also available, like the textile plot. In a textile plot, combinations of alleles at one locus can be linked with combinations of alleles at a different locus. Each genotype (combination of alleles) is represented by a circle which has an area proportional to the frequency of that genotype, with a column for each locus. Lines are drawn from each circle to the circles in the other column(s), and the thickness of the connecting line is proportional to the frequency with which the two genotypes occur together. Linkage disequilibrium is seen through the number of line crossings in the diagram, where a greater number of line crossings indicates a low linkage disequilibrium and fewer crossings indicate a high linkage disequilibrium. The advantage of this method is that it shows the individual genotype frequencies and includes a visual difference between absolute (where the alleles at the two loci always appear together) and complete (where alleles at the two loci show a strong connection but with the possibility of recombination) linkage disequilibrium by the shape of the graph. Another visualization option is forests of hierarchical latent class models (FHLCM). All loci are plotted along the top layer of the graph, and below this top layer, boxes representing latent variables are added with links to the top level. Lines connect the loci at the top level to the latent variables below, and the lower the level of the box that the loci are connected to, the greater the linkage disequilibrium and the smaller the distance between the loci. While this method does not have the same advantages as the textile plot, it does allow for the visualization of loci that are far apart without requiring the sequence to be rearranged, as is the case with the textile plot. This is not an exhaustive list of visualization methods, and multiple methods may be used to display a data set in order to give a better picture of the data based on the information that the researcher aims to highlight. Resources. A comparison of different measures of LD is provided by Devlin &amp; Risch. The International HapMap Project enables the study of LD in human populations online. The Ensembl project integrates HapMap data with other genetic information from dbSNP. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
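To make the measures above concrete, the following sketch computes "D", Lewontin's formula_18, formula_26, and the d and ρ normalizations from a set of four haplotype frequencies, and then applies the recombination result formula_63 to show how "D" decays over generations. It is an illustration written for this article rather than code from any of the resources listed above; the function name, the example frequencies, and the recombination fraction are all arbitrary choices.

```python
# Illustrative sketch (not from any published LD package): basic two-locus LD
# measures from the four haplotype frequencies, plus the decay of D under
# recombination at rate c per generation. All numbers are made-up examples.

def ld_measures(x_AB, x_Ab, x_aB, x_ab):
    assert abs(x_AB + x_Ab + x_aB + x_ab - 1.0) < 1e-9, "frequencies must sum to 1"
    p_A = x_AB + x_Ab                     # allele frequency at the first locus
    p_B = x_AB + x_aB                     # allele frequency at the second locus
    D = x_AB - p_A * p_B                  # raw linkage disequilibrium

    # Lewontin's D': divide by the maximum |D| attainable at these allele frequencies
    if D >= 0:
        D_max = min(p_A * (1 - p_B), (1 - p_A) * p_B)
    else:
        D_max = min(p_A * p_B, (1 - p_A) * (1 - p_B))
    D_prime = 0.0 if D_max == 0 else D / D_max

    r2 = D ** 2 / (p_A * (1 - p_A) * p_B * (1 - p_B))   # squared correlation coefficient
    d = D / (p_B * (1 - p_B))             # B treated as the disease locus
    rho = D / ((1 - p_A) * p_B)
    return {"D": D, "D'": D_prime, "r2": r2, "d": d, "rho": rho}

measures = ld_measures(0.30, 0.20, 0.20, 0.30)
print(measures)                            # D = 0.05, D' = 0.2, r2 = 0.04, ...

# Decay of D under recombination with fraction c per generation: D_n = (1 - c)^n * D_0
D0, c = measures["D"], 0.05
for n in (1, 5, 10, 50):
    print(n, (1 - c) ** n * D0)            # approaches zero as n grows
```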
[ { "math_id": 0, "text": " p_A " }, { "math_id": 1, "text": " p_B " }, { "math_id": 2, "text": " p_{AB} " }, { "math_id": 3, "text": " p_{A} p_{B} " }, { "math_id": 4, "text": " p_A p_B " }, { "math_id": 5, "text": " D_{AB}" }, { "math_id": 6, "text": " D_{AB} = p_{AB} - p_A p_B," }, { "math_id": 7, "text": " D_{AB} \\neq 0 " }, { "math_id": 8, "text": "D_{AB}=0" }, { "math_id": 9, "text": " p_{AB} = p_A p_B " }, { "math_id": 10, "text": " D_{AB} " }, { "math_id": 11, "text": "\\{A, B\\}" }, { "math_id": 12, "text": "D_{AB} = -D_{Ab} = -D_{aB} = D_{ab}" }, { "math_id": 13, "text": "D = P_{AB} -P_{A}P_{B}" }, { "math_id": 14, "text": "-D = P_{Ab} -P_{A}P_{b}" }, { "math_id": 15, "text": "-D = P_{aB} -P_{a}P_{B}" }, { "math_id": 16, "text": "D = P_{ab} -P_{a}P_{b}" }, { "math_id": 17, "text": "D" }, { "math_id": 18, "text": "D'" }, { "math_id": 19, "text": "D' = \\frac D {D_\\max}" }, { "math_id": 20, "text": "D_\\max= \\begin{cases}\n\\min\\{p_A p_B,\\,(1-p_A)(1-p_B)\\} & \\text{when } D < 0\\\\\n\\min\\{p_A (1-p_B),\\,p_B(1-p_A)\\} & \\text{when } D > 0\n\\end{cases} " }, { "math_id": 21, "text": "-1\\leq D'\\leq 1" }, { "math_id": 22, "text": "D' = 0" }, { "math_id": 23, "text": "-1 \\leq D' < 0" }, { "math_id": 24, "text": "0 < D' \\leq 1" }, { "math_id": 25, "text": "|D'|" }, { "math_id": 26, "text": "r^2" }, { "math_id": 27, "text": "r^2=\\frac{D^2}{p_A(1-p_A)p_B (1-p_B)}" }, { "math_id": 28, "text": "-1 \\leq r^2 \\leq 1" }, { "math_id": 29, "text": "r^2 = 0" }, { "math_id": 30, "text": "|r^2| = 1" }, { "math_id": 31, "text": "B" }, { "math_id": 32, "text": "d =\\frac{D}{p_B (1-p_B)}" }, { "math_id": 33, "text": "\\rho =\\frac{D}{(1-p_A) p_B}" }, { "math_id": 34, "text": "P_A=P_B" }, { "math_id": 35, "text": "D>0" }, { "math_id": 36, "text": "P_A=1-P_B" }, { "math_id": 37, "text": "D<0" }, { "math_id": 38, "text": "|r|" }, { "math_id": 39, "text": "A_1B_1" }, { "math_id": 40, "text": "x_{11}" }, { "math_id": 41, "text": "A_1" }, { "math_id": 42, "text": "B_1" }, { "math_id": 43, "text": "x_{11} = p_1 q_1" }, { "math_id": 44, "text": "D = x_{11} - p_1q_1" }, { "math_id": 45, "text": "D = 0" }, { "math_id": 46, "text": "x_{11} > p_1 q_1" }, { "math_id": 47, "text": "D > 0" }, { "math_id": 48, "text": "x_{11} < p_1 q_1" }, { "math_id": 49, "text": "D < 0" }, { "math_id": 50, "text": "q_1" }, { "math_id": 51, "text": "p_1" }, { "math_id": 52, "text": "c" }, { "math_id": 53, "text": "D= x_{11}-p_1 q_1" }, { "math_id": 54, "text": "x_{11}'" }, { "math_id": 55, "text": "A_1 B_1" }, { "math_id": 56, "text": "x_{11}' = (1-c)\\,x_{11} + c\\,p_1 q_1" }, { "math_id": 57, "text": "(1-c)" }, { "math_id": 58, "text": "A" }, { "math_id": 59, "text": "x_{11}' - p_1 q_1 = (1-c)\\,(x_{11} - p_1 q_1)" }, { "math_id": 60, "text": "D_1 = (1-c)\\;D_0" }, { "math_id": 61, "text": "n" }, { "math_id": 62, "text": "D_n" }, { "math_id": 63, "text": "D_n = (1-c)^n\\; D_0." }, { "math_id": 64, "text": "n \\to \\infty" }, { "math_id": 65, "text": "(1-c)^n \\to 0" } ]
https://en.wikipedia.org/wiki?curid=681230
68123985
Modified half-normal distribution
Probability distribution In probability theory and statistics, the modified half-normal distribution (MHN) is a three-parameter family of continuous probability distributions supported on the positive part of the real line. It can be viewed as a generalization of multiple families, including the half-normal distribution, truncated normal distribution, gamma distribution, and square root of the gamma distribution, all of which are special cases of the MHN distribution. Therefore, it is a flexible probability model for analyzing real-valued positive data. The name of the distribution is motivated by the similarities of its density function with that of the half-normal distribution. In addition to being used as a probability model, the MHN distribution also appears in Markov chain Monte Carlo (MCMC)-based Bayesian procedures, including Bayesian modeling of directional data, Bayesian binary regression, and Bayesian graphical modeling. In Bayesian analysis, new distributions often appear as conditional posterior distributions; the usage of many such probability distributions is too contextual, and they may not carry significance in a broader perspective. Additionally, many such distributions lack a tractable representation of their distributional aspects, such as a known functional form of the normalizing constant. However, the MHN distribution occurs in diverse areas of research, signifying its relevance to contemporary Bayesian statistical modeling and the associated computation. The moments (including variance and skewness) of the MHN distribution can be represented via the Fox–Wright Psi functions. There exists a recursive relation between any three consecutive moments of the distribution; this is helpful in developing an efficient approximation for the mean of the distribution, as well as in constructing a moment-based estimation of its parameters. Definitions. The probability density function of the modified half-normal distribution is formula_0 where formula_1 denotes the Fox–Wright Psi function. The connection between the normalizing constant of the distribution and the Fox–Wright function is provided in Sun, Kong, and Pal. The cumulative distribution function (CDF) is formula_2 where formula_3 denotes the lower incomplete gamma function. Properties. The modified half-normal distribution is an exponential family of distributions, and thus inherits the properties of exponential families. Moments. Let formula_4. Choose a real value formula_5 such that formula_6. Then the "formula_7"th moment is formula_8 Additionally, formula_9 The variance of the distribution is formula_10 The moment generating function of the MHN distribution is given as formula_11 Modal characterization. Consider formula_12 with formula_13, formula_14, and formula_15. Additional properties involving mode and expected values. Let formula_26 for formula_27, formula_28, and formula_29, and let the mode of the distribution be denoted by formula_30 If formula_17, then formula_31 for all formula_32. As formula_33 gets larger, the difference between the upper and lower bounds approaches zero. Therefore, this also provides a high-precision approximation of formula_34 when formula_33 is large. On the other hand, if formula_19 and formula_35, then formula_36 For all formula_13, formula_28, and formula_32, formula_37. The condition formula_35 is a sufficient condition for its validity. The fact that formula_38 implies the distribution is positively skewed. Mixture representation. Let formula_39. 
If formula_19, then there exists a random variable formula_40 such that formula_41 and formula_42. On the contrary, if formula_43 then there exists a random variable formula_44 such that formula_45 and formula_46, where formula_47 denotes the generalized inverse Gaussian distribution. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
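As a numerical illustration of the moment recursion and the mixture representation above (and not code from the cited papers), the sketch below checks the recursion by direct quadrature of the unnormalized density and then draws approximate formula_12 samples for the formula_19 case by alternating the two conditional distributions. It assumes that the second parameter of the gamma distribution in the representation is a rate, and that numpy and scipy are available; the parameter values are arbitrary.

```python
# Sanity check of E[X^(k+2)] = ((alpha+k) E[X^k] + gamma E[X^(k+1)]) / (2*beta)
# by quadrature, plus a Gibbs-style sampler built from the mixture representation.
import numpy as np
from scipy.integrate import quad

alpha, beta, gamma_ = 3.0, 1.5, 0.7

def unnormalized_pdf(x):
    return x ** (alpha - 1) * np.exp(-beta * x ** 2 + gamma_ * x)

Z, _ = quad(unnormalized_pdf, 0.0, np.inf)                  # normalizing constant

def moment(k):
    val, _ = quad(lambda x: x ** k * unnormalized_pdf(x), 0.0, np.inf)
    return val / Z

for k in range(3):
    lhs = moment(k + 2)
    rhs = ((alpha + k) * moment(k) + gamma_ * moment(k + 1)) / (2.0 * beta)
    print(f"k = {k}: {lhs:.6f} vs {rhs:.6f}")                # agree to quadrature accuracy

# Gibbs-style sampler from the mixture representation (gamma > 0 case)
rng = np.random.default_rng(1)
x, draws = 1.0, []
for i in range(30000):
    v = rng.poisson(gamma_ * x)                              # V | X ~ Poisson(gamma * X)
    x = np.sqrt(rng.gamma((alpha + v) / 2.0, 1.0 / beta))    # X^2 | V ~ Gamma((alpha+V)/2, rate beta)
    if i >= 3000:                                            # discard burn-in draws
        draws.append(x)
print(np.mean(draws), moment(1))                             # sample mean vs quadrature mean
```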
[ { "math_id": 0, "text": " f(x)= \\frac{2\\beta^{\\alpha/2} x^{\\alpha-1} \\exp(-\\beta x^2+ \\gamma x )}{\\Psi\\left(\\frac \\alpha 2, \\frac \\gamma {\\sqrt\\beta}\\right)} \\text{ for } x>0 " }, { "math_id": 1, "text": " \\Psi\\left(\\frac \\alpha 2, \\frac \\gamma {\\sqrt\\beta}\\right) = {}_1 \\Psi_1 \\left[\\begin{matrix}(\\frac \\alpha 2 ,\\frac 1 2) \\\\ (1,0) \\end{matrix}; \\frac \\gamma {\\sqrt\\beta} \\right]" }, { "math_id": 2, "text": " F_{_{\\text{MHN}}}(x\\mid \\alpha, \\beta, \\gamma)= \\frac{2\\beta^{\\alpha/2}}{\\Psi \\left(\\frac \\alpha 2, \\frac \\gamma {\\sqrt\\beta} \\right)} \\sum_{i=0}^\\infty \\frac{\\gamma^i}{2 i!} \\beta^{-(\\alpha+i)/2} \\gamma \\left(\\frac{\\alpha +i}{2}, \\beta x^2\\right) \\text{ for } x\\ge0," }, { "math_id": 3, "text": "\\gamma (s,y)=\\int_0^y t^{s-1} e^{-t} \\, dt" }, { "math_id": 4, "text": "X\\sim \\text{MHN}(\\alpha, \\beta, \\gamma)" }, { "math_id": 5, "text": "k\\geq 0" }, { "math_id": 6, "text": "\\alpha+k>0" }, { "math_id": 7, "text": "k" }, { "math_id": 8, "text": " E(X^k)= \\frac{\\Psi\\left(\\frac{\\alpha+k}2, \\frac \\gamma {\\sqrt\\beta}\\right) }{ \\beta^{k/2} \\Psi\\left(\\frac\\alpha2, \\frac \\gamma{\\sqrt\\beta}\\right)}." }, { "math_id": 9, "text": " E(X^{k+2})\n=\\frac{\\alpha+k}{2\\beta} E(X^k) +\\frac \\gamma {2 \\beta} E(X^{k+1})." }, { "math_id": 10, "text": "\\operatorname{Var}(X)= \\frac \\alpha{2\\beta} + E(X) \\left( \\frac\\gamma{2\\beta}-E(X)\\right). " }, { "math_id": 11, "text": " M_{X}(t) = \\frac{\\Psi\\left(\\frac{\\alpha}{2}, \\frac{ \\gamma+t}{\\sqrt{\\beta}}\\right) }{ \\left(\\frac{\\alpha}{2}, \\frac{ \\gamma}{\\sqrt{\\beta}}\\right)}." }, { "math_id": 12, "text": "\\text{MHN} (\\alpha, \\beta, \\gamma)" }, { "math_id": 13, "text": "\\alpha>0" }, { "math_id": 14, "text": "\\beta >0" }, { "math_id": 15, "text": "\\gamma \\in \\mathbb{R}" }, { "math_id": 16, "text": "\\alpha\\geq 1" }, { "math_id": 17, "text": "\\alpha>1" }, { "math_id": 18, "text": "\\frac{\\gamma + \\sqrt{\\gamma^2+8\\beta(\\alpha-1)}}{4 \\beta} ." }, { "math_id": 19, "text": "\\gamma>0" }, { "math_id": 20, "text": "1- \\frac{\\gamma^2}{8 \\beta} \\leq \\alpha < 1" }, { "math_id": 21, "text": "\\frac{\\gamma + \\sqrt{\\gamma^2+8\\beta(\\alpha-1)}}{4 \\beta}" }, { "math_id": 22, "text": "\\frac{\\gamma - \\sqrt{\\gamma^2+8\\beta(\\alpha-1)}}{4 \\beta}." }, { "math_id": 23, "text": "\\mathbb{R}_{+}" }, { "math_id": 24, "text": "0 <\\alpha <1-\\frac{\\gamma^2}{8 \\beta}" }, { "math_id": 25, "text": "\\gamma<0, \\alpha\\leq 1" }, { "math_id": 26, "text": "X\\sim \\text{MHN}(\\alpha,\\beta,\\gamma)" }, { "math_id": 27, "text": "\\alpha \\geq 1" }, { "math_id": 28, "text": "\\beta>0" }, { "math_id": 29, "text": "\\gamma\\in \\R{}" }, { "math_id": 30, "text": "X_{\\text{mode}}=\\frac{\\gamma+\\sqrt{\\gamma^2+8\\beta(\\alpha-1)}}{4 \\beta}." }, { "math_id": 31, "text": "X_{\\text{mode}} \\leq E(X)\\leq \\frac{\\gamma+\\sqrt{\\gamma^2+8 \\alpha\\beta}}{4 \\beta}" }, { "math_id": 32, "text": "\\gamma\\in \\mathbb{R}" }, { "math_id": 33, "text": "\\alpha" }, { "math_id": 34, "text": "E(X)" }, { "math_id": 35, "text": "\\alpha\\geq 4" }, { "math_id": 36, "text": "\\log(X_{\\text{mode}}) \\leq E(\\log(X))\\leq \\log\\left( \\frac{\\gamma+\\sqrt{\\gamma^2+8 \\alpha\\beta}}{4 \\beta} \\right) . 
" }, { "math_id": 37, "text": "\\text{Var}(X)\\leq \\frac{1}{2\\beta}" }, { "math_id": 38, "text": "X_{\\text{mode}}\\leq E(X)" }, { "math_id": 39, "text": "X\\sim \\operatorname{MHN}(\\alpha, \\beta, \\gamma)" }, { "math_id": 40, "text": "V" }, { "math_id": 41, "text": "V\\mid X\\sim \\operatorname{Poisson}(\\gamma X)" }, { "math_id": 42, "text": "X^2\\mid V \\sim \\operatorname{Gamma} \\left( \\frac{\\alpha+V}2, \\beta \\right)" }, { "math_id": 43, "text": "\\gamma<0" }, { "math_id": 44, "text": "U" }, { "math_id": 45, "text": "U\\mid X\\sim \\text{GIG}\\left(\\frac 1 2, 1, \\gamma^2 X^2\\right)" }, { "math_id": 46, "text": "X^2\\mid U \\sim \\text{Gamma}\\left(\\frac \\alpha 2, \\left( \\beta + \\frac{\\gamma^2} U \\right) \\right)" }, { "math_id": 47, "text": "\\text{GIG} " } ]
https://en.wikipedia.org/wiki?curid=68123985
681241
Creep (deformation)
Tendency of a solid material to move slowly or deform permanently under mechanical stress In materials science, creep (sometimes called cold flow) is the tendency of a solid material to undergo slow deformation while subject to persistent mechanical stresses. It can occur as a result of long-term exposure to high levels of stress that are still below the yield strength of the material. Creep is more severe in materials that are subjected to heat for long periods and generally increases as they near their melting point. The rate of deformation is a function of the material's properties, exposure time, exposure temperature and the applied structural load. Depending on the magnitude of the applied stress and its duration, the deformation may become so large that a component can no longer perform its function – for example creep of a turbine blade could cause the blade to contact the casing, resulting in the failure of the blade. Creep is usually of concern to engineers and metallurgists when evaluating components that operate under high stresses or high temperatures. Creep is a deformation mechanism that may or may not constitute a failure mode. For example, moderate creep in concrete is sometimes welcomed because it relieves tensile stresses that might otherwise lead to cracking. Unlike brittle fracture, creep deformation does not occur suddenly upon the application of stress. Instead, strain accumulates as a result of long-term stress. Therefore, creep is a "time-dependent" deformation. Creep or cold flow is of great concern in plastics. Blocking agents are chemicals used to prevent or inhibit cold flow. Otherwise rolled or stacked sheets stick together. Temperature dependence. The temperature range in which creep deformation occurs depends on the material. Creep deformation generally occurs when a material is stressed at a temperature near its melting point. While tungsten requires a temperature in the thousands of degrees before the onset of creep deformation, lead may creep at room temperature, and ice will creep at temperatures below . Plastics and low-melting-temperature metals, including many solders, can begin to creep at room temperature. Glacier flow is an example of creep processes in ice. The effects of creep deformation generally become noticeable at approximately 35% of the melting point (in Kelvin) for metals and at 45% of melting point for ceramics. Theoretical framework. Creep behavior can be split into three main stages. In primary, or transient, creep, the strain rate is a function of time. In Class M materials, which include most pure materials, primary strain rate decreases over time. This can be due to increasing dislocation density, or it can be due to evolving grain size. In class A materials, which have large amounts of solid solution hardening, strain rate increases over time due to a thinning of solute drag atoms as dislocations move. In the secondary, or steady-state, creep, dislocation structure and grain size have reached equilibrium, and therefore strain rate is constant. Equations that yield a strain rate refer to the steady-state strain rate. Stress dependence of this rate depends on the creep mechanism. In tertiary creep, the strain rate exponentially increases with stress. This can be due to necking phenomena, internal cracks, or voids, which all decrease the cross-sectional area and increase the true stress on the region, further accelerating deformation and leading to fracture. Mechanisms of deformation. 
Depending on the temperature and stress, different deformation mechanisms are activated. Though there are generally many deformation mechanisms active at all times, usually one mechanism is dominant, accounting for almost all deformation. Various mechanisms are: At low temperatures and low stress, creep is essentially nonexistent and all strain is elastic. At low temperatures and high stress, materials experience plastic deformation rather than creep. At high temperatures and low stress, diffusional creep tends to be dominant, while at high temperatures and high stress, dislocation creep tends to be dominant. Deformation mechanism maps. Deformation mechanism maps provide a visual tool categorizing the dominant deformation mechanism as a function of homologous temperature, shear modulus-normalized stress, and strain rate. Generally, two of these three properties (most commonly temperature and stress) are the axes of the map, while the third is drawn as contours on the map. To populate the map, constitutive equations are found for each deformation mechanism. These are used to solve for the boundaries between each deformation mechanism, as well as the strain rate contours. Deformation mechanism maps can be used to compare different strengthening mechanisms, as well as compare different types of materials. formula_0 where "ε" is the creep strain, "C" is a constant dependent on the material and the particular creep mechanism, "m" and "b" are exponents dependent on the creep mechanism, "Q" is the activation energy of the creep mechanism, "σ" is the applied stress, "d" is the grain size of the material, "k" is Boltzmann's constant, and "T" is the absolute temperature. Dislocation creep. At high stresses (relative to the shear modulus), creep is controlled by the movement of dislocations. For dislocation creep, "Q" = "Q"(self diffusion), 4 ≤ "m" ≤ 6, and "b" &lt; 1. Therefore, dislocation creep has a strong dependence on the applied stress and the intrinsic activation energy and a weaker dependence on grain size. As grain size gets smaller, grain boundary area gets larger, so dislocation motion is impeded. Some alloys exhibit a very large stress exponent ("m" &gt; 10), and this has typically been explained by introducing a "threshold stress," "σ"th, below which creep can't be measured. The modified power law equation then becomes: formula_1 where "A", "Q" and "m" can all be explained by conventional mechanisms (so 3 ≤ "m" ≤ 10), and "R" is the gas constant. The creep increases with increasing applied stress, since the applied stress tends to drive the dislocation past the barrier, and make the dislocation get into a lower energy state after bypassing the obstacle, which means that the dislocation is inclined to pass the obstacle. In other words, part of the work required to overcome the energy barrier of passing an obstacle is provided by the applied stress and the remainder by thermal energy. Nabarro–Herring creep. Nabarro–Herring (NH) creep is a form of diffusion creep, while dislocation glide creep does not involve atomic diffusion. Nabarro–Herring creep dominates at high temperatures and low stresses. As shown in the figure on the right, the lateral sides of the crystal are subjected to tensile stress and the horizontal sides to compressive stress. The atomic volume is altered by applied stress: it increases in regions under tension and decreases in regions under compression. 
So the activation energy for vacancy formation is changed by ±"σΩ", where "Ω" is the atomic volume, the positive value is for compressive regions and negative value is for tensile regions. Since the fractional vacancy concentration is proportional to exp(−), where "Q"f is the vacancy-formation energy, the vacancy concentration is higher in tensile regions than in compressive regions, leading to a net flow of vacancies from the regions under tension to the regions under compression, and this is equivalent to a net atom diffusion in the opposite direction, which causes the creep deformation: the grain elongates in the tensile stress axis and contracts in the compressive stress axis. In Nabarro–Herring creep, "k" is related to the diffusion coefficient of atoms through the lattice, "Q" = "Q"(self diffusion), "m" = 1, and "b" = 2. Therefore, Nabarro–Herring creep has a weak stress dependence and a moderate grain size dependence, with the creep rate decreasing as the grain size is increased. Nabarro–Herring creep is strongly temperature dependent. For lattice diffusion of atoms to occur in a material, neighboring lattice sites or interstitial sites in the crystal structure must be free. A given atom must also overcome the energy barrier to move from its current site (it lies in an energetically favorable potential well) to the nearby vacant site (another potential well). The general form of the diffusion equation is formula_2 where "D"0 has a dependence on both the attempted jump frequency and the number of nearest neighbor sites and the probability of the sites being vacant. Thus there is a double dependence upon temperature. At higher temperatures the diffusivity increases due to the direct temperature dependence of the equation, the increase in vacancies through Schottky defect formation, and an increase in the average energy of atoms in the material. Nabarro–Herring creep dominates at very high temperatures relative to a material's melting temperature. Coble creep. Coble creep is the second form of diffusion-controlled creep. In Coble creep the atoms diffuse along grain boundaries to elongate the grains along the stress axis. This causes Coble creep to have a stronger grain size dependence than Nabarro–Herring creep, thus, Coble creep will be more important in materials composed of very fine grains. For Coble creep "k" is related to the diffusion coefficient of atoms along the grain boundary, "Q" = "Q"(grain boundary diffusion), "m" = 1, and "b" = 3. Because "Q"(grain boundary diffusion) is less than "Q"(self diffusion), Coble creep occurs at lower temperatures than Nabarro–Herring creep. Coble creep is still temperature dependent, as the temperature increases so does the grain boundary diffusion. However, since the number of nearest neighbors is effectively limited along the interface of the grains, and thermal generation of vacancies along the boundaries is less prevalent, the temperature dependence is not as strong as in Nabarro–Herring creep. It also exhibits the same linear dependence on stress as Nabarro–Herring creep. Generally, the diffusional creep rate should be the sum of Nabarro–Herring creep rate and Coble creep rate. Diffusional creep leads to grain-boundary separation, that is, voids or cracks form between the grains. To heal this, grain-boundary sliding occurs. The diffusional creep rate and the grain boundary sliding rate must be balanced if there are no voids or cracks remaining. 
When grain-boundary sliding can not accommodate the incompatibility, grain-boundary voids are generated, which is related to the initiation of creep fracture. Solute drag creep. Solute drag creep is one of the mechanisms for power-law creep (PLC), involving both dislocation and diffusional flow. Solute drag creep is observed in certain metallic alloys. In these alloys, the creep rate increases during the first stage of creep (Transient creep) before reaching a steady-state value. This phenomenon can be explained by a model associated with solid-solution strengthening. At low temperatures, the solute atoms are immobile and increase the flow stress required to move dislocations. However, at higher temperatures, the solute atoms are more mobile and may form atmospheres and clouds surrounding the dislocations. This is especially likely if the solute atom has a large misfit in the matrix. The solutes are attracted by the dislocation stress fields and are able to relieve the elastic stress fields of existing dislocations. Thus the solutes become bound to the dislocations. The concentration of solute, "C", at a distance, "r", from a dislocation is given by the Cottrell atmosphere defined as formula_3 where "C"0 is the concentration at "r" = ∞ and "β" is a constant which defines the extent of segregation of the solute. When surrounded by a solute atmosphere, dislocations that attempt to glide under an applied stress are subjected to a back stress exerted on them by the cloud of solute atoms. If the applied stress is sufficiently high, the dislocation may eventually break away from the atmosphere, allowing the dislocation to continue gliding under the action of the applied stress. The maximum force (per unit length) that the atmosphere of solute atoms can exert on the dislocation is given by Cottrell and Jaswon formula_4 When the diffusion of solute atoms is activated at higher temperatures, the solute atoms which are "bound" to the dislocations by the misfit can move along with edge dislocations as a "drag" on their motion if the dislocation motion or the creep rate is not too high. The amount of "drag" exerted by the solute atoms on the dislocation is related to the diffusivity of the solute atoms in the metal at that temperature, with a higher diffusivity leading to lower drag and vice versa. The velocity at which the dislocations glide can be approximated by a power law of the form formula_5 where "m" is the effective stress exponent, "Q" is the apparent activation energy for glide and "B"0 is a constant. The parameter "B" in the above equation was derived by Cottrell and Jaswon for interaction between solute atoms and dislocations on the basis of the relative atomic size misfit "ε"a of solutes to be formula_6 where "k" is Boltzmann's constant, and "r"1 and "r"2 are the internal and external cut-off radii of dislocation stress field. "c"0 and "D"sol are the atomic concentration of the solute and solute diffusivity respectively. "D"sol also has a temperature dependence that makes a determining contribution to "Q"g. If the cloud of solutes does not form or the dislocations are able to break away from their clouds, glide occurs in a jerky manner where fixed obstacles, formed by dislocations in combination with solutes, are overcome after a certain waiting time with support by thermal activation. The exponent "m" is greater than 1 in this case. 
The equations show that the hardening effect of solutes is strong if the factor "B" in the power-law equation is low so that the dislocations move slowly and the diffusivity "D"sol is low. Also, solute atoms with both a high concentration in the matrix and a strong interaction with dislocations are strong hardeners. Since the misfit strain of solute atoms is one of the ways they interact with dislocations, it follows that solute atoms with large atomic misfit are strong hardeners. A low diffusivity "D"sol is an additional condition for strong hardening. Solute drag creep sometimes shows a special phenomenon, over a limited range of strain rates, which is called the Portevin–Le Chatelier effect. When the applied stress becomes sufficiently large, the dislocations will break away from the solute atoms since dislocation velocity increases with the stress. After breakaway, the stress decreases and the dislocation velocity also decreases, which allows the solute atoms to approach and reach the previously departed dislocations again, leading to a stress increase. The process repeats itself when the next local stress maximum is obtained. So repetitive local stress maxima and minima can be detected during solute drag creep. Dislocation climb-glide creep. Dislocation climb-glide creep is observed in materials at high temperature. The initial creep rate is larger than the steady-state creep rate. Climb-glide creep can be illustrated as follows: when the applied stress is not enough for a moving dislocation to overcome the obstacle on its way via dislocation glide alone, the dislocation can climb to a parallel slip plane by diffusional processes and then glide on the new plane. This process repeats itself each time the dislocation encounters an obstacle. The creep rate can be written as: formula_7 where "A"CG includes details of the dislocation loop geometry, "D"L is the lattice diffusivity, "M" is the number of dislocation sources per unit volume, "σ" is the applied stress, and "Ω" is the atomic volume. The exponent "m" for dislocation climb-glide creep is 4.5 if "M" is independent of stress, and this value of "m" is consistent with results from a considerable number of experimental studies. Harper–Dorn creep. Harper–Dorn creep is a climb-controlled dislocation mechanism at low stresses that has been observed in aluminum, lead, and tin systems, in addition to nonmetal systems such as ceramics and ice. It was first observed by Harper and Dorn in 1957. It is characterized by two principal phenomena: a power-law relationship between the steady-state strain rate and applied stress at a constant temperature that is weaker than the natural power law of creep, and an independent relationship between the steady-state strain rate and grain size for a given temperature and applied stress. The latter observation implies that Harper–Dorn creep is controlled by dislocation movement; namely, since creep can occur by vacancy diffusion (Nabarro–Herring creep, Coble creep), grain boundary sliding, and/or dislocation movement, and since the first two mechanisms are grain-size dependent, Harper–Dorn creep must therefore be dislocation-motion dependent. The same was also confirmed in 1972 by Barrett and co-workers, who found that FeAl3 precipitates lowered the creep rates by two orders of magnitude compared to highly pure Al, indicating Harper–Dorn creep to be a dislocation-based mechanism. 
Harper–Dorn creep is typically overwhelmed by other creep mechanisms in most situations, and is therefore not observed in most systems. The phenomenological equation which describes Harper–Dorn creep is formula_8 where "ρ"0 is dislocation density (constant for Harper–Dorn creep), "D"v is the diffusivity through the volume of the material, "G" is the shear modulus and "b" is the Burgers vector, "σ"s, and "n" is the stress exponent which varies between 1 and 3. Later investigation of the creep region. Twenty-five years after Harper and Dorn published their work, Mohamed and Ginter made an important contribution in 1982 by evaluating the potential for achieving Harper–Dorn creep in samples of Al using different processing procedures. The experiments showed that Harper–Dorn creep is achieved with stress exponent "n" = 1, and only when the internal dislocation density prior to testing is exceptionally low. By contrast, Harper–Dorn creep was not observed in polycrystalline Al and single crystal Al when the initial dislocation density was high. However, various conflicting reports demonstrate the uncertainties at very low stress levels. One report by Blum and Maier, claimed that the experimental evidence for Harper–Dorn creep is not fully convincing. They argued that the necessary condition for Harper–Dorn creep is not fulfilled in Al with 99.99% purity and the steady-state stress exponent "n" of the creep rate is always much larger than 1. The subsequent work conducted by Ginter et al. confirmed that Harper–Dorn creep was attained in Al with 99.9995% purity but not in Al with 99.99% purity and, in addition, the creep curves obtained in the very high purity material exhibited regular and periodic accelerations. They also found that the creep behavior no longer follows a stress exponent of "n" = 1 when the tests are extended to very high strains of &gt;0.1 but instead there is evidence for a stress exponent of "n" &gt; 2. Sintering. At high temperatures, it is energetically favorable for voids to shrink in a material. The application of tensile stress opposes the reduction in energy gained by void shrinkage. Thus, a certain magnitude of applied tensile stress is required to offset these shrinkage effects and cause void growth and creep fracture in materials at high temperature. This stress occurs at the "sintering limit" of the system. The stress tending to shrink voids that must be overcome is related to the surface energy and surface area-volume ratio of the voids. For a general void with surface energy "γ" and principle radii of curvature of "r"1 and "r"2, the sintering limit stress is formula_9 Below this critical stress, voids will tend to shrink rather than grow. Additional void shrinkage effects will also result from the application of a compressive stress. For typical descriptions of creep, it is assumed that the applied tensile stress exceeds the sintering limit. Creep also explains one of several contributions to densification during metal powder sintering by hot pressing. A main aspect of densification is the shape change of the powder particles. Since this change involves permanent deformation of crystalline solids, it can be considered a plastic deformation process and thus sintering can be described as a high temperature creep process. The applied compressive stress during pressing accelerates void shrinkage rates and allows a relation between the steady-state creep power law and densification rate of the material. 
This phenomenon is observed to be one of the main densification mechanisms in the final stages of sintering, during which the densification rate (assuming gas-free pores) can be explained by: formula_10 in which "ρ̇" is the densification rate, "ρ" is the density, "P"e is the pressure applied, "n" describes the exponent of strain rate behavior, and "A" is a mechanism-dependent constant. "A" and "n" are from the following form of the general steady-state creep equation, formula_11 where "ε̇" is the strain rate, and "σ" is the tensile stress. For the purposes of this mechanism, the constant "A" comes from the following expression, where "A"′ is a dimensionless, experimental constant, "μ" is the shear modulus, "b" is the Burgers vector, "k" is Boltzmann's constant, "T" is absolute temperature, "D"0 is the diffusion coefficient, and "Q" is the diffusion activation energy: formula_12 Examples. Polymers. Creep can occur in polymers and metals which are considered viscoelastic materials. When a polymeric material is subjected to an abrupt force, the response can be modeled using the Kelvin–Voigt model. In this model, the material is represented by a Hookean spring and a Newtonian dashpot in parallel. The creep strain is given by the following convolution integral: formula_13 where "σ" is applied stress, "C"0 is instantaneous creep compliance, "C" is creep compliance coefficient, "τ" is retardation time, and "f"("τ") is the distribution of retardation times. When subjected to a step constant stress, viscoelastic materials experience a time-dependent increase in strain. This phenomenon is known as viscoelastic creep. At a time "t"0, a viscoelastic material is loaded with a constant stress that is maintained for a sufficiently long time period. The material responds to the stress with a strain that increases until the material ultimately fails. When the stress is maintained for a shorter time period, the material undergoes an initial strain until a time "t"1 at which the stress is relieved, at which time the strain immediately decreases (discontinuity) then continues decreasing gradually to a residual strain. Viscoelastic creep data can be presented in one of two ways. Total strain can be plotted as a function of time for a given temperature or temperatures. Below a critical value of applied stress, a material may exhibit linear viscoelasticity. Above this critical stress, the creep rate grows disproportionately faster. The second way of graphically presenting viscoelastic creep in a material is by plotting the creep modulus (constant applied stress divided by total strain at a particular time) as a function of time. Below its critical stress, the viscoelastic creep modulus is independent of the stress applied. A family of curves describing strain versus time response to various applied stress may be represented by a single viscoelastic creep modulus versus time curve if the applied stresses are below the material's critical stress value. Additionally, the molecular weight of the polymer of interest is known to affect its creep behavior. The effect of increasing molecular weight tends to promote secondary bonding between polymer chains and thus make the polymer more creep resistant. Similarly, aromatic polymers are even more creep resistant due to the added stiffness from the rings. Both molecular weight and aromatic rings add to polymers' thermal stability, increasing the creep resistance of a polymer. Both polymers and metals can creep. 
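The convolution integral above simplifies considerably when there is only a single retardation time. The sketch below, which uses arbitrary illustrative numbers rather than data for a real polymer, evaluates the resulting step-stress response of one Kelvin–Voigt element (a spring of modulus "E" in parallel with a dashpot of viscosity "η", with retardation time "τ" = "η"/"E"), for which the creep strain is ε(t) = (σ/E)(1 − exp(−t/τ)).

```python
# Minimal sketch of viscoelastic creep under a constant step stress for a single
# Kelvin-Voigt element. This is the one-retardation-time special case of the
# convolution integral quoted above; all numbers are placeholders.
import numpy as np

sigma = 10.0e6        # applied step stress, Pa
E = 2.0e9             # spring modulus, Pa
eta = 4.0e10          # dashpot viscosity, Pa*s
tau = eta / E         # retardation time, s

t = np.linspace(0.0, 5 * tau, 6)
strain = (sigma / E) * (1.0 - np.exp(-t / tau))
for ti, eps in zip(t, strain):
    print(f"t = {ti:8.1f} s   creep strain = {eps:.5f}")
# The strain rises from zero toward the equilibrium value sigma/E = 0.005,
# i.e. the creep compliance approaches 1/E at long times.
```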
Polymers experience significant creep at temperatures above around ; however, there are three main differences between polymeric and metallic creep. In metals, creep is not linearly viscoelastic, is not recoverable, and is only present at high temperatures. Polymers show creep basically in two different ways. At typical work loads (5% up to 50%), ultra-high-molecular-weight polyethylene (Spectra, Dyneema) will show time-linear creep, whereas polyester or aramids (Twaron, Kevlar) will show time-logarithmic creep. Wood. Wood is considered an orthotropic material, exhibiting different mechanical properties in three mutually perpendicular directions. Experiments show that the tangential direction in solid wood tends to display a slightly higher creep compliance than the radial direction. In the longitudinal direction, the creep compliance is relatively low and usually does not show any time dependency in comparison to the other directions. It has also been shown that there is a substantial difference in the viscoelastic properties of wood depending on loading modality (creep in compression or tension). Studies have shown that certain Poisson's ratios gradually go from positive to negative values over the duration of the compression creep test, which does not occur in tension. Concrete. The creep of concrete, which originates from the calcium silicate hydrates (C-S-H) in the hardened Portland cement paste (which is the binder of mineral aggregates), is fundamentally different from the creep of metals as well as polymers. Unlike the creep of metals, it occurs at all stress levels and, within the service stress range, is linearly dependent on the stress if the pore water content is constant. Unlike the creep of polymers and metals, it exhibits multi-month aging, caused by chemical hardening due to hydration which stiffens the microstructure, and multi-year aging, caused by long-term relaxation of self-equilibrated microstresses in the nanoporous microstructure of the C-S-H. If concrete is fully cured, creep effectively ceases. Metals. Creep in metals primarily manifests as movement in their microstructures. While polymers and metals share some similarities in creep, the behavior of creep in metals displays a different mechanical response and must be modeled differently. For example, in polymers creep can be modeled using the Kelvin–Voigt model with a Hookean spring and a Newtonian dashpot, but in metals creep is represented by plastic deformation mechanisms such as dislocation glide, climb, and grain boundary sliding. Understanding the mechanisms behind creep in metals is becoming increasingly important for reliability and material lifetime as the operating temperatures for applications involving metals rise. Unlike polymers, in which creep deformation can occur at very low temperatures, creep in metals typically occurs at high temperatures. Key examples are components made of intermetallics or refractory metals that are subject to high temperatures and mechanical loads, such as turbine blades, engine components, and other structural elements. Refractory metals, such as tungsten, molybdenum, and niobium, are known for their exceptional mechanical properties at high temperatures, making them useful materials in the aerospace, defense, and electronics industries. Case studies. Although mostly due to the reduced yield strength at higher temperatures, the collapse of the World Trade Center was due in part to creep from increased temperature. 
The creep rate of hot pressure-loaded components in a nuclear reactor at power can be a significant design constraint, since the creep rate is enhanced by the flux of energetic particles. Creep in epoxy anchor adhesive was blamed for the Big Dig tunnel ceiling collapse in Boston, Massachusetts that occurred in July 2006. The design of tungsten light bulb filaments attempts to reduce creep deformation. Sagging of the filament coil between its supports increases with time due to the weight of the filament itself. If too much deformation occurs, the adjacent turns of the coil touch one another, causing local overheating, which quickly leads to failure of the filament. The coil geometry and supports are therefore designed to limit the stresses caused by the weight of the filament, and a special tungsten alloy with small amounts of oxygen trapped in the crystallite grain boundaries is used to slow the rate of Coble creep. Creep can cause gradual cut-through of wire insulation, especially when stress is concentrated by pressing insulated wire against a sharp edge or corner. Special creep-resistant insulations such as Kynar (polyvinylidene fluoride) are used in wire wrap applications to resist cut-through due to the sharp corners of wire wrap terminals. Teflon insulation is resistant to elevated temperatures and has other desirable properties, but is notoriously vulnerable to cold-flow cut-through failures caused by creep. In steam turbine power plants, pipes carry steam at high temperatures () and pressures (above ). In jet engines, temperatures can reach up to and initiate creep deformation in even advanced-design coated turbine blades. Hence, it is crucial for correct functionality to understand the creep deformation behavior of materials. Creep deformation is important not only in systems where high temperatures are endured such as nuclear power plants, jet engines and heat exchangers, but also in the design of many everyday objects. For example, metal paper clips are stronger than plastic ones because plastics creep at room temperatures. Aging glass windows are often erroneously used as an example of this phenomenon: measurable creep would only occur at temperatures above the glass transition temperature around . While glass does exhibit creep under the right conditions, apparent sagging in old windows may instead be a consequence of obsolete manufacturing processes, such as that used to create crown glass, which resulted in inconsistent thickness. Fractal geometry, using a deterministic Cantor structure, is used to model the surface topography, where recent advancements in thermoviscoelastic creep contact of rough surfaces are introduced. Various viscoelastic idealizations are used to model the surface materials, including the Maxwell, Kelvin–Voigt, standard linear solid and Jeffrey models. Nimonic 75 has been certified by the European Union as a standard creep reference material. The practice of tinning stranded wires to facilitate the process of connecting the wire to a screw terminal, though having been prevalent and considered standard practice for quite a while, has been discouraged by professional electricians, as solder is likely to creep under the pressure exerted on the tinned wire end by the screw of the terminal, causing the joint to lose tension and hence create a loose contact over time. The accepted practice when connecting stranded wire to a screw terminal is to use a wire ferrule on the end of the wire. Prevention. 
Generally, materials have better creep resistance if they have higher melting temperatures, lower diffusivity, and higher shear strength. Close-packed structures are usually more creep resistant as they tend to have lower diffusivity than non-close-packed structures. Common methods to reduce creep include: Superalloys. Materials operating in high-performance systems, such as jet engines, often reach extreme temperatures surpassing , necessitating specialized material design. Superalloys based on cobalt, nickel, and iron have been engineered to be highly resistant to creep. The term ‘superalloy’ generally refers to austenitic nickel-, iron-, or cobalt-based alloys that use either γ′ or γ″ precipitation strengthening to maintain strength at high temperature. The γ′ phase is a cubic L12-structure phase that produces cuboidal precipitates. Superalloys often have a high (60–75%) volume fraction of γ′ precipitates. γ′ precipitates are coherent with the parent γ phase, and are resistant to shearing due to the development of an anti-phase boundary when the precipitate is sheared. The γ″ phase is a tetragonal Ni3Nb or Ni3V structure. The γ″ phase, however, is unstable above , so γ″ is less commonly used as a strengthening phase in high temperature applications. Carbides are also used in polycrystalline superalloys to inhibit grain boundary sliding. Many other elements can be added to superalloys to tailor their properties. They can be used for solid solution strengthening, to reduce the formation of undesirable brittle precipitates, and to increase oxidation or corrosion resistance. Nickel-based superalloys have found widespread use in high-temperature, low stress applications. Iron-based superalloys are generally not used at high temperatures as the γ′ phase is not stable in the iron matrix, but are sometimes used at moderately high temperatures, as iron is significantly cheaper than nickel. Cobalt-based γ′ structure was found in 2006, allowing the development of cobalt-based superalloys, which are superior to nickel-based superalloys in corrosion resistance. However, in the base (cobalt–tungsten–aluminum) system, γ′ is only stable below , and cobalt-based superalloys tend to be weaker than their Ni counterparts. Contributing factors in creep resistivity. 1. Stages of creep. Based on the description of creep mechanisms and its three different stages mentioned earlier, creep resistance generally can be accomplished by using materials in which their tertiary stage is not active since, at this stage, the strain rate increases significantly by increasing stress. Therefore, a sound component design should satisfy the primary stage of creep, which has a relatively high initial creep rate that decreases with increasing exposure time, leading to the second stage of creep, in which the creep rate in the material is decelerated and reaches its minimum value via work hardening. The minimum value of creep rate is actually a constant creep rate, which plays a crucial role in designing a component, and its magnitude depends on temperature and stress. The minimum value of creep rate that is commonly applied to alloys is based on two norms: (1) the stress required to produce a creep rate of and (2) the stress required to produce a creep rate of , which takes roughly about 11.5 years. The former standard has widely been used in the component design of turbine blades, while the latter is frequently used in designing steam turbines. 
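Because the design criteria above revolve around the minimum (steady-state) creep rate, it can help to see how sharply that rate responds to stress and temperature in the general power-law form formula_0 given earlier. The sketch below uses placeholder constants rather than data for any real alloy, so only the ratios it prints are meaningful.

```python
# Placeholder-constant illustration of the steady-state power-law creep rate,
# d(eps)/dt = C * sigma^m / d^b * exp(-Q/(k*T)); only ratios are meaningful here.
import numpy as np

k_B = 1.380649e-23            # Boltzmann constant, J/K

def creep_rate(sigma, T, C=1.0e-20, m=5.0, d=50e-6, b=2.0, Q=4.0e-19):
    return C * sigma**m / d**b * np.exp(-Q / (k_B * T))

base = creep_rate(50e6, 900.0)
print(creep_rate(100e6, 900.0) / base)   # doubling the stress: factor 2^m = 32
print(creep_rate(50e6, 950.0) / base)    # a 50 K temperature rise: factor exp((Q/k_B)*(1/900 - 1/950)) ~ 5.4
```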
One of the primary goals of creep tests is to determine the minimum value of the creep rate at the secondary stage and also to investigate the time required for an ultimate failure of a component. However, when it comes to ceramic applications, there are no such standards to determine their minimum creep rates, but ceramics are often chosen for high-temperature operations under load mainly because they possess a long lifetime. However, by acquiring information obtained from creep tests, proper ceramic material(s) can be selected for desired application to assure the safe service and evaluate the time period of secure service in high-temperature environments that the structural thermostability is essential. Therefore, by making the proper choice, suitable ceramic components may be selected, capable of operating at various conditions of high temperature and creep deformation. 2. Materials selection. Generally speaking, the structure of materials is different from one another. Metallic materials have different structures compared to polymer or ceramics, and even within the same class of materials, different structures might have existed at different temperatures. The difference in the structure includes the difference in grains (for example, their sizes, shapes, distributions), their crystalline or amorphous nature, and even dislocation and/or vacancy contents that are prone to change following a deformation. So, since creep is a time-dependent process and differs from one material to another, all these parameters must be considered in materials selection for a specific application. For instance, materials with low-dislocation content are among the suitable candidates for creep resistance. In other words, the dislocation glide and climb can be reduced if the proper material is selected (for example, ceramic materials are very popular in this case). In terms of vacancies, it's noteworthy to mention that the vacancy content not only depends on the chosen material but also on the component's service temperature. Vacancy-diffusion-controlled processes that promote creep can be categorized into grain boundary diffusion (Coble creep) and lattice diffusion (Nabarro–Herring creep). Therefore, the properties of dislocations and vacancies, their distributions throughout the structure, and their potential change due to long-time exposure to stress and temperature must be considered seriously in materials selection for components design. Therefore, the role of dislocations, vacancies, wide range of obstacles that can retard the dislocation motion, including grain boundaries in polycrystalline materials, solute atoms, precipitates, impurities, and strain fields originating from other dislocations or their pile-ups which increase the lifetime of materials and make them more resistive with respect to creep are always at the top list of materials selection and component design. For instance, different types of vacancies in ceramic materials have different charges stemming from their dominant chemical boding. So, existing or newly-formed vacancies must be charge-balanced to maintain the overall neutrality of the final structure. Besides paying attention to individual dislocation and vacancy contents, the correlation between them is also worth exploring since dislocation's ability to climb depends heavily on how many vacancies are available in its vicinity. To recap, materials must be selected and developed that possess low dislocation and vacancy contents to have a practical creep resistance component. 3. 
Various working conditions. ● Temperature Creep is a phenomenon that is related to a material's melting point (Tm). Generally, a longer lifetime is expected for materials with a higher melting point, and the reason for selecting materials with high melting points is that diffusion processes are related to temperature-dependent vacancy concentrations. The diffusion rate is slower in materials with high melting points. Ceramic materials are well known for having high melting points, which is why ceramics have attracted considerable attention for creep-resistance applications. Although it is widely known that creep starts at a temperature equal to 0.5Tm, the safe temperature to avoid the initiation of creep is 0.3Tm. Creep that starts below or at 0.5Tm is called "low temperature creep" because diffusion is not very active at such low temperatures, and the kind of creep that occurs is not diffusion-dominant and is related to other mechanisms. ● Time As mentioned previously, creep is a time-dependent deformation. Fortunately, creep does not occur suddenly in the way that brittle fracture does under tension and other forms of loading, and this is an advantage for designers. Over time, creep strain develops in a material that is exposed to stress at the temperature of the application, and it depends on the duration of the exposure. Thus, besides temperature and stress, the creep rate is also a function of time, and it can be generalized as the function ε = F(t, T, σ), which reminds the designer that time, temperature, and stress act in concert and that all three must be considered if a successful creep-resistant component is to be attained. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
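As a closing numerical illustration of the grain-size dependence discussed in the diffusional creep sections, the sketch below compares the 1/d² scaling of Nabarro–Herring creep ("b" = 2) with the 1/d³ scaling of Coble creep ("b" = 3). The prefactors and activation energies are placeholders chosen only so that a crossover falls inside the illustrated range of grain sizes; they are not data for a real material.

```python
# Sketch of the grain-size scaling of the two diffusional creep mechanisms.
# Only the trend with grain size d is meaningful; all constants are placeholders.
import numpy as np

k_B = 1.380649e-23
T = 1000.0                      # K
sigma = 10e6                    # Pa, low stress (m = 1 for both mechanisms)

Q_lattice = 4.5e-19             # J, lattice self-diffusion (placeholder)
Q_gb = 3.0e-19                  # J, grain-boundary diffusion (placeholder, lower)
C_nh, C_coble = 1.0e-30, 1.0e-39    # placeholder prefactors

for d in (1e-6, 10e-6, 100e-6):     # grain sizes in metres
    rate_nh = C_nh * sigma / d**2 * np.exp(-Q_lattice / (k_B * T))
    rate_co = C_coble * sigma / d**3 * np.exp(-Q_gb / (k_B * T))
    print(f"d = {d*1e6:6.1f} um   Nabarro-Herring ~ {rate_nh:.2e}   Coble ~ {rate_co:.2e}")
# Because of the extra 1/d factor, Coble creep dominates at the finest grain
# sizes, while Nabarro-Herring creep takes over as the grains coarsen (the
# crossover near 50 micrometres here is purely an artifact of the placeholders).
```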
[ { "math_id": 0, "text": " \\frac{\\mathrm{d}\\varepsilon}{\\mathrm{d}t} = \\frac{C\\sigma^m}{d^b} e^\\frac{-Q}{kT}" }, { "math_id": 1, "text": "\\frac{\\mathrm{d}\\varepsilon}{\\mathrm{d}t} = A \\left(\\sigma-\\sigma_{\\rm th}\\right)^m e^\\frac{-Q}{\\bar R T}" }, { "math_id": 2, "text": "D = D_0e^{\\frac{E}{KT}}" }, { "math_id": 3, "text": " C_r = C_0 \\exp\\left(-\\frac{\\beta\\sin\\theta}{rKT}\\right) " }, { "math_id": 4, "text": " \\frac{F_{\\rm max}}{L} = \\frac{C_0 \\beta^2}{bkT} " }, { "math_id": 5, "text": " v = B {\\sigma^*}^m B =B_0 \\exp\\left(\\frac{-Q_{\\rm g}}{RT}\\right)" }, { "math_id": 6, "text": "B= \\frac{9kT}{MG^2b^4\\ln\\frac{r2}{r1}} \\cdot \\frac{D_{\\rm sol}}{\\varepsilon_{\\rm a}^2c_0}" }, { "math_id": 7, "text": "\\frac{\\mathrm{d}\\varepsilon}{\\mathrm{d}t} = \\frac{A_{\\rm CG}D_{\\rm L}}{\\sqrt M}\\left(\\frac{\\sigma\\Omega}{kT}\\right)^{4.5}" }, { "math_id": 8, "text": "\\frac{\\mathrm{d}\\varepsilon}{\\mathrm{d}t} = \\rho_0 \\frac{D_{\\rm v} G b^3}{k T} \\left(\\frac{\\sigma_{\\rm s}^n} G \\right)" }, { "math_id": 9, "text": "\\sigma_{\\rm sint} = \\frac{\\gamma}{r_1}+\\frac{\\gamma}{r_2}" }, { "math_id": 10, "text": "\\dot{\\rho}=\\frac{3A}{2}\\frac{\\rho(1-\\rho)}{\\left(1-(1-\\rho)^\\frac1n\\right)^n}\\left(\\frac32\\frac{P_{\\rm e}}{n}\\right)^n" }, { "math_id": 11, "text": "\\dot{\\varepsilon}=A\\sigma^n" }, { "math_id": 12, "text": "A = A'\\frac{D_0 \\mu b}{kT} \\exp\\left(-\\frac{Q}{kT}\\right)" }, { "math_id": 13, "text": "\\varepsilon(t) = \\sigma C_0 + \\sigma C \\int_0^\\infty f(\\tau)\\left(1-e^{-t/\\tau}\\right) \\,\\mathrm{d} \\tau" } ]
https://en.wikipedia.org/wiki?curid=681241
681265
Hopf–Rinow theorem
Gives equivalent statements about the geodesic completeness of Riemannian manifolds The Hopf–Rinow theorem is a set of statements about the geodesic completeness of Riemannian manifolds. It is named after Heinz Hopf and his student Willi Rinow, who published it in 1931. Stefan Cohn-Vossen extended part of the Hopf–Rinow theorem to the context of certain types of metric spaces. Statement. Let formula_0 be a connected and smooth Riemannian manifold. Then the following statements are equivalent: (1) the closed and bounded subsets of formula_1 are compact; (2) formula_1 is a complete metric space; (3) formula_1 is geodesically complete, that is, for every formula_2 the exponential map is defined on the entire tangent space formula_3 Furthermore, any one of the above implies that given any two points formula_4 there exists a length minimizing geodesic connecting these two points (geodesics are in general critical points for the length functional, and may or may not be minima). In the Hopf–Rinow theorem, the first characterization of completeness deals purely with the topology of the manifold and the boundedness of various sets; the second deals with the existence of minimizers to a certain problem in the calculus of variations (namely minimization of the length functional); the third deals with the nature of solutions to a certain system of ordinary differential equations. In fact these properties characterize completeness for locally compact length-metric spaces. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "(M, g)" }, { "math_id": 1, "text": "M" }, { "math_id": 2, "text": "p \\in M," }, { "math_id": 3, "text": "\\operatorname{T}_p M." }, { "math_id": 4, "text": "p, q \\in M," } ]
https://en.wikipedia.org/wiki?curid=681265
6812823
Product metric
Metric on the Cartesian product of finitely many metric spaces In mathematics, a product metric is a metric on the Cartesian product of finitely many metric spaces formula_0 which metrizes the product topology. The most prominent product metrics are the "p" product metrics for a fixed formula_1 : It is defined as the "p" norm of the "n"-vector of the distances measured in "n" subspaces: formula_2 For formula_3 this metric is also called the sup metric: formula_4 Choice of norm. For Euclidean spaces, using the L2 norm gives rise to the Euclidean metric in the product space; however, any other choice of "p" will lead to a topologically equivalent metric space. In the category of metric spaces (with Lipschitz maps having Lipschitz constant 1), the product (in the category theory sense) uses the sup metric. The case of Riemannian manifolds. For Riemannian manifolds formula_5 and formula_6, the product metric formula_7 on formula_8 is defined by formula_9 for formula_10 under the natural identification formula_11.
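A minimal Python sketch of the "p" product metric defined above: it combines the distances measured in the individual factor spaces with a "p" norm, and uses the maximum for p = ∞. The helper name product_metric and the example factors are illustrative choices, not part of any standard library.

```python
import math

def product_metric(point_x, point_y, metrics, p):
    """d_p on X_1 x ... x X_n: the l^p norm of the per-factor distances.
    metrics[i] is the metric of the i-th factor; p may be math.inf for the sup metric."""
    d = [metrics[i](point_x[i], point_y[i]) for i in range(len(metrics))]
    if p == math.inf:
        return max(d)
    return sum(di**p for di in d) ** (1.0 / p)

# Example: the product of the real line (absolute-value metric) with itself.
abs_metric = lambda a, b: abs(a - b)
x, y = (0.0, 0.0), (3.0, 4.0)
print(product_metric(x, y, [abs_metric, abs_metric], 2))         # Euclidean metric: 5.0
print(product_metric(x, y, [abs_metric, abs_metric], math.inf))  # sup metric: 4.0
```

Any finite p gives a topologically equivalent metric, as the text notes; only the numerical distance values change.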
[ { "math_id": 0, "text": "(X_1,d_{X_1}),\\ldots,(X_n,d_{X_n})" }, { "math_id": 1, "text": "p\\in[1,\\infty)" }, { "math_id": 2, "text": "d_p((x_1,\\ldots,x_n),(y_1,\\ldots,y_n)) = \\|\\left(d_{X_1}(x_1,y_1), \\ldots, d_{X_n}(x_n,y_n)\\right)\\|_p" }, { "math_id": 3, "text": "p=\\infty" }, { "math_id": 4, "text": "d_{\\infty} ((x_1,\\ldots,x_n),(y_1,\\ldots,y_n)) := \\max \\left\\{ d_{X_1}(x_1,y_1), \\ldots, d_{X_n}(x_n,y_n) \\right\\}." }, { "math_id": 5, "text": "(M_1,g_1)" }, { "math_id": 6, "text": "(M_2,g_2)" }, { "math_id": 7, "text": "g=g_1\\oplus g_2" }, { "math_id": 8, "text": "M_1\\times M_2" }, { "math_id": 9, "text": "g(X_1+X_2,Y_1+Y_2)=g_1(X_1,Y_1)+g_2(X_2,Y_2)" }, { "math_id": 10, "text": "X_i,Y_i\\in T_{p_i}M_i" }, { "math_id": 11, "text": "T_{(p_1,p_2)}(M_1\\times M_2)=T_{p_1}M_1\\oplus T_{p_2}M_2" } ]
https://en.wikipedia.org/wiki?curid=6812823
6813
Chandrasekhar limit
Maximum mass of a stable white dwarf star The Chandrasekhar limit is the maximum mass of a stable white dwarf star. The currently accepted value of the Chandrasekhar limit is about 1.4 M☉. The limit was named after Subrahmanyan Chandrasekhar. White dwarfs resist gravitational collapse primarily through electron degeneracy pressure, compared to main sequence stars, which resist collapse through thermal pressure. The Chandrasekhar limit is the mass above which electron degeneracy pressure in the star's core is insufficient to balance the star's own gravitational self-attraction. Physics. Normal stars fuse gravitationally compressed hydrogen into helium, generating vast amounts of heat. As the hydrogen is consumed, the star's core compresses further, allowing the helium and heavier nuclei to fuse, ultimately resulting in stable iron nuclei, a process called stellar evolution. The next step depends upon the mass of the star. Stars below the Chandrasekhar limit become stable white dwarf stars, remaining that way throughout the rest of the history of the universe absent external forces. Stars above the limit can become neutron stars or black holes. The Chandrasekhar limit is a consequence of competition between gravity and electron degeneracy pressure. Electron degeneracy pressure is a quantum-mechanical effect arising from the Pauli exclusion principle. Since electrons are fermions, no two electrons can be in the same state, so not all electrons can be in the minimum-energy level. Rather, electrons must occupy a band of energy levels. Compression of the electron gas increases the number of electrons in a given volume and raises the maximum energy level in the occupied band. Therefore, the energy of the electrons increases on compression, so pressure must be exerted on the electron gas to compress it, producing electron degeneracy pressure. With sufficient compression, electrons are forced into nuclei in the process of electron capture, relieving the pressure. In the nonrelativistic case, electron degeneracy pressure gives rise to an equation of state of the form "P" = "K"1"ρ"5/3, where P is the pressure, ρ is the mass density, and "K"1 is a constant. Solving the hydrostatic equation leads to a model white dwarf that is a polytrope of index 3/2 – and therefore has radius inversely proportional to the cube root of its mass, and volume inversely proportional to its mass. As the mass of a model white dwarf increases, the typical energies to which degeneracy pressure forces the electrons are no longer negligible relative to their rest masses. The velocities of the electrons approach the speed of light, and special relativity must be taken into account. In the strongly relativistic limit, the equation of state takes the form "P" = "K"2"ρ"4/3. This yields a polytrope of index 3, which has a total mass, "M"limit, depending only on "K"2. For a fully relativistic treatment, the equation of state used interpolates between the equations "P" = "K"1"ρ"5/3 for small ρ and "P" = "K"2"ρ"4/3 for large ρ. When this is done, the model radius still decreases with mass, but becomes zero at "M"limit. This is the Chandrasekhar limit. The curves of radius against mass for the non-relativistic and relativistic models are shown in the graph. They are colored blue and green, respectively. "μ"e has been set equal to 2. Radius is measured in standard solar radii or kilometers, and mass in standard solar masses. Calculated values for the limit vary depending on the nuclear composition of the mass.
Chandrasekhar [eqs. (36), (58), (43)] gives the following expression, based on the equation of state for an ideal Fermi gas: formula_0 where ħ is the reduced Planck constant, "c" is the speed of light, "G" is the gravitational constant, "μ"e is the average molecular weight per electron, which depends upon the chemical composition of the star, "m"H is the mass of the hydrogen atom, and "ω"3^0 ≈ 2.018236 is a constant connected with the solution to the Lane–Emden equation. As √("ħc"/"G") is the Planck mass, the limit is of the order of formula_1 The limiting mass can be obtained formally from Chandrasekhar's white dwarf equation by taking the limit of large central density. A more accurate value of the limit than that given by this simple model requires adjusting for various factors, including electrostatic interactions between the electrons and nuclei and effects caused by nonzero temperature. Lieb and Yau have given a rigorous derivation of the limit from a relativistic many-particle Schrödinger equation. History. In 1926, the British physicist Ralph H. Fowler observed that the relationship between the density, energy, and temperature of white dwarfs could be explained by viewing them as a gas of nonrelativistic, non-interacting electrons and nuclei that obey Fermi–Dirac statistics. This Fermi gas model was then used by the British physicist Edmund Clifton Stoner in 1929 to calculate the relationship among the mass, radius, and density of white dwarfs, assuming they were homogeneous spheres. Wilhelm Anderson applied a relativistic correction to this model, giving rise to a maximum possible mass of approximately . In 1930, Stoner derived the internal energy–density equation of state for a Fermi gas, and was then able to treat the mass–radius relationship in a fully relativistic manner, giving a limiting mass of approximately (for "μ"e = 2.5). Stoner went on to derive the pressure–density equation of state, which he published in 1932. These equations of state were also previously published by the Soviet physicist Yakov Frenkel in 1928, together with some other remarks on the physics of degenerate matter. Frenkel's work, however, was ignored by the astronomical and astrophysical community. A series of papers published between 1931 and 1935 had its beginning on a trip from India to England in 1930, where the Indian physicist Subrahmanyan Chandrasekhar worked on the calculation of the statistics of a degenerate Fermi gas. In these papers, Chandrasekhar solved the hydrostatic equation together with the nonrelativistic Fermi gas equation of state, and also treated the case of a relativistic Fermi gas, giving rise to the value of the limit shown above. Chandrasekhar reviews this work in his Nobel Prize lecture. The existence of a related limit, based on the conceptual breakthrough of combining relativity with Fermi degeneracy, was first established in separate papers published by Wilhelm Anderson and E. C. Stoner for a uniform density star in 1929. Eric G. Blackman wrote that the roles of Stoner and Anderson in the discovery of mass limits were overlooked when Freeman Dyson wrote a biography of Chandrasekhar. Michael Nauenberg claims that Stoner established the mass limit first. The priority dispute has also been discussed at length by Virginia Trimble who writes that: "Chandrasekhar famously, perhaps even notoriously did his critical calculation on board ship in 1930, and ... was not aware of either Stoner's or Anderson's work at the time. His work was therefore independent, but, more to the point, he adopted Eddington's polytropes for his models which could, therefore, be in hydrostatic equilibrium, which constant density stars cannot, and real ones must be."
This value was also computed in 1932 by the Soviet physicist Lev Landau, who, however, did not apply it to white dwarfs and concluded that quantum laws might be invalid for stars heavier than 1.5 solar mass. Chandrasekhar–Eddington dispute. Chandrasekhar's work on the limit aroused controversy, owing to the opposition of the British astrophysicist Arthur Eddington. Eddington was aware that the existence of black holes was theoretically possible, and also realized that the existence of the limit made their formation possible. However, he was unwilling to accept that this could happen. After a talk by Chandrasekhar on the limit in 1935, he replied: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The star has to go on radiating and radiating and contracting and contracting until, I suppose, it gets down to a few km radius, when gravity becomes strong enough to hold in the radiation, and the star can at last find peace. ... I think there should be a law of Nature to prevent a star from behaving in this absurd way! Eddington's proposed solution to the perceived problem was to modify relativistic mechanics so as to make the law "P" "K"1"ρ"5/3 universally applicable, even for large ρ. Although Niels Bohr, Fowler, Wolfgang Pauli, and other physicists agreed with Chandrasekhar's analysis, at the time, owing to Eddington's status, they were unwilling to publicly support Chandrasekhar. Through the rest of his life, Eddington held to his position in his writings, including his work on his fundamental theory. The drama associated with this disagreement is one of the main themes of "Empire of the Stars", Arthur I. Miller's biography of Chandrasekhar. In Miller's view: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Chandra's discovery might well have transformed and accelerated developments in both physics and astrophysics in the 1930s. Instead, Eddington's heavy-handed intervention lent weighty support to the conservative community astrophysicists, who steadfastly refused even to consider the idea that stars might collapse to nothing. As a result, Chandra's work was almost forgotten.150 However, Chandrasekhar chose to move on, leaving the study of stellar structure to focus on stellar dynamics. In 1983 in recognition for his work, Chandrasekhar shared a Nobel prize "for his theoretical studies of the physical processes of importance to the structure and evolution of the stars" with William Alfred Fowler. Applications. The core of a star is kept from collapsing by the heat generated by the fusion of nuclei of lighter elements into heavier ones. At various stages of stellar evolution, the nuclei required for this process are exhausted, and the core collapses, causing it to become denser and hotter. A critical situation arises when iron accumulates in the core, since iron nuclei are incapable of generating further energy through fusion. If the core becomes sufficiently dense, electron degeneracy pressure will play a significant part in stabilizing it against gravitational collapse. If a main-sequence star is not too massive (less than approximately 8 solar masses), it eventually sheds enough mass to form a white dwarf having mass below the Chandrasekhar limit, which consists of the former core of the star. For more-massive stars, electron degeneracy pressure does not keep the iron core from collapsing to very great density, leading to formation of a neutron star, black hole, or, speculatively, a quark star. 
(For very massive, low-metallicity stars, it is also possible that instabilities destroy the star completely.) During the collapse, neutrons are formed by the capture of electrons by protons in the process of electron capture, leading to the emission of neutrinos. The decrease in gravitational potential energy of the collapsing core releases a large amount of energy on the order of (100 foes). Most of this energy is carried away by the emitted neutrinos and the kinetic energy of the expanding shell of gas; only about 1% is emitted as optical light. This process is believed responsible for supernovae of types Ib, Ic, and II. Type Ia supernovae derive their energy from runaway fusion of the nuclei in the interior of a white dwarf. This fate may befall carbon–oxygen white dwarfs that accrete matter from a companion giant star, leading to a steadily increasing mass. As the white dwarf's mass approaches the Chandrasekhar limit, its central density increases, and, as a result of compressional heating, its temperature also increases. This eventually ignites nuclear fusion reactions, leading to an immediate carbon detonation, which disrupts the star and causes the supernova.§5.1.2 A strong indication of the reliability of Chandrasekhar's formula is that the absolute magnitudes of supernovae of Type Ia are all approximately the same; at maximum luminosity, "M"V is approximately −19.3, with a standard deviation of no more than 0.3.eq. (1) A 1-sigma interval therefore represents a factor of less than 2 in luminosity. This seems to indicate that all type Ia supernovae convert approximately the same amount of mass to energy. Super-Chandrasekhar mass supernovas. In April 2003, the Supernova Legacy Survey observed a type Ia supernova, designated SNLS-03D3bb, in a galaxy approximately 4 billion light years away. According to a group of astronomers at the University of Toronto and elsewhere, the observations of this supernova are best explained by assuming that it arose from a white dwarf that had grown to twice the mass of the Sun before exploding. They believe that the star, dubbed the "Champagne Supernova" may have been spinning so fast that a centrifugal tendency allowed it to exceed the limit. Alternatively, the supernova may have resulted from the merger of two white dwarfs, so that the limit was only violated momentarily. Nevertheless, they point out that this observation poses a challenge to the use of type Ia supernovae as standard candles. Since the observation of the Champagne Supernova in 2003, several more type Ia supernovae have been observed that are very bright, and thought to have originated from white dwarfs whose masses exceeded the Chandrasekhar limit. These include SN 2006gz, SN 2007if, and SN 2009dc. The super-Chandrasekhar mass white dwarfs that gave rise to these supernovae are believed to have had masses up to 2.4–2.8 solar masses. One way to potentially explain the problem of the Champagne Supernova was considering it the result of an aspherical explosion of a white dwarf. However, spectropolarimetric observations of SN 2009dc showed it had a polarization smaller than 0.3, making the large asphericity theory unlikely. Tolman–Oppenheimer–Volkoff limit. Stars sufficiently massive to pass the Chandrasekhar limit provided by electron degeneracy pressure do not become white dwarf stars. Instead they explode as supernovae. 
If the final mass is below the Tolman–Oppenheimer–Volkoff limit, then neutron degeneracy pressure contributes to the balance against gravity and the end result will be a neutron star; otherwise the result will be a black hole. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
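As a quick numerical check of the limiting-mass expression formula_0 given above, the short Python sketch below evaluates it for "μ"e = 2. The physical constants are standard reference values rather than numbers quoted in this article.

```python
import numpy as np

hbar  = 1.054571817e-34   # reduced Planck constant, J s
c     = 2.99792458e8      # speed of light, m/s
G     = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
m_H   = 1.6735575e-27     # mass of the hydrogen atom, kg
M_sun = 1.98892e30        # solar mass, kg
omega3_0 = 2.018236       # constant from the n = 3 Lane-Emden solution
mu_e  = 2.0               # average molecular weight per electron (He, C, O composition)

M_limit = (omega3_0 * np.sqrt(3 * np.pi) / 2) * (hbar * c / G)**1.5 / (mu_e * m_H)**2
print(M_limit / M_sun)    # ~1.4 solar masses
```

The result is roughly 1.4 M☉, consistent with the accepted value quoted in the lead.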
[ { "math_id": 0, "text": " M_\\text{limit} = \\frac{\\omega_3^0 \\sqrt{3\\pi}}{2} \\left ( \\frac{\\hbar c}{G}\\right )^\\frac{3}{2} \\frac{1}{(\\mu_\\text{e} m_\\text{H})^2}" }, { "math_id": 1, "text": "\\frac{M_\\text{Pl}^3}{m_\\text{H}^2}" } ]
https://en.wikipedia.org/wiki?curid=6813
6813660
Apparent horizon
Alternative of event horizon first suggested by Stephen Hawking In general relativity, an apparent horizon is a surface that is the boundary between light rays that are directed outwards and moving outwards and those directed outward but moving inward. Apparent horizons are not invariant properties of spacetime, and in particular, they are distinct from event horizons. Within an apparent horizon, light does not move outward; this is in contrast with the event horizon. In a dynamical spacetime, there can be outgoing light rays exterior to an apparent horizon (but still interior to the event horizon). An apparent horizon is a "local" notion of the boundary of a black hole, whereas an event horizon is a "global" notion. The notion of a horizon in general relativity is subtle and depends on fine distinctions. Definition. The notion of an "apparent horizon" begins with the notion of a trapped null surface. A (compact, orientable, spacelike) surface always has two independent forward-in-time pointing, lightlike, normal directions. For example, a (spacelike) sphere in Minkowski space has lightlike vectors pointing inward and outward along the radial direction. In Euclidean space (i.e. flat and unaffected by gravitational effects), the inward-pointing, lightlike normal vectors converge, while the outward-pointing, lightlike normal vectors diverge. It can, however, happen that both inward-pointing and outward-pointing lightlike normal vectors converge. In such a case, the surface is called "trapped". The apparent horizon is the outermost of all trapped surfaces, also called the "marginally outer trapped surface" (MOTS). Differences from the (absolute) event horizon. In the context of black holes, the term event horizon refers almost exclusively to the notion of the "absolute horizon". Much confusion seems to arise concerning the differences between an apparent horizon (AH) and an event horizon (EH). In general, the two need not be the same. For example, in the case of a perturbed black hole, the EH and the AH generally do not coincide as long as either horizon fluctuates. Event horizons can, in principle, arise and evolve in exactly flat regions of spacetime, having no black hole inside, if a hollow spherically symmetric thin shell of matter is collapsing in a vacuum spacetime. The exterior of the shell is a portion of Schwarzschild space and the interior of the hollow shell is exactly flat Minkowski space. Bob Geroch has pointed out that if all the stars in the Milky Way gradually aggregate towards the Galactic Center while keeping their proportionate distances from each other, they will all fall within their joint Schwarzschild radius long before they are forced to collide. In the simple picture of stellar collapse leading to formation of a black hole, an event horizon forms before an apparent horizon. As the black hole settles down, the two horizons approach each other, and asymptotically become the same surface. If the null curvature condition formula_0 (where formula_1 denotes the Ricci tensor and formula_2 a null vector) is satisfied, then the AH is located inside the EH. Apparent horizons depend on the "slicing" of a spacetime. That is, the location and even existence of an apparent horizon depends on the way spacetime is divided into space and time. For example, it is possible to slice the Schwarzschild geometry in such a way that there is "no" apparent horizon, ever, despite the fact that there is certainly an event horizon. See also. 
&lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " R_{\\mu\\nu} \\ell^\\mu \\ell^\\nu \\geqslant 0 " }, { "math_id": 1, "text": " R_{\\mu\\nu} " }, { "math_id": 2, "text": " \\ell^\\mu " } ]
https://en.wikipedia.org/wiki?curid=6813660
68138929
Andrei Roiter
Ukrainian mathematician Andrei Vladimirovich Roiter ("Russian": Андрей Владимирович Ройтер; "Ukrainian": Андрій Володимирович Ройтер, November 30, 1937, Dnipro – July 26, 2006, Riga, Latvia) was a Ukrainian mathematician, specializing in algebra. A. V. Roiter's father was the Ukrainian physical chemist V. A. Roiter, a leading expert on catalysis. In 1955 Andrei V. Roiter matriculated at Taras Shevchenko National University of Kyiv, where he met a fellow mathematics major Lyudmyla Nazarova. In 1958 he and Nazarova transferred to Saint Petersburg State University (then named Leningrad State University). They married and began a lifelong collaboration on representation theory. He received in 1960 his Diploma (M.S.) and in 1963 his Candidate of Sciences degree (PhD). His PhD thesis was supervised by Dmitry Konstantinovich Faddeev, who also supervised Ludmila Nazarova's PhD. A. V. Roiter was hired in 1961 as a researcher at the Institute of Mathematics of the Academy of Sciences of Ukraine, where he worked until his death in 2006 and since 1991 was Head of the Department of Algebra. He received his Doctor of Sciences degree (habilitation) in 1969. In 1978 he was an invited speaker at the International Congress of Mathematicians in Helsinki. In his first published paper, Roiter in 1960 proved an important result that eventually led several other mathematicians to establish that a finite group formula_0 has finitely many non-isomorphic indecomposable integral representations if and only if, for each prime "p", its Sylow "p"-subgroup is cyclic of order at most "p"2. In a 1966 paper he proved an important theorem in the theory of the integral representation of rings. In a famous 1968 paper he proved the first Brauer-Thrall conjecture. Roiter proved the first Brauer-Thrall conjecture for finite-dimensional algebras; his paper never mentioned Artin algebras, but his techniques work for Artin algebras as well. There is an important line of research inspired by the paper and started by Maurice Auslander and Sverre Olaf Smalø in a 1980 paper. Auslander and Smalø's paper and its follow-ups by several researchers introduced, among other things, covariantly and contravariantly finite subcategories of the category of finitely generated modules over an Artin algebra, which led to the theory of almost split sequences in subcategories. According to Auslander and Smalø: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; Roiter did important research on "p"-adic representations, especially his 1967 paper with Yuriy Drozd and Vladimir V. Kirichenko on hereditary and Bass orders and the Drozd-Roiter criterion for a commutative order to have finitely many non-isomorphic indecomposable representations. An important tool in this research was his theory of divisibility of modules. In 1972 Nazarova and Roiter introduced representations of partially ordered sets, an important class of matrix problems with many applications in mathematics, such as the representation theory of finite dimensional algebras. (In 2005 they with M. N. Smirnova proved a theorem about antimonotone quadratic forms and partially ordered sets.) Also in the 1970s Roiter in three papers, two of which were joint work with Mark Kleiner, introduced representations of bocses, a very large class of matrix problems. The monograph by Roiter and P. 
Gabriel (with a contribution by Bernhard Keller), published by Springer in 1992 in English translation, is important for its influence on the theory of representations of finite-dimensional algebras and the theory of matrix problems. There is a 1997 reprint of the English translation. In the years shortly before his death, Roiter did research on representations in Hilbert spaces. In two papers, he with his wife and Stanislav A. Kruglyak introduced the notion of locally scalar representations of quivers ("i.e." directed multigraphs) in Hilbert spaces. In their 2006 paper they constructed for such representations Coxeter functors analogous to Bernstein-Gelfand-Ponomarev functors and applied the new functors to the study of locally scalar representations. In particular, they proved that a graph has only finitely many indecomposable locally scalar representations (up to unitary isomorphism) if and only if it is a Dynkin graph. Their result is analogous to that of Gabriel for the “usual” representations of quivers. In 1961 Roiter started in Kyiv a seminar on the theory of representations. The seminar became the foundation of the highly esteemed Kyiv school of the representation theory. He was the supervisor for 13 Candidate of Sciences degrees (PhDs). In 2007 A. V. Roiter was posthumously awarded the State Prize of Ukraine in Science and Technology for his research on representation theory. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "G" } ]
https://en.wikipedia.org/wiki?curid=68138929
681409
Stable marriage problem
Pairing where no unchosen pair prefers each other over their choice In mathematics, economics, and computer science, the stable marriage problem (also stable matching problem) is the problem of finding a stable matching between two equally sized sets of elements given an ordering of preferences for each element. A matching is a bijection from the elements of one set to the elements of the other set. A matching is "not" stable if: there is an element "A" of the first matched set which prefers some given element "B" of the second matched set over the element to which "A" is already matched, and "B" also prefers "A" over the element to which "B" is already matched. In other words, a matching is stable when there does not exist any pair ("A", "B") which both prefer each other to their current partner under the matching. The stable marriage problem has been stated as follows: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Given "n" men and "n" women, where each person has ranked all members of the opposite sex in order of preference, marry the men and women together such that there are no two people of opposite sex who would both rather have each other than their current partners. When there are no such pairs of people, the set of marriages is deemed stable. The existence of two classes that need to be paired with each other (heterosexual men and women in this example) distinguishes this problem from the stable roommates problem. Applications. Algorithms for finding solutions to the stable marriage problem have applications in a variety of real-world situations, perhaps the best known of these being in the assignment of graduating medical students to their first hospital appointments. In 2012, the Nobel Memorial Prize in Economic Sciences was awarded to Lloyd S. Shapley and Alvin E. Roth "for the theory of stable allocations and the practice of market design." An important and large-scale application of stable marriage is in assigning users to servers in a large distributed Internet service. Billions of users access web pages, videos, and other services on the Internet, requiring each user to be matched to one of (potentially) hundreds of thousands of servers around the world that offer that service. A user prefers servers that are proximal enough to provide a faster response time for the requested service, resulting in a (partial) preferential ordering of the servers for each user. Each server prefers to serve users that it can with a lower cost, resulting in a (partial) preferential ordering of users for each server. Content delivery networks that distribute much of the world's content and services solve this large and complex stable marriage problem between users and servers every tens of seconds to enable billions of users to be matched up with their respective servers that can provide the requested web pages, videos, or other services. The Gale–Shapley algorithm for stable matching is used to assign rabbis who graduate from Hebrew Union College to Jewish congregations. Different stable matchings. In general, there may be many different stable matchings. For example, suppose there are three men (A,B,C) and three women (X,Y,Z) which have preferences of: A: YXZ   B: ZYX   C: XZY   X: BAC   Y: CBA   Z: ACB There are three stable solutions to this matching arrangement: the men get their first choice and the women their third (AY, BZ, CX); all participants get their second choice (AX, BY, CZ); and the women get their first choice and the men their third (AZ, BX, CY). All three are stable, because instability requires both of the participants to be happier with an alternative match. Giving one group their first choices ensures that the matches are stable because they would be unhappy with any other proposed match. Giving everyone their second choice ensures that any other match would be disliked by one of the parties.
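A small Python sketch can confirm that each of the three matchings listed above is stable for these preference lists, by searching for a blocking pair, that is, a man and a woman who both prefer each other to their assigned partners. The helper name is_stable and the data layout are illustrative choices.

```python
def is_stable(matching, men_prefs, women_prefs):
    """matching: dict man -> woman. True iff no man and woman both prefer
    each other to the partners they are assigned in the matching."""
    husband = {w: m for m, w in matching.items()}
    for m, w_m in matching.items():
        for w in men_prefs[m]:
            if w == w_m:
                break  # every woman examined so far is one m prefers to his partner
            # m prefers w to his partner; it is a blocking pair if w also prefers m
            if women_prefs[w].index(m) < women_prefs[w].index(husband[w]):
                return False
    return True

men   = {"A": "YXZ", "B": "ZYX", "C": "XZY"}
women = {"X": "BAC", "Y": "CBA", "Z": "ACB"}
for matching in ({"A": "Y", "B": "Z", "C": "X"},    # men get their first choices
                 {"A": "X", "B": "Y", "C": "Z"},    # everyone gets their second choice
                 {"A": "Z", "B": "X", "C": "Y"}):   # women get their first choices
    print(matching, is_stable(matching, men, women))   # True in all three cases
```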
In general, the family of solutions to any instance of the stable marriage problem can be given the structure of a finite distributive lattice, and this structure leads to efficient algorithms for several problems on stable marriages. In a uniformly-random instance of the stable marriage problem with n men and n women, the average number of stable matchings is asymptotically formula_0. In a stable marriage instance chosen to maximize the number of different stable matchings, this number is an exponential function of n. Counting the number of stable matchings in a given instance is #P-complete. Algorithmic solution. In 1962, David Gale and Lloyd Shapley proved that, for any equal number of men and women, it is always possible to solve the stable marriage problem and make all marriages stable. They presented an algorithm to do so. The Gale–Shapley algorithm (also known as the deferred acceptance algorithm) involves a number of "rounds" (or "iterations"): in the first round, each unengaged man proposes to the woman he prefers most, and each woman replies "maybe" to the suitor she most prefers (to whom she becomes provisionally engaged) and "no" to all other suitors; in each subsequent round, each unengaged man proposes to the most-preferred woman to whom he has not yet proposed (regardless of whether she is already engaged), and each woman replies "maybe" if she is currently unengaged or if she prefers this man over her current provisional partner, whom she then rejects. This process is repeated until everyone is engaged. This algorithm is guaranteed to produce a stable marriage for all participants in time formula_1 where formula_2 is the number of men or women. Among all possible different stable matchings, it always yields the one that is best for all men and worst for all women. It is a truthful mechanism from the point of view of men (the proposing side), i.e., no man can get a better matching for himself by misrepresenting his preferences. Moreover, the GS algorithm is even "group-strategy proof" for men, i.e., no coalition of men can coordinate a misrepresentation of their preferences such that all men in the coalition are strictly better-off. However, it is possible for some coalition to misrepresent their preferences such that some men are better-off and the other men retain the same partner. The GS algorithm is non-truthful for the women (the reviewing side): each woman may be able to misrepresent her preferences and get a better match. A short code sketch of these proposal rounds is given below. Rural hospitals theorem. The rural hospitals theorem concerns a more general variant of the stable matching problem, like that applying in the problem of matching doctors to positions at hospitals, differing in the following ways from the basic n-to-n form of the stable marriage problem: each participant may only be willing to be matched to a subset of the participants on the other side; the participants on one side (the hospitals) may each have a numerical capacity, specifying how many doctors they are willing to hire; and the total number of participants on one side may differ from the total capacity on the other side, so the resulting matching might not match all of the participants. In this case, the condition of stability is that no unmatched pair prefer each other to their situation in the matching (whether that situation is another partner or being unmatched). With this condition, a stable matching will still exist, and can still be found by the Gale–Shapley algorithm. For this kind of stable matching problem, the rural hospitals theorem states that: the set of assigned doctors, and the number of filled positions in each hospital, are the same in all stable matchings; and any hospital that has some unfilled positions in some stable matching receives exactly the same set of doctors in every stable matching. Related problems. In stable matching with indifference, some men might be indifferent between two or more women and vice versa. The stable roommates problem is similar to the stable marriage problem, but differs in that all participants belong to a single pool (instead of being divided into equal numbers of "men" and "women"). The hospitals/residents problem – also known as the college admissions problem – differs from the stable marriage problem in that a hospital can take multiple residents, or a college can take an incoming class of more than one student. Algorithms to solve the hospitals/residents problem can be "hospital-oriented" (as the NRMP was before 1995) or "resident-oriented". This problem was solved, with an algorithm, in the same original paper by Gale and Shapley, in which the stable marriage problem was solved.
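Here is a minimal Python sketch of the men-proposing deferred acceptance rounds described above, run on the three-men/three-women example from the previous section; on that instance it returns the men-optimal matching (AY, BZ, CX). The function name gale_shapley and the data layout are illustrative choices, not a standard API.

```python
def gale_shapley(men_prefs, women_prefs):
    """Men-proposing deferred acceptance; preference lists are ordered most-preferred first."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}   # index of the next woman each man will propose to
    engaged = {}                              # woman -> man (provisional engagements)
    free_men = list(men_prefs)
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:                    # she is unengaged: "maybe"
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:  # she prefers the new proposer
            free_men.append(engaged[w])
            engaged[w] = m
        else:                                   # she rejects the proposal
            free_men.append(m)
    return {m: w for w, m in engaged.items()}

men   = {"A": "YXZ", "B": "ZYX", "C": "XZY"}
women = {"X": "BAC", "Y": "CBA", "Z": "ACB"}
print(gale_shapley(men, women))   # {'C': 'X', 'B': 'Z', 'A': 'Y'}: the men-optimal matching
```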
The hospitals/residents problem with couples allows the set of residents to include couples who must be assigned together, either to the same hospital or to a specific pair of hospitals chosen by the couple (e.g., a married couple want to ensure that they will stay together and not be stuck in programs that are far away from each other). The addition of couples to the hospitals/residents problem renders the problem NP-complete. The assignment problem seeks to find a matching in a weighted bipartite graph that has maximum weight. Maximum weighted matchings do not have to be stable, but in some applications a maximum weighted matching is better than a stable one. The matching with contracts problem is a generalization of matching problem, in which participants can be matched with different terms of contracts. An important special case of contracts is matching with flexible wages. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "e^{-1}n\\ln n" }, { "math_id": 1, "text": "O(n^2)" }, { "math_id": 2, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=681409
6814674
Malliavin derivative
In mathematics, the Malliavin derivative is a notion of derivative in the Malliavin calculus. Intuitively, it is the notion of derivative appropriate to paths in classical Wiener space, which are "usually" not differentiable in the usual sense. Definition. Let formula_0 be the Cameron–Martin space, and formula_1 denote classical Wiener space: formula_2; formula_3 By the Sobolev embedding theorem, formula_4. Let formula_5 denote the inclusion map. Suppose that formula_6 is Fréchet differentiable. Then the Fréchet derivative is a map formula_7 i.e., for paths formula_8, formula_9 is an element of formula_10, the dual space to formula_11. Denote by formula_12 the continuous linear map formula_13 defined by formula_14 sometimes known as the "H"-derivative. Now define formula_15 to be the adjoint of formula_16 in the sense that formula_17 Then the Malliavin derivative formula_18 is defined by formula_19 The domain of formula_18 is the set formula_20 of all Fréchet differentiable real-valued functions on formula_11; the codomain is formula_21. The Skorokhod integral formula_22 is defined to be the adjoint of the Malliavin derivative: formula_23 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
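The defining limit in formula_17 can be illustrated numerically: for a discretized Brownian path σ and a Cameron–Martin direction h, the finite difference (F(σ + εh) − F(σ))/ε approaches the pairing of the H-derivative with h. The functional F(σ) = ∫₀ᵀ σ(t)² dt used below is an arbitrary smooth example chosen for illustration; it is not taken from this article.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 2000
dt = T / n
t = np.linspace(0.0, T, n + 1)

# One sample path of Brownian motion (a point of classical Wiener space), starting at 0.
sigma = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

# A Cameron-Martin direction: h(0) = 0 with square-integrable derivative.
h = np.sin(np.pi * t)

F = lambda path: np.trapz(path**2, t)                 # example functional F(sigma) = int_0^T sigma(t)^2 dt

eps = 1e-6
finite_diff = (F(sigma + eps * h) - F(sigma)) / eps   # (F(sigma + eps*i(h)) - F(sigma)) / eps
analytic    = 2.0 * np.trapz(sigma * h, t)            # D_H F(sigma)(h) = 2 int sigma(t) h(t) dt
print(finite_diff, analytic)                          # the two numbers agree to high precision
```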
[ { "math_id": 0, "text": "H" }, { "math_id": 1, "text": "C_{0}" }, { "math_id": 2, "text": "H := \\{ f \\in W^{1,2} ([0, T]; \\mathbb{R}^{n}) \\;|\\; f(0) = 0 \\} := \\{ \\text{paths starting at 0 with first derivative in } L^{2} \\}" }, { "math_id": 3, "text": "C_{0} := C_{0} ([0, T]; \\mathbb{R}^{n}) := \\{ \\text{continuous paths starting at 0} \\};" }, { "math_id": 4, "text": "H \\subset C_0" }, { "math_id": 5, "text": "i : H \\to C_{0}" }, { "math_id": 6, "text": "F : C_{0} \\to \\mathbb{R}" }, { "math_id": 7, "text": "\\mathrm{D} F : C_{0} \\to \\mathrm{Lin} (C_{0}; \\mathbb{R});" }, { "math_id": 8, "text": "\\sigma \\in C_{0}" }, { "math_id": 9, "text": "\\mathrm{D} F (\\sigma)\\;" }, { "math_id": 10, "text": "C_{0}^{*}" }, { "math_id": 11, "text": "C_{0}\\;" }, { "math_id": 12, "text": "\\mathrm{D}_{H} F(\\sigma)\\;" }, { "math_id": 13, "text": "H \\to \\mathbb{R}" }, { "math_id": 14, "text": "\\mathrm{D}_{H} F (\\sigma) := \\mathrm{D} F (\\sigma) \\circ i : H \\to \\mathbb{R}, " }, { "math_id": 15, "text": "\\nabla_{H} F : C_{0} \\to H" }, { "math_id": 16, "text": "\\mathrm{D}_{H} F\\;" }, { "math_id": 17, "text": "\\int_0^T \\left(\\partial_t \\nabla_H F(\\sigma)\\right) \\cdot \\partial_t h := \\langle \\nabla_{H} F (\\sigma), h \\rangle_{H} = \\left( \\mathrm{D}_{H} F \\right) (\\sigma) (h) = \\lim_{t \\to 0} \\frac{F (\\sigma + t i(h)) - F(\\sigma)}{t}." }, { "math_id": 18, "text": "\\mathrm{D}_{t}" }, { "math_id": 19, "text": "\\left( \\mathrm{D}_{t} F \\right) (\\sigma) := \\frac{\\partial}{\\partial t} \\left( \\left( \\nabla_{H} F \\right) (\\sigma) \\right)." }, { "math_id": 20, "text": "\\mathbf{F}" }, { "math_id": 21, "text": "L^{2} ([0, T]; \\mathbb{R}^{n})" }, { "math_id": 22, "text": "\\delta\\;" }, { "math_id": 23, "text": "\\delta := \\left( \\mathrm{D}_{t} \\right)^{*} : \\operatorname{image} \\left( \\mathrm{D}_{t} \\right) \\subseteq L^{2} ([0, T]; \\mathbb{R}^{n}) \\to \\mathbf{F}^{*} = \\mathrm{Lin} (\\mathbf{F}; \\mathbb{R})." } ]
https://en.wikipedia.org/wiki?curid=6814674
681481
Immirzi parameter
Numerical coefficient in loop quantum gravity The Immirzi parameter (also known as the Barbero–Immirzi parameter) is a numerical coefficient appearing in loop quantum gravity (LQG), a nonperturbative theory of quantum gravity. The Immirzi parameter measures the size of the quantum of area in Planck units. As a result, its value is currently fixed by matching the semiclassical black hole entropy, as calculated by Stephen Hawking, and the counting of microstates in loop quantum gravity. The reality conditions. The Immirzi parameter arises in the process of expressing a Lorentz connection with noncompact group SO(3,1) in terms of a complex connection with values in a compact group of rotations, either SO(3) or its double cover SU(2). Although named after Giorgio Immirzi, the possibility of including this parameter was first pointed out by Fernando Barbero. The significance of this parameter remained obscure until the spectrum of the area operator in LQG was calculated. It turns out that the area spectrum is proportional to the Immirzi parameter. Black hole thermodynamics. In the 1970s Stephen Hawking, motivated by the analogy between the law of increasing area of black hole event horizons and the second law of thermodynamics, performed a semiclassical calculation showing that black holes are in equilibrium with thermal radiation outside them, and that black hole entropy (that is, the entropy of the black hole itself, not the entropy of the radiation in equilibrium with the black hole, which is infinite) equals formula_0 (in Planck units) In 1997, Ashtekar, Baez, Corichi and Krasnov quantized the classical phase space of the exterior of a black hole in vacuum General Relativity. They showed that the geometry of spacetime outside a black hole is described by spin networks, some of whose edges puncture the event horizon, contributing area to it, and that the quantum geometry of the horizon can be described by a U(1) Chern–Simons theory. The appearance of the group U(1) is explained by the fact that two-dimensional geometry is described in terms of the rotation group SO(2), which is isomorphic to U(1). The relationship between area and rotations is explained by Girard's theorem relating the area of a spherical triangle to its angular excess. By counting the number of spin-network states corresponding to an event horizon of area A, the entropy of black holes is seen to be formula_1 Here formula_2 is the Immirzi parameter and either formula_3 or formula_4 depending on the gauge group used in loop quantum gravity. So, by choosing the Immirzi parameter to be equal to formula_5, one recovers the Bekenstein–Hawking formula. This computation appears independent of the kind of black hole, since the given Immirzi parameter is always the same. However, Krzysztof Meissner and Marcin Domagala with Jerzy Lewandowski have corrected the assumption that only the minimal values of the spin contribute. Their result involves the logarithm of a transcendental number instead of the logarithms of integers mentioned above. The Immirzi parameter appears in the denominator because the entropy counts the number of edges puncturing the event horizon and the Immirzi parameter is proportional to the area contributed by each puncture. Immirzi parameter in spin foam theory. In late 2006, independent from the definition of isolated horizon theory, Ansari reported that in loop quantum gravity the eigenvalues of the area operator are symmetric by the ladder symmetry. 
Corresponding to each eigenvalue there are a finite number of degenerate states. One application could be if the classical null character of a horizon is disregarded in the quantum sector, in the lack of energy condition and presence of gravitational propagation the Immirzi parameter tunes to: formula_6 by the use of Olaf Dreyer's conjecture for identifying the evaporation of minimal area cell with the corresponding area of the highly damping quanta. This proposes a kinematical picture for defining a quantum horizon via spin foam models, however the dynamics of such a model has not yet been studied. Scale-invariant theory. For scale-invariant dilatonic theories of gravity with standard model-type matter couplings, Charles Wang and co-workers show that their loop quantization lead to a conformal class of Ashtekar–Barbero connection variables using the Immirzi parameter as a conformal gauge parameter without a preferred value. Accordingly, a different choice of the value for the Immirzi parameter for such a theory merely singles out a conformal frame without changing the physical descriptions. Interpretation. The parameter may be viewed as a renormalization of Newton's constant. Various speculative proposals to explain this parameter have been suggested: for example, an argument due to Olaf Dreyer based on quasinormal modes. Another more recent interpretation is that it is the measure of the value of parity violation in quantum gravity, analogous to the theta parameter of QCD, and its positive real value is necessary for the Kodama state of loop quantum gravity. As of today (2004), no alternative calculation of this constant exists. If a second match with experiment or theory (for example, the value of Newton's force at long distance) were found requiring a different value of the Immirzi parameter, it would constitute evidence that loop quantum gravity cannot reproduce the physics of general relativity at long distances. On the other hand, the Immirzi parameter seems to be the only free parameter of vacuum LQG, and once it is fixed by matching one calculation to an "experimental" result, it could in principle be used to predict other experimental results. Unfortunately, no such alternative calculations have been made so far. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
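For reference, the two candidate values formula_3 and formula_4 quoted above are easily evaluated numerically; the short Python snippet below prints them. Choosing the Immirzi parameter equal to the appropriate formula_5 makes the entropy formula_1 reduce to the Bekenstein–Hawking form formula_0.

```python
import math

gamma_ln2 = math.log(2) / (math.sqrt(3) * math.pi)   # ln(2)/(sqrt(3)*pi) ~ 0.1274
gamma_ln3 = math.log(3) / (math.sqrt(8) * math.pi)   # ln(3)/(sqrt(8)*pi) ~ 0.1236
print(gamma_ln2, gamma_ln3)
# With gamma set equal to gamma_0, S = gamma_0 * A / (4 * gamma) reduces to S = A/4.
```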
[ { "math_id": 0, "text": "\\, S=A/4\\!" }, { "math_id": 1, "text": "\\, S=\\gamma_0 A/4\\gamma.\\!" }, { "math_id": 2, "text": "\\gamma " }, { "math_id": 3, "text": "\\gamma_0=\\ln(2) / \\sqrt{3}\\pi" }, { "math_id": 4, "text": "\\gamma_0=\\ln(3) / \\sqrt{8}\\pi," }, { "math_id": 5, "text": "\\,\\gamma_0" }, { "math_id": 6, "text": "\\ln(3) / \\sqrt{8} \\pi, " } ]
https://en.wikipedia.org/wiki?curid=681481
68149028
Parity measurement
Parity measurement (also referred to as Operator measurement) is a procedure in quantum information science used for error detection in quantum qubits. A parity measurement checks the equality of two qubits to return a true or false answer, which can be used to determine whether a correction needs to occur. Additional measurements can be made for a system greater than two qubits. Because parity measurement does not measure the state of singular bits but rather gets information about the whole state, it is considered an example of a joint measurement. Joint measurements do not have the consequence of destroying the original state of a qubit as normal quantum measurements do. Mathematically speaking, parity measurements are used to project a state into an eigenstate of an operator and to acquire its eigenvalue. Parity measurement is an essential concept of quantum error correction. From the parity measurement, an appropriate unitary operation can be applied to correct the error without knowing the beginning state of the qubit. Parity and parity checking. A qubit is a two-level system, and when we measure one qubit, we can have either 1 or 0 as a result. One corresponds to odd parity, and zero corresponds to even parity. This is what a parity check is. This idea can be generalized beyond single qubits. This can be generalized beyond a single qubit and it is useful in QEC. The idea of parity checks in QEC is to have just parity information of multiple data qubits over one (auxiliary) qubit without revealing any other information. Any unitary can be used for the parity check. If we want to have the parity information of a valid quantum observable U, we need to apply the controlled-U gates between the ancilla qubit and the data qubits sequentially. For example, for making parity check measurement in the X basis, we need to apply CNOT gates between the ancilla qubit and the data qubits sequentially since the controlled gate in this case is a CNOT (CX) gate. The unique state of the ancillary qubit is then used to determine either even or odd parity of the qubits. When the qubits of the input states are equal, an even parity will be measured, indicating that no error has occurred. When the qubits are unequal, an odd parity will be measured, indicating a single bit-flip error. With more than two qubits, additional parity measurements can be performed to determine if the qubits are the same value, and if not, to find which is the outlier. For example, in a system of three qubits, one can first perform a parity measurement on the first and second qubit, and then on the first and third qubit. Specifically, one is measuring formula_0 to determine if an formula_1 error has occurred on the first two qubits, and then formula_2 to determine if an formula_1 error has occurred on the first and third qubits. In a circuit, an ancillary qubit is prepared in the formula_3 state. During measurement, a CNOT gate is performed on the ancillary bit dependent on the first qubit being checked, followed by a second CNOT gate performed on the ancillary bit dependent on the second qubit being checked. If these qubits are the same, the double CNOT gates will revert the ancillary qubit to its initial formula_3 state, which indicates even parity. If these qubits are not the same, the double CNOT gates will alter the ancillary qubit to the opposite formula_4 state, which indicates odd parity. Looking at the ancillary qubits, a corresponding correction can be performed. 
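The CNOT-based parity check described above can be simulated directly with state vectors. The sketch below (plain NumPy, no quantum-computing library) prepares two data qubits in an even- or odd-parity state, copies their Z-parity onto an ancilla prepared in |0⟩ with two CNOTs, and reads the ancilla; the helper functions are ad hoc for this illustration.

```python
import numpy as np

I2 = np.eye(2)
X  = np.array([[0.0, 1.0], [1.0, 0.0]])
P0 = np.diag([1.0, 0.0])               # projector onto |0> of the control qubit
P1 = np.diag([0.0, 1.0])               # projector onto |1> of the control qubit

def embed(ops, n):
    """Tensor single-qubit operators ops[q] (identity elsewhere) into an n-qubit operator.
    Qubit 0 is the most significant bit of the basis index."""
    out = np.array([[1.0]])
    for q in range(n):
        out = np.kron(out, ops.get(q, I2))
    return out

def cnot(control, target, n):
    return embed({control: P0}, n) + embed({control: P1, target: X}, n)

# Data qubits 0 and 1; ancilla qubit 2 prepared in |0>.
even = np.array([1, 0, 0, 1]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2): even parity
odd  = np.array([0, 1, 1, 0]) / np.sqrt(2)    # (|01> + |10>)/sqrt(2): odd parity
parity_check = cnot(1, 2, 3) @ cnot(0, 2, 3)  # copy the Z-parity of the data onto the ancilla

for name, data in (("even", even), ("odd", odd)):
    state = parity_check @ np.kron(data, np.array([1.0, 0.0]))
    p_one = np.sum(np.abs(state[1::2])**2)    # ancilla is the least significant bit
    print(name, round(p_one))                 # prints 0 for even parity, 1 for odd parity
```

Note that the ancilla outcome is deterministic for both inputs, while the data qubits are left in their (possibly entangled) state, which is the sense in which the joint measurement does not destroy the encoded information.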
Alternatively, the parity measurement can be thought of as projecting a qubit state onto an eigenstate of an operator and acquiring its eigenvalue. For the formula_0 measurement, checking the ancillary qubit in the basis formula_5 will return the eigenvalue of the measurement. If the eigenvalue here is measured to be +1, this indicates even parity of the bits without error. If the eigenvalue is measured to be -1, this indicates odd parity of the bits with a bit-flip error. Example. Alice, a sender, wants to transmit a qubit to Bob, a receiver. The state of any qubit that Alice would wish to send can be written as formula_6 where formula_7 and formula_8 are coefficients. Alice encodes this into three qubits, so that the initial state she transmits is formula_9. Following noise in the channel, the three-qubit state can be seen in the following table with the corresponding probability: A parity measurement can be performed on the altered state, with two ancillary qubits storing the measurement. First, the first and second qubits' parity is checked. If they are equal, a formula_11 is stored in the first ancillary qubit. If they are not equal, a formula_12 is stored in the first ancillary qubit. The same action is performed comparing the first and third qubits, with the check being stored in the second ancillary qubit. It is important to note that we do not actually need to know the input qubit state, and can perform the CNOT operations indicating the parity without this knowledge. The ancillary qubits indicate which bit has been altered, and the formula_10 correction operation can be performed as needed. An easy way to visualize this is in the circuit above. First, the input state formula_13 is encoded into three qubits, and parity checks are performed, with subsequent error correction based on the results of the ancilla qubits at the bottom. Finally, decoding is performed to return to the same basis as the input state. Parity check matrix. A parity check matrix for a quantum circuit can also be constructed using these principles. For some message "x" encoded as "Gx", where "G" corresponds to the generator matrix, "H"("Gx") = 0, where "H" is the parity matrix containing 0's and 1's for a situation where there is no error. However, if an error occurs at one component, then the pattern in the errors can be used to find which bit is incorrect. Types of parity measurements. Two types of parity measurement are indirect and direct. Indirect parity measurements coincide with the typical way we think of parity measurement as described above, by measuring an ancilla qubit to determine the parity of the input bits. Direct parity measurements differ from the previous type in that a common mode with the parities coupled to the qubits is measured, without the need for an ancilla qubit. While indirect parity measurements can put a strain on experimental capacity, direct measurements may interfere with the fidelity of the initial states. Example. For example, given a Hermitian and unitary operator formula_14 (whose eigenvalues are formula_15) and a state formula_16, the circuit on the top right performs a parity measurement on formula_14.
After the first Hadamard gate, the state of the circuit is formula_17 After applying the "controlled-U" gate, the state of the circuit evolves to formula_18 After applying the second Hadamard gate, the state of the circuit turns into formula_19 If the state of the top qubit after measurement is formula_3, then formula_20, which is (up to normalization) the formula_21 eigenstate of formula_14. If the state of the top qubit is formula_4, then formula_22, which is (up to normalization) the formula_23 eigenstate of formula_14. Experiments and applications. In experiments, parity measurements are not only a mechanism for quantum error correction, but they can also help combat non-ideal conditions. Beyond the existing possibility of bit-flip errors, there is an additional likelihood of errors as a result of leakage, a phenomenon in which unused higher-energy states become excited. It has been demonstrated in superconducting qubits that parity measurements can be applied repetitively during quantum error correction to remove leakage errors. Repetitive parity measurements can be used to stabilize an entangled state and prevent leakage errors (which normally is not possible with typical quantum error correction); the first group to accomplish this did so in 2020, by performing interleaved XX and ZZ checks, which can ultimately tell whether an X (bit-flip), Y (= iXZ), or Z (phase-flip) error has occurred. The outcomes of these parity measurements on the ancilla qubits are used with hidden Markov models to complete leakage detection and correction. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
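The Hadamard, controlled-U, Hadamard evolution worked out above can be verified numerically. The NumPy sketch below takes U = X on a single data qubit as an assumed example of a Hermitian, unitary operator with eigenvalues ±1 and checks that the final state equals ½|0⟩(|ψ⟩ + U|ψ⟩) + ½|1⟩(|ψ⟩ − U|ψ⟩).

```python
import numpy as np

I2 = np.eye(2)
H  = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
U  = np.array([[0.0, 1.0], [1.0, 0.0]])       # U = X: Hermitian, unitary, eigenvalues +/-1 (assumed example)

psi = np.array([0.8, 0.6])                    # an arbitrary normalized single-qubit data state
state = np.kron(np.array([1.0, 0.0]), psi)    # ancilla |0> (qubit 0, most significant) tensor |psi>

CU = np.block([[I2, np.zeros((2, 2))],        # controlled-U: act with U on the data qubit
               [np.zeros((2, 2)), U]])        # only when the ancilla is |1>

state = np.kron(H, I2) @ state                # first Hadamard on the ancilla
state = CU @ state                            # controlled-U
state = np.kron(H, I2) @ state                # second Hadamard on the ancilla

expected = 0.5 * np.concatenate([psi + U @ psi, psi - U @ psi])
print(np.allclose(state, expected))           # True: 1/2|0>(|psi>+U|psi>) + 1/2|1>(|psi>-U|psi>)
```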
[ { "math_id": 0, "text": " Z \\otimes Z \\otimes I\n" }, { "math_id": 1, "text": "X\n" }, { "math_id": 2, "text": " Z \\otimes I \\otimes Z\n" }, { "math_id": 3, "text": "|0\\rangle" }, { "math_id": 4, "text": "|1\\rangle" }, { "math_id": 5, "text": "|0\\rangle \\pm \\ |1\\rangle \n" }, { "math_id": 6, "text": "a\\ |0\\rangle +b\\ |1\\rangle \n" }, { "math_id": 7, "text": "a\\ \n" }, { "math_id": 8, "text": "b\\ \n" }, { "math_id": 9, "text": "a\\ |000\\rangle +b\\ |111\\rangle \n" }, { "math_id": 10, "text": "\\sigma_x " }, { "math_id": 11, "text": "|0\\rangle \n" }, { "math_id": 12, "text": "|1\\rangle \n" }, { "math_id": 13, "text": "|\\psi\\rangle \n" }, { "math_id": 14, "text": "U" }, { "math_id": 15, "text": "\\pm1" }, { "math_id": 16, "text": "|\\psi\\rangle" }, { "math_id": 17, "text": "\\frac{1}{\\sqrt{2}}(|0\\rangle |\\psi\\rangle + |1\\rangle |\\psi\\rangle) " }, { "math_id": 18, "text": "\\frac{1}{\\sqrt{2}}(|0\\rangle |\\psi\\rangle + |1\\rangle U|\\psi\\rangle) " }, { "math_id": 19, "text": "\\frac{1}{2}|0\\rangle(|\\psi\\rangle + U|\\psi\\rangle) + \\frac{1}{2}|1\\rangle(|\\psi\\rangle - U|\\psi\\rangle) " }, { "math_id": 20, "text": "|\\phi\\rangle = |\\psi\\rangle + U|\\psi\\rangle" }, { "math_id": 21, "text": "+1" }, { "math_id": 22, "text": "|\\phi\\rangle = |\\psi\\rangle - U|\\psi\\rangle" }, { "math_id": 23, "text": "-1" } ]
https://en.wikipedia.org/wiki?curid=68149028
681582
Effective field theory
Type of approximation to an underlying physical theory In physics, an effective field theory is a type of approximation, or effective theory, for an underlying physical theory, such as a quantum field theory or a statistical mechanics model. An effective field theory includes the appropriate degrees of freedom to describe physical phenomena occurring at a chosen length scale or energy scale, while ignoring substructure and degrees of freedom at shorter distances (or, equivalently, at higher energies). Intuitively, one averages over the behavior of the underlying theory at shorter length scales to derive what is hoped to be a simplified model at longer length scales. Effective field theories typically work best when there is a large separation between length scale of interest and the length scale of the underlying dynamics. Effective field theories have found use in particle physics, statistical mechanics, condensed matter physics, general relativity, and hydrodynamics. They simplify calculations, and allow treatment of dissipation and radiation effects. The renormalization group. Presently, effective field theories are discussed in the context of the renormalization group (RG) where the process of "integrating out" short distance degrees of freedom is made systematic. Although this method is not sufficiently concrete to allow the actual construction of effective field theories, the gross understanding of their usefulness becomes clear through an RG analysis. This method also lends credence to the main technique of constructing effective field theories, through the analysis of symmetries. If there is a single mass scale M in the "microscopic" theory, then the effective field theory can be seen as an expansion in 1/M. The construction of an effective field theory accurate to some power of 1/M requires a new set of free parameters at each order of the expansion in 1/M. This technique is useful for scattering or other processes where the maximum momentum scale k satisfies the condition k/M≪1. Since effective field theories are not valid at small length scales, they need not be renormalizable. Indeed, the ever expanding number of parameters at each order in 1/M required for an effective field theory means that they are generally not renormalizable in the same sense as quantum electrodynamics which requires only the renormalization of two parameters. Examples of effective field theories. Fermi theory of beta decay. The best-known example of an effective field theory is the Fermi theory of beta decay. This theory was developed during the early study of weak decays of nuclei when only the hadrons and leptons undergoing weak decay were known. The typical reactions studied were: formula_0 This theory posited a pointlike interaction between the four fermions involved in these reactions. The theory had great phenomenological success and was eventually understood to arise from the gauge theory of electroweak interactions, which forms a part of the standard model of particle physics. In this more fundamental theory, the interactions are mediated by a flavour-changing gauge boson, the W±. The immense success of the Fermi theory was because the W particle has mass of about 80 GeV, whereas the early experiments were all done at an energy scale of less than 10 MeV. Such a separation of scales, by over 3 orders of magnitude, has not been met in any other situation as yet. BCS theory of superconductivity. Another famous example is the BCS theory of superconductivity. 
Here the underlying theory is the theory of electrons in a metal interacting with lattice vibrations called phonons. The phonons cause attractive interactions between some electrons, causing them to form Cooper pairs. The length scale of these pairs is much larger than the wavelength of phonons, making it possible to neglect the dynamics of phonons and construct a theory in which two electrons effectively interact at a point. This theory has had remarkable success in describing and predicting the results of experiments on superconductivity. Effective field theories in gravity. General relativity itself is expected to be the low energy effective field theory of a full theory of quantum gravity, such as string theory or Loop Quantum Gravity. The expansion scale is the Planck mass. Effective field theories have also been used to simplify problems in General Relativity, in particular in calculating the gravitational wave signature of inspiralling finite-sized objects. The most common EFT in GR is "Non-Relativistic General Relativity" (NRGR), which is similar to the post-Newtonian expansion. Another common GR EFT is the Extreme Mass Ratio (EMR), which in the context of the inspiralling problem is called EMRI. Other examples. Presently, effective field theories are written for many situations. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
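As a numerical illustration of the separation of scales behind the Fermi theory discussed above, the standard tree-level matching relation G_F/√2 = g²/(8 M_W²) reproduces the strength of the four-fermion contact interaction once the W boson is integrated out. The relation and the rough input values below are standard electroweak numbers, not quantities quoted in this article.

```python
import math

g   = 0.65        # SU(2) gauge coupling (approximate)
M_W = 80.4        # W boson mass in GeV (approximate)
G_F = math.sqrt(2) * g**2 / (8 * M_W**2)
print(G_F)        # ~1.2e-5 GeV^-2, close to the measured Fermi constant 1.166e-5 GeV^-2
```

The smallness of G_F in GeV units simply reflects the large W mass compared with the MeV-scale energies of the early beta-decay experiments, which is why the pointlike effective description worked so well.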
[ { "math_id": 0, "text": "\n\\begin{align}\nn & \\to p+e^-+\\overline\\nu_e \\\\\n\\mu^- & \\to e^-+\\overline\\nu_e+\\nu_\\mu.\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=681582
681625
Group-velocity dispersion
Dependence of group velocity on frequency In optics, group-velocity dispersion (GVD) is a characteristic of a dispersive medium, used most often to determine how the medium affects the duration of an optical pulse traveling through it. Formally, GVD is defined as the derivative of the inverse of group velocity of light in a material with respect to angular frequency, formula_0 where formula_1 and formula_2 are angular frequencies, and the group velocity formula_3 is defined as formula_4. The units of group-velocity dispersion are [time]²/[distance], often expressed in fs²/mm. Equivalently, group-velocity dispersion can be defined in terms of the medium-dependent wave vector formula_5 according to formula_6 or in terms of the refractive index formula_7 according to formula_8 Applications. Group-velocity dispersion is most commonly used to estimate the amount of chirp that will be imposed on a pulse of light after passing through a material of interest: formula_9 Derivation. A simple illustration of how GVD can be used to determine pulse chirp can be seen by looking at the effect of a transform-limited pulse of duration formula_10 passing through a planar medium of thickness "d". Before passing through the medium, the phase offsets of all frequencies are aligned in time, and the pulse can be described as a function of time, formula_11 or equivalently, as a function of frequency, formula_12 (the parameters "A" and "B" are normalization constants). Passing through the medium results in a frequency-dependent phase accumulation formula_13, such that the post-medium pulse can be described by formula_14 In general, the refractive index formula_7, and therefore the wave vector formula_15, can be an arbitrary function of formula_1, making it difficult to analytically perform the inverse Fourier transform back into the time domain. However, if the bandwidth of the pulse is narrow relative to the curvature of formula_16, then good approximations of the impact of the refractive index can be obtained by replacing formula_5 with its Taylor expansion centered about formula_2: formula_17 Truncating this expression and inserting it into the post-medium frequency-domain expression results in a post-medium time-domain expression formula_18 As a result, the pulse is lengthened to an intensity standard deviation value of formula_19 thus validating the initial expression. Note that a transform-limited pulse has formula_20, which makes it appropriate to identify 1/(2"σt") as the bandwidth. Alternate derivation. An alternate derivation of the relationship between pulse chirp and GVD, which more immediately illustrates the reason why GVD can be defined by the derivative of inverse group velocity, can be outlined as follows. Consider two transform-limited pulses of carrier frequencies formula_21 and formula_22, which are initially overlapping in time. After passing through the medium, these two pulses will exhibit a time delay between their respective pulse-envelope centers, given by formula_23 The expression can be approximated as a Taylor expansion, giving formula_24 or formula_25 From here it is possible to imagine scaling this expression up from two pulses to infinitely many. The frequency difference formula_26 must be replaced by the bandwidth, and the time delay formula_27 evolves into the induced chirp. Group-delay dispersion. A closely related yet independent quantity is the group-delay dispersion (GDD), defined such that group-velocity dispersion is the group-delay dispersion per unit length.
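Before turning to group-delay dispersion, here is a minimal numerical sketch of the pulse-broadening expression derived above (the post-medium duration is the quadrature sum of the initial duration and the chirp term d × GVD / (2σ)). The pulse duration, material thickness, and the GVD value of roughly 36 fs²/mm, a figure often quoted for fused silica near 800 nm, are illustrative assumptions rather than values taken from this article.

```python
import math

def broadened_duration(sigma_fs, gvd_fs2_per_mm, thickness_mm):
    """Post-medium intensity standard deviation:
    sigma_post = sqrt(sigma^2 + (d * GVD / (2 * sigma))^2)."""
    chirp_term = thickness_mm * gvd_fs2_per_mm / (2.0 * sigma_fs)
    return math.sqrt(sigma_fs ** 2 + chirp_term ** 2)

sigma0 = 10.0   # fs, assumed initial transform-limited duration
gvd = 36.0      # fs^2/mm, assumed GVD (roughly fused silica near 800 nm)
for d in (1.0, 5.0, 10.0):  # mm of material
    print(f"d = {d:4.1f} mm -> sigma_post = {broadened_duration(sigma0, gvd, d):6.1f} fs")
```

Even a few millimetres of such material noticeably stretches a 10 fs pulse, which is why GVD is often the first quantity estimated when propagating ultrashort pulses through optics.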
GDD is commonly used as a parameter in characterizing layered mirrors, where the group-velocity dispersion is not particularly well-defined, yet the chirp induced after bouncing off the mirror can be well-characterized. The units of group-delay dispersion are [time]², often expressed in fs². The group-delay dispersion (GDD) of an optical element is the derivative of the group delay with respect to angular frequency, and also the second derivative of the optical phase: formula_28 It is a measure of the chromatic dispersion of the element. GDD is related to the total dispersion parameter formula_29 as formula_30 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n \\text{GVD}(\\omega_0) \\equiv \\frac{\\partial}{\\partial \\omega} \\left( \\frac{1}{v_g(\\omega)} \\right)_{\\omega=\\omega_0},\n" }, { "math_id": 1, "text": "\\omega" }, { "math_id": 2, "text": "\\omega_0" }, { "math_id": 3, "text": "v_g(\\omega)" }, { "math_id": 4, "text": "v_g(\\omega) \\equiv \\partial \\omega / \\partial k" }, { "math_id": 5, "text": "k(\\omega)" }, { "math_id": 6, "text": "\n \\text{GVD}(\\omega_0) \\equiv \\left( \\frac{\\partial^2 k}{\\partial \\omega^2}\\right)_{\\omega=\\omega_0},\n" }, { "math_id": 7, "text": "n(\\omega)" }, { "math_id": 8, "text": "\n \\text{GVD}(\\omega_0) \\equiv \\frac{2}{c} \\left(\\frac{\\partial n}{\\partial \\omega}\\right)_{\\omega=\\omega_0} +\n \\frac{\\omega_0}{c}\\left( \\frac{\\partial^2 n}{\\partial \\omega^2}\\right)_{\\omega=\\omega_0}.\n" }, { "math_id": 9, "text": "\n \\text{chirp} = (\\text{material thickness}) \\times \\text{GVD}(\\omega_0) \\times (\\text{bandwidth}).\n" }, { "math_id": 10, "text": "\\sigma" }, { "math_id": 11, "text": "\n E(t) = Ae^{-\\frac{t^2}{4 \\sigma^2}} e^{-i \\omega_0 t},\n" }, { "math_id": 12, "text": "\n E(\\omega) = Be^{-\\frac{(w - w_0)^2}{4 (1/2\\sigma)^2}}\n" }, { "math_id": 13, "text": "\\Delta \\phi(\\omega) = k(\\omega) d" }, { "math_id": 14, "text": "\n E(\\omega) = Be^{-\\frac{(w - w_0)^2}{4 (1/2\\sigma)^2}} e^{i k(\\omega) d}.\n" }, { "math_id": 15, "text": "k(\\omega) = n(\\omega)\\omega/c" }, { "math_id": 16, "text": "n" }, { "math_id": 17, "text": "\n \\frac{n(\\omega)\\omega}{c} = \\underbrace{\\frac{n(\\omega_0)\\omega_0}{c}}_{k(\\omega_0)} +\n \\underbrace{\\left[ \\frac{n(\\omega_0) + n'(\\omega_0)\\omega_0}{c}\\right]}_{k'(\\omega_0)}(\\omega - \\omega_0) +\n \\frac{1}{2} \\underbrace{\\left[ \\frac{2 n'(\\omega_0) + n''(\\omega_0)\\omega_0}{c} \\right]}_{\\text{GVD}} (\\omega - \\omega_0)^2 +\n \\dots\n" }, { "math_id": 18, "text": "\n E_\\text{post}(t) = A_\\text{post} \\exp\\left[-\\frac{\\left(t - k'(\\omega_0) d\\right)^2}{4 \\left(\\sigma^2 - i\\, \\text{GVD}\\, d/2\\right)}\\right] e^{i[k(\\omega_0) d - \\omega_0 t]}.\n" }, { "math_id": 19, "text": "\n \\sigma_\\text{post} = \\sqrt{ \\sigma^2 + \\left[ d\\, \\textrm{GVD}(\\omega_0) \\frac{1}{2\\sigma}\\right]^2 },\n" }, { "math_id": 20, "text": "\\sigma_\\omega \\sigma_t = 1/2" }, { "math_id": 21, "text": "\\omega_1" }, { "math_id": 22, "text": "\\omega_2" }, { "math_id": 23, "text": "\n \\Delta T = d \\left( \\frac{1}{v_g(\\omega_2)} - \\frac{1}{v_g(\\omega_1)} \\right).\n" }, { "math_id": 24, "text": "\n \\Delta T = d \\left(\n \\frac{1}{v_g(\\omega_1)} +\n \\frac{\\partial}{\\partial \\omega}\\left( \\frac{1}{v_g(\\omega')}\\right)_{\\omega' = \\omega_1} (\\omega_2 - \\omega_1) -\n \\frac{1}{v_g(\\omega_1)}\n \\right),\n" }, { "math_id": 25, "text": "\n \\Delta T = d \\times \\textrm{GVD}(\\omega_1) \\times (\\omega_2 - \\omega_1).\n" }, { "math_id": 26, "text": "\\omega_2 - \\omega_1" }, { "math_id": 27, "text": "\\Delta T" }, { "math_id": 28, "text": "\n D_2(\\omega) = -\\frac{\\partial T_g}{d \\omega} = \\frac{d^2 \\phi}{d\\omega^2}.\n" }, { "math_id": 29, "text": "D_\\text{tot}" }, { "math_id": 30, "text": "\n D_2(\\omega) = -\\frac{2\\pi c}{\\lambda^2} D_\\text{tot}.\n" } ]
https://en.wikipedia.org/wiki?curid=681625
681628
Quasinormal mode
Quasinormal modes (QNM) are the modes of energy dissipation of a perturbed object or field, "i.e." they describe perturbations of a field that decay in time. Example. A familiar example is the perturbation (gentle tap) of a wine glass with a knife: the glass begins to ring; it rings with a set, or superposition, of its natural frequencies — its modes of sonic energy dissipation. One could call these modes "normal" if the glass went on ringing forever. Here the amplitude of oscillation decays in time, so we call its modes "quasi-normal". To a high degree of accuracy, quasinormal ringing can be approximated by formula_0 where formula_1 is the amplitude of oscillation, formula_2 is the frequency, and formula_3 is the decay rate. The quasinormal frequency is described by two numbers, formula_4 or, more compactly, formula_5 formula_6 Here, formula_7 is what is commonly referred to as the quasinormal mode frequency. It is a complex number with two pieces of information: the real part is the temporal oscillation; the imaginary part is the temporal, exponential decay. In certain cases the amplitude of the wave decays quickly; to follow the decay for a longer time one may plot formula_8 Mathematical physics. In theoretical physics, a quasinormal mode is a formal solution of linearized differential equations (such as the linearized equations of general relativity constraining perturbations around a black hole solution) with a complex eigenvalue (frequency). Black holes have many quasinormal modes (also: ringing modes) that describe the exponential decrease of asymmetry of the black hole in time as it evolves towards the perfect spherical shape. Recently, the properties of quasinormal modes have been tested in the context of the AdS/CFT correspondence. Also, the asymptotic behavior of quasinormal modes was proposed to be related to the Immirzi parameter in loop quantum gravity, but convincing arguments have not been found yet. Electromagnetism and photonics. There are essentially two types of resonators in optics. In the first type, a high-Q factor optical microcavity is achieved with lossless dielectric optical materials, with mode volumes of the order of a cubic wavelength, essentially limited by the diffraction limit. Famous examples of high-Q microcavities are micropillar cavities, microtoroid resonators, and photonic-crystal cavities. In the second type of resonators, the characteristic size is well below the diffraction limit, routinely by 2-3 orders of magnitude. In such small volumes, energies are stored for a short period of time. A plasmonic nanoantenna supporting a localized surface plasmon quasinormal mode essentially behaves as a poor antenna that radiates energy rather than storing it. Thus, as the optical mode becomes deeply sub-wavelength in all three dimensions, independent of its shape, the Q-factor is limited to about 10 or less. Formally, the resonances (i.e., the quasinormal modes) of an open (non-Hermitian) electromagnetic micro- or nanoresonator are all found by solving the time-harmonic source-free Maxwell’s equations with a complex frequency, the real part being the resonance frequency and the imaginary part the damping rate. The damping is due to energy losses via leakage (the resonator is coupled to the open space surrounding it) and/or material absorption. Quasinormal-mode solvers exist to efficiently compute and normalize all kinds of modes of plasmonic nanoresonators and photonic microcavities.
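Returning to the damped ringing described in the Example section, the short sketch below evaluates the quasinormal form exp(-ω″t) cos(ω′t) and recovers the decay rate from the slope of log|ψ(t)| sampled at the oscillation peaks; the frequencies used are made-up illustrative numbers, not values from any referenced system.

```python
import numpy as np

omega_re = 2.0 * np.pi * 440.0   # assumed temporal oscillation (rad/s)
omega_im = 5.0                   # assumed temporal decay rate (1/s)

def psi(t):
    # Quasinormal ringing: an exponentially damped oscillation.
    return np.exp(-omega_im * t) * np.cos(omega_re * t)

# At the cosine peaks |psi(t_k)| = exp(-omega'' t_k), so log|psi| versus t
# is a straight line of slope -omega''; this is why plotting log|psi(t)|
# helps when the amplitude decays quickly.
t_peaks = np.arange(50) * np.pi / omega_re
slope, _ = np.polyfit(t_peaks, np.log(np.abs(psi(t_peaks))), 1)
print(f"recovered decay rate: {-slope:.3f} 1/s (input: {omega_im})")
```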
The proper normalisation of the mode leads to the important concept of mode volume of non-Hermitian (open and lossy) systems. The mode volume directly impacts the physics of the interaction of light and electrons with optical resonances, e.g., the local density of electromagnetic states, the Purcell effect, cavity perturbation theory, strong interaction with quantum emitters, and superradiance. Biophysics. In computational biophysics, quasinormal modes, also called quasiharmonic modes, are derived from diagonalizing the matrix of equal-time correlations of atomic fluctuations. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\psi(t) \\approx e^{-\\omega^{\\prime\\prime}t}\\cos\\omega^{\\prime}t" }, { "math_id": 1, "text": "\\psi\\left(t\\right)" }, { "math_id": 2, "text": "\\omega^{\\prime}" }, { "math_id": 3, "text": "\\omega^{\\prime\\prime}" }, { "math_id": 4, "text": "\\omega = \\left(\\omega^{\\prime} , \\omega^{\\prime\\prime}\\right)" }, { "math_id": 5, "text": "\\psi\\left(t\\right) \\approx \\operatorname{Re}(e^{i\\omega t})" }, { "math_id": 6, "text": "\\omega =\\omega^{\\prime} + i\\omega^{\\prime\\prime}" }, { "math_id": 7, "text": "\\mathbf{\\omega}" }, { "math_id": 8, "text": "\\log\\left|\\psi(t)\\right|" } ]
https://en.wikipedia.org/wiki?curid=681628
68162942
Graph neural network
Class of artificial neural networks &lt;templatestyles src="Machine learning/styles.css"/&gt; A graph neural network (GNN) belongs to a class of artificial neural networks for processing data that can be represented as graphs. In the more general subject of "geometric deep learning", certain existing neural network architectures can be interpreted as GNNs operating on suitably defined graphs. A convolutional neural network layer, in the context of computer vision, can be seen as a GNN applied to graphs whose nodes are pixels and only adjacent pixels are connected by edges in the graph. A transformer layer, in natural language processing, can be seen as a GNN applied to complete graphs whose nodes are words or tokens in a passage of natural language text. The key design element of GNNs is the use of "pairwise message passing", such that graph nodes iteratively update their representations by exchanging information with their neighbors. Since their inception, several different GNN architectures have been proposed, which implement different flavors of message passing, starting with recursive or convolutional constructive approaches. As of 2022, whether it is possible to define GNN architectures "going beyond" message passing, or if every GNN can be built on message passing over suitably defined graphs, is an open research question. Relevant application domains for GNNs include natural language processing, social networks, citation networks, molecular biology, chemistry, physics, and NP-hard combinatorial optimization problems. Several open source libraries implementing graph neural networks are available, such as PyTorch Geometric (PyTorch), TensorFlow GNN (TensorFlow), jraph (Google JAX), and GraphNeuralNetworks.jl/GeometricFlux.jl (Julia, Flux). Architecture. The architecture of a generic GNN implements the following fundamental layers: permutation-equivariant (message passing) layers, local pooling layers, and a global pooling (readout) layer. It has been demonstrated that GNNs cannot be more expressive than the Weisfeiler–Leman Graph Isomorphism Test. In practice, this means that there exist different graph structures (e.g., molecules with the same atoms but different bonds) that cannot be distinguished by GNNs. More powerful GNNs operating on higher-dimension geometries such as simplicial complexes can be designed. As of 2022, whether or not future architectures will overcome the message passing primitive is an open research question. Message passing layers. Message passing layers are permutation-equivariant layers mapping a graph into an updated representation of the same graph. Formally, they can be expressed as message passing neural networks (MPNNs). Let formula_1 be a graph, where formula_2 is the node set and formula_3 is the edge set. Let formula_4 be the neighbourhood of some node formula_5. Additionally, let formula_6 be the features of node formula_5, and formula_7 be the features of edge formula_8. An MPNN layer can be expressed as follows: formula_9 where formula_0 and formula_10 are differentiable functions (e.g., artificial neural networks), and formula_11 is a permutation invariant aggregation operator that can accept an arbitrary number of inputs (e.g., element-wise sum, mean, or max). In particular, formula_0 and formula_10 are referred to as "update" and "message" functions, respectively. Intuitively, in an MPNN computational block, graph nodes "update" their representations by "aggregating" the "messages" received from their neighbours. The outputs of one or more MPNN layers are node representations formula_12 for each node formula_5 in the graph.
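The following is a minimal NumPy sketch of a single MPNN layer as defined above. Taking the message function ψ and the update function φ to be single linear maps followed by a ReLU, using sum aggregation, and the toy graph itself are all illustrative assumptions; any differentiable functions and any permutation-invariant aggregator fit the definition.

```python
import numpy as np

rng = np.random.default_rng(0)

def mpnn_layer(x, edges, edge_attr, W_msg, W_upd):
    """h_u = phi(x_u, sum over neighbours v of psi(x_u, x_v, e_uv)),
    with psi and phi taken here to be linear maps followed by ReLU."""
    n = x.shape[0]
    agg = np.zeros((n, W_msg.shape[1]))
    for (u, v), e in zip(edges, edge_attr):
        message = np.maximum(0.0, np.concatenate([x[u], x[v], e]) @ W_msg)  # psi
        agg[u] += message                                                   # sum aggregation
    return np.maximum(0.0, np.concatenate([x, agg], axis=1) @ W_upd)        # phi

# Toy graph: 3 nodes with 4-dimensional features and 2-dimensional edge features.
x = rng.normal(size=(3, 4))
edges = [(0, 1), (0, 2), (1, 0), (2, 0)]          # (receiver, sender) pairs
edge_attr = [rng.normal(size=2) for _ in edges]

W_msg = rng.normal(size=(4 + 4 + 2, 8))   # psi: [x_u, x_v, e_uv] -> 8-dim message
W_upd = rng.normal(size=(4 + 8, 8))       # phi: [x_u, aggregated messages] -> h_u
print(mpnn_layer(x, edges, edge_attr, W_msg, W_upd).shape)   # (3, 8)
```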
Node representations can be employed for any downstream task, such as node/graph classification or edge prediction. Graph nodes in an MPNN update their representation by aggregating information from their immediate neighbours. As such, stacking formula_13 MPNN layers means that one node will be able to communicate with nodes that are at most formula_13 "hops" away. In principle, to ensure that every node receives information from every other node, one would need to stack a number of MPNN layers equal to the graph diameter. However, stacking many MPNN layers may cause issues such as oversmoothing and oversquashing. Oversmoothing refers to the issue of node representations becoming indistinguishable. Oversquashing refers to the bottleneck that is created by squeezing long-range dependencies into fixed-size representations. Countermeasures such as skip connections (as in residual neural networks), gated update rules and jumping knowledge can mitigate oversmoothing. Modifying the final layer to be a fully-adjacent layer, i.e., by considering the graph as a complete graph, can mitigate oversquashing in problems where long-range dependencies are required. Other "flavours" of MPNN have been developed in the literature, such as graph convolutional networks and graph attention networks, whose definitions can be expressed in terms of the MPNN formalism. Graph convolutional network. The graph convolutional network (GCN) was first introduced by Thomas Kipf and Max Welling in 2017. A GCN layer defines a first-order approximation of a localized spectral filter on graphs. GCNs can be understood as a generalization of convolutional neural networks to graph-structured data. The formal expression of a GCN layer reads as follows: formula_14 where formula_15 is the matrix of node representations formula_12, formula_16 is the matrix of node features formula_6, formula_17 is an activation function (e.g., ReLU), formula_18 is the graph adjacency matrix with the addition of self-loops, formula_19 is the graph degree matrix with the addition of self-loops, and formula_20 is a matrix of trainable parameters. In particular, let formula_21 be the graph adjacency matrix: then, one can define formula_22 and formula_23, where formula_24 denotes the identity matrix. This normalization ensures that the eigenvalues of formula_25 are bounded in the range formula_26, avoiding numerical instabilities and exploding/vanishing gradients. A limitation of GCNs is that they do not allow multidimensional edge features formula_7. It is, however, possible to associate a scalar weight formula_27 with each edge by imposing formula_28, i.e., by setting each nonzero entry in the adjacency matrix equal to the weight of the corresponding edge. Graph attention network. The graph attention network (GAT) was introduced by Petar Veličković et al. in 2018. A graph attention network combines a graph neural network with an attention layer. The attention mechanism lets the network focus on the most relevant parts of its input instead of weighting all of the data equally. A multi-head GAT layer can be expressed as follows: formula_29 where formula_30 is the number of attention heads, formula_31 denotes vector concatenation, formula_17 is an activation function (e.g., ReLU), formula_32 are attention coefficients, and formula_33 is a matrix of trainable parameters for the formula_34-th attention head.
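Stepping back to the GCN layer defined earlier in this section, the sketch below implements the propagation rule (normalized adjacency with self-loops, followed by a linear map and an activation) in plain NumPy; the toy graph, features and weight matrix are illustrative assumptions, and in practice the weights would be learned by gradient descent.

```python
import numpy as np

def gcn_layer(A, X, Theta):
    """One GCN layer: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X Theta)."""
    A_tilde = A + np.eye(A.shape[0])            # adjacency with self-loops
    d = A_tilde.sum(axis=1)                     # degrees including self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # symmetrically normalized adjacency
    return np.maximum(0.0, A_hat @ X @ Theta)

# Toy 4-node path graph with 3-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.arange(12, dtype=float).reshape(4, 3)
Theta = 0.1 * np.ones((3, 2))
print(gcn_layer(A, X, Theta))   # a (4, 2) matrix of updated node representations
```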
For the final GAT layer, the outputs from each attention head are averaged before the application of the activation function. Formally, the final GAT layer can be written as: formula_35 Attention in machine learning is a technique that mimics cognitive attention. In the context of learning on graphs, the attention coefficient formula_36 measures "how important" node formula_5 is to node formula_37. Normalized attention coefficients are computed as follows: formula_38 where formula_39 is a vector of learnable weights, formula_40 indicates transposition, and formula_41 is a modified ReLU activation function. Attention coefficients are normalized to make them easily comparable across different nodes. A GCN can be seen as a special case of a GAT where attention coefficients are not learnable, but fixed and equal to the edge weights formula_27. Gated graph sequence neural network. The gated graph sequence neural network (GGS-NN) was introduced by Yujia Li et al. in 2015. The GGS-NN extends the GNN formulation by Scarselli et al. to output sequences. The message passing framework is implemented as an update rule to a gated recurrent unit (GRU) cell. A GGS-NN can be expressed as follows: formula_42 formula_43 formula_44 where formula_45 denotes vector concatenation, formula_46 is a vector of zeros, formula_20 is a matrix of learnable parameters, formula_47 is a GRU cell, and formula_48 denotes the sequence index. In a GGS-NN, the node representations are regarded as the hidden states of a GRU cell. The initial node features formula_49 are zero-padded up to the hidden state dimension of the GRU cell. The same GRU cell is used for updating representations for each node. Local pooling layers. Local pooling layers coarsen the graph via downsampling. We present here several learnable local pooling strategies that have been proposed. In each case, the input graph is represented by a matrix formula_16 of node features and the graph adjacency matrix formula_21. The output is a new matrix formula_50 of node features and a new graph adjacency matrix formula_51. Top-k pooling. We first set formula_52 where formula_53 is a learnable projection vector. The projection vector formula_53 computes a scalar projection value for each graph node. The top-k pooling layer can then be formalised as follows: formula_54 formula_55 where formula_56 is the subset of nodes with the top-k highest projection scores, formula_57 denotes element-wise matrix multiplication, and formula_58 is the sigmoid function. In other words, the nodes with the top-k highest projection scores are retained in the new adjacency matrix formula_51. The formula_58 operation makes the projection vector formula_53 trainable by backpropagation, which otherwise would produce discrete outputs. Self-attention pooling. We first set formula_59 where formula_60 is a generic permutation equivariant GNN layer (e.g., GCN, GAT, MPNN). The self-attention pooling layer can then be formalised as follows: formula_61 formula_55 where formula_56 is the subset of nodes with the top-k highest projection scores, and formula_57 denotes element-wise matrix multiplication. The self-attention pooling layer can be seen as an extension of the top-k pooling layer. Unlike top-k pooling, the self-attention scores computed in self-attention pooling account for both the graph features and the graph topology. Applications. Protein folding.
Graph neural networks are one of the main building blocks of AlphaFold, an artificial intelligence program developed by Google's DeepMind for solving the protein folding problem in biology. AlphaFold achieved first place in several CASP competitions. Social networks. Social networks are a major application domain for GNNs due to their natural representation as social graphs. GNNs are used to develop recommender systems based on both social relations and item relations. Combinatorial optimization. GNNs are used as fundamental building blocks for several combinatorial optimization algorithms. Examples include computing shortest paths or Eulerian circuits for a given graph, deriving chip placements superior to or competitive with handcrafted human solutions, and improving expert-designed branching rules in branch and bound. Cyber security. When viewed as a graph, a network of computers can be analyzed with GNNs for anomaly detection. Anomalies within provenance graphs often correlate to malicious activity within the network. GNNs have been used to identify these anomalies on individual nodes and within paths to detect malicious processes, or on the edge level to detect lateral movement. Water distribution networks. Water distribution systems can be modelled as graphs, making them a straightforward application of GNNs. This kind of algorithm has been applied to water demand forecasting, interconnecting District Measuring Areas to improve the forecasting capacity. Another application of this algorithm in water distribution modelling is the development of metamodels. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\phi" }, { "math_id": 1, "text": "G = (V,E)" }, { "math_id": 2, "text": "V" }, { "math_id": 3, "text": "E" }, { "math_id": 4, "text": "N_u" }, { "math_id": 5, "text": "u \\in V" }, { "math_id": 6, "text": "\\mathbf{x}_u" }, { "math_id": 7, "text": "\\mathbf{e}_{uv}" }, { "math_id": 8, "text": "(u, v) \\in E" }, { "math_id": 9, "text": "\\mathbf{h}_u = \\phi \\left( \\mathbf{x}_u, \\bigoplus_{v \\in N_u} \\psi(\\mathbf{x}_u, \\mathbf{x}_v, \\mathbf{e}_{uv}) \\right)" }, { "math_id": 10, "text": "\\psi" }, { "math_id": 11, "text": "\\bigoplus" }, { "math_id": 12, "text": "\\mathbf{h}_u" }, { "math_id": 13, "text": "n" }, { "math_id": 14, "text": "\\mathbf{H} = \\sigma\\left(\\tilde{\\mathbf{D}}^{-\\frac{1}{2}} \\tilde{\\mathbf{A}} \\tilde{\\mathbf{D}}^{-\\frac{1}{2}} \\mathbf{X} \\mathbf{\\Theta}\\right)" }, { "math_id": 15, "text": "\\mathbf{H}" }, { "math_id": 16, "text": "\\mathbf{X}" }, { "math_id": 17, "text": "\\sigma(\\cdot)" }, { "math_id": 18, "text": "\\tilde{\\mathbf{A}}" }, { "math_id": 19, "text": "\\tilde{\\mathbf{D}}" }, { "math_id": 20, "text": "\\mathbf{\\Theta}" }, { "math_id": 21, "text": "\\mathbf{A}" }, { "math_id": 22, "text": "\\tilde{\\mathbf{A}} = \\mathbf{A} + \\mathbf{I}" }, { "math_id": 23, "text": "\\tilde{\\mathbf{D}}_{ii} = \\sum_{j \\in V} \\tilde{A}_{ij}" }, { "math_id": 24, "text": "\\mathbf{I}" }, { "math_id": 25, "text": "\\tilde{\\mathbf{D}}^{-\\frac{1}{2}} \\tilde{\\mathbf{A}} \\tilde{\\mathbf{D}}^{-\\frac{1}{2}}" }, { "math_id": 26, "text": "[0, 1]" }, { "math_id": 27, "text": "w_{uv}" }, { "math_id": 28, "text": "A_{uv} = w_{uv}" }, { "math_id": 29, "text": " \\mathbf{h}_u = \\overset{K}{\\underset{k=1}{\\Big\\Vert}} \\sigma \\left(\\sum_{v \\in N_u} \\alpha_{uv} \\mathbf{W}^k \\mathbf{x}_v\\right) " }, { "math_id": 30, "text": "K" }, { "math_id": 31, "text": "\\Big\\Vert" }, { "math_id": 32, "text": "\\alpha_{ij}" }, { "math_id": 33, "text": "W^k" }, { "math_id": 34, "text": "k" }, { "math_id": 35, "text": " \\mathbf{h}_u = \\sigma \\left(\\frac{1}{K}\\sum_{k=1}^K \\sum_{v \\in N_u} \\alpha_{uv} \\mathbf{W}^k \\mathbf{x}_v\\right) " }, { "math_id": 36, "text": "\\alpha_{uv}" }, { "math_id": 37, "text": "v \\in V" }, { "math_id": 38, "text": "\\alpha_{uv} = \\frac{\\exp(\\text{LeakyReLU}\\left(\\mathbf{a}^T [\\mathbf{W} \\mathbf{h}_u \\Vert \\mathbf{W} \\mathbf{h}_v \\Vert \\mathbf{e}_{uv}]\\right))}{\\sum_{z \\in N_u}\\exp(\\text{LeakyReLU}\\left(\\mathbf{a}^T [\\mathbf{W} \\mathbf{h}_u \\Vert \\mathbf{W} \\mathbf{h}_z \\Vert \\mathbf{e}_{uz}]\\right))}" }, { "math_id": 39, "text": "\\mathbf{a}" }, { "math_id": 40, "text": "\\cdot^T" }, { "math_id": 41, "text": "\\text{LeakyReLU}" }, { "math_id": 42, "text": "\\mathbf{h}_u^{(0)} = \\mathbf{x}_u \\, \\Vert \\, \\mathbf{0}" }, { "math_id": 43, "text": "\\mathbf{m}_u^{(l+1)} = \\sum_{v \\in N_u} \\mathbf{\\Theta} \\mathbf{h}_v" }, { "math_id": 44, "text": "\\mathbf{h}_u^{(l+1)} = \\text{GRU}(\\mathbf{m}_u^{(l+1)}, \\mathbf{h}_u^{(l)})" }, { "math_id": 45, "text": "\\Vert" }, { "math_id": 46, "text": "\\mathbf{0}" }, { "math_id": 47, "text": "\\text{GRU}" }, { "math_id": 48, "text": "l" }, { "math_id": 49, "text": "\\mathbf{x}_u^{(0)}" }, { "math_id": 50, "text": "\\mathbf{X}'" }, { "math_id": 51, "text": "\\mathbf{A}'" }, { "math_id": 52, "text": "\\mathbf{y} = \\frac{\\mathbf{X}\\mathbf{p}}{\\Vert\\mathbf{p}\\Vert}" }, { "math_id": 53, "text": "\\mathbf{p}" }, { "math_id": 54, "text": "\\mathbf{X}' = (\\mathbf{X} \\odot \\text{sigmoid}(\\mathbf{y}))_{\\mathbf{i}}" }, { 
"math_id": 55, "text": "\\mathbf{A}' = \\mathbf{A}_{\\mathbf{i}, \\mathbf{i}}" }, { "math_id": 56, "text": "\\mathbf{i} = \\text{top}_k(\\mathbf{y})" }, { "math_id": 57, "text": "\\odot" }, { "math_id": 58, "text": "\\text{sigmoid}(\\cdot)" }, { "math_id": 59, "text": "\\mathbf{y} = \\text{GNN}(\\mathbf{X}, \\mathbf{A})" }, { "math_id": 60, "text": "\\text{GNN}" }, { "math_id": 61, "text": "\\mathbf{X}' = (\\mathbf{X} \\odot \\mathbf{y})_{\\mathbf{i}}" } ]
https://en.wikipedia.org/wiki?curid=68162942
681646
Ashtekar variables
Variables used in general relativity In the ADM formulation of general relativity, spacetime is split into spatial slices and a time axis. The basic variables are taken to be the induced metric formula_0 on the spatial slice and the metric's conjugate momentum formula_1, which is related to the extrinsic curvature and is a measure of how the induced metric evolves in time. These are the metric canonical coordinates. In 1986 Abhay Ashtekar introduced a new set of canonical variables, Ashtekar (new) variables, to represent an unusual way of rewriting the metric canonical variables on the three-dimensional spatial slices in terms of an SU(2) gauge field and its complementary variable. Overview. Ashtekar variables provide what is called the connection representation of canonical general relativity, which led to the loop representation of quantum general relativity and in turn loop quantum gravity and quantum holonomy theory. Let us introduce a set of three vector fields formula_2 formula_3 that are orthogonal, that is, formula_4 The formula_5 are called a triad or "drei-bein" (German literal translation, "three-leg"). There are now two different types of indices, "space" indices formula_6 that behave like regular indices in a curved space, and "internal" indices formula_7 which behave like indices of flat-space (the corresponding "metric" which raises and lowers internal indices is simply formula_8). Define the dual "drei-bein" formula_9 as formula_10 We then have the two orthogonality relationships formula_11 where formula_12 is the inverse matrix of the metric formula_13 (this comes from substituting the formula for the dual "drei-bein" in terms of the "drei-bein" into formula_14 and using the orthogonality of the "drei-beins"), and formula_15 (this comes about from contracting formula_16 with formula_17 and using the linear independence of the formula_18). It is then easy to verify from the first orthogonality relation, employing formula_19 that formula_20 that is, we have obtained a formula for the inverse metric in terms of the "drei-beins". The "drei-beins" can be thought of as the 'square-root' of the metric (the physical meaning of this is that the metric formula_21 when written in terms of a basis formula_22 is locally flat). What is actually considered is formula_23 which involves the "densitized" "drei-bein" formula_24 instead ("densitized" as formula_25). One recovers from formula_26 the metric times a factor given by its determinant. It is clear that formula_26 and formula_27 contain the same information, just rearranged. Now the choice for formula_26 is not unique, and in fact one can perform a local (in space) rotation with respect to the internal indices formula_28 without changing the (inverse) metric. This is the origin of the formula_29 gauge invariance. Now if one is going to operate on objects that have internal indices, one needs to introduce an appropriate derivative (covariant derivative); for example, the covariant derivative for the object formula_30 will be formula_31 where formula_32 is the usual Levi-Civita connection and formula_33 is the so-called spin connection.
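As a small numerical check of the triad relations above (an illustration only, with an arbitrarily chosen positive-definite metric), the sketch below builds a "drei-bein" for a given spatial metric from a Cholesky factorization and verifies both the orthogonality relation and the resulting expression for the inverse metric as a sum over the internal index.

```python
import numpy as np

rng = np.random.default_rng(1)

# An assumed positive-definite spatial metric q_ab at a point.
M = rng.normal(size=(3, 3))
q = M @ M.T + 3.0 * np.eye(3)

# If q = L L^T (Cholesky), then E = L^{-1} is a valid triad:
# rows of E carry the internal index j, columns the space index a.
L = np.linalg.cholesky(q)
E = np.linalg.inv(L)

print(np.allclose(E @ q @ E.T, np.eye(3)))        # delta_jk = q_ab E_j^a E_k^b
print(np.allclose(E.T @ E, np.linalg.inv(q)))     # q^{ab} = sum_j E_j^a E_j^b
```

Replacing E by R E for any rotation R acting on the internal index leaves both checks unchanged, which is the gauge freedom mentioned above.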
Let us take the configuration variable to be formula_34 where formula_35 and formula_36 The densitized "drei-bein" is the conjugate momentum variable of this three-dimensional SU(2) gauge field (or connection) formula_37 in that it satisfies the Poisson bracket relation formula_38 The constant formula_39 is the Immirzi parameter, a factor that renormalizes Newton's constant formula_40 The densitized "drei-bein" can be used to reconstruct the metric as discussed above, and the connection can be used to reconstruct the extrinsic curvature. Ashtekar variables correspond to the choice formula_41 (the negative of the imaginary number, formula_42); formula_43 is then called the chiral spin connection. The reason for this choice of spin connection was that Ashtekar could much simplify the most troublesome equation of canonical general relativity – namely the Hamiltonian constraint of LQG. This choice made its formidable second term vanish, and the remaining term became polynomial in his new variables. This simplification raised new hopes for the canonical quantum gravity programme. However, it did present certain difficulties: although Ashtekar variables had the virtue of simplifying the Hamiltonian, the variables become complex. When one quantizes the theory, it is a difficult task to ensure that one recovers real general relativity, as opposed to complex general relativity. Also, the Hamiltonian constraint Ashtekar worked with was the densitized version, instead of the original Hamiltonian; that is, he worked with formula_44 There were serious difficulties in promoting this quantity to a quantum operator. In 1996 Thomas Thiemann was able to use a generalization of Ashtekar's formalism to real connections (formula_39 takes real values) and in particular devised a way of simplifying the original Hamiltonian, together with the second term. He was also able to promote this Hamiltonian constraint to a well-defined quantum operator within the loop representation. Lee Smolin &amp; Ted Jacobson, and Joseph Samuel independently discovered that there exists in fact a Lagrangian formulation of the theory by considering the self-dual formulation of the tetradic Palatini action principle of general relativity. These proofs were given in terms of spinors. A purely tensorial proof of the new variables in terms of triads was given by Goldberg and in terms of tetrads by Henneaux, Nelson, &amp; Schomblond (1989). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "q_{ab} (x)" }, { "math_id": 1, "text": "K^{ab} (x)" }, { "math_id": 2, "text": "\\ E^a_j\\ ," }, { "math_id": 3, "text": "\\ j = 1,2,3\\ " }, { "math_id": 4, "text": "\\delta_{jk} = q_{ab}\\ E_j^a\\ E_k^b ~." }, { "math_id": 5, "text": "\\ E_i^a\\ " }, { "math_id": 6, "text": "\\ a,b,c\\ " }, { "math_id": 7, "text": "\\ j,k,\\ell\\ " }, { "math_id": 8, "text": "\\ \\delta_{jk}\\ " }, { "math_id": 9, "text": "\\ E^j_a\\ " }, { "math_id": 10, "text": "\\ E^j_a = q_{ab}\\ E^b_j ~." }, { "math_id": 11, "text": "\\ \\delta^{jk} = q^{ab}\\ E^j_a\\ E^k_b\\ ," }, { "math_id": 12, "text": "q^{ab}" }, { "math_id": 13, "text": "\\ q_{ab}\\ " }, { "math_id": 14, "text": "\\ q^{ab}\\ E^j_a\\ E^k_b\\ " }, { "math_id": 15, "text": "\\ E_j^a\\ E^k_b\\ = \\delta_b^a\\ " }, { "math_id": 16, "text": "\\ \\delta_{jk} = q_{ab}\\ E_k^b\\ E_j^a\\ " }, { "math_id": 17, "text": "\\ E^j_c\\ " }, { "math_id": 18, "text": "\\ E_a^k\\ " }, { "math_id": 19, "text": "\\ E_j^a\\ E^j_b = \\delta_b^a\\ ," }, { "math_id": 20, "text": "\\ q^{ab} ~=~ \\sum_{j,\\ k=1}^{3}\\; \\delta_{jk}\\ E_j^a\\ E_k^b ~=~ \\sum_{j=1}^{3}\\; E_j^a\\ E_j^b\\ ," }, { "math_id": 21, "text": "\\ q^{ab}\\ ," }, { "math_id": 22, "text": "\\ E_j^a\\ ," }, { "math_id": 23, "text": "\\ \\left( \\mathrm{det} (q) \\right)\\ q^{ab} ~=~ \\sum_{j=1}^{3}\\; \\tilde{E}_j^a\\ \\tilde{E}_j^b\\ ," }, { "math_id": 24, "text": "\\tilde{E}_i^a" }, { "math_id": 25, "text": "\\ \\tilde{E}_j^a = \\sqrt{ \\det (q)\\ }\\ E_j^a\\ " }, { "math_id": 26, "text": "\\ \\tilde{E}_j^a\\ " }, { "math_id": 27, "text": "\\ E_j^a\\ " }, { "math_id": 28, "text": "\\ j\\ " }, { "math_id": 29, "text": "\\ \\mathrm{ SU(2) }\\ " }, { "math_id": 30, "text": "\\ V_i^b\\ " }, { "math_id": 31, "text": "\\ D_a\\ V_j^b = \\partial_a V_j^b - \\Gamma_{a \\;\\; j}^{\\;\\; k}\\ V_k^b + \\Gamma^b_{ac}\\ V_j^c\\ " }, { "math_id": 32, "text": "\\ \\Gamma^b_{ac}\\ " }, { "math_id": 33, "text": "\\ \\Gamma_{a \\;\\; j}^{\\;\\; k}\\ " }, { "math_id": 34, "text": "\\ A_a^j = \\Gamma_a^j + \\beta\\ K_a^j\\ " }, { "math_id": 35, "text": "\\Gamma_a^j = \\Gamma_{ak\\ell}\\ \\epsilon^{k \\ell j}" }, { "math_id": 36, "text": "K_a^j = K_{ab}\\ \\tilde{E}^{bj} / \\sqrt{\\det (q)\\ } ~." }, { "math_id": 37, "text": "\\ A^k_b\\ ," }, { "math_id": 38, "text": "\\ \\{\\ \\tilde{E}_j^a (x) ,\\ A^k_b (y)\\ \\} = 8\\pi\\ G_\\mathsf{Newton}\\ \\beta\\ \\delta^a_b\\ \\delta^k_j\\ \\delta^3 (x - y) ~." }, { "math_id": 39, "text": "\\beta" }, { "math_id": 40, "text": "\\ G_\\mathsf{Newton} ~." }, { "math_id": 41, "text": "\\ \\beta = -i\\ " }, { "math_id": 42, "text": "\\ i\\ " }, { "math_id": 43, "text": "\\ A_a^j\\ " }, { "math_id": 44, "text": "\\tilde{H} = \\sqrt{\\det (q)} H ~." } ]
https://en.wikipedia.org/wiki?curid=681646
681666
Penrose diagram
Two-dimensional diagram capturing the causal relations between different points in spacetime In theoretical physics, a Penrose diagram (named after mathematical physicist Roger Penrose) is a two-dimensional diagram capturing the causal relations between different points in spacetime through a conformal treatment of infinity. It is an extension (suitable for the curved spacetimes of e.g. general relativity) of the Minkowski diagram of special relativity where the vertical dimension represents time, and the horizontal dimension represents a space dimension. Using this design, all light rays take a 45° path formula_0. Locally, the metric on a Penrose diagram is conformally equivalent to the metric of the spacetime depicted. The conformal factor is chosen such that the entire infinite spacetime is transformed into a Penrose diagram of finite size, with infinity on the boundary of the diagram. For spherically symmetric spacetimes, every point in the Penrose diagram corresponds to a 2-dimensional sphere formula_1. Basic properties. While Penrose diagrams share the same basic coordinate vector system as other spacetime diagrams for local asymptotically flat spacetime, they introduce a system of representing distant spacetime by shrinking or "crunching" distances that are further away. Straight lines of constant time and straight lines of constant space coordinates therefore become hyperbolae, which appear to converge at points in the corners of the diagram. These points and boundaries represent conformal infinity for spacetime, which was first introduced by Penrose in 1963. Penrose diagrams are more properly (but less frequently) called Penrose–Carter diagrams (or Carter–Penrose diagrams), acknowledging both Brandon Carter and Roger Penrose, who were the first researchers to employ them. They are also called conformal diagrams, or simply spacetime diagrams (although the latter may refer to Minkowski diagrams). Two lines drawn at 45° angles should intersect in the diagram only if the corresponding two light rays intersect in the actual spacetime. So, a Penrose diagram can be used as a concise illustration of spacetime regions that are accessible to observation. The diagonal boundary lines of a Penrose diagram correspond to the region called "null infinity", or to singularities where light rays must end. Thus, Penrose diagrams are also useful in the study of asymptotic properties of spacetimes and singularities. An infinite static Minkowski universe, with coordinates formula_2, is related to Penrose coordinates formula_3 by: formula_4 The corners of the Penrose diagram, which represent the spacelike and timelike conformal infinities, are formula_5 from the origin. Black holes. Penrose diagrams are frequently used to illustrate the causal structure of spacetimes containing black holes. Singularities in the Schwarzschild solution are denoted by a spacelike boundary, unlike the timelike boundary found on conventional spacetime diagrams. This is due to the interchanging of timelike and spacelike coordinates within the horizon of a black hole (since space is uni-directional within the horizon, just as time is uni-directional outside the horizon). The singularity is represented by a spacelike boundary to make it clear that once an object has passed the horizon it will inevitably hit the singularity even if it attempts to take evasive action.
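The coordinate map above can be inverted explicitly: u + v = arctan(x + t) and u - v = arctan(x - t). The short sketch below (with arbitrary sample points, purely for illustration) shows how arbitrarily large Minkowski coordinates are compressed into a finite region whose corners lie a distance π/2 from the origin, as stated above.

```python
import numpy as np

def penrose_coords(x, t):
    """Map Minkowski (x, t) to Penrose (u, v) using tan(u +/- v) = x +/- t."""
    p = np.arctan(x + t)   # u + v
    q = np.arctan(x - t)   # u - v
    return (p + q) / 2.0, (p - q) / 2.0

for x, t in [(0.0, 0.0), (1.0, 0.5), (1e3, 0.0), (0.0, 1e6), (1e9, 1e9)]:
    u, v = penrose_coords(x, t)
    print(f"(x, t) = ({x:.1e}, {t:.1e}) -> (u, v) = ({u:+.4f}, {v:+.4f})")

# Spatial infinity is approached at (u, v) -> (+/- pi/2, 0), timelike infinity
# at (0, +/- pi/2), and a null ray x = t maps onto the 45-degree line u = v.
print("pi/2 =", np.pi / 2)
```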
Penrose diagrams are often used to illustrate the hypothetical Einstein–Rosen bridge connecting two separate universes in the maximally extended Schwarzschild black hole solution. The precursors to the Penrose diagrams were Kruskal–Szekeres diagrams. (The Penrose diagram adds to Kruskal and Szekeres' diagram the conformal crunching of the regions of flat spacetime far from the hole.) These introduced the method of aligning the event horizon into past and future horizons oriented at 45° angles (since one would need to travel faster than light to cross from the Schwarzschild radius back into flat spacetime); and splitting the singularity into past and future horizontally-oriented lines (since the singularity "cuts off" all paths into the future once one enters the hole). The Einstein–Rosen bridge closes off (forming "future" singularities) so rapidly that passage between the two asymptotically flat exterior regions would require faster-than-light velocity, and is therefore impossible. In addition, highly blue-shifted light rays (called a blue sheet) would make it impossible for anyone to pass through. The maximally extended solution does not describe a typical black hole created from the collapse of a star, as the surface of the collapsed star replaces the sector of the solution containing the past-oriented "white hole" geometry and other universe. While the basic space-like passage of a static black hole cannot be traversed, the Penrose diagrams for solutions representing rotating and/or electrically charged black holes illustrate these solutions' inner event horizons (lying in the future) and vertically oriented singularities, which open up what is known as a time-like "wormhole" allowing passage into future universes. In the case of the rotating hole, there is also a "negative" universe entered through a ring-shaped singularity (still portrayed as a line in the diagram) that can be passed through if entering the hole close to its axis of rotation. These features of the solutions are, however, not stable under perturbations and not believed to be a realistic description of the interior regions of such black holes; the true character of their interiors is still an open question. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(c = 1)" }, { "math_id": 1, "text": "(\\theta,\\phi)" }, { "math_id": 2, "text": "(x, t)" }, { "math_id": 3, "text": "(u, v)" }, { "math_id": 4, "text": "\\tan(u \\pm v) = x \\pm t" }, { "math_id": 5, "text": "\\pi /2" } ]
https://en.wikipedia.org/wiki?curid=681666
68175502
SNAFU (Person of Interest)
"SNAFU" is the 2nd episode of the fifth season of the American television drama series "Person of Interest". It is the 92nd overall episode of the series and is written by Lucas O'Connor and directed by executive producer Chris Fisher. It aired on CBS in the United States and on CTV in Canada on May 9, 2016. The series revolves around a computer program for the federal government known as "The Machine" that is capable of collating all sources of information to predict terrorist acts and to identify people planning them. A team follows "irrelevant" crimes: lesser level of priority for the government. However, their security and safety is put in danger following the activation of a new program named Samaritan. In the episode, the Machine is rebooted but it assigns numbers with no threat to the team. To complicate matters, it starts seeing the team as threats when it gets a more contextualized background on them. The title refers to "SNAFU", an acronym that is widely used to stand for the sarcastic expression "Situation Normal: All Fucked Up". It means that the situation is bad, but that it is a normal state of affairs. According to Nielsen Media Research, the episode was seen by an estimated 5.80 million household viewers and gained a 1.0/4 ratings share among adults aged 18–49. The episode received very positive reviews from critics, who praised the writing, humor and exploration of the Machine's themes. Plot. The Machine starts the process of rebooting, although it struggles in identifying its assets through facial recognition. Needing more resources to properly activate the Machine, Reese (Jim Caviezel) and Finch (Michael Emerson) steal 64 next-generation GPU blade servers while Root (Amy Acker) works to make the Machine recover its memories and database. Once activated, the Machine properly works and Finch starts asking for the irrelevant numbers. The Machine ends up producing 30 numbers so the team works separately on each case. Reese pursues a bomb threat but finds that it was a call from a student to skip school while Fusco (Kevin Chapman) pursues a killer but finds that it's just a play. Finch and Root find that the Machine fails to differentiate threat with violence as it knows no context or background. They decide to run a contextual background on themselves. However, the Machine, after collecting all their actions, considers them a threat and locks Root and Finch in the train. Root destroys the train's window to escape and cuts the Machine's access to the doors so they can open them. However, the Machine responds by attacking her cochlear implant. They realize that the Machine deems them threats and is protecting itself from them. In order to avoid the Machine from attacking her, Root knocks herself out with Desflurane. Meanwhile, after discarding all the pointless numbers, Reese takes the case of Jessica Granger (Laurie Granger), who coincidentally shows up at the police precinct. He leaves to help Finch and Root but Granger follows him and is revealed to be an assassin and both start a gunfight in the streets. Finch discovers that the Machine sent Granger after Reese. Finch tries to convince the Machine of their real personas but eventually discovers that the Machine is experiencing everything as "Day formula_0" (in reference to the real numbers), especially the 42 times that Finch "killed" the Machine. Finch then shows the Machine all of their numbers they've saved throughout the years in an attempt to show their relevance to the Machine. 
This successfully returns the Machine to its original entity, but the Machine is unable to call off the hit on Reese as it was paid in advance. Nevertheless, Reese subdues Granger and saves himself. With the Machine back, Finch asks for Grace's status, finding her in Venice. Having recovered Root's aliases, the team has a picnic in Central Park. The episode ends with ex-con Jeff Blackwell (Josh Close), one of the 30 numbers who was dismissed as someone who was not involved in any nefarious activity, asking for employment and being recruited by a Samaritan agent named Mona (LaChanze). Reception. Viewers. In its original American broadcast, "SNAFU" was seen by an estimated 5.80 million household viewers and gained a 1.0/4 ratings share among adults aged 18–49, according to Nielsen Media Research. This means that 1 percent of all households with televisions watched the episode, while 4 percent of all households watching television at that time watched it. This was a 22% decrease in viewership from the previous episode, which was watched by 7.35 million viewers with a 1.2/4 in the 18-49 demographics. With these ratings, "Person of Interest" was the fifth most watched show on CBS for the night, behind "The Odd Couple", a "The Big Bang Theory" rerun, and two "Mike &amp; Molly" episodes; third in its timeslot and tenth for the night in the 18-49 demographics, behind "Castle", "Gotham", "Blindspot", "The Odd Couple", a "The Big Bang Theory" rerun, two "Mike &amp; Molly" episodes, "Dancing with the Stars", and "The Voice". With Live +7 DVR factored in, the episode was watched by 8.27 million viewers with a 1.6 in the 18-49 demographics. Critical reviews. "SNAFU" received very positive reviews from critics. Matt Fowler of "IGN" gave the episode a "great" 8.6 out of 10 rating and wrote in his verdict, "'SNAFU' gleefully turned "Person of Interest"'s premise upside-down with a clever, funny episode that lovingly put our heroes through the wringer. It was this show's funniest episode to date, though it wasn't without teeth. The glitch the Machine experienced, on its own, made the show analyze itself a little bit. While also allowing us to re-experience some of our heroes' transformative journeys over the years." Alexa Planje of "The A.V. Club" gave the episode an "A−" grade and wrote, "'SNAFU' is one of the best types of "Person of Interest" episodes, where the writers have fun playing with the show's central conceit. This approach often results in a great blend of both humor and drama, and this episode is no different." Chancellor Agard of "Entertainment Weekly" wrote, "While I liked what the show was doing with this episode, it definitely felt like it slowed down the season's momentum already. It's still not clear what this phase of the war against Samaritan looks like and it would've been great if the episode has pushed that a little bit more." Sean McKenna of "TV Fanatic" gave the episode a 4-star rating out of 5 and wrote, "This was a decent enough episode, but I'm looking forward to getting back into the action now that the Machine has worked out some of its kinks. And thank goodness we won't have to wait long with the next episode on tomorrow!" References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{R}" } ]
https://en.wikipedia.org/wiki?curid=68175502
68179990
TRAP experiment
The TRAP experiment, also known as PS196, operated at the Low Energy Antiproton Ring (LEAR), part of the Proton Synchrotron complex at CERN, Geneva, from 1985 to 1996. Its main goal was to compare the mass of an antiproton and a proton by trapping these particles in Penning traps. The TRAP collaboration also measured and compared the charge-to-mass ratios of the antiproton and proton. Although the data-taking period ended in 1996, the analysis of datasets continued until 2006. Experimental setup. In the first step, the antiprotons obtained from LEAR entered the TRAP apparatus. They were immediately slowed down using the degrader foils. The first Penning trap was used to accumulate the entering antiprotons, while the second trap, located very close to the first one, was used for the precision measurements. The number of antiprotons entering the degrader foils was counted using a scintillating device. A number of antiprotons coming out of the degrader foils were observed using an attached detector. The apparatus was cooled down to liquid-helium temperature for these measurements. The Penning traps used strong magnetic fields to contain charged particles. The issue with storing antiprotons was that they required very stringent vacuum conditions; otherwise they would easily interact with the gas atoms in the medium and annihilate quickly. The TRAP collaboration achieved vacuum pressure as low as formula_0, with less than 1 annihilation per day. The special trap geometry and the use of a superconducting solenoid to cancel magnetic fluctuations were the crucial design aspects of the TRAP setup. Results. The ratio of inertial masses of the antiproton (&lt;chem&gt;\bar{p}&lt;/chem&gt;) and proton (p) was calculated to be 0.999999977 formula_1 0.000000042. This result had a fractional uncertainty of formula_2 formula_3 formula_4, which was about 1000 times more accurate than the previous measurements and is consistent with CPT symmetry for baryons. This result was obtained by comparing the cyclotron frequencies of the protons and the antiprotons. The ratio of antiproton to electron inertial mass was determined to be 1836.152660 formula_1 0.000083, while the proton to electron inertial mass ratio was found to be 1836.152680 formula_1 0.000088. The lower limit on the decay lifetime of the antiprotons was established to be 3.4 months.
[ { "math_id": 0, "text": "\\mathrm{10^{-14} Torr}" }, { "math_id": 1, "text": "\\pm" }, { "math_id": 2, "text": "4" }, { "math_id": 3, "text": "\\times" }, { "math_id": 4, "text": "10^{-8}" } ]
https://en.wikipedia.org/wiki?curid=68179990
6818116
Regulated integral
Definition of integral for regulated functions In mathematics, the regulated integral is a definition of integration for regulated functions, which are defined to be uniform limits of step functions. The use of the regulated integral instead of the Riemann integral has been advocated by Nicolas Bourbaki and Jean Dieudonné. Definition. Definition on step functions. Let ["a", "b"] be a fixed closed, bounded interval in the real line R. A real-valued function "φ" : ["a", "b"] → R is called a step function if there exists a finite partition formula_0 of ["a", "b"] such that "φ" is constant on each open interval ("t""i", "t""i"+1) of Π; suppose that this constant value is "c""i" ∈ R. Then, define the integral of a step function "φ" to be formula_1 It can be shown that this definition is independent of the choice of partition, in that if Π1 is another partition of ["a", "b"] such that "φ" is constant on the open intervals of Π1, then the numerical value of the integral of "φ" is the same for Π1 as for Π. Extension to regulated functions. A function "f" : ["a", "b"] → R is called a regulated function if it is the uniform limit of a sequence of step functions on ["a", "b"]; equivalently, "f" is regulated if and only if it has a right limit formula_2 and a left limit formula_3 at every point of ["a", "b"] (with only a right limit required at "a" and only a left limit at "b"). Define the integral of a regulated function "f" to be formula_4 where ("φ""n")"n"∈N is any sequence of step functions that converges uniformly to "f". One must check that this limit exists and is independent of the chosen sequence, but this is an immediate consequence of the continuous linear extension theorem of elementary functional analysis: a bounded linear operator "T"0 defined on a dense linear subspace "E"0 of a normed linear space "E" and taking values in a Banach space "F" extends uniquely to a bounded linear operator "T" : "E" → "F" with the same (finite) operator norm. The regulated integral inherits the basic properties of the step-function integral: it is linear, formula_5 it is compatible with bounds, in that if "m" ≤ "f"("t") ≤ "M" for all "t" in ["a", "b"], then formula_6 and it satisfies the triangle inequality formula_7 Extension to functions defined on the whole real line. It is possible to extend the definitions of step function and regulated function and the associated integrals to functions defined on the whole real line. However, care must be taken with certain technical points. Extension to vector-valued functions. The above definitions go through "mutatis mutandis" in the case of functions taking values in a Banach space "X".
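As a concrete illustration of the definition (a sketch only, which assumes for simplicity that f is continuous and hence regulated), the snippet below integrates the step functions obtained by sampling f at the midpoints of ever finer uniform partitions. By uniform continuity these step functions converge uniformly to f, so their integrals converge to the regulated integral.

```python
import math

def step_integral(f, a, b, k):
    """Integral of the step function equal to f at the midpoint of each of
    k equal subintervals: sum_i c_i * |t_{i+1} - t_i|."""
    h = (b - a) / k
    return sum(f(a + (i + 0.5) * h) * h for i in range(k))

# A continuous (hence regulated) function on [0, pi]; the exact integral is 2.
for k in (4, 16, 64, 256):
    approx = step_integral(math.sin, 0.0, math.pi, k)
    print(f"k = {k:4d}: step-function integral = {approx:.6f}")
```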
[ { "math_id": 0, "text": "\\Pi = \\{ a = t_0 < t_1 < \\cdots < t_k = b \\}" }, { "math_id": 1, "text": "\\int_a^b \\varphi(t) \\, \\mathrm{d} t := \\sum_{i = 0}^{k - 1} c_i | t_{i + 1} - t_i |." }, { "math_id": 2, "text": "f(t+) = \\lim_{s \\downarrow t} f(s)" }, { "math_id": 3, "text": "f(t-) = \\lim_{s \\uparrow t} f(s)" }, { "math_id": 4, "text": "\\int_{a}^{b} f(t) \\, \\mathrm{d} t :=\n\\lim_{n \\to \\infty} \\int_{a}^{b} \\varphi_{n} (t) \\, \\mathrm{d} t," }, { "math_id": 5, "text": "\\int_{a}^{b} \\alpha f(t) + \\beta g(t) \\, \\mathrm{d} t\n= \\alpha \\int_{a}^{b} f(t) \\, \\mathrm{d} t + \\beta \\int_{a}^{b} g(t) \\, \\mathrm{d} t." }, { "math_id": 6, "text": "m | b - a | \\leq \\int_{a}^{b} f(t) \\, \\mathrm{d} t \\leq M | b - a |." }, { "math_id": 7, "text": "\\left| \\int_{a}^{b} f(t) \\, \\mathrm{d} t \\right| \\leq \\int_{a}^{b} | f(t) | \\, \\mathrm{d} t." } ]
https://en.wikipedia.org/wiki?curid=6818116
681895
Quantum geometry
Set of mathematical concepts propagating geometric concepts In theoretical physics, quantum geometry is the set of mathematical concepts generalizing the concepts of geometry whose understanding is necessary to describe the physical phenomena at distance scales comparable to the Planck length. At these distances, quantum mechanics has a profound effect on physical phenomena. Quantum gravity. Each theory of quantum gravity uses the term "quantum geometry" in a slightly different fashion. String theory, a leading candidate for a quantum theory of gravity, uses the term quantum geometry to describe exotic phenomena such as T-duality and other geometric dualities, mirror symmetry, topology-changing transitions, minimal possible distance scale, and other effects that challenge intuition. More technically, quantum geometry refers to the shape of a spacetime manifold as experienced by D-branes which includes quantum corrections to the metric tensor, such as the worldsheet instantons. For example, the quantum volume of a cycle is computed from the mass of a brane wrapped on this cycle. In an alternative approach to quantum gravity called loop quantum gravity (LQG), the phrase "quantum geometry" usually refers to the formalism within LQG where the observables that capture the information about the geometry are now well defined operators on a Hilbert space. In particular, certain physical observables, such as the area, have a discrete spectrum. It has also been shown that the loop quantum geometry is non-commutative. It is possible (but considered unlikely) that this strictly quantized understanding of geometry will be consistent with the quantum picture of geometry arising from string theory. Another, quite successful, approach, which tries to reconstruct the geometry of space-time from "first principles" is Discrete Lorentzian quantum gravity. Quantum states as differential forms. Differential forms are used to express quantum states, using the wedge product: formula_0 where the position vector is formula_1 the differential volume element is formula_2 and "x"1, "x"2, "x"3 are an arbitrary set of coordinates, the upper indices indicate contravariance, lower indices indicate covariance, so explicitly the quantum state in differential form is: formula_3 The overlap integral is given by: formula_4 in differential form this is formula_5 The probability of finding the particle in some region of space "R" is given by the integral over that region: formula_6 provided the wave function is normalized. When "R" is all of 3d position space, the integral must be 1 if the particle exists. Differential forms are an approach for describing the geometry of curves and surfaces in a coordinate independent way. In quantum mechanics, idealized situations occur in rectangular Cartesian coordinates, such as the potential well, particle in a box, quantum harmonic oscillator, and more realistic approximations in spherical polar coordinates such as electrons in atoms and molecules. For generality, a formalism which can be used in any coordinate system is useful. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "|\\psi\\rangle = \\int \\psi(\\mathbf{x},t) \\, |\\mathbf{x},t\\rangle \\, \\mathrm{d}^3\\mathbf{x} " }, { "math_id": 1, "text": "\\mathbf{x} = (x^1,x^2,x^3) " }, { "math_id": 2, "text": "\\mathrm{d}^3\\mathbf{x} = \\mathrm{d}x^1 \\!\\wedge \\mathrm{d}x^2 \\!\\wedge \\mathrm{d}x^3" }, { "math_id": 3, "text": "|\\psi\\rangle = \\int \\psi(x^1,x^2,x^3,t) \\, |x^1,x^2,x^3,t\\rangle \\, \\mathrm{d}x^1 \\!\\wedge \\mathrm{d}x^2 \\!\\wedge \\mathrm{d}x^3" }, { "math_id": 4, "text": "\\langle\\chi|\\psi\\rangle = \\int \\chi^* \\psi ~ \\mathrm{d}^3\\mathbf{x}" }, { "math_id": 5, "text": "\\langle\\chi|\\psi\\rangle = \\int \\chi^* \\psi ~ \\mathrm{d}x^1 \\!\\wedge \\mathrm{d}x^2 \\!\\wedge \\mathrm{d}x^3" }, { "math_id": 6, "text": "\\langle\\psi|\\psi\\rangle = \\int_R \\psi^* \\psi ~ \\mathrm{d}x^1 \\!\\wedge \\mathrm{d}x^2 \\!\\wedge \\mathrm{d}x^3" } ]
https://en.wikipedia.org/wiki?curid=681895
68194734
Corepresentations of unitary and antiunitary groups
In quantum mechanics, symmetry operations are of importance in giving information about solutions to a system. Typically these operations form a mathematical group, such as the rotation group SO(3) for spherically symmetric potentials. The representation theory of these groups leads to irreducible representations, which for SO(3) give the angular momentum ket vectors of the system. Standard representation theory uses linear operators. However, some operators of physical importance such as time reversal are antilinear, and including these in the symmetry group leads to groups including both unitary and antiunitary operators. This article is about corepresentation theory, the equivalent of representation theory for these groups. It is mainly used in the theoretical study of magnetic structure but is also relevant to particle physics due to CPT symmetry. It gives basic results, the relation to ordinary representation theory, and some references to applications. Corepresentations of unitary/antiunitary groups. Eugene Wigner showed that a symmetry operation "S" of a Hamiltonian is represented in quantum mechanics either by a unitary operator, "S = U", or an antiunitary one, "S = UK", where "U" is unitary, and "K" denotes complex conjugation. Antiunitary operators arise in quantum mechanics due to the time reversal operator. If the set of symmetry operations (both unitary and antiunitary) forms a group, then it is commonly known as a magnetic group and many of these are described in magnetic space groups. A group of unitary operators may be represented by a group representation. Due to the presence of antiunitary operators, this must be replaced by Wigner's corepresentation theory. Definition. Let G be a group with a subgroup H of index 2. A corepresentation is a homomorphism into a group of operators over a vector space over the complex numbers, where for all "u" in H the image of "u" is a linear operator, and for all "a" in the coset G-H the image of "a" is antilinear (where '*' means complex conjugation): formula_0 Properties. As this is a homomorphism, formula_1 Reducibility. Two corepresentations are equivalent if there is a matrix "V" such that: formula_2 Just like representations, a corepresentation is reducible if there is a proper subspace invariant under the operations of the corepresentation. If the corepresentation is given by matrices, it is reducible if it is equivalent to a corepresentation with each matrix in block diagonal form. If the corepresentation is not reducible, then it is irreducible. Schur's lemma. Schur's lemma for irreducible representations over the complex numbers states that if a matrix commutes with all matrices of the representation then it is a (complex) multiple of the identity matrix, that is, the set of commuting matrices is isomorphic to the complex numbers formula_3. The equivalent of Schur's lemma for irreducible corepresentations is that the set of commuting matrices is isomorphic to formula_4, formula_3 or the quaternions formula_5. Using the intertwining number over the real numbers, this may be expressed as an intertwining number of 1, 2 or 4. Relation to representations of the linear subgroup. Typically, irreducible corepresentations are related to the irreducible representations of the linear subgroup H. Let formula_6 be an irreducible (ordinary) representation of the linear subgroup "H". Form the sum, over all the antilinear operators, of the character of the square of each of these operators: formula_7 and set formula_8 for an arbitrary element formula_9.
There are three cases, distinguished by the value of the sum "S" in the character test (eq. 7.3.51 of Cracknell and Bradley). In the first case the corepresentation has the same dimension as formula_6 and is given by: formula_10 In the second case the dimension is doubled, with formula_6 appearing twice: formula_11 In the third case the representation formula_12, defined by formula_13, is inequivalent to formula_6, and the corepresentation is built from the two: formula_14 Cracknell and Bradley show how to use these to construct corepresentations for the magnetic point groups, while Cracknell and Wong give more explicit tables for the double magnetic groups. Character theory of corepresentations. Standard representation theory for finite groups has a square character table with row and column orthogonality properties. With a slightly different definition of conjugacy classes and use of the intertwining number, a square character table with similar orthogonality properties also exists for the corepresentations of finite magnetic groups. Based on this character table, a character theory mirroring that of representation theory has been developed. References.
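The defining composition rules, D(ua) = D(u)D(a), D(au) = D(a)D(u)* and D(a1a2) = D(a1)D(a2)*, can be checked numerically by carrying the antilinearity of each operator as an explicit flag. The Python sketch below is only an illustration (the pair representation and the helper function are assumptions, not a standard library API): it verifies that spin-1/2 time reversal, T = (i sigma_y)K, composed with itself under these rules gives -1, the sign responsible for Kramers degeneracy.

```python
import numpy as np

# Represent an operator as (U, antilinear): a linear operator acts as x -> U x,
# an antilinear one as x -> U conj(x).  (Illustrative helper, not a library API.)
def compose(op1, op2):
    U1, anti1 = op1
    U2, anti2 = op2
    # Corepresentation rules: D(g1 g2) = D(g1) D(g2)  when g1 is linear,
    #                         D(g1 g2) = D(g1) D(g2)* when g1 is antilinear.
    U2_eff = np.conj(U2) if anti1 else U2
    return (U1 @ U2_eff, anti1 != anti2)

sigma_y = np.array([[0, -1j], [1j, 0]])
E = (np.eye(2), False)        # identity, a linear (unitary) operator
T = (1j * sigma_y, True)      # spin-1/2 time reversal T = (i sigma_y) K, antiunitary

T_squared = compose(T, T)
print(T_squared[1])                            # False: T^2 is a linear operator
print(np.allclose(T_squared[0], -E[0]))        # True: T^2 = -E (Kramers degeneracy)
```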
[ { "math_id": 0, "text": "\n\\begin{align}\n& \\forall u \\in H, D(u)(a{\\bf x} + b{\\bf y}) = a \\times D(u){\\bf x} + b \\times D(u){\\bf y} \\\\\n& \\forall a \\in G-H, D(a)(a{\\bf x} + b{\\bf y}) = a^* \\times D(a){\\bf x} + b^* \\times D(a){\\bf y}\n\\end{align}\n" }, { "math_id": 1, "text": "\n\\begin{align}\n& D(u_1u_2) = D(u_1)D(u_2) \\\\\n& D(ua) = D(u)D(a) \\\\\n& D(au) = D(a)D(u)^* \\\\\n& D(a_1a_2) = D(a_1)D(a_2)^*\n\\end{align}\n" }, { "math_id": 2, "text": "\n\\begin{align}\n& \\forall u, D'(u) = VD(u)V^{-}1 \\\\\n& \\forall a, D'(a) = VD(a)V^{*-1}\n\\end{align}\n" }, { "math_id": 3, "text": "\\Complex" }, { "math_id": 4, "text": "\\mathbb {R}" }, { "math_id": 5, "text": "\\mathbb {Q}" }, { "math_id": 6, "text": "\\Delta" }, { "math_id": 7, "text": "\nS = \\sum_a \\chi_\\Delta(a^2)\n" }, { "math_id": 8, "text": "P = D(a_0)" }, { "math_id": 9, "text": "a_0" }, { "math_id": 10, "text": "\n\\begin{align}\n& D(u) = \\Delta(u) \\\\\n& D(a) = D(aa_0^{-1}a_0) = \\Delta(aa_0)P\n\\end{align}\n" }, { "math_id": 11, "text": "\n\\begin{align}\n& D(u) = \\begin{pmatrix}\n\\Delta(u) & 0 \\\\\n0 & \\Delta(u)\n\\end{pmatrix} \\\\\n\n& D(a) = \\begin{pmatrix}\n0 & \\Delta(aa_0^{-1})P \\\\\n-\\Delta(aa_0^{-1})P & 0\n\\end{pmatrix}\n\\end{align}\n" }, { "math_id": 12, "text": "\\Delta'" }, { "math_id": 13, "text": "\n\\Delta'(u) = \\Delta(a_0^{-1}ua_0)^* " }, { "math_id": 14, "text": "\n\\begin{align}\n& D(u) = \\begin{pmatrix}\n\\Delta(u) & 0 \\\\\n0 & \\Delta(a_0^{-1}ua_0)^*\n\\end{pmatrix} \\\\\n\n& D(a) = \\begin{pmatrix}\n0 & \\Delta(aa_0) \\\\\n\\Delta(a_0^{-1}a)^* & 0\n\\end{pmatrix}\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=68194734
68195735
Matrix sign function
Generalization of signum function to matrices In mathematics, the matrix sign function is a matrix function on square matrices analogous to the complex sign function. It was introduced by J.D. Roberts in 1971 as a tool for model reduction and for solving Lyapunov and Algebraic Riccati equation in a technical report of Cambridge University, which was later published in a journal in 1980. Definition. The matrix sign function is a generalization of the complex signum function formula_0 to the matrix valued analogue formula_1. Although the sign function is not analytic, the matrix function is well defined for all matrices that have no eigenvalue on the imaginary axis, see for example the Jordan-form-based definition (where the derivatives are all zero). Properties. Theorem: Let formula_2, then formula_3. Theorem: Let formula_2, then formula_1 is diagonalizable and has eigenvalues that are formula_4. Theorem: Let formula_2, then formula_5 is a projector onto the invariant subspace associated with the eigenvalues in the right-half plane, and analogously for formula_6 and the left-half plane. Theorem: Let formula_2, and formula_7 be a Jordan decomposition such that formula_8 corresponds to eigenvalues with positive real part and formula_9 to eigenvalue with negative real part. Then formula_10, where formula_11 and formula_12 are identity matrices of sizes corresponding to formula_8 and formula_9, respectively. Computational methods. The function can be computed with generic methods for matrix functions, but there are also specialized methods. Newton iteration. The Newton iteration can be derived by observing that formula_13, which in terms of matrices can be written as formula_14, where we use the matrix square root. If we apply the Babylonian method to compute the square root of the matrix formula_15, that is, the iteration formula_16, and define the new iterate formula_17, we arrive at the iteration formula_18, where typically formula_19. Convergence is global, and locally it is quadratic. The Newton iteration uses the explicit inverse of the iterates formula_20. Newton–Schulz iteration. To avoid the need of an explicit inverse used in the Newton iteration, the inverse can be approximated with one step of the Newton iteration for the inverse, formula_21, derived by Schulz() in 1933. Substituting this approximation into the previous method, the new method becomes formula_22. Convergence is (still) quadratic, but only local (guaranteed for formula_23). Applications. Solutions of Sylvester equations. Theorem: Let formula_24 and assume that formula_25 and formula_26 are stable, then the unique solution to the Sylvester equation, formula_27, is given by formula_28 such that formula_29 "Proof sketch:" The result follows from the similarity transform formula_30 since formula_31 due to the stability of formula_25 and formula_26. The theorem is, naturally, also applicable to the Lyapunov equation. However, due to the structure the Newton iteration simplifies to only involving inverses of formula_25 and formula_32. Solutions of algebraic Riccati equations. There is a similar result applicable to the algebraic Riccati equation, formula_33. 
Define formula_34 as formula_35 Under the assumption that formula_36 are Hermitian and there exists a unique stabilizing solution, in the sense that formula_37 is stable, that solution is given by the over-determined, but consistent, linear system formula_38 "Proof sketch:" The similarity transform formula_39 and the stability of formula_37 implies that formula_40 for some matrix formula_41. Computations of matrix square-root. The Denman–Beavers iteration for the square root of a matrix can be derived from the Newton iteration for the matrix sign function by noticing that formula_42 is a degenerate algebraic Riccati equation and by definition a solution formula_43 is the square root of formula_25. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
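The Newton iteration and the Sylvester-equation construction above translate directly into NumPy. The following sketch is illustrative only (the function name, tolerance and the stable test matrices A, B, C are assumptions): it runs the iteration Z_{k+1} = (Z_k + Z_k^{-1})/2 and then reads the Sylvester solution X off the top-right block of the sign of the block matrix [[A, -C], [0, -B]], as in the theorem above.

```python
import numpy as np

def matrix_sign(A, tol=1e-12, max_iter=100):
    """Newton iteration Z_{k+1} = (Z_k + Z_k^{-1}) / 2 with Z_0 = A."""
    Z = np.array(A, dtype=complex)
    for _ in range(max_iter):
        Z_new = 0.5 * (Z + np.linalg.inv(Z))
        if np.linalg.norm(Z_new - Z, 1) <= tol * np.linalg.norm(Z_new, 1):
            return Z_new
        Z = Z_new
    return Z

# Stable (all eigenvalues in the open left half-plane) test matrices A and B
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
B = np.array([[-1.0, 0.5], [0.0, -4.0]])
C = np.array([[1.0, 2.0], [3.0, 4.0]])

n = A.shape[0]
block = np.block([[A, -C], [np.zeros((n, n)), -B]])
S = matrix_sign(block)

# Theorem: sign(block) = [[-I, 2X], [0, I]], so X sits in the top-right block
X = 0.5 * S[:n, n:]
print(np.allclose(A @ X + X @ B, C))   # residual check of A X + X B = C
```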
[ { "math_id": 0, "text": "\n \\operatorname{csgn}(z)= \\begin{cases}\n 1 & \\text{if } \\mathrm{Re}(z) > 0, \\\\\n -1 & \\text{if } \\mathrm{Re}(z) < 0,\n\\end{cases}\n" }, { "math_id": 1, "text": "\\operatorname{csgn}(A)" }, { "math_id": 2, "text": "A\\in\\C^{n\\times n}" }, { "math_id": 3, "text": "\\operatorname{csgn}(A)^2 = I" }, { "math_id": 4, "text": "\\pm 1" }, { "math_id": 5, "text": "(I+\\operatorname{csgn}(A))/2" }, { "math_id": 6, "text": "(I-\\operatorname{csgn}(A))/2" }, { "math_id": 7, "text": "A = P\\begin{bmatrix}J_+ & 0 \\\\ 0 & J_-\\end{bmatrix}P^{-1}" }, { "math_id": 8, "text": "J_+" }, { "math_id": 9, "text": "J_-" }, { "math_id": 10, "text": "\\operatorname{csgn}(A) = P\\begin{bmatrix}I_+ & 0 \\\\ 0 & -I_-\\end{bmatrix}P^{-1}" }, { "math_id": 11, "text": "I_+" }, { "math_id": 12, "text": "I_-" }, { "math_id": 13, "text": "\\operatorname{csgn}(x) = \\sqrt{x^2}/x" }, { "math_id": 14, "text": "\\operatorname{csgn}(A) = A^{-1}\\sqrt{A^2}" }, { "math_id": 15, "text": "A^2" }, { "math_id": 16, "text": "X_{k+1} = \\frac{1}{2} \\left(X_k + A X_k^{-1}\\right)" }, { "math_id": 17, "text": "Z_k = A^{-1}X_k" }, { "math_id": 18, "text": "\nZ_{k+1} = \\frac{1}{2}\\left(Z_k + Z_k^{-1}\\right)\n" }, { "math_id": 19, "text": "Z_0=A" }, { "math_id": 20, "text": "Z_k" }, { "math_id": 21, "text": "Z_k^{-1}\\approx Z_k\\left(2I-Z_k^2\\right)" }, { "math_id": 22, "text": "\nZ_{k+1} = \\frac{1}{2}Z_k\\left(3I - Z_k^2\\right)\n" }, { "math_id": 23, "text": "\\|I-A^2\\|<1" }, { "math_id": 24, "text": "A,B,C\\in\\R^{n\\times n}" }, { "math_id": 25, "text": "A" }, { "math_id": 26, "text": "B" }, { "math_id": 27, "text": "AX +XB = C" }, { "math_id": 28, "text": "X" }, { "math_id": 29, "text": "\n\\begin{bmatrix}\n-I &2X\\\\ 0 & I\n\\end{bmatrix}\n=\n\\operatorname{csgn}\n\\left(\n\\begin{bmatrix}\nA &-C\\\\ 0 & -B\n\\end{bmatrix}\n\\right).\n" }, { "math_id": 30, "text": "\n\\begin{bmatrix}\nA &-C\\\\ 0 & -B\n\\end{bmatrix}\n=\n\\begin{bmatrix}\nI & X \\\\ 0 & I\n\\end{bmatrix}\n\\begin{bmatrix}\nA & 0\\\\ 0 & -B\n\\end{bmatrix}\n\\begin{bmatrix}\nI & X \\\\ 0 & I\n\\end{bmatrix}^{-1},\n" }, { "math_id": 31, "text": "\n\\operatorname{csgn}\n\\left(\n\\begin{bmatrix}\nA &-C\\\\ 0 & -B\n\\end{bmatrix}\n\\right)\n=\n\\begin{bmatrix}\nI & X \\\\ 0 & I\n\\end{bmatrix}\n\\begin{bmatrix}\nI & 0\\\\ 0 & -I\n\\end{bmatrix}\n\\begin{bmatrix}\nI & -X \\\\ 0 & I\n\\end{bmatrix},\n" }, { "math_id": 32, "text": "A^T" }, { "math_id": 33, "text": "A^H P + P A - P F P + Q = 0 " }, { "math_id": 34, "text": "V,W\\in\\Complex^{2n\\times n} " }, { "math_id": 35, "text": "\n\\begin{bmatrix}\nV & W\n\\end{bmatrix}\n=\n\\operatorname{csgn}\n\\left(\n\\begin{bmatrix}\nA^H &Q\\\\ F & -A\n\\end{bmatrix}\n\\right)\n-\n\\begin{bmatrix}\nI &0\\\\ 0 & I\n\\end{bmatrix}.\n" }, { "math_id": 36, "text": "F,Q\\in\\Complex^{n\\times n} " }, { "math_id": 37, "text": "A-FP " }, { "math_id": 38, "text": "\nVP=-W.\n" }, { "math_id": 39, "text": "\n\\begin{bmatrix}\nA^H &Q\\\\ F & -A\n\\end{bmatrix}\n=\n\\begin{bmatrix}\nP & -I \\\\ I & 0\n\\end{bmatrix}\n\\begin{bmatrix}\n(-A-FP) & -F\\\\ 0 & (A-FP)\n\\end{bmatrix}\n\\begin{bmatrix}\nP & -I \\\\ I & 0\n\\end{bmatrix}^{-1},\n" }, { "math_id": 40, "text": "\n\\left(\n\\operatorname{csgn}\n\\left(\n\\begin{bmatrix}\nA^H &Q\\\\ F & -A\n\\end{bmatrix}\n\\right)\n-\n\\begin{bmatrix}\nI &0\\\\ 0 & I\n\\end{bmatrix}\n\\right)\n\\begin{bmatrix}\nX & -I \\\\ I & 0\n\\end{bmatrix}\n=\n\\begin{bmatrix}\nX & -I\\\\ I & 0\n\\end{bmatrix}\n\\begin{bmatrix}\n0 & Y \\\\ 0 & -2I\n\\end{bmatrix},\n" }, 
{ "math_id": 41, "text": "Y\\in\\Complex^{n\\times n} " }, { "math_id": 42, "text": "A - PIP=0" }, { "math_id": 43, "text": "P" } ]
https://en.wikipedia.org/wiki?curid=68195735
681962
Coupling constant
Parameter describing the strength of a force In physics, a coupling constant or gauge coupling parameter (or, more simply, a coupling), is a number that determines the strength of the force exerted in an interaction. Originally, the coupling constant related the force acting between two static bodies to the "charges" of the bodies (i.e. the electric charge for electrostatic and the mass for Newtonian gravity) divided by the distance squared, formula_0, between the bodies; thus: formula_1 in formula_2 for Newtonian gravity and formula_3 in formula_4 for electrostatic. This description remains valid in modern physics for linear theories with static bodies and massless force carriers. A modern and more general definition uses the Lagrangian formula_5 (or equivalently the Hamiltonian formula_6) of a system. Usually, formula_5 (or formula_6) of a system describing an interaction can be separated into a "kinetic part" formula_7 and an "interaction part" formula_8: formula_9 (or formula_10). In field theory, formula_8 always contains 3 fields terms or more, expressing for example that an initial electron (field 1) interacts with a photon (field 2) producing the final state of the electron (field 3). In contrast, the "kinetic part" formula_7 always contains only two fields, expressing the free propagation of an initial particle (field 1) into a later state (field 2). The coupling constant determines the magnitude of the formula_7 part with respect to the formula_8 part (or between two sectors of the interaction part if several fields that couple differently are present). For example, the electric charge of a particle is a coupling constant that characterizes an interaction with two charge-carrying fields and one photon field (hence the common Feynman diagram with two arrows and one wavy line). Since photons mediate the electromagnetic force, this coupling determines how strongly electrons feel such a force, and has its value fixed by experiment. By looking at the QED Lagrangian, one sees that indeed, the coupling sets the proportionality between the kinetic term formula_11 and the interaction term formula_12. A coupling plays an important role in dynamics. For example, one often sets up hierarchies of approximation based on the importance of various coupling constants. In the motion of a large lump of magnetized iron, the magnetic forces may be more important than the gravitational forces because of the relative magnitudes of the coupling constants. However, in classical mechanics, one usually makes these decisions directly by comparing forces. Another important example of the central role played by coupling constants is that they are the expansion parameters for first-principle calculations based on perturbation theory, which is the main method of calculation in many branches of physics. Fine-structure constant. Couplings arise naturally in a quantum field theory. A special role is played in relativistic quantum theories by couplings that are dimensionless; i.e., are pure numbers. An example of a dimensionless such constant is the fine-structure constant, formula_13 where "e" is the charge of an electron, "ε"0 is the permittivity of free space, "ħ" is the reduced Planck constant and "c" is the speed of light. This constant is proportional to the square of the coupling strength of the charge of an electron to the electromagnetic field. Gauge coupling. 
In a non-abelian gauge theory, the gauge coupling parameter, formula_14, appears in the Lagrangian as formula_15 (where G is the gauge field tensor) in some conventions. In another widely used convention, G is rescaled so that the coefficient of the kinetic term is 1/4 and formula_14 appears in the covariant derivative. This should be understood to be similar to a dimensionless version of the elementary charge defined as formula_16 Weak and strong coupling. In a quantum field theory with a coupling "g", if "g" is much less than 1, the theory is said to be "weakly coupled". In this case, it is well described by an expansion in powers of "g", called perturbation theory. If the coupling constant is of order one or larger, the theory is said to be "strongly coupled". An example of the latter is the hadronic theory of strong interactions (which is why it is called strong in the first place). In such a case, non-perturbative methods need to be used to investigate the theory. In quantum field theory, the dimension of the coupling plays an important role in the renormalizability property of the theory, and therefore on the applicability of perturbation theory. If the coupling is dimensionless in the natural units system (i.e. formula_17, formula_18), like in QED, QCD, and the weak interaction, the theory is renormalizable and all the terms of the expansion series are finite (after renormalization). If the coupling is dimensionful, as e.g. in gravity (formula_19), the Fermi theory (formula_20) or the chiral perturbation theory of the strong force (formula_21), then the theory is usually not renormalizable. Perturbation expansions in the coupling might still be feasible, albeit within limitations, as most of the higher order terms of the series will be infinite. Running coupling. One may probe a quantum field theory at short times or distances by changing the wavelength or momentum, k, of the probe used. With a high frequency (i.e., short time) probe, one sees virtual particles taking part in every process. This apparent violation of the conservation of energy may be understood heuristically by examining the uncertainty relation formula_22 which virtually allows such violations at short times. The foregoing remark only applies to some formulations of quantum field theory, in particular, canonical quantization in the interaction picture. In other formulations, the same event is described by "virtual" particles going off the mass shell. Such processes renormalize the coupling and make it dependent on the energy scale, "μ", at which one probes the coupling. The dependence of a coupling "g"("μ") on the energy-scale is known as "running of the coupling". The theory of the running of couplings is given by the renormalization group, though it should be kept in mind that the renormalization group is a more general concept describing any sort of scale variation in a physical system (see the full article for details). Phenomenology of the running of a coupling. The renormalization group provides a formal way to derive the running of a coupling, yet the phenomenology underlying that running can be understood intuitively. As explained in the introduction, the coupling "constant" sets the magnitude of a force which behaves with distance as formula_23. 
The formula_23-dependence was first explained by Faraday as the decrease of the force flux: at a point "B" distant by formula_24 from the body "A" generating a force, this one is proportional to the field flux going through an elementary surface "S" perpendicular to the line "AB". As the flux spreads uniformly through space, it decreases according to the solid angle sustaining the surface "S". In the modern view of quantum field theory, the formula_23 comes from the expression in position space of the propagator of the force carriers. For relatively weakly-interacting bodies, as is generally the case in electromagnetism or gravity or the nuclear interactions at short distances, the exchange of a single force carrier is a good first approximation of the interaction between the bodies, and classically the interaction will obey a formula_23-law (note that if the force carrier is massive, there is an additional formula_24 dependence). When the interactions are more intense (e.g. the charges or masses are larger, or formula_24 is smaller) or happens over briefer time spans (smaller formula_24), more force carriers are involved or particle pairs are created, see Fig. 1, resulting in the break-down of the formula_23 behavior. The classical equivalent is that the field flux does not propagate freely in space any more but e.g. undergoes screening from the charges of the extra virtual particles, or interactions between these virtual particles. It is convenient to separate the first-order formula_23 law from this extra formula_24-dependence. This latter is then accounted for by being included in the coupling, which then becomes formula_25-dependent, (or equivalently "μ"-dependent). Since the additional particles involved beyond the single force carrier approximation are always virtual, i.e. transient quantum field fluctuations, one understands why the running of a coupling is a genuine quantum and relativistic phenomenon, namely an effect of the high-order Feynman diagrams on the strength of the force. Since a running coupling effectively accounts for microscopic quantum effects, it is often called an "effective coupling", in contrast to the "bare coupling (constant)" present in the Lagrangian or Hamiltonian. Beta functions. In quantum field theory, a "beta function", "β"("g"), encodes the running of a coupling parameter, "g". It is defined by the relation formula_26 where "μ" is the energy scale of the given physical process. If the beta functions of a quantum field theory vanish, then the theory is scale-invariant. The coupling parameters of a quantum field theory can flow even if the corresponding classical field theory is scale-invariant. In this case, the non-zero beta function tells us that the classical scale-invariance is anomalous. QED and the Landau pole. If a beta function is positive, the corresponding coupling increases with increasing energy. An example is quantum electrodynamics (QED), where one finds by using perturbation theory that the beta function is positive. In particular, at low energies, "α" ≈ 1/137, whereas at the scale of the Z boson, about 90 GeV, one measures "α" ≈ 1/127. Moreover, the perturbative beta function tells us that the coupling continues to increase, and QED becomes "strongly coupled" at high energy. In fact the coupling apparently becomes infinite at some finite energy. This phenomenon was first noted by Lev Landau, and is called the Landau pole. 
However, one cannot expect the perturbative beta function to give accurate results at strong coupling, and so it is likely that the Landau pole is an artifact of applying perturbation theory in a situation where it is no longer valid. The true scaling behaviour of formula_27 at large energies is not known. QCD and asymptotic freedom. In non-abelian gauge theories, the beta function can be negative, as first found by Frank Wilczek, David Politzer and David Gross. An example of this is the beta function for quantum chromodynamics (QCD), and as a result the QCD coupling decreases at high energies. Furthermore, the coupling decreases logarithmically, a phenomenon known as asymptotic freedom (the discovery of which was awarded with the Nobel Prize in Physics in 2004). The coupling decreases approximately as formula_28 where formula_29 is the energy of the process involved and "β"0 is a constant first computed by Wilczek, Gross and Politzer. Conversely, the coupling increases with decreasing energy. This means that the coupling becomes large at low energies, and one can no longer rely on perturbation theory. Hence, the actual value of the coupling constant is only defined at a given energy scale. In QCD, the Z boson mass scale is typically chosen, providing a value of the strong coupling constant of αs(MZ2 ) = 0.1179 ± 0.0010. In 2023 Atlas measured "α"s(MZ2) = 0.1183 ± 0.0009 the most precise so far. The most precise measurements stem from lattice QCD calculations, studies of tau-lepton decay, as well as by the reinterpretation of the transverse momentum spectrum of the Z boson. QCD scale. In quantum chromodynamics (QCD), the quantity Λ is called the QCD scale. The value is formula_30 for three "active" quark flavors, "viz" when the energy–momentum involved in the process allows production of only the up, down and strange quarks, but not the heavier quarks. This corresponds to energies below 1.275 GeV. At higher energy, Λ is smaller, e.g. formula_31 MeV above the bottom quark mass of about 5 GeV. The meaning of the minimal subtraction (MS) scheme scale ΛMS is given in the article on dimensional transmutation. The proton-to-electron mass ratio is primarily determined by the QCD scale. String theory. A remarkably different situation exists in string theory since it includes a dilaton. An analysis of the string spectrum shows that this field must be present, either in the bosonic string or the NS–NS sector of the superstring. Using vertex operators, it can be seen that exciting this field is equivalent to adding a term to the action where a scalar field couples to the Ricci scalar. This field is therefore an entire function worth of coupling constants. These coupling constants are not pre-determined, adjustable, or universal parameters; they depend on space and time in a way that is determined dynamically. Sources that describe the string coupling as if it were fixed are usually referring to the vacuum expectation value. This is free to have any value in the bosonic theory where there is no superpotential. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
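The approximate one-loop running of the strong coupling quoted above can be evaluated numerically. The Python snippet below is a rough sketch rather than a precision tool: it assumes the common one-loop convention beta0 = (33 - 2*nf)/(12*pi) for the constant in the formula, uses the Lambda values quoted in the QCD scale section, and ignores quark-mass thresholds and higher-order terms.

```python
import numpy as np

def alpha_s_one_loop(Q, Lambda_qcd, n_f):
    """One-loop running coupling alpha_s(Q^2) ~ 1 / (beta0 * ln(Q^2 / Lambda^2)).

    Assumes the convention beta0 = (33 - 2*n_f) / (12*pi); energies in GeV.
    """
    beta0 = (33.0 - 2.0 * n_f) / (12.0 * np.pi)
    return 1.0 / (beta0 * np.log(Q**2 / Lambda_qcd**2))

# Low scale: three active flavours, Lambda ~ 0.332 GeV (value quoted above)
print(alpha_s_one_loop(2.0, 0.332, n_f=3))    # ~0.4 near 2 GeV: already strong

# Z-boson scale: five active flavours, Lambda ~ 0.210 GeV
print(alpha_s_one_loop(91.19, 0.210, n_f=5))  # ~0.13, near the measured ~0.118
```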
[ { "math_id": 0, "text": "r^2" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "F=G m_1 m_2/r^2" }, { "math_id": 3, "text": "k_\\text{e}" }, { "math_id": 4, "text": "F=k_\\text{e}q_1 q_2/r^2" }, { "math_id": 5, "text": "\\mathcal{L}" }, { "math_id": 6, "text": "\\mathcal{H}" }, { "math_id": 7, "text": "T" }, { "math_id": 8, "text": "V" }, { "math_id": 9, "text": "\\mathcal{L}=T-V" }, { "math_id": 10, "text": "\\mathcal{H}=T+V" }, { "math_id": 11, "text": "T = \\bar \\psi (i\\hbar c \\gamma^\\sigma\\partial_\\sigma - mc^2) \\psi - {1 \\over 4\\mu_0} F_{\\mu \\nu} F^{\\mu \\nu} " }, { "math_id": 12, "text": "V = - e\\bar \\psi (\\hbar c \\gamma^\\sigma A_\\sigma) \\psi " }, { "math_id": 13, "text": "\\alpha = \\frac{e^2}{4\\pi\\varepsilon_0\\hbar c} ," }, { "math_id": 14, "text": "g" }, { "math_id": 15, "text": "\\frac1{4g^2}{\\rm Tr}\\,G_{\\mu\\nu}G^{\\mu\\nu}," }, { "math_id": 16, "text": "\\frac{e}{\\sqrt{\\varepsilon_0\\hbar c}} = \\sqrt{4\\pi\\alpha} \\approx 0.30282212 \\ ~~." }, { "math_id": 17, "text": "c=1" }, { "math_id": 18, "text": "\\hbar=1" }, { "math_id": 19, "text": "[G_N]=\\text{energy}^{-2}" }, { "math_id": 20, "text": "[G_F]=\\text{energy}^{-2}" }, { "math_id": 21, "text": "[F]=\\text{energy}" }, { "math_id": 22, "text": "\\Delta E\\Delta t \\ge \\frac{\\hbar}{2}," }, { "math_id": 23, "text": "1/r^2" }, { "math_id": 24, "text": "r" }, { "math_id": 25, "text": "1/r" }, { "math_id": 26, "text": "\\beta(g) = \\mu\\frac{\\partial g}{\\partial \\mu} = \\frac{\\partial g}{\\partial \\ln \\mu}," }, { "math_id": 27, "text": "\\alpha" }, { "math_id": 28, "text": " \\alpha_\\text{s}(k^2) \\ \\stackrel{\\mathrm{def}}{=}\\ \\frac{g_\\text{s}^2(k^2)}{4\\pi} \\approx \\frac1{\\beta_0\\ln\\left({k^2}/{\\Lambda^2}\\right)}," }, { "math_id": 29, "text": " k " }, { "math_id": 30, "text": "\\Lambda_{\\rm MS} = 332\\pm17\\text{ MeV}" }, { "math_id": 31, "text": "\\Lambda_{\\rm MS} = 210\\pm14 " } ]
https://en.wikipedia.org/wiki?curid=681962
682001
Singlet state
Special low-energy state in quantum mechanics In quantum mechanics, a singlet state usually refers to a system in which all electrons are paired. The term 'singlet' originally meant a linked set of particles whose net angular momentum is zero, that is, whose overall spin quantum number formula_0. As a result, there is only one spectral line of a singlet state. In contrast, a doublet state contains one unpaired electron and shows splitting of spectral lines into a doublet, and a triplet state has two unpaired electrons and shows threefold splitting of spectral lines. History. Singlets and the related spin concepts of doublets and triplets occur frequently in atomic physics and nuclear physics, where one often needs to determine the total spin of a collection of particles. Since the only observed fundamental particle with zero spin is the extremely inaccessible Higgs boson, singlets in everyday physics are necessarily composed of sets of particles whose individual spins are non-zero, e.g. 1/2 or 1. The origin of the term "singlet" is that bound quantum systems with zero net angular momentum emit photons within a single spectral line, as opposed to double lines (doublet state) or triple lines (triplet state). The number of spectral lines formula_1 in this singlet-style terminology has a simple relationship to the spin quantum number: formula_2, and formula_3. Singlet-style terminology is also used for systems whose mathematical properties are similar or identical to angular momentum spin states, even when traditional spin is not involved. In particular, the concept of isospin was developed early in the history of particle physics to address the remarkable similarities of protons and neutrons. Within atomic nuclei, protons and neutrons behave in many ways as if they were a single type of particle, the nucleon, with two states. The proton-neutron pair thus by analogy was referred to as a doublet, and the hypothesized underlying nucleon was assigned a spin-like doublet quantum number formula_4 to differentiate between those two states. Thus the neutron became a nucleon with isospin formula_5, and the proton a nucleon with formula_6. The isospin doublet notably shares the same SU(2) mathematical structure as the formula_7 angular momentum doublet. This early particle physics focus on nucleons was subsequently replaced by the more fundamental quark model, in which protons and neutrons are interpreted as bound systems of three quarks each. The isospin analogy also applies to quarks, and is the source of the names up (as in "isospin up") and down (as in "isospin down") for the quarks found in protons and neutrons. While for angular momentum states the singlet-style terminology is seldom used beyond triplets (spin=1), it has proven historically useful for describing much larger particle groups and subgroups that share certain features and are distinguished from each other by quantum numbers beyond spin. An example of this broader use of singlet-style terminology is the nine-member "nonet" of the pseudoscalar mesons. Examples. The simplest possible angular momentum singlet is a set (bound or unbound) of two spin-1/2 (fermion) particles that are oriented so that their spin directions ("up" and "down") oppose each other; that is, they are antiparallel. The simplest possible bound particle pair capable of exhibiting the singlet state is positronium, which consists of an electron and positron (antielectron) bound by their opposite electric charges.
The electron and positron in positronium can also have identical or parallel spin orientations, which results in an experimentally-distinct form of positronium with a spin 1 or triplet state. An unbound singlet consists of a pair of entities small enough to exhibit quantum behavior (e.g. particles, atoms, or small molecules), not necessarily of the same type, for which four conditions hold: Any spin value can be used for the pair, but the entanglement effect will be strongest both mathematically and experimentally if the spin magnitude is as small as possible, with the maximum possible effect occurring for entities with spin-1/2 (such as electrons and positrons). Early thought experiments for unbound singlets usually assumed the use of two antiparallel spin-1/2 electrons. However, actual experiments have tended to focus instead on using pairs of spin 1 photons. While the entanglement effect is somewhat less pronounced with such spin 1 particles, photons are easier to generate in correlated pairs and (usually) easier to keep in an unperturbed quantum state. Mathematical representations. The ability of positronium to form both singlet and triplet states is described mathematically by saying that the product of two doublet representations (meaning the electron and positron, which are both spin-1/2 doublets) can be decomposed into the sum of an adjoint representation (the triplet or spin 1 state) and a trivial representation (the singlet or spin 0 state). While the particle interpretation of the positronium triplet and singlet states is arguably more intuitive, the mathematical description enables precise calculations of quantum states and probabilities. This greater mathematical precision for example makes it possible to assess how singlets and doublets behave under rotation operations. Since a spin-1/2 electron transforms as a doublet under rotation, its experimental response to rotation can be predicted by using the fundamental representation of that doublet, specifically the Lie group SU(2). Applying the operator formula_8 to the spin state of the electron thus will always result in formula_9, or spin-1/2, since the spin-up and spin-down states are both eigenstates of the operator with the same eigenvalue. Similarly, for a system of two electrons, it is possible to measure the total spin by applying formula_10, where formula_11 acts on electron 1 and formula_12 acts on electron 2. Since this system has two possible spins, it also has two possible eigenvalues and corresponding eigenstates for the total spin operator, corresponding to the spin 0 and spin 1 states. Singlets and entangled states. Particles in singlet states do not need to be locally bound to each other. For example, when the spin states of two electrons are correlated by their emission from a single quantum event that conserves angular momentum, the resulting electrons remain in a shared singlet state even as their separation in space increases indefinitely over time, provided only that their angular momentum states remain unperturbed. In Dirac notation this distance-indifferent singlet state is usually represented as: formula_13 The possibility of spatially extended unbound singlet states has considerable historical and even philosophical importance, since considering such states contributed importantly to the theoretical and experimental exploration and verification of what is now called quantum entanglement. 
Along with Podolsky and Rosen, Einstein proposed the EPR paradox thought experiment to help define his concerns with what he viewed as the non-locality of spatially separated entangled particles, using it in an argument that quantum mechanics was incomplete. In 1951 David Bohm formulated a version of the "paradox" using spin singlet states. The difficulty captured by the EPR-Bohm thought experiment was that by measuring a spatial component of the angular momentum of either of two particles that have been prepared in a spatially distributed singlet state, the quantum state of the remaining particle, conditioned on the measurement result obtained, appears to be "instantaneously" altered, even if the two particles have over time become separated by light years of distance. Decades later, John Stewart Bell, who was a strong advocate of Einstein's locality-first perspective, proved Bell's theorem and showed that it could be used to assess the existence or non-existence of singlet entanglement experimentally. The irony was that instead of disproving entanglement, which was Bell's hope, subsequent experiments instead established the reality of entanglement. In fact, there now exist commercial quantum encryption devices whose operation depends fundamentally on the existence and behavior of spatially extended singlets. A weaker form of Einstein's locality principle remains intact, which is this: Classical information cannot be transmitted faster than the speed of light "c", not even by using quantum entanglement events. This form of locality is weaker than the notion of "Einstein locality" or "local realism" used in the EPR and Bell's Theorem papers, but is sufficient to prevent the emergence of causality paradoxes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
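The statement that the singlet is the zero-eigenvalue state of the total spin operator can be checked directly with small matrices. The NumPy sketch below (illustrative only, written with hbar = 1) builds (S1 + S2)^2 for two spin-1/2 particles from the Pauli matrices and confirms that the antiparallel combination written above has eigenvalue s(s+1) = 0, while the symmetric combination has eigenvalue 2, corresponding to spin 1.

```python
import numpy as np

# Spin-1/2 operators (hbar = 1): S_i = sigma_i / 2
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Total spin S = S_1 + S_2 acting on the two-particle (4-dimensional) space
S_total = [np.kron(s, I2) + np.kron(I2, s) for s in (sx, sy, sz)]
S2 = sum(S @ S for S in S_total)    # (S_1 + S_2)^2

up = np.array([1, 0], dtype=complex)
dn = np.array([0, 1], dtype=complex)

singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)
triplet0 = (np.kron(up, dn) + np.kron(dn, up)) / np.sqrt(2)

print(np.allclose(S2 @ singlet, 0 * singlet))    # s(s+1) = 0  -> spin 0
print(np.allclose(S2 @ triplet0, 2 * triplet0))  # s(s+1) = 2  -> spin 1
```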
[ { "math_id": 0, "text": "s=0" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "n=2s+1" }, { "math_id": 3, "text": "s=(n-1)/2" }, { "math_id": 4, "text": "I_3=\\tfrac{1}{2}" }, { "math_id": 5, "text": "I_3(n)=-\\tfrac{1}{2}" }, { "math_id": 6, "text": "I_3(p)=+\\tfrac{1}{2}" }, { "math_id": 7, "text": "s=\\tfrac{1}{2}" }, { "math_id": 8, "text": "\\vec{S}^2" }, { "math_id": 9, "text": "\\hbar^2 \\left(\\frac{1}{2}\\right) \\left(\\frac{1}{2} + 1\\right) = \\left(\\frac{3}{4}\\right) \\hbar^2" }, { "math_id": 10, "text": "\\left(\\vec{S}_1 + \\vec{S}_2\\right)^2" }, { "math_id": 11, "text": "\\vec{S}_1" }, { "math_id": 12, "text": "\\vec{S}_2" }, { "math_id": 13, "text": "\\frac{1}{\\sqrt{2}}\\left( \\left|\\uparrow \\downarrow \\right\\rangle - \\left|\\downarrow \\uparrow \\right\\rangle\\right)." } ]
https://en.wikipedia.org/wiki?curid=682001
68209472
Fractional social choice
Fractional, stochastic, or weighted social choice is a branch of social choice theory in which the collective decision is not a single alternative, but rather a weighted sum of two or more alternatives. For example, if society has to choose between three candidates (A, B, or C), then in standard social choice exactly one of these candidates is chosen. By contrast, in fractional social choice it is possible to choose any linear combination of these, e.g. "2/3 of A and 1/3 of B". A common interpretation of the weighted sum is as a lottery, in which candidate A is chosen with probability 2/3 and candidate B is chosen with probability 1/3. The rule can also be interpreted as a recipe for sharing, for example: Formal definitions. There is a finite set of "alternatives" (also called: "candidates"), and a finite set of "voters" (also called: "agents"). Voters may have different preferences over the alternatives. The agents' preferences can be expressed in several ways: A "random social choice function" (RSCF) takes as input the set of voters' preference relations. It returns as output a "mixture" - a vector "p" of real numbers in [0,1], one number for each candidate, such that the sum of numbers is 1. This mixture can be interpreted as a random variable (a lottery), whose value equals each candidate "x" with probability "p"("x"). It can also be interpreted as a deterministic assignment of a fractional share to each candidate. Since the voters express preferences over single candidates only, in order to evaluate RSCFs one needs to "lift" these preferences to preferences over mixtures. This lifting process is often called a "lottery extension", and it results in one of several stochastic orderings. Properties. Basic properties. Two basic desired properties of RSCFs are anonymity - the names of the voters do not matter, and neutrality - the names of the outcomes do not matter. Anonymity and neutrality cannot always be satisfied by a deterministic social choice function. For example, if there are two voters and two alternatives A and B, and each voter wants a different alternative, then the only anonymous and neutral mixture is 1/2*A+1/2*B. Therefore, the use of mixtures is essential to guarantee the basic fairness properties. Consistency properties. The following properties involve changes in the set of voters or the set of alternatives. Condorcet consistency - if there exists a Condorcet winner, then the function returns a degenerate mixture in which this winner gets 1 and the other alternatives get 0 (that is, the Condorcet winner is chosen with probability 1). Agenda consistency - let "p" be a mixture, and let A,B be sets of alternatives that contain the support of "p". Then, the function returns "p" for A union B, iff it returns "p" for A and for B. This property was called expansion/contraction by Sen. Population consistency - if the function returns a mixture "p" for two disjoint sets of voters, then it returns the same "p" for their union. Independence of clones (also called cloning consistency) - if an alternative is "cloned", such that all voters rank all its clones one near the other, then the weight (=probability) of all the other alternatives in the returned mixture is not affected. These properties guarantee that a central planner cannot perform simple manipulations such as splitting alternatives, cloning alternatives, or splitting the population. Note that consistency properties depend only on the rankings of individual alternatives - they do not require ranking of mixtures. 
Mixture-comparison properties. The following properties involve comparisons of mixtures. To define them exactly, one needs an assumption on how voters rank mixtures. This requires a stochastic ordering on the lotteries. Several such orderings exist; the most common in social choice theory, in order of strength, are DD (deterministic dominance), BD (bilinear dominance), SD (stochastic dominance) and PC (pairwise-comparison dominance). See stochastic ordering for definitions and examples. Efficiency - no mixture is better for at least one voter and at least as good for all voters. One can define DD-efficiency, BD-efficiency, SD-efficiency, PC-efficiency, and ex-post efficiency (the final outcome is always efficient). Strategyproofness - reporting false preferences does not lead to a mixture that is better for the voter. Again, one can define DD-strategyproofness, BD-strategyproofness, SD-strategyproofness and PC-strategyproofness. Participation - abstaining from participation does not lead to a mixture that is better for the voter. Again, one can define DD-participation, BD-participation, SD-participation and PC-participation. Common functions. Some commonly-used rules for random social choice are: Random dictatorship - a voter is selected at random, and determines the outcome. If the preferences are strict, this yields a mixture in which the weight of each alternative is exactly proportional to the number of voters who rank it first. If the preferences are weak, and the chosen voter is indifferent between two or more best options, then a second voter is selected at random to choose among them, and so on. This extension is called random serial dictatorship. It satisfies ex-post efficiency, strong SD-strategyproofness, very-strong-SD-participation, agenda-consistency, and cloning-consistency. It fails Condorcet consistency, composition consistency, and (with weak preferences) population consistency. Max Borda - returns a mixture in which all alternatives with the highest Borda count have an equal weight, and all other alternatives have a weight of 0. In other words, it picks randomly one of the Borde winners (other score functions can be used instead of Borda). It satisfies SD-efficiency, strong-SD participation, and population-consistency, but does not satisfy any form of strategyproofness, or any other consistency. Proportional Borda - returns a mixture in which the weight of each alternative is proportional to its Borda count. In other words, it randomizes between "all" alternatives, where the probability of each alternative is proportional to its score (other score functions can be used instead of Borda). It satisfies strong SD-strategyproofness, strong SD-participation, and population consistency, but not any form of efficiency, or any other consistency. Maximal lotteries - a rule based on pairwise comparisons of alternatives. For any two alternatives "x,y", we compute how many voters prefer "x" to y, and how many voters prefer "y" to "x", and let "Mxy" be the difference. The resulting matrix "M" is called the "majority margin matrix". A mixture "p" is called "maximal" iff formula_0. When interpreted as a lottery, it means that "p" is weakly preferred to any other lottery by an expected majority of voters (the expected number of agents who prefer the alternative returned by "p" to that returned by any other lottery "q", is at least as large as the expected number of agents who prefer the alternative returned by "q" to that returned by "p"). 
A maximal lottery is the continuous analogue of a Condorcet winner. However, while a Condorcet winner might not exist, a maximal lottery always exists. This follows from applying the Minimax theorem to an appropriate symmetric two-player zero-sum game. It satisfies PC-efficiency, DD-strategyproofness, PC-participation, and all consistency properties - particularly, Condorcet consistency.
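For strict rankings, the mixtures returned by random dictatorship and proportional Borda can be computed in a few lines. The Python sketch below uses a made-up three-voter profile (an assumption for illustration only); computing a maximal lottery would additionally require solving the symmetric zero-sum game defined by the majority margin matrix M, for example with a linear program.

```python
from collections import Counter

# Each voter's strict ranking, best alternative first (hypothetical profile)
profile = [
    ["A", "B", "C"],
    ["A", "C", "B"],
    ["B", "C", "A"],
]
alternatives = sorted({x for ranking in profile for x in ranking})
n_voters, m = len(profile), len(alternatives)

# Random dictatorship: weight of x = fraction of voters ranking x first
first_places = Counter(ranking[0] for ranking in profile)
random_dictatorship = {x: first_places[x] / n_voters for x in alternatives}

# Proportional Borda: weight of x proportional to its Borda score
borda = {x: 0 for x in alternatives}
for ranking in profile:
    for position, x in enumerate(ranking):
        borda[x] += (m - 1) - position
total_score = sum(borda.values())
proportional_borda = {x: borda[x] / total_score for x in alternatives}

print(random_dictatorship)   # A: 2/3, B: 1/3, C: 0
print(proportional_borda)    # A: 4/9, B: 3/9, C: 2/9
```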
[ { "math_id": 0, "text": "p^T M \\geq 0" } ]
https://en.wikipedia.org/wiki?curid=68209472
68212199
Vision transformer
Variant of Transformer designed for vision processing A vision transformer (ViT) is a transformer designed for computer vision. A ViT breaks down an input image into a series of patches (rather than breaking up text into tokens), serialises each patch into a vector, and maps it to a smaller dimension with a single matrix multiplication. These vector embeddings are then processed by a transformer encoder as if they were token embeddings. ViT were designed as alternatives to convolutional neural networks (CNN) in computer vision applications. They have different inductive biases, training stability, and data efficiency. Compared to CNN, ViT is less data efficient, but has higher capacity. Some of the largest modern computer vision models are ViT, such as one with 22B parameters. Subsequent to its publication, many variants were proposed, with hybrid architectures with both features of ViT and CNN. ViT has found applications in image recognition, image segmentation, and autonomous driving. History. Transformers were introduced in "Attention Is All You Need" (2017), and have found widespread use in natural language processing. A 2019 paper applied ideas from the Transformer to computer vision. Specifically, they started with a ResNet, a standard convolutional neural network used for computer vision, and replaced all convolutional kernels by the self-attention mechanism found in a Transformer. It resulted in superior performance. However, it is not a Vision Transformer. In 2020, an encoder-only Transformer was adapted for computer vision, yielding the ViT, which reached state of the art in image classification, overcoming the previous dominance of CNN. The masked autoencoder (2022) extended ViT to work with unsupervised training. The vision transformer and the masked autoencoder, in turn, stimulated new developments in convolutional neural networks. Subsequently, there was cross-fertilization between the previous CNN approach and the ViT approach. In 2021, some important variants of the Vision Transformers were proposed. These variants are mainly intended to be more efficient, more accurate or better suited to a specific domain. Two studies improved efficiency and robustness of ViT by adding a CNN as a preprocessor. The Swin Transformer achieved state-of-the-art results on some object detection datasets such as COCO, by using convolution-like sliding windows of attention mechanism, and the pyramid process in classical computer vision. Overview. The basic architecture, used by the original 2020 paper, is as follows. In summary, it is a BERT-like encoder-only Transformer. The input image is of type formula_0, where formula_1 are height, width, channel (RGB). It is then split into square-shaped patches of type formula_2. For each patch, the patch is pushed through a linear operator, to obtain a vector ("patch embedding"). The position of the patch is also transformed into a vector by "position encoding". The two vectors are added, then pushed through several Transformer encoders. The attention mechanism in a ViT repeatedly transforms representation vectors of image patches, incorporating more and more semantic relations between image patches in an image. This is analogous to how in natural language processing, as representation vectors flow through a transformer, they incorporate more and more semantic relations between words, from syntax to semantics. The above architecture turns an image into a sequence of vector representations. 
To use these for downstream applications, an additional head needs to be trained to interpret them. For example, to use it for classification, one can add a shallow MLP on top of it that outputs a probability distribution over classes. The original paper uses a linear-GeLU-linear-softmax network. Variants. Original ViT. The original ViT was an encoder-only Transformer supervise-trained to predict the image label from the patches of the image. As in the case of BERT, it uses a special token codice_0 in the input side, and the corresponding output vector is used as the only input of the final output MLP head. The special token is an architectural hack to allow the model to compress all information relevant for predicting the image label into one vector. Transformers found their initial applications in natural language processing tasks, as demonstrated by language models such as BERT and GPT-3. By contrast the typical image processing system uses a convolutional neural network (CNN). Well-known projects include Xception, ResNet, EfficientNet, DenseNet, and Inception. Transformers measure the relationships between pairs of input tokens (words in the case of text strings), termed attention. The cost is quadratic in the number of tokens. For images, the basic unit of analysis is the pixel. However, computing relationships for every pixel pair in a typical image is prohibitive in terms of memory and computation. Instead, ViT computes relationships among pixels in various small sections of the image (e.g., 16x16 pixels), at a drastically reduced cost. The sections (with positional embeddings) are placed in a sequence. The embeddings are learnable vectors. Each section is arranged into a linear sequence and multiplied by the embedding matrix. The result, with the position embedding is fed to the transformer. Masked Autoencoder. The Masked Autoencoder took inspiration from denoising autoencoders. It has two ViTs put end-to-end. The first one takes in image patches with positional encoding, and outputs vectors representing each patch. The second one takes in vectors with positional encoding and outputs image patches again. During training, both ViTs are used. An image is cut into patches, and only 25% of the patches are put into the first ViT. The second ViT takes the encoded vectors and outputs a reconstruction of the full image. During use, only the first ViT is used. DINO. Like the Masked Autoencoder, the DINO (self-distillation with no labels) method is a way to train a ViT by self-supervision. DINO is a form of teacher-student self-distillation. In DINO, the student is the model itself, and the teacher is an exponential average of the student's past states. The method is similar to previous works like momentum contrast and bootstrap your own latent The loss function used in DINO is the cross-entropy loss between the output of the teacher network (formula_3) and the output of the student network (formula_4). The teacher network is an exponentially decaying average of the student network's past parameters: formula_5. The inputs to the networks are two different crops of the same image, represented as formula_6 and formula_7, where formula_8 is the original image. The loss function is written asformula_9One issue is that the network can "collapse" by always outputting the same value (formula_10), regardless of the input. To prevent this collapse, DINO employs two strategies: Swin Transformer. 
The Swin Transformer ("Shifted windows") took inspiration from standard CNNs: It is improved by Swin Transformer V2, which modifies upon the ViT by a different attention mechanism (Figure 1): TimeSformer. The TimeSformer was designed for video understanding tasks, and it applied a factorized self-attention, similar to the factorized convolution kernels found in the Inception CNN architecture. Schematically, it divides a video into frames, and each frame into a square grid of patches (same as ViT). Let each patch coordinate be denoted by formula_11, denoting horizontal, vertical, and time. The TimeSformer also considered other attention layer designs, such as the "height attention layer" where the requirement is formula_16. However, they found empirically that the best design interleaves one space attention layer and one time attention layer. ViT-VQGAN. In ViT-VQGAN, there are two ViT encoders and a discriminator. One encodes 8x8 patches of an image into a list of vectors, one for each patch. The vectors can only come from a discrete set of "codebook", as in vector quantization. Another encodes the quantized vectors back to image patches. The training objective attempts to make the reconstruction image (the output image) faithful to the input image. The discriminator (usually a convolutional network, but other networks are allowed) attempts to decide if an image is an original real image, or a reconstructed image by the ViT. The idea is essentially the same as vector quantized variational autoencoder (VQVAE) plus generative adversarial network (GAN). After such a ViT-VQGAN is trained, it can be used to code an arbitrary image into a list of symbols, and code an arbitrary list of symbols into an image. The list of symbols can be used to train into a standard autoregressive transformer (like GPT), for autoregressively generating an image. Further, one can take a list of caption-image pairs, convert the images into strings of symbols, and train a standard GPT-style transformer. Then at test time, one can just give an image caption, and have it autoregressively generate the image. This is the structure of Google Parti. Others. Other examples include the visual transformer, CoAtNet, CvT, etc. Comparison with CNNs. Typically, ViT uses patch sizes larger than standard CNN kernels (3x3 to 7x7). ViT is more sensitive to the choice of the optimizer, hyperparameters, and network depth. Preprocessing with a layer of smaller-size, overlapping (stride &lt; size) convolutional filters helps with performance and stability. This different behavior seems to derive from the different inductive biases they possess. CNN applies the same set of filters for processing the entire image. This allows them to be more data efficient and less sensitive to local perturbations. ViT applies self-attention, allowing them to easily capture long-range relationships between patches. They also require more data to train, but they can ingest more training data compared to CNN, which might not improve after training on a large enough training dataset. ViT also appears more robust to input image distortions such as adversarial patches or permutations. Applications. ViT have been used in many Computer Vision tasks with excellent results and in some cases even state-of-the-art. Image Classification, Object Detection, Video Deepfake Detection, Image segmentation, Anomaly detection, Image Synthesis, Cluster analysis, Autonomous Driving. 
ViT has been used for image generation, as a backbone for GANs and for diffusion models (diffusion transformer, or DiT). DINO has been demonstrated to learn useful representations for clustering images and exploring morphological profiles on biological datasets, such as images generated with the Cell Painting assay.
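To make the patch-and-embed pipeline described in the sections above concrete, here is a minimal NumPy sketch: it cuts an image into 16x16 patches, flattens them, multiplies them by an embedding matrix, prepends the special class token and adds position embeddings. All sizes are illustrative, and the random matrices stand in for parameters that a real ViT would learn.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input image and hyperparameters (illustrative values only).
H, W, C = 224, 224, 3          # image height, width, channels
P = 16                         # patch size -> (224/16)^2 = 196 patches
D = 768                        # embedding dimension
image = rng.standard_normal((H, W, C))

# 1. Cut the image into non-overlapping P x P patches and flatten each one.
patches = image.reshape(H // P, P, W // P, P, C).transpose(0, 2, 1, 3, 4)
patches = patches.reshape(-1, P * P * C)           # (196, 768)

# 2. Linearly embed the patches, prepend a class token, add position embeddings.
E = rng.standard_normal((P * P * C, D)) * 0.02     # patch embedding matrix (learned in practice)
cls_token = rng.standard_normal((1, D)) * 0.02     # the special [class] token
pos = rng.standard_normal((patches.shape[0] + 1, D)) * 0.02

tokens = np.concatenate([cls_token, patches @ E], axis=0) + pos
print(tokens.shape)    # (197, 768): 196 patch tokens + 1 class token
```

The resulting token sequence is what the transformer encoder consumes; the output vector at the class-token position is the one fed to the final MLP head for classification.
References. &lt;templatestyles src="Reflist/styles.css" /&gt;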
[ { "math_id": 0, "text": "\\R^{H\\times W \\times C}" }, { "math_id": 1, "text": "H, W, C" }, { "math_id": 2, "text": "\\R^{P\\times P \\times C}" }, { "math_id": 3, "text": "f_{\\theta'_t}" }, { "math_id": 4, "text": "f_{\\theta_t}" }, { "math_id": 5, "text": "\\theta'_t = \\alpha \\theta_t + \\alpha(1-\\alpha) \\theta_{t-1} + \\cdots" }, { "math_id": 6, "text": "T(x)" }, { "math_id": 7, "text": "T'(x)" }, { "math_id": 8, "text": "x" }, { "math_id": 9, "text": "L(f_{\\theta'_t}(T(x)), f_{\\theta_t}(T'(x)))" }, { "math_id": 10, "text": "y" }, { "math_id": 11, "text": "x, y, t" }, { "math_id": 12, "text": "q_{x, y, t}" }, { "math_id": 13, "text": "k_{x', y', t'}, v_{x', y', t'}" }, { "math_id": 14, "text": "t = t'" }, { "math_id": 15, "text": "x' = x, y' = y" }, { "math_id": 16, "text": "x' = x, t' = t" } ]
https://en.wikipedia.org/wiki?curid=68212199
6821715
Uniform 8-polytope
Polytope contained by 7-polytope facets In eight-dimensional geometry, an eight-dimensional polytope or 8-polytope is a polytope contained by 7-polytope facets. Each 6-polytope ridge being shared by exactly two 7-polytope facets. A uniform 8-polytope is one which is vertex-transitive, and constructed from uniform 7-polytope facets. Regular 8-polytopes. Regular 8-polytopes can be represented by the Schläfli symbol {p,q,r,s,t,u,v}, with v {p,q,r,s,t,u} 7-polytope facets around each peak. There are exactly three such convex regular 8-polytopes: There are no nonconvex regular 8-polytopes. Characteristics. The topology of any given 8-polytope is defined by its Betti numbers and torsion coefficients. The value of the Euler characteristic used to characterise polyhedra does not generalize usefully to higher dimensions, and is zero for all 8-polytopes, whatever their underlying topology. This inadequacy of the Euler characteristic to reliably distinguish between different topologies in higher dimensions led to the discovery of the more sophisticated Betti numbers. Similarly, the notion of orientability of a polyhedron is insufficient to characterise the surface twistings of toroidal polytopes, and this led to the use of torsion coefficients. Uniform 8-polytopes by fundamental Coxeter groups. Uniform 8-polytopes with reflective symmetry can be generated by these four Coxeter groups, represented by permutations of rings of the Coxeter-Dynkin diagrams: Selected regular and uniform 8-polytopes from each family include: Uniform prismatic forms. There are many uniform prismatic families, including: The A8 family. The A8 family has symmetry of order 362880 (9 factorial). There are 135 forms based on all permutations of the Coxeter-Dynkin diagrams with one or more rings. (128+8-1 cases) These are all enumerated below. Bowers-style acronym names are given in parentheses for cross-referencing. See also a list of 8-simplex polytopes for symmetric Coxeter plane graphs of these polytopes. The B8 family. The B8 family has symmetry of order 10321920 (8 factorial x 28). There are 255 forms based on all permutations of the Coxeter-Dynkin diagrams with one or more rings. See also a list of B8 polytopes for symmetric Coxeter plane graphs of these polytopes. The D8 family. The D8 family has symmetry of order 5,160,960 (8 factorial x 27). This family has 191 Wythoffian uniform polytopes, from "3x64-1" permutations of the D8 Coxeter-Dynkin diagram with one or more rings. 127 (2x64-1) are repeated from the B8 family and 64 are unique to this family, all listed below. See list of D8 polytopes for Coxeter plane graphs of these polytopes. The E8 family. The E8 family has symmetry order 696,729,600. There are 255 forms based on all permutations of the Coxeter-Dynkin diagrams with one or more rings. Eight forms are shown below, 4 single-ringed, 3 truncations (2 rings), and the final omnitruncation are given below. Bowers-style acronym names are given for cross-referencing. See also list of E8 polytopes for Coxeter plane graphs of this family. Regular and uniform honeycombs. There are five fundamental affine Coxeter groups that generate regular and uniform tessellations in 7-space: Regular and uniform tessellations include: Regular and uniform hyperbolic honeycombs. There are no compact hyperbolic Coxeter groups of rank 8, groups that can generate honeycombs with all finite facets, and a finite vertex figure. 
However, there are 4 paracompact hyperbolic Coxeter groups of rank 8, each generating uniform honeycombs in 7-space as permutations of rings of the Coxeter diagrams. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
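The ring-pattern arithmetic quoted above (128+8-1 for the A8 family, 3x64-1 for D8, 255 for B8 and E8) can be reproduced with a few lines of code. The sketch below counts the non-empty ring subsets of each Coxeter-Dynkin diagram, quotienting by the diagram symmetry where one exists; the function names and the Burnside-style reasoning in the comments are my own framing of that arithmetic.

```python
def ring_patterns_linear(n):
    """Non-empty ring patterns on a linear (A_n) diagram of n nodes, counted up
    to its left-right reversal: (2**n + 2**ceil(n/2)) / 2 - 1 (Burnside's lemma)."""
    return (2**n + 2**((n + 1) // 2)) // 2 - 1

def ring_patterns_asymmetric(n):
    """Diagrams with no symmetry to quotient by (B_n, E_8): 2**n - 1."""
    return 2**n - 1

def ring_patterns_forked(n):
    """D_n diagram: a fork of two end nodes (3 states up to swapping them)
    attached to a path of n-2 nodes, so 3 * 2**(n-2) - 1 non-empty patterns."""
    return 3 * 2**(n - 2) - 1

assert ring_patterns_linear(8) == 135       # A8 family
assert ring_patterns_asymmetric(8) == 255   # B8 and E8 families
assert ring_patterns_forked(8) == 191       # D8 family
print("A8, B8/E8, D8 counts reproduced:", 135, 255, 191)
```

The same helpers reproduce the counts quoted for the rank-6, rank-7 and rank-9 families elsewhere in this document (71, 271, 95, 383, and so on).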
[ { "math_id": 0, "text": "{\\tilde{A}}_7" }, { "math_id": 1, "text": "{\\tilde{C}}_7" }, { "math_id": 2, "text": "{\\tilde{B}}_7" }, { "math_id": 3, "text": "{\\tilde{D}}_7" }, { "math_id": 4, "text": "{\\tilde{E}}_7" } ]
https://en.wikipedia.org/wiki?curid=6821715
6821719
Uniform 7-polytope
Polytope In seven-dimensional geometry, a 7-polytope is a polytope contained by 6-polytope facets. Each 5-polytope ridge being shared by exactly two 6-polytope facets. A uniform 7-polytope is one whose symmetry group is transitive on vertices and whose facets are uniform 6-polytopes. Regular 7-polytopes. Regular 7-polytopes are represented by the Schläfli symbol {p,q,r,s,t,u} with u {p,q,r,s,t} 6-polytopes facets around each 4-face. There are exactly three such convex regular 7-polytopes: There are no nonconvex regular 7-polytopes. Characteristics. The topology of any given 7-polytope is defined by its Betti numbers and torsion coefficients. The value of the Euler characteristic used to characterise polyhedra does not generalize usefully to higher dimensions, whatever their underlying topology. This inadequacy of the Euler characteristic to reliably distinguish between different topologies in higher dimensions led to the discovery of the more sophisticated Betti numbers. Similarly, the notion of orientability of a polyhedron is insufficient to characterise the surface twistings of toroidal polytopes, and this led to the use of torsion coefficients. Uniform 7-polytopes by fundamental Coxeter groups. Uniform 7-polytopes with reflective symmetry can be generated by these four Coxeter groups, represented by permutations of rings of the Coxeter-Dynkin diagrams: The A7 family. The A7 family has symmetry of order 40320 (8 factorial). There are 71 (64+8-1) forms based on all permutations of the Coxeter-Dynkin diagrams with one or more rings. All 71 are enumerated below. Norman Johnson's truncation names are given. Bowers names and acronym are also given for cross-referencing. See also a list of A7 polytopes for symmetric Coxeter plane graphs of these polytopes. The B7 family. The B7 family has symmetry of order 645120 (7 factorial x 27). There are 127 forms based on all permutations of the Coxeter-Dynkin diagrams with one or more rings. Johnson and Bowers names. See also a list of B7 polytopes for symmetric Coxeter plane graphs of these polytopes. The D7 family. The D7 family has symmetry of order 322560 (7 factorial x 26). This family has 3×32−1=95 Wythoffian uniform polytopes, generated by marking one or more nodes of the D7 Coxeter-Dynkin diagram. Of these, 63 (2×32−1) are repeated from the B7 family and 32 are unique to this family, listed below. Bowers names and acronym are given for cross-referencing. See also list of D7 polytopes for Coxeter plane graphs of these polytopes. The E7 family. The E7 Coxeter group has order 2,903,040. There are 127 forms based on all permutations of the Coxeter-Dynkin diagrams with one or more rings. See also a list of E7 polytopes for symmetric Coxeter plane graphs of these polytopes. Regular and uniform honeycombs. There are five fundamental affine Coxeter groups and sixteen prismatic groups that generate regular and uniform tessellations in 6-space: Regular and uniform tessellations include: Regular and uniform hyperbolic honeycombs. There are no compact hyperbolic Coxeter groups of rank 7, groups that can generate honeycombs with all finite facets, and a finite vertex figure. However, there are 3 paracompact hyperbolic Coxeter groups of rank 7, each generating uniform honeycombs in 6-space as permutations of rings of the Coxeter diagrams. Notes on the Wythoff construction for the uniform 7-polytopes. 
The reflective 7-dimensional uniform polytopes are constructed through a Wythoff construction process, and represented by a Coxeter-Dynkin diagram, where each node represents a mirror. An active mirror is represented by a ringed node. Each combination of active mirrors generates a unique uniform polytope. Uniform polytopes are named in relation to the regular polytopes in each family. Some families have two regular constructors and thus may be named in two equally valid ways. Here are the primary operators available for constructing and naming the uniform 7-polytopes. The prismatic forms and bifurcating graphs can use the same truncation indexing notation, but require an explicit numbering system on the nodes for clarity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "{\\tilde{A}}_6" }, { "math_id": 1, "text": "{\\tilde{C}}_6" }, { "math_id": 2, "text": "{\\tilde{B}}_6" }, { "math_id": 3, "text": "{\\tilde{D}}_6" }, { "math_id": 4, "text": "{\\tilde{E}}_6" } ]
https://en.wikipedia.org/wiki?curid=6821719
6821726
Uniform 9-polytope
Type of geometric object In nine-dimensional geometry, a nine-dimensional polytope or 9-polytope is a polytope contained by 8-polytope facets. Each 7-polytope ridge being shared by exactly two 8-polytope facets. A uniform 9-polytope is one which is vertex-transitive, and constructed from uniform 8-polytope facets. Regular 9-polytopes. Regular 9-polytopes can be represented by the Schläfli symbol {p,q,r,s,t,u,v,w}, with w {p,q,r,s,t,u,v} 8-polytope facets around each peak. There are exactly three such convex regular 9-polytopes: There are no nonconvex regular 9-polytopes. Euler characteristic. The topology of any given 9-polytope is defined by its Betti numbers and torsion coefficients. The value of the Euler characteristic used to characterise polyhedra does not generalize usefully to higher dimensions, whatever their underlying topology. This inadequacy of the Euler characteristic to reliably distinguish between different topologies in higher dimensions led to the discovery of the more sophisticated Betti numbers. Similarly, the notion of orientability of a polyhedron is insufficient to characterise the surface twistings of toroidal polytopes, and this led to the use of torsion coefficients. Uniform 9-polytopes by fundamental Coxeter groups. Uniform 9-polytopes with reflective symmetry can be generated by these three Coxeter groups, represented by permutations of rings of the Coxeter-Dynkin diagrams: Selected regular and uniform 9-polytopes from each family include: The A9 family. The A9 family has symmetry of order 3628800 (10 factorial). There are 256+16-1=271 forms based on all permutations of the Coxeter-Dynkin diagrams with one or more rings. These are all enumerated below. Bowers-style acronym names are given in parentheses for cross-referencing. The B9 family. There are 511 forms based on all permutations of the Coxeter-Dynkin diagrams with one or more rings. Eleven cases are shown below: Nine rectified forms and 2 truncations. Bowers-style acronym names are given in parentheses for cross-referencing. Bowers-style acronym names are given in parentheses for cross-referencing. The D9 family. The D9 family has symmetry of order 92,897,280 (9 factorial × 28). This family has 3×128−1=383 Wythoffian uniform polytopes, generated by marking one or more nodes of the D9 Coxeter-Dynkin diagram. Of these, 255 (2×128−1) are repeated from the B9 family and 128 are unique to this family, with the eight 1 or 2 ringed forms listed below. Bowers-style acronym names are given in parentheses for cross-referencing. Regular and uniform honeycombs. There are five fundamental affine Coxeter groups that generate regular and uniform tessellations in 8-space: Regular and uniform tessellations include: Regular and uniform hyperbolic honeycombs. There are no compact hyperbolic Coxeter groups of rank 9, groups that can generate honeycombs with all finite facets, and a finite vertex figure. However, there are 4 paracompact hyperbolic Coxeter groups of rank 9, each generating uniform honeycombs in 8-space as permutations of rings of the Coxeter diagrams. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "{\\tilde{A}}_8" }, { "math_id": 1, "text": "{\\tilde{C}}_8" }, { "math_id": 2, "text": "{\\tilde{B}}_8" }, { "math_id": 3, "text": "{\\tilde{D}}_8" }, { "math_id": 4, "text": "{\\tilde{E}}_8" } ]
https://en.wikipedia.org/wiki?curid=6821726
6821728
Uniform 6-polytope
Uniform 6-dimensional polytope In six-dimensional geometry, a uniform 6-polytope is a six-dimensional uniform polytope. A uniform polypeton is vertex-transitive, and all facets are uniform 5-polytopes. The complete set of convex uniform 6-polytopes has not been determined, but most can be made as Wythoff constructions from a small set of symmetry groups. These construction operations are represented by the permutations of rings of the Coxeter-Dynkin diagrams. Each combination of at least one ring on every connected group of nodes in the diagram produces a uniform 6-polytope. The simplest uniform polypeta are regular polytopes: the 6-simplex {3,3,3,3,3}, the 6-cube (hexeract) {4,3,3,3,3}, and the 6-orthoplex (hexacross) {3,3,3,3,4}. Uniform 6-polytopes by fundamental Coxeter groups. Uniform 6-polytopes with reflective symmetry can be generated by these four Coxeter groups, represented by permutations of rings of the Coxeter-Dynkin diagrams. There are four fundamental reflective symmetry groups which generate 153 unique uniform 6-polytopes. Uniform prismatic families. Uniform prism There are 6 categorical uniform prisms based on the uniform 5-polytopes. Uniform duoprism There are 11 categorical uniform duoprismatic families of polytopes based on Cartesian products of lower-dimensional uniform polytopes. Five are formed as the product of a uniform 4-polytope with a regular polygon, and six are formed by the product of two uniform polyhedra: Uniform triaprism There is one infinite family of uniform triaprismatic families of polytopes constructed as a Cartesian products of three regular polygons. Each combination of at least one ring on every connected group produces a uniform prismatic 6-polytope. Enumerating the convex uniform 6-polytopes. These fundamental families generate 153 nonprismatic convex uniform polypeta. In addition, there are 57 uniform 6-polytope constructions based on prisms of the uniform 5-polytopes: [3,3,3,3,2], [4,3,3,3,2], [32,1,1,2], excluding the penteract prism as a duplicate of the hexeract. In addition, there are infinitely many uniform 6-polytope based on: The A6 family. There are 32+4−1=35 forms, derived by marking one or more nodes of the Coxeter-Dynkin diagram. All 35 are enumerated below. They are named by Norman Johnson from the Wythoff construction operations upon regular 6-simplex (heptapeton). Bowers-style acronym names are given in parentheses for cross-referencing. The A6 family has symmetry of order 5040 (7 factorial). The coordinates of uniform 6-polytopes with 6-simplex symmetry can be generated as permutations of simple integers in 7-space, all in hyperplanes with normal vector (1,1,1,1,1,1,1). The B6 family. There are 63 forms based on all permutations of the Coxeter-Dynkin diagrams with one or more rings. The B6 family has symmetry of order 46080 (6 factorial x 26). They are named by Norman Johnson from the Wythoff construction operations upon the regular 6-cube and 6-orthoplex. Bowers names and acronym names are given for cross-referencing. The D6 family. The D6 family has symmetry of order 23040 (6 factorial x 25). This family has 3×16−1=47 Wythoffian uniform polytopes, generated by marking one or more nodes of the D6 Coxeter-Dynkin diagram. Of these, 31 (2×16−1) are repeated from the B6 family and 16 are unique to this family. The 16 unique forms are enumerated below. Bowers-style acronym names are given for cross-referencing. The E6 family. There are 39 forms based on all permutations of the Coxeter-Dynkin diagrams with one or more rings. 
Bowers-style acronym names are given for cross-referencing. The E6 family has symmetry of order 51,840. Triaprisms. Uniform triaprisms, {"p"}×{"q"}×{"r"}, form an infinite class for all integers "p","q","r"&gt;2. {4}×{4}×{4} makes a lower-symmetry form of the 6-cube. The extended f-vector is the product ("p","p",1)*("q","q",1)*("r","r",1) = ("pqr", 3"pqr", 3"pqr"+"pq"+"pr"+"qr", "pqr"+2("pq"+"pr"+"qr"), "p"+"q"+"r"+"pq"+"pr"+"qr", "p"+"q"+"r", 1); a short numerical check of this product against the 6-cube is given at the end of this article. Non-Wythoffian 6-polytopes. In 6 dimensions and above, there are infinitely many non-Wythoffian convex uniform polytopes: the Cartesian product of the grand antiprism in 4 dimensions and any regular polygon in 2 dimensions. It is not yet proven whether or not there are more. Regular and uniform honeycombs. There are four fundamental affine Coxeter groups and 27 prismatic groups that generate regular and uniform tessellations in 5-space: Regular and uniform honeycombs include: Regular and uniform hyperbolic honeycombs. There are no compact hyperbolic Coxeter groups of rank 6, groups that can generate honeycombs with all finite facets, and a finite vertex figure. However, there are 12 paracompact hyperbolic Coxeter groups of rank 6, each generating uniform honeycombs in 5-space as permutations of rings of the Coxeter diagrams. Notes on the Wythoff construction for the uniform 6-polytopes. Construction of the reflective 6-dimensional uniform polytopes is done through a Wythoff construction process, and represented through a Coxeter-Dynkin diagram, where each node represents a mirror. Nodes are ringed to indicate which mirrors are active. The full set of uniform polytopes generated is based on the unique permutations of ringed nodes. Uniform 6-polytopes are named in relation to the regular polytopes in each family. Some families have two regular constructors and thus may have two ways of naming them. The primary operators available for constructing and naming the uniform 6-polytopes follow the usual truncation naming scheme. The prismatic forms and bifurcating graphs can use the same truncation indexing notation, but require an explicit numbering system on the nodes for clarity.
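As a quick check of the triaprism f-vector product stated above: the extended f-vector of a Cartesian product of polytopes is the convolution (polynomial product) of the factors' extended f-vectors. The sketch below verifies {4}×{4}×{4} against the known face counts of the 6-cube.

```python
import numpy as np

def polygon_fvec(p):
    # Extended f-vector of a p-gon: p vertices, p edges, 1 two-dimensional body.
    return np.array([p, p, 1])

def product_fvec(*fvecs):
    # Cartesian product of polytopes: convolve the extended f-vectors.
    out = np.array([1])
    for f in fvecs:
        out = np.convolve(out, f)
    return out

# {4}x{4}x{4} is a lower-symmetry 6-cube:
# 64 vertices, 192 edges, 240 squares, 160 cubes, 60 tesseracts, 12 penteracts.
result = product_fvec(polygon_fvec(4), polygon_fvec(4), polygon_fvec(4))
assert list(result) == [64, 192, 240, 160, 60, 12, 1]
print(result)
```
Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;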
[ { "math_id": 0, "text": "{\\tilde{A}}_5" }, { "math_id": 1, "text": "{\\tilde{C}}_5" }, { "math_id": 2, "text": "{\\tilde{B}}_5" }, { "math_id": 3, "text": "{\\tilde{D}}_5" } ]
https://en.wikipedia.org/wiki?curid=6821728
6822584
Additive basis
In additive number theory, an additive basis is a set formula_0 of natural numbers with the property that, for some finite number formula_1, every natural number can be expressed as a sum of formula_1 or fewer elements of formula_0. That is, the sumset of formula_1 copies of formula_0 consists of all natural numbers. The "order" or "degree" of an additive basis is the number formula_1. When the context of additive number theory is clear, an additive basis may simply be called a basis. An asymptotic additive basis is a set formula_0 for which all but finitely many natural numbers can be expressed as a sum of formula_1 or fewer elements of formula_0. For example, by Lagrange's four-square theorem, the set of square numbers is an additive basis of order four, and more generally by the Fermat polygonal number theorem the polygonal numbers for formula_1-sided polygons form an additive basis of order formula_1. Similarly, the solutions to Waring's problem imply that the formula_1th powers are an additive basis, although their order is more than formula_1. By Vinogradov's theorem, the prime numbers are an asymptotic additive basis of order at most four, and Goldbach's conjecture would imply that their order is three. The unproven Erdős–Turán conjecture on additive bases states that, for any additive basis of order formula_1, the number of representations of the number formula_2 as a sum of formula_1 elements of the basis tends to infinity in the limit as formula_2 goes to infinity. (More precisely, the number of representations has no finite supremum.) The related Erdős–Fuchs theorem states that the number of representations cannot be close to a linear function. The Erdős–Tetali theorem states that, for every formula_1, there exists an additive basis of order formula_1 whose number of representations of each formula_2 is formula_3. A theorem of Lev Schnirelmann states that any sequence with positive Schnirelmann density is an additive basis. This follows from a stronger theorem of Henry Mann according to which the Schnirelmann density of a sum of two sequences is at least the sum of their Schnirelmann densities, unless their sum consists of all natural numbers. Thus, any sequence of Schnirelmann density formula_4 is an additive basis of order at most formula_5. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
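As a small numerical illustration of the order-four example above, the following dynamic-programming check confirms that every natural number up to a chosen limit is a sum of at most four squares, i.e. that the squares behave as an additive basis of order four over that range. The limit of 2000 is arbitrary.

```python
# Minimum number of squares needed to write each n, computed by dynamic programming.
LIMIT = 2000
INF = float("inf")

min_squares = [0] + [INF] * LIMIT
for n in range(1, LIMIT + 1):
    k = 1
    while k * k <= n:
        min_squares[n] = min(min_squares[n], min_squares[n - k * k] + 1)
        k += 1

# Lagrange's four-square theorem: never more than four squares are needed.
assert max(min_squares[1:]) <= 4
print("every n <=", LIMIT, "needs at most", max(min_squares[1:]), "squares")
```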
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "\\Theta(\\log n)" }, { "math_id": 4, "text": "\\varepsilon > 0" }, { "math_id": 5, "text": "\\lceil 1/\\varepsilon\\rceil" } ]
https://en.wikipedia.org/wiki?curid=6822584
68236156
Full load hour
Measure of the degree of utilisation The full load hour is a measure of the degree of utilisation of a technical system. Full load hours refer to the time for which a plant would have to be operated at nominal power in order to convert the same amount of electrical work as the plant has actually converted within a defined period of time, during which breaks in operation or partial load operation can also occur. The figure usually refers to a period of one calendar year and is mainly applied to power plants. The annual utilisation rate or capacity factor derived from the number of full-load hours is the relative full-load utilisation in a year, i.e. the number of full-load hours divided by 8760 hours, the number of hours in a year with 365 days. Description. Typically, technical plants are not operated constantly at full load; depending on various factors (see below), the system can run under partial load. The total work converted by the plant in a year is therefore less than the maximum possible work in the same period. The degree of utilisation of a technical plant can be expressed in full load hours if a nominal capacity can be specified and an adequate conversion from partial load operation to nominal load operation exists (e.g. on the basis of the amount of energy or material converted). The number of full load hours of a plant varies from year to year due to different technical inspection durations, power plant operating schedules, maintenance, unplanned disturbances and outages, and due to different weather conditions, especially for renewable energy sources. The value must not be confused with operating hours, which refer to the entire period of time during which the system has been operated and may include periods of partial load operation. Definition. The number of full load hours of a generator is calculated by dividing the (expected) annual energy output (e.g. in kWh) by the nominal power of the generator (e.g. in kW or kWp). formula_0 It indicates how many hours the plant would have had to run to achieve the same annual energy production if it had been running either at full load or not at all.
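As a minimal numerical sketch of this definition, the calculation reduces to one division; the turbine size and annual yield below are invented for illustration and do not come from the article.

```python
# Hypothetical example: a 3 MW wind turbine producing 6,570 MWh in one year.
nominal_power_kw = 3_000          # kW
annual_energy_kwh = 6_570_000     # kWh

full_load_hours = annual_energy_kwh / nominal_power_kw   # = 2190 h
capacity_factor = full_load_hours / 8760                 # = 0.25

print(f"full load hours: {full_load_hours:.0f} h")
print(f"capacity factor: {capacity_factor:.2f}")
```
References. &lt;templatestyles src="Reflist/styles.css" /&gt;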
[ { "math_id": 0, "text": "\\text{full load hour} =\\frac{kWh}{kWp}=\\frac{energy}{power}" } ]
https://en.wikipedia.org/wiki?curid=68236156
68237078
Izbash formula
Mathematical expression for the stability of rocks in water currents The Izbash formula is a mathematical expression used to calculate the stability of armourstone in flowing water environments. For the assessment of granular material stability in a current, the Shields formula and the Izbash formula are commonly employed. The former is more appropriate for fine-grained materials like sand and gravel, whereas the Izbash formula is tailored for larger stone sizes. The Izbash formula was devised by Sergei Vladimirovich Izbash. Its general expression is as follows: formula_0 or alternatively formula_1 Here, the variables represent: "uc" = flow velocity in proximity to the stone; Δ = relative density of the stone, calculated as ("ρs" - "ρw")/"ρw", where "ρs" denotes the stone's density and "ρw" is the water's density; "g" = gravitational acceleration; "d" = diameter of the stone. The coefficient 1.7 is an experimental constant determined by Izbash, encapsulating effects such as friction, inertia, and the turbulence of the current. Hence, the application of this coefficient is limited to conditions where turbulence is predominantly induced by the roughness of the construction materials in water. Adjustments are necessary when these conditions do not apply. Derivation of the Izbash Formula. The derivation of the formula begins by considering the forces at play on a stone in a flowing current. These are grouped into "active" forces that tend to dislodge the stone and "passive" forces that resist this movement. The active forces are the drag force "FD", the shear (flow) force "FS" and the lift force "FL"; the passive forces are the submerged weight of the stone "W" and the friction force "FF" between the stone and the bed. Each active force can be quantified in terms of the water's density (ρw), the flow velocity (u), and respective coefficients and areas of influence (CD, CF, CL, AD, AS, AL). The three active forces and two passive forces described above are considered. Analysing the moment equilibrium around point A results in "FF" being disregarded due to its zero arm length. The active forces can then be detailed as: formula_2 The total active force is proportional to the square of the flow velocity and the stone's diameter, represented as ρwu²d². The resisting passive force is proportional to the stone's submerged weight, which involves the gravitational constant (g), the stone's volume (proportional to d³), and the difference in density between the stone and the water (ρs - ρw), represented by Δ. Balancing the active forces against the passive ones yields the critical flow velocity equation: formula_3 Here "K" is an empirical coefficient calibrated through experimental observations; Izbash's measurements correspond to a value of about 1.7 for its square root, which is the coefficient 1.7 appearing in the formula at the top of this article. The formula therefore provides a critical velocity estimate: the threshold at which the forces acting on a stone due to flow surpass the stone's resistance to movement. Calculation Example. Consider determining the requisite stone size to protect the base of a channel with a depth of 1 m and an average flow rate of 2 m/s. The stone diameter necessary for protection can be estimated by reconfiguring the formula and inserting the relevant data (a short numerical sketch follows below). The Izbash formula necessitates the use of the velocity "near the stone," which is ambiguous. Practically, the flow velocity at a height of about one stone diameter above the protective layer is used. This translates to about 85% of the channel's average flow velocity when employing a standard logarithmic flow profile, resulting in a stone diameter of approximately 6.3 cm (comparable to the 6.5 cm predicted by the Shields formula).
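The following minimal sketch reproduces this calculation; the stone density of 2650 kg/m³ and water density of 1000 kg/m³ are assumed values, since the article does not state them.

```python
# Worked check of the channel example above (assumed densities).
g = 9.81                             # m/s^2
rho_w, rho_s = 1000.0, 2650.0        # water and stone density (kg/m^3)
delta = (rho_s - rho_w) / rho_w      # relative density, about 1.65

u_mean = 2.0                         # average channel velocity (m/s)
u_stone = 0.85 * u_mean              # velocity "near the stone" (~85% of the mean)

# Izbash: delta * d = 0.7 * u^2 / (2 g)  ->  d = 0.7 * u^2 / (2 g delta)
d = 0.7 * u_stone**2 / (2 * g * delta)
print(f"required stone diameter: {d*100:.1f} cm")   # about 6.2 cm
```

With these assumptions the script returns roughly 6.2 cm, in line with the approximately 6.3 cm quoted above.
Limitations.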
The application of the formula necessitates the measurement of velocity in proximity to the stone, a task that can be challenging, particularly in fine-grained soils and at significant water depths. Under such conditions, the Shields formula is often considered a more suitable alternative. Modification by Pilarczyk. Recognising the prevalent usage of the coefficient 0.7, Krystian Pilarczyk refined the formula in 1985 for enhanced specificity. The revised equation is expressed as: formula_4 where: Φ represents the stability parameter, which adjusts the formula for different construction types: Ψ denotes the Shields parameter, with typical values shown in the table below: Kt = turbulence factor, with values as below: Kh = depth factor * Typically, formula_5, where N ranges from 1 to 3 * kh for an undeveloped velocity profile: formula_6 Ks = slope factor * The value of Ks corresponds to either formula_7 or formula_8 (as detailed below). The destabilizing influences on a slope's stability can be quantified by examining two principal forces: In the figure below, "φ" represents the angle of internal friction or the angle of repose of the soil. When the flow direction aligns with the slope's inclination ("Figure b"), the perpendicular force impacting stability is: formula_9 If the flow is in the opposite direction, the stone's stability increases: formula_10 The strength reduction factor due to the slope is then: formula_11 For slopes transverse to the flow at an angle α ("Figure c"), the stability reduction factor is: formula_12 "Figure d" illustrates the reduction factors for stability at a slope with an angle of φ = 40 degrees, demonstrating the impact of slope angle relative to flow direction on the stability of objects. Due to the fact that a depth factor, "K"h, is included in this version of the Izbash formula, the average velocity above the stones can be considered for the velocity used in the calculations. This is a revision from the original Izbash formula, which ambiguously specified that the speed was "near to the stone". Effect of turbulence. Turbulence exerts a significant effect on stability. Turbulent vortices cause locally high velocities at the stone, generating a lift force on one side, while the absence of such a force on the other side can eject the stone from its bed. This mechanism is depicted in the accompanying image (see right), where the detailed drawing illustrates the stone positioned at the coordinate (0,0), with the relative velocity creating an upward lift force to the left, and no lift force to the right, resulting in a clockwise moment that can flip the stone out of the bed along with the normal lift force of the main flow. This selective motion explains why not all stones are set in motion by a given current speed, but only when an appropriate vortex passes by. These illustrations represent flow rate measurements in a vertical plane above a stone, with the flow moving from left to right. Displayed is the velocity after subtracting the average speed, i.e., the "u" and "v" components (for further explanation, see the main article on Turbulence modelling). The impact of turbulence is particularly pronounced when the size of turbulent vortices is comparable to that of the stones. It is feasible to modify the Izbash formula to more explicitly incorporate the effects of turbulence. A stone at rest will not move until the total velocity (i.e., the average velocity plus the additional velocity from turbulent vortices) surpasses a specific threshold. 
Research indicates that this critical velocity is formula_13, where formula_14 represents the standard deviation of the velocity and "r" denotes the relative turbulence. In the original Izbash formula, the coefficient of 1.7 encompasses a component accounting for turbulence. The formula can be reformulated as: formula_15 Assuming a relative turbulence of r=0.075 for turbulence induced by bed roughness, the revised formula leads to formula_16 = 0.47. This revision introduces an explicit turbulence parameter into the Izbash formula: formula_17 This adaptation allows the use of the Izbash formula in scenarios where turbulence is not solely the result of bed roughness but also occurs in flows influenced by ships and propellers. In the wake of a vessel in a narrow channel, a strong return flow with increased turbulence is observed, where r is typically around 0.2. For propeller-induced turbulence, an r value of 0.45 is recommended. Given that the relative turbulence appears quadratically in the formula, it is evident that for a propeller flow, a substantially larger stone size is required for bed protection compared to "normal" flow conditions. For non-typical cases, a turbulence model such as the k-epsilon model can be utilised to calculate the value of "r". This value can then be inserted into the aforementioned modified Izbash formula to ascertain the necessary stone size. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
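The scaling of the required stone size with the relative turbulence "r" can be illustrated with a short sketch. The near-stone velocity of 1.7 m/s and the relative density of 1.65 are carried over from the earlier channel example purely to isolate the effect of "r"; in a real propeller-wash design the governing velocity itself would be different.

```python
g = 9.81
delta = 1.65          # assumed relative stone density (rho_s = 2650 kg/m^3)
u = 1.7               # near-stone velocity from the earlier example (m/s)

def izbash_turbulent(u, r, delta=delta, g=g):
    """Turbulence-explicit Izbash form: delta*d = 0.47*[u*(1+3r)]^2 / (2g)."""
    return 0.47 * (u * (1.0 + 3.0 * r))**2 / (2.0 * g * delta)

for label, r in [("bed roughness", 0.075), ("ship return flow", 0.2), ("propeller wash", 0.45)]:
    print(f"{label:16s} r = {r:<5} d = {izbash_turbulent(u, r)*100:5.1f} cm")
```

With these inputs the required diameter grows from roughly 6 cm at bed-roughness turbulence to over 20 cm behind a propeller, which is the point made in the text above.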
[ { "math_id": 0, "text": " \\frac{u_c}{\\sqrt {\\Delta g d}} = 1.7" }, { "math_id": 1, "text": "\\Delta d = 0.7 \\frac{u^2}{2g} " }, { "math_id": 2, "text": " \\begin{matrix} F_D &= \\frac{1}{2}C_D \\rho_w u^2 A_D \\\\ F_S &= \\frac{1}{2}C_F \\rho_w u^2 A_S \\\\ F_L & = \\frac{1}{2}C_L \\rho_w u^2 A_L \\end{matrix} \\quad \\Biggr] \\quad F \\sim \\rho_w u^2 d^2" }, { "math_id": 3, "text": "u_c^2 = K \\Delta g d" }, { "math_id": 4, "text": " \\Delta d = 0.035 \\frac{\\Phi}{\\Psi}\\frac{K_t K_h}{K_s}\\frac{u^2}{2 g}" }, { "math_id": 5, "text": "K_h = \\frac{2}{\\log(\\frac{12 h}{N d})^2}" }, { "math_id": 6, "text": "(1+\\frac{h}{d})^{0.2}" }, { "math_id": 7, "text": "K(\\alpha_\\parallel)" }, { "math_id": 8, "text": "K(\\alpha)" }, { "math_id": 9, "text": "F(\\alpha_{\\text{perpendicular}}) = W \\cos(\\alpha) \\tan(\\phi) - W \\sin(\\alpha)" }, { "math_id": 10, "text": "F = W \\cos(\\alpha) \\tan(\\phi) + W \\sin(\\alpha)" }, { "math_id": 11, "text": "K(\\alpha_{\\parallel}) = \\frac{F(\\alpha_{\\parallel})}{F(0)} = \\frac{W \\cos(\\alpha) \\tan(\\phi) - W \\sin(\\alpha)}{W \\tan(\\phi)} = \\frac{\\sin(\\phi - \\alpha)}{\\sin(\\phi)}" }, { "math_id": 12, "text": "K(\\alpha) = \\frac{F(\\alpha)}{F(0)} = \\sqrt{\\frac{\\cos^2(\\alpha) \\tan^2(\\phi) - \\sin^2(\\alpha)}{\\tan^2(\\phi)}} = \\sqrt{1 - \\frac{\\sin^2(\\alpha)}{\\sin^2(\\phi)}}" }, { "math_id": 13, "text": "\\overline{u} + 3 \\sigma = (1+3r)\\overline{u}" }, { "math_id": 14, "text": "\\sigma" }, { "math_id": 15, "text": "\\Delta d = 0.7 \\frac{u^2}{2g} = c_{iz} \\frac{\\left[ u(1+3r) \\right]^2}{2g} " }, { "math_id": 16, "text": "c_{iz}" }, { "math_id": 17, "text": "\\Delta d = 0.47 \\frac{\\left[ u(1+3r) \\right]^2}{2g} " } ]
https://en.wikipedia.org/wiki?curid=68237078
68237099
Sum rules (quantum field theory)
Relation between static and dynamic quantities In quantum field theory, a "sum rule" is a relation between a static quantity and an integral over a dynamical quantity. Such relations therefore have a form such as: formula_0 where formula_1 is the dynamical quantity, for example a structure function characterizing a particle, and formula_2 is the static quantity, for example the mass or the charge of that particle. Quantum field theory sum rules should not be confused with sum rules in quantum chromodynamics or quantum mechanics. Properties. Many sum rules exist. A particular sum rule is sound if its derivation is based on solid assumptions; conversely, some sum rules have been shown experimentally to be incorrect because of unwarranted assumptions made in their derivation. The list of sum rules below illustrates this. Sum rules are usually obtained by combining a dispersion relation with the optical theorem, using the operator product expansion or current algebra. Quantum field theory sum rules are useful in a variety of ways. They make it possible to test the theory used to derive them, e.g. quantum chromodynamics, or an assumption made in the derivation, e.g. Lorentz invariance. They can be used to study a particle, e.g. how the spins of partons make up the spin of the proton. They can also be used as a measurement method. If the static quantity formula_2 is difficult to measure directly, measuring formula_1 and integrating it offers a practical way to obtain formula_2 (provided that the particular sum rule linking formula_1 to formula_2 is reliable). Although in principle formula_2 is a static quantity, the designation "sum rule" has been extended to the case where formula_2 is a probability amplitude, e.g. the probability amplitude of Compton scattering; see the list of sum rules below. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\int A(x) dx = B " }, { "math_id": 1, "text": "A(x)" }, { "math_id": 2, "text": "B" }, { "math_id": 3, "text": " \\int_0^1 F_1^{p\\nu}(x,Q^2)-F_1^{p\\bar{\\nu}}(x,Q^2) dx= \\int_0^1 F_1^{p\\nu}(x,Q^2)-F_1^{n\\nu}(x,Q^2) dx =1-\\frac{2}{3}\\frac{\\alpha_s(Q^2)}{\\pi}," }, { "math_id": 4, "text": " F_1^{p\\nu}(x,Q^2),~F_1^{p\\bar{\\nu}}(x,Q^2)" }, { "math_id": 5, "text": " F_1^{n\\nu}(x,Q^2)" }, { "math_id": 6, "text": " Q^2" }, { "math_id": 7, "text": " \\alpha_s " }, { "math_id": 8, "text": "Q^2" }, { "math_id": 9, "text": " \\int_0^1 g_2(x,Q^2) dx=0,~ \\forall~Q^2" }, { "math_id": 10, "text": " g_2(x,Q^2)" }, { "math_id": 11, "text": "\\delta_{LT} " } ]
https://en.wikipedia.org/wiki?curid=68237099
682403
Names of large numbers
Two naming scales for large numbers have been used in English and other European languages since the early modern era: the long and short scales. Most English variants use the short scale today, but the long scale remains dominant in many non-English-speaking areas, including continental Europe and Spanish-speaking countries in Latin America. These naming procedures are based on taking the number "n" occurring in 103"n"+3 (short scale) or 106"n" (long scale) and concatenating Latin roots for its units, tens, and hundreds place, together with the suffix "-illion". Names of numbers above a trillion are rarely used in practice; such large numbers have practical usage primarily in the scientific domain, where powers of ten are expressed as "10" with a numeric superscript. However, these somewhat rare names are considered acceptable for approximate statements. For example, the statement "There are approximately 7.1 octillion atoms in an adult human body" is understood to be in short scale of the table below (and is only accurate if referring to short scale rather than long scale). Indian English do not use millions, but have their own system of large numbers including lakhs (Anglicised as lacs) and crores. English also has many words, such as "zillion", used informally to mean large but unspecified amounts; see indefinite and fictitious numbers. Standard dictionary numbers. Usage: Apart from "million", the words in this list ending with -"illion" are all derived by adding prefixes ("bi"-, "tri"-, etc., derived from Latin) to the stem -"illion". "Centillion" appears to be the highest name ending in -"illion" that is included in these dictionaries. "Trigintillion", often cited as a word in discussions of names of large numbers, is not included in any of them, nor are any of the names that can easily be created by extending the naming pattern ("unvigintillion", "duovigintillion", "duo­quinqua­gint­illion", etc.). All of the dictionaries included "googol" and "googolplex", generally crediting it to the Kasner and Newman book and to Kasner's nephew (see below). None include any higher names in the googol family (googolduplex, etc.). The "Oxford English Dictionary" comments that "googol" and "googolplex" are "not in formal mathematical use". Usage of names of large numbers. Some names of large numbers, such as "million", "billion", and "trillion", have real referents in human experience, and are encountered in many contexts. At times, the names of large numbers have been forced into common usage as a result of hyperinflation. The highest numerical value banknote ever printed was a note for 1 sextillion pengő (1021 or 1 milliard bilpengő as printed) printed in Hungary in 1946. In 2009, Zimbabwe printed a 100 trillion (1014) Zimbabwean dollar note, which at the time of printing was worth about US$30. Names of larger numbers, however, have a tenuous, artificial existence, rarely found outside definitions, lists, and discussions of how large numbers are named. Even well-established names like "sextillion" are rarely used, since in the context of science, including astronomy, where such large numbers often occur, they are nearly always written using scientific notation. In this notation, powers of ten are expressed as "10" with a numeric superscript, e.g. "The X-ray emission of the radio galaxy is ." When a number such as 1045 needs to be referred to in words, it is simply read out as "ten to the forty-fifth" or "ten to the forty-five". 
This is easier to say and less ambiguous than "quattuordecillion", which means something different in the long scale and the short scale. When a number represents a quantity rather than a count, SI prefixes can be used—thus "femtosecond", not "one quadrillionth of a second"—although often powers of ten are used instead of some of the very high and very low prefixes. In some cases, specialized units are used, such as the astronomer's parsec and light year or the particle physicist's barn. Nevertheless, large numbers have an intellectual fascination and are of mathematical interest, and giving them names is one way people try to conceptualize and understand them. One of the earliest examples of this is "The Sand Reckoner", in which Archimedes gave a system for naming large numbers. To do this, he called the numbers up to a myriad myriad (108) "first numbers" and called 108 itself the "unit of the second numbers". Multiples of this unit then became the second numbers, up to this unit taken a myriad myriad times, 108·108=1016. This became the "unit of the third numbers", whose multiples were the third numbers, and so on. Archimedes continued naming numbers in this way up to a myriad myriad times the unit of the 108-th numbers, i.e. formula_0 and embedded this construction within another copy of itself to produce names for numbers up to formula_1 Archimedes then estimated the number of grains of sand that would be required to fill the known universe, and found that it was no more than "one thousand myriad of the eighth numbers" (1063). Since then, many others have engaged in the pursuit of conceptualizing and naming numbers that have no existence outside the imagination. One motivation for such a pursuit is that attributed to the inventor of the word "googol", who was certain that any finite number "had to have a name". Another possible motivation is competition between students in computer programming courses, where a common exercise is that of writing a program to output numbers in the form of English words. Most names proposed for large numbers belong to systematic schemes which are extensible. Thus, many names for large numbers are simply the result of following a naming system to its logical conclusion—or extending it further. Origins of the "standard dictionary numbers". The words "bymillion" and "trimillion" were first recorded in 1475 in a manuscript of Jehan Adam. Subsequently, Nicolas Chuquet wrote a book "Triparty en la science des nombres" which was not published during Chuquet's lifetime. However, most of it was copied by Estienne de La Roche for a portion of his 1520 book, "L'arismetique". Chuquet's book contains a passage in which he shows a large number marked off into groups of six digits, with the comment: Ou qui veult le premier point peult signiffier million Le second point byllion Le tiers point tryllion Le quart quadrillion Le cinqe quyllion Le sixe sixlion Le sept.e septyllion Le huyte ottyllion Le neufe nonyllion et ainsi des ault's se plus oultre on vouloit preceder (Or if you prefer the first mark can signify million, the second mark byllion, the third mark tryllion, the fourth quadrillion, the fifth quyillion, the sixth sixlion, the seventh septyllion, the eighth ottyllion, the ninth nonyllion and so on with others as far as you wish to go). Adam and Chuquet used the long scale of powers of a million; that is, Adam's "bymillion" (Chuquet's "byllion") denoted 1012, and Adam's "trimillion" (Chuquet's "tryllion") denoted 1018. The googol family. 
The names "googol" and "googolplex" were invented by Edward Kasner's nephew Milton Sirotta and introduced in Kasner and Newman's 1940 book "Mathematics and the Imagination" in the following passage: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The name "googol" was invented by a child (Dr. Kasner's nine-year-old nephew) who was asked to think up a name for a very big number, namely 1 with one hundred zeroes after it. He was very certain that this number was not infinite, and therefore equally certain that it had to have a name. At the same time that he suggested "googol" he gave a name for a still larger number: "googolplex." A googolplex is much larger than a googol, but is still finite, as the inventor of the name was quick to point out. It was first suggested that a googolplex should be 1, followed by writing zeros until you got tired. This is a description of what would happen if one tried to write a googolplex, but different people get tired at different times and it would never do to have Carnera a better mathematician than Dr. Einstein, simply because he had more endurance. The googolplex is, then, a specific finite number, equal to 1 with a googol zeros after it. John Horton Conway and Richard K. Guy have suggested that "N-plex" be used as a name for 10N. This gives rise to the name "googolplexplex" for 10googolplex = 101010100. Conway and Guy have proposed that "N-minex" be used as a name for 10−N, giving rise to the name "googolminex" for the reciprocal of a googolplex, which is written as 10-(10100). None of these names are in wide use. The names "googol" and "googolplex" inspired the name of the Internet company Google and its corporate headquarters, the Googleplex, respectively. Extensions of the standard dictionary numbers. This section illustrates several systems for naming large numbers, and shows how they can be extended past "vigintillion". Traditional British usage assigned new names for each power of one million (the long scale): 1,000,000 1 million; 1,000,0002 1 billion; 1,000,0003 1 trillion; and so on. It was adapted from French usage, and is similar to the system that was documented or invented by Chuquet. Traditional American usage (which was also adapted from French usage but at a later date), Canadian, and modern British usage assign new names for each power of one thousand (the short scale). Thus, a "billion" is 1000 × 10002 = 109; a "trillion" is 1000 × 10003 = 1012; and so forth. Due to its dominance in the financial world (and by the US dollar), this was adopted for official United Nations documents. Traditional French usage has varied; in 1948, France, which had originally popularized the short scale worldwide, reverted to the long scale. The term "milliard" is unambiguous and always means 109. It is seldom seen in American usage and rarely in British usage, but frequently in continental European usage. The term is sometimes attributed to French mathematician Jacques Peletier du Mans c. 1550 (for this reason, the long scale is also known as the "Chuquet-Peletier" system), but the Oxford English Dictionary states that the term derives from post-Classical Latin term "milliartum", which became "milliare" and then "milliart" and finally our modern term. Concerning names ending in -illiard for numbers 106"n"+3, "milliard" is certainly in widespread use in languages other than English, but the degree of actual use of the larger terms is questionable. 
The terms "milliardo" in Italian, "Milliarde" in German, "miljard" in Dutch, "milyar" in Turkish, and "миллиард," milliard (transliterated) in Russian, are standard usage when discussing financial topics. The naming procedure for large numbers is based on taking the number "n" occurring in 103"n"+3 (short scale) or 106"n" (long scale) and concatenating Latin roots for its units, tens, and hundreds place, together with the suffix "-illion". In this way, numbers up to 103·999+3 = 103000 (short scale) or 106·999 = 105994 (long scale) may be named. The choice of roots and the concatenation procedure is that of the standard dictionary numbers if "n" is 9 or smaller. For larger "n" (between 10 and 999), prefixes can be constructed based on a system described by Conway and Guy. Today, sexdecillion and novemdecillion are standard dictionary numbers and, using the same reasoning as Conway and Guy did for the numbers up to nonillion, could probably be used to form acceptable prefixes. The Conway–Guy system for forming prefixes: (*) &lt;templatestyles src="Citation/styles.css"/&gt;^ When preceding a component marked S or X, "tre" changes to "tres" and "se" to "ses" or "sex"; similarly, when preceding a component marked M or N, "septe" and "nove" change to "septem" and "novem" or "septen" and "noven". Since the system of using Latin prefixes will become ambiguous for numbers with exponents of a size which the Romans rarely counted to, like 106,000,258, Conway and Guy co-devised with Allan Wechsler the following set of consistent conventions that permit, in principle, the extension of this system indefinitely to provide English short-scale names for any integer whatsoever. The name of a number 103"n"+3, where "n" is greater than or equal to 1000, is formed by concatenating the names of the numbers of the form 103"m"+3, where "m" represents each group of comma-separated digits of "n", with each but the last "-illion" trimmed to "-illi-", or, in the case of "m" = 0, either "-nilli-" or "-nillion". For example, 103,000,012, the 1,000,003rd "-illion" number, equals one "millinillitrillion"; 1033,002,010,111, the 11,000,670,036th "-illion" number, equals one "undecillinilli­septua­ginta­ses­centilli­sestrigint­illion"; and 1029,629,629,633, the 9,876,543,210th "-illion" number, equals one "nonillise­septua­ginta­octingentillitres­quadra­ginta­quingentillideciducent­illion". The following table shows number names generated by the system described by Conway and Guy for the short and long scales. &lt;templatestyles src="Citation/styles.css"/&gt;^[1] Googolplex's short scale name is derived from it equal to ten of the 3,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​333,​332nd "-illion"s (This is the value of n when 10 × 10(3n + 3) = 1010100) &lt;templatestyles src="Citation/styles.css"/&gt;^[2] Googolplex's long scale name (both traditional British and traditional European) is derived from it being equal to ten thousand of the 1,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666,​666th "-illion"s (This is the value of n when 10,000 × 106n = 1010100). Binary prefixes. The International System of Quantities (ISQ) defines a series of prefixes denoting integer powers of 1024 between 10241 and 10248. Other named large numbers used in mathematics, physics and chemistry. 
See also. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
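As a small illustration of the two naming scales discussed above, the following sketch maps a power of ten to its dictionary name using only the standard names up to decillion; extending it to arbitrary "-illion"s would require the full Conway-Guy prefix table given earlier, which is not reproduced here.

```python
# Short scale: 10^(3n+3) is the n-th -illion; long scale: 10^(6n) is the n-th -illion.
ILLIONS = ["million", "billion", "trillion", "quadrillion", "quintillion",
           "sextillion", "septillion", "octillion", "nonillion", "decillion"]

def name_of_power(exponent, scale="short"):
    if scale == "short":
        if exponent < 6 or (exponent - 3) % 3:
            raise ValueError("short-scale -illions sit at 10^(3n+3), n >= 1")
        n = (exponent - 3) // 3
    else:  # long scale
        if exponent < 6 or exponent % 6:
            raise ValueError("long-scale -illions sit at 10^(6n), n >= 1")
        n = exponent // 6
    if n > len(ILLIONS):
        raise ValueError("beyond the dictionary names listed above")
    return ILLIONS[n - 1]

print(name_of_power(9, "short"))    # billion  (10^9 on the short scale)
print(name_of_power(12, "long"))    # billion  (10^12 on the long scale)
print(name_of_power(12, "short"))   # trillion
```

The same power of ten thus receives different names on the two scales, which is why careful texts state the scale explicitly or fall back on scientific notation.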
[ { "math_id": 0, "text": "(10^8)^{(10^8)}=10^{8\\cdot 10^8}," }, { "math_id": 1, "text": "((10^8)^{(10^8)})^{(10^8)}=10^{8\\cdot 10^{16}}." } ]
https://en.wikipedia.org/wiki?curid=682403
68241928
Photoemission orbital tomography
In physics and chemistry, photoemission orbital tomography (POT; sometimes called photoemission tomography) is a combined experimental / theoretical approach which was initially developed to reveal information about the spatial distribution of individual one-electron surface-state wave functions and later exteded to study molecular orbitals. Experimentally, it uses angle-resolved photoemission spectroscopy (ARPES) to obtain constant binding energy photoemission angular distribution maps. In their pioneering work, Mugarza et al. in 2003 used a phase-retrieval method to obtain the wave function of electron surface states based on ARPES data acquired from stepped gold crystalline surfaces; they obtained the respective wave functions and, upon insertion into the Schrödinger equation, also the binding potential. More recently, photoemission maps, also known as tomograms (also known as momentum maps or formula_0-maps), have been shown to reveal information about the electron probability distribution in molecular orbitals. Theoretically, one rationalizes these tomograms as hemispherical cuts through the molecular orbital in momentum space. This interpretation relies on the assumption of a plane wave final state, i.e., the idea that the outgoing electron can be treated as a free electron, which can be further exploited to reconstruct real-space images of molecular orbitals on a sub-Ångström length scale in two or three dimensions. Presently, POT has been applied to various organic molecules forming well-oriented monolayers on single crystal surfaces or to two-dimensional materials. Theory. Within the framework of POT, the photo-excitation is treated as a single coherent process from an initial (molecular) orbital formula_1 to the final state formula_2, which is referred to as the one-step-model of photoemission. The intensity distribution in the tomograms, formula_3, is then given from Fermi's golden rule as formula_4 Here, formula_5 and formula_6 are the components of the emitted electron's wave vector parallel to the surface, which are related to the polar and azimuthal emission angles formula_7 and formula_8 defined in the figure as follows, formula_9 formula_10 where formula_0 and formula_11 are the wave number and kinetic energy of the emitted electron, respectively, where formula_12 is the reduced Planck constant and formula_13 is the electron mass. The transition matrix element is given in the dipole approximation, where formula_14 and formula_15, respectively, denote the momentum operator of the electron and the vector potential of the exciting electromagnetic wave. In the independent electron approximation, the spectral function reduces to a delta function and ensures energy conservation, where formula_16 denotes the sample work function, formula_17 the binding energy of the initial state, and formula_18 the energy of the exciting photon. In POT, the evaluation of the transition matrix element is further simplified by approximating the final state by a plane wave. Then, the photocurrent formula_19 arising from one particular initial state formula_20 is proportional to the Fourier transform formula_21 of the initial state wave function modulated by the weakly angle-dependent polarization factor formula_22: formula_23 As illustrated in the figure, the relationship between the real space orbital and its photoemission distribution can be represented by an Ewald's sphere-like construction. 
Thus, a one-to-one relation between the photocurrent and the molecular orbital density in reciprocal space can be established. Moreover, a reconstruction of molecular orbital densities in real space via an inverse Fourier transform and applying an iterative phase retrieval algorithm has also been demonstrated. Experiment. The basic experimental requirements are a reasonably monoenergetic photon source (inert gas discharge lamps, synchrotron radiation or UV laser sources) and an angle-resolved photoelectron spectrometer. Ideally, a large angular distribution (formula_0-area) should be collected. Much of the development of POT was made using a toroidal analyzer with formula_24-polarized synchrotron radiation. Here the spectrometer collects the hemicircle of emissions (formula_25) in the plane of incidence and polarization, and the momentum maps are obtained by rotating the sample azimuth (formula_8). A number of commercially available electron spectrometers are now on the market which have been shown to be suited to POT. These include large acceptance angle hemispherical analysers, spectrometers with photoemission electron microscopy (PEEM) lenses and time of flight (TOF) spectrometers. Applications and future developments. POT has found many interesting applications including the assignment of molecular orbital densities in momentum and real space, the deconvolution of spectra into individual orbital contributions beyond the limits of energy resolution, the extraction of detailed geometric information, or the identification of reaction products. Recently, the extension to the time-domain has been demonstrated by combining time-resolved photoemission using high laser harmonics and a momentum microscope to measure the full momentum-space distribution of transiently excited electrons in organic molecules. The possibility to measure the spatial distribution of electrons in frontier molecular orbitals has stimulated discussions on the interpretation of the concept of orbitals itself. The present understanding is that the information retrieved from photoemission orbital tomography should be interpreted as Dyson orbitals. Approximating the photoelectron's final state by a plane wave have been viewed critically. Indeed, there are cases where the plane-wave final state approximation is problematic including a proper description of the photon energy dependence, the circular dichroism in the photoelectron angular distribution or certain experimental geometries. Nevertheless, the usefulness of the plane wave final state approximation has been extended beyond the originally suggested case of formula_26-orbitals of large, planar formula_26-conjugated molecules to three-dimensional molecules, small organic molecules and extended to two-dimensional materials. Theoretical approaches beyond the plane wave final state approximation have also been demonstrated including time-dependent density functional theory calculations or Green's function techniques. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
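Under the plane-wave final-state approximation described above, a momentum map is, up to the polarization factor, the squared modulus of the orbital's Fourier transform. The toy NumPy sketch below illustrates this for a made-up two-lobed test function on a 2D grid; it is a cartoon of the idea, not a hemispherical cut through a real molecular orbital, and all grid sizes are arbitrary.

```python
import numpy as np

n, L = 256, 40.0                          # grid points and box size (arbitrary units)
x = np.linspace(-L / 2, L / 2, n)
X, Y = np.meshgrid(x, x, indexing="ij")

# Two lobes of opposite sign, mimicking a simple antisymmetric "orbital".
psi = X * np.exp(-(X**2 + Y**2) / 4.0)

psi_k = np.fft.fftshift(np.fft.fft2(psi))   # Fourier transform to momentum space
momentum_map = np.abs(psi_k)**2             # tomogram intensity ~ |FT(psi)|^2

kx = np.fft.fftshift(np.fft.fftfreq(n, d=x[1] - x[0])) * 2 * np.pi
# The antisymmetric lobes produce intensity peaked at non-zero kx.
print("momentum map shape:", momentum_map.shape)
print("intensity peaks near kx =", kx[np.argmax(momentum_map.max(axis=1))])
```

Reversing the procedure, i.e. going from measured intensities back to the orbital, additionally requires recovering the lost phase, which is what the iterative phase-retrieval step mentioned above provides.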
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "\\Psi_i" }, { "math_id": 2, "text": "\\Psi_f" }, { "math_id": 3, "text": "I(k_x,k_y;E_\\mathrm{kin})" }, { "math_id": 4, "text": "\nI(k_x,k_y;E_\\mathrm{kin}) \\propto \n \\left| \\langle \\Psi_f(k_x,k_y;E_\\mathrm{kin}) |\n \\vec{A} \\cdot \\vec{p} | \\Psi_i \\rangle \\right|^2\n \\times \\delta \\left(E_i + \\Phi + E_\\mathrm{kin} - \\hbar \\omega \\right).\n" }, { "math_id": 5, "text": "k_x" }, { "math_id": 6, "text": "k_y" }, { "math_id": 7, "text": "\\theta" }, { "math_id": 8, "text": "\\phi" }, { "math_id": 9, "text": "\nk_x = k \\sin \\theta \\cos \\phi \n" }, { "math_id": 10, "text": "\nk_y = k \\sin \\theta \\sin \\phi\n" }, { "math_id": 11, "text": "E_\\mathrm{kin} = \\frac{\\hbar^2 k^2}{2m}" }, { "math_id": 12, "text": "\\hbar" }, { "math_id": 13, "text": "m" }, { "math_id": 14, "text": "\\vec{p}" }, { "math_id": 15, "text": "\\vec{A}" }, { "math_id": 16, "text": "\\Phi" }, { "math_id": 17, "text": "E_i" }, { "math_id": 18, "text": "\\hbar \\omega" }, { "math_id": 19, "text": "I_i" }, { "math_id": 20, "text": "i" }, { "math_id": 21, "text": "\\tilde{\\Psi}_{i} (\\vec{k}) = \\mathcal{F}\\left\\{ \\Psi_i(\\vec{r}) \\right\\} " }, { "math_id": 22, "text": "\\vec{A} \\cdot \\vec{k}" }, { "math_id": 23, "text": "\nI_i(k_x,k_y) \\propto \\left|\\vec{A} \\cdot \\vec{k}\\right|^2 \\cdot \\left| \\tilde{\\Psi}_{i} (k_x, k_y) \\right|^2 \n\\quad \\textrm{with} \\quad |\\vec{k}|^2 = k_x^2 + k_y^2 + k_z^2 = \\frac{2m}{\\hbar^2} E_\\mathrm{kin}\n" }, { "math_id": 24, "text": "p" }, { "math_id": 25, "text": "-90^\\circ < \\theta < +90^\\circ" }, { "math_id": 26, "text": "\\pi" } ]
https://en.wikipedia.org/wiki?curid=68241928
68245418
Shields formula
Parameter (and formula) to describe stability of grains in flowing water The Shields formula is a formula for the stability calculation of granular material (sand, gravel) in running water. The stability of granular material in flow can be determined by the Shields formula or the Izbash formula. The first is more suitable for fine-grained material (such as sand and gravel), while the Izbash formula is more suitable for larger stones. The Shields formula was developed by Albert F. Shields (1908-1974). In fact, the Shields method determines whether or not the soil material will move. The Shields parameter thus determines whether or not there is a beginning of movement. Derivation. Movement of (loose-grained) soil material occurs when the shear stress exerted by the water on the soil is greater than the resistance the soil provides. This dimensionless ratio (the Shields parameter) was first described by Albert Shields and reads: formula_0, where: formula_1 is the critical shear stress at which the grains begin to move, formula_2 is the density of the grains, formula_3 is the density of the water, formula_4 is the acceleration due to gravity, and formula_5 is the grain diameter. The shear stress that acts on the bottom (with a normal uniform flow along a slope) is: formula_6, where: formula_8 is the water depth and formula_9 is the slope of the water surface. It is important to realise that formula_7 is the shear stress exerted by the flow (i.e. a property of the flow) and formula_1 is the shear stress at which the grains move (i.e. a property of the grains). The shear stress velocity is often used instead of the shear stress: formula_10 The shear stress velocity has the dimension of a velocity (m/s), but is actually a representation of the shear stress. So the shear stress velocity can never be measured with a velocity meter. By using the shear stress velocity, the Shields parameter can also be written as: formula_11 where: formula_12 is the relative density of the grains under water, formula_12 = formula_13. Shields found that the parameter formula_14 is a function of formula_15, in which formula_16 is the kinematic viscosity. This parameter is also called the granular Reynolds number: formula_17 Shields performed tests with grains of different densities and plotted the values of formula_14 found as a function of formula_18. This led to the above graph. Van Rijn found that instead of the granular Reynolds number a dimensionless grain size could be used: formula_19 Because usually the values of formula_20 are quite constant, the true grain size can also be set on the horizontal axis (see right figure b). This means that the value of formula_14 is only a function of the grain diameter and can be read directly. From this it follows that for grains larger than 5 mm the Shields parameter takes a constant value of 0.055. The gradient of a river ("I") can be determined by the Chézy formula: formula_21 in which formula_22 = the coefficient of Chézy; this is often of the order of 50. For a flat bed (i.e. without ripples) C can be approximated with: formula_23 By introducing this into the stability formula, a formula for the critical grain size at a given flow velocity is found: formula_24 In this form, the stability relationship is usually called the "Shields formula". Definition of "incipient motion". The line of Shields (and of Van Rijn) in the graph is the separation between "movement" and "no movement". Shields defined "movement" as almost all grains moving on the bottom. This is a useful definition for defining the beginning of sand transport by flow. However, if one wants to protect a bed from erosion, the requirement is that grains should hardly move. To make this operational, Breusers defined 7 phases of movement in 1969. These phases are shown in the figure below. Visually, these phases are also shown in a series of short video clips.
In these video fragments, "no" is the Shields parameter used. In practice, this means that for bed protections (where the grain is always larger than 5 mm), a design value of Ψ=0.03 must be used. Calculation Example. Question: At what flow velocity does sand of 0.2 cm move at a water depth of 1 m? The Chézy value then becomes "C" = 62 (this is a high value, so a smooth bed; this is because we assume there are no ripples). Filling this in gives a velocity of 0.83 m/s. Question: What stone size is needed to defend this bed against a current of 2 m/s? This cannot be solved directly; first an assumption must be made for "d". Take a stone size of 5 cm. That gives a Chézy value of 37. When this is entered into the Shields formula it gives a stone size of 5.7 cm. The 5 cm was a little too small. By trial and error, a stone size of 6.5 cm is finally found. (In this case, Izbash's formula gives 6.3 cm.) Restrictions. The Shields approach is based on a uniform, steady flow whose turbulence is generated by the bed roughness (i.e. no additional turbulence from, for example, a propeller current). In the case of a rough bed in shallow water, or in the case of unusual turbulence, the Izbash formula is therefore recommended instead. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
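The trial-and-error step in the second calculation example is easy to automate. The following sketch assumes the flat-bed Chézy approximation given above, a design value Ψ = 0.03, and a relative density Δ = 1.65 for stone in water; the fixed-point iteration and the variable names are choices of this sketch rather than part of the original method.

```python
import math

def chezy(h, d):
    """Chezy coefficient for a flat bed: C = 18 * log10(12h / 2d)."""
    return 18.0 * math.log10(12.0 * h / (2.0 * d))

def critical_grain_size(u, h, psi=0.03, delta=1.65, d_start=0.05, tol=1e-6):
    """Iterate d = u^2 / (psi * delta * C(d)^2) until the grain size converges."""
    d = d_start
    for _ in range(100):
        d_new = u ** 2 / (psi * delta * chezy(h, d) ** 2)
        if abs(d_new - d) < tol:
            return d_new
        d = d_new
    return d

# Protect a bed at 1 m water depth against a 2 m/s current:
d = critical_grain_size(u=2.0, h=1.0)
print(f"required stone size is roughly {100 * d:.1f} cm")
```

Starting from the 5 cm guess used above, the iteration settles at roughly 6.4 cm, in line with the 6.5 cm found by hand.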
[ { "math_id": 0, "text": "\\Psi_c*=\\frac{load}{strength} = \\frac{\\tau_c d^2}{(\\rho_s-\\rho_w)g d^3}" }, { "math_id": 1, "text": "\\tau_c" }, { "math_id": 2, "text": "\\rho_s" }, { "math_id": 3, "text": "\\rho_w" }, { "math_id": 4, "text": "g" }, { "math_id": 5, "text": "d" }, { "math_id": 6, "text": "\\tau=\\rho_w g h I " }, { "math_id": 7, "text": "\\tau" }, { "math_id": 8, "text": "h" }, { "math_id": 9, "text": "I" }, { "math_id": 10, "text": "u_* =\\sqrt{\\tau / \\rho_w} " }, { "math_id": 11, "text": "\\Psi_{c*}= \\frac{load}{strength} = \\frac{u_{*c}^2}{\\Delta g d}" }, { "math_id": 12, "text": "\\Delta" }, { "math_id": 13, "text": "\\frac{(\\rho_s-\\rho_w)}{\\rho_w}" }, { "math_id": 14, "text": "\\Psi_{c*}" }, { "math_id": 15, "text": " \\frac{u_{*c} d}{\\nu}" }, { "math_id": 16, "text": "\\nu" }, { "math_id": 17, "text": "Re_* = \\frac{u_{*c}d} {\\nu}" }, { "math_id": 18, "text": "Re_*" }, { "math_id": 19, "text": " {d_*} = d {\\left(\\frac{\\Delta g}{\\nu^2}\\right) }^{1/3}" }, { "math_id": 20, "text": " \\Delta, g, \\nu" }, { "math_id": 21, "text": " u= C \\sqrt{h I}" }, { "math_id": 22, "text": "C" }, { "math_id": 23, "text": " C=18 \\log _{10} {\\frac{12 h}{2 d}}" }, { "math_id": 24, "text": "d_c=\\frac{u^2}{\\Psi_c \\Delta C^2}" } ]
https://en.wikipedia.org/wiki?curid=68245418
68245955
Cantor's isomorphism theorem
Uniqueness of countable dense linear orders In order theory and model theory, branches of mathematics, Cantor's isomorphism theorem states that every two countable dense unbounded linear orders are order-isomorphic. For instance, Minkowski's question-mark function produces an isomorphism (a one-to-one order-preserving correspondence) between the numerical ordering of the rational numbers and the numerical ordering of the dyadic rationals. The theorem is named after Georg Cantor, who first published it in 1895, using it to characterize the (uncountable) ordering on the real numbers. It can be proved by a back-and-forth method that is also sometimes attributed to Cantor but was actually published later, by Felix Hausdorff. The same back-and-forth method also proves that countable dense unbounded orders are highly symmetric, and can be applied to other kinds of structures. However, Cantor's original proof only used the "going forth" half of this method. In terms of model theory, the isomorphism theorem can be expressed by saying that the first-order theory of unbounded dense linear orders is countably categorical, meaning that it has only one countable model, up to isomorphism. One application of Cantor's isomorphism theorem involves temporal logic, a method for using logic to reason about time. In this application, the theorem implies that it is sufficient to use intervals of rational numbers to model intervals of time: using irrational numbers for this purpose will not lead to any increase in logical power. Statement and examples. Cantor's isomorphism theorem is stated using the following concepts: a linear order is a set of elements with a comparison relation under which every two distinct elements are comparable; it is dense when between every two of its elements there lies a third, and unbounded when it has neither a minimum nor a maximum element; a set is countable when its elements can be placed into one-to-one correspondence with the natural numbers; and two linear orders are order-isomorphic when there is a one-to-one correspondence between them that preserves their orderings. With these definitions in hand, Cantor's isomorphism theorem states that every two unbounded countable dense linear orders are order-isomorphic. Within the rational numbers, certain subsets are also countable, unbounded, and dense. The rational numbers in the open unit interval are an example. Another example is the set of dyadic rational numbers, the numbers that can be expressed as a fraction with an integer numerator and a power of two as the denominator. By Cantor's isomorphism theorem, the dyadic rational numbers are order-isomorphic to the whole set of rational numbers. In this example, an explicit order isomorphism is provided by Minkowski's question-mark function. Another example of a countable unbounded dense linear order is given by the set of real algebraic numbers, the real roots of polynomials with integer coefficients. In this case, they are a superset of the rational numbers, but are again order-isomorphic. It is also possible to apply the theorem to other linear orders whose elements are not defined as numbers. For instance, the binary strings that end in a 1, in their lexicographic order, form another isomorphic ordering. Proofs. One proof of Cantor's isomorphism theorem, in some sources called "the standard proof", uses the back-and-forth method. This proof builds up an isomorphism between any two given orders, using a greedy algorithm, in an ordering given by a countable enumeration of the two orderings. In more detail, the proof maintains two order-isomorphic finite subsets formula_0 and formula_1 of the two given orders, initially empty. It repeatedly increases the sizes of formula_0 and formula_1 by adding a new element from one order, the first missing element in its enumeration, and matching it with an order-equivalent element of the other order, proven to exist using the density and lack of endpoints of the order.
The two orderings switch roles at each step: the proof finds the first missing element of the first order, adds it to formula_0, matches it with an element of the second order, and adds it to formula_1; then it finds the first missing element of the second order, adds it to formula_1, matches it with an element of the first order, and adds it to formula_0, etc. Every element of each ordering is eventually matched with an order-equivalent element of the other ordering, so the two orderings are isomorphic. Although the back-and-forth method has also been attributed to Cantor, Cantor's original publication of this theorem in 1895–1897 used a different proof. In an investigation of the history of this theorem by logician Charles L. Silver, the earliest instance of the back-and-forth proof found by Silver was in a 1914 textbook by Felix Hausdorff, his "Grundzüge der Mengenlehre". Instead of building up order-isomorphic subsets formula_0 and formula_1 by going "back and forth" between the enumeration for the first order and the enumeration for the second order, Cantor's original proof only uses the "going forth" half of the back-and-forth method. It repeatedly augments the two finite sets formula_0 and formula_1 by adding to formula_0 the first missing element of the first order's enumeration, and adding to formula_1 the order-equivalent element that is first in the second order's enumeration. This naturally finds an equivalence between the first ordering and a subset of the second ordering, and Cantor then argues that the entire second ordering is included. The back-and-forth proof has been formalized as a computer-verified proof using Coq, an interactive theorem prover. This formalization process led to a strengthened result that when two computably enumerable linear orders have a computable comparison predicate, and computable functions representing their density and unboundedness properties, then the isomorphism between them is also computable. Model theory. One way of describing Cantor's isomorphism theorem uses the language of model theory. The first-order theory of unbounded dense linear orders consists of sentences in mathematical logic concerning variables that represent the elements of an order, with a binary relation used as the comparison operation of the ordering. Here, a sentence means a well-formed formula that has no free variables. These sentences include both axioms, formulating in logical terms the requirements of a dense linear order, and all other sentences that can be proven as logical consequences from those axioms. The axioms of this system can be expressed as: the comparison relation is a strict total order (it is irreflexive and transitive, and every two distinct elements are comparable); between every two distinct elements there exists a third element; and every element has both a smaller and a larger element. A model of this theory is any system of elements and a comparison relation that obeys all of the axioms; it is a "countable model" when the system of elements forms a countable set. For instance, the usual comparison relation on the rational numbers is a countable model of this theory. Cantor's isomorphism theorem can be expressed by saying that the first-order theory of unbounded dense linear orders is countably categorical: it has only one countable model, up to isomorphism. However, it is not categorical for higher cardinalities: for any higher cardinality, there are multiple inequivalent dense unbounded linear orders with the same cardinality. A method of quantifier elimination in the first-order theory of unbounded dense linear orders can be used to prove that it is a complete theory.
This means that every logical sentence in the language of this theory is either a theorem, that is, provable as a logical consequence of the axioms, or the negation of a theorem. This is closely related to being categorical (a sentence is a theorem if it is true of the unique countable model; see the Łoś–Vaught test) but there can exist multiple distinct models that have the same complete theory. In particular, both the ordering on the rational numbers and the ordering on the real numbers are models of the same theory, even though they are different models. Quantifier elimination can also be used in an algorithm for deciding whether a given sentence is a theorem. Related results. The same back-and-forth method used to prove Cantor's isomorphism theorem also proves that countable dense linear orders are highly symmetric. Their symmetries are called order automorphisms, and consist of order-preserving bijections from the whole linear order to itself. By the back-and-forth method, every countable dense linear order has order automorphisms that map any set of formula_2 points to any other set of formula_2 points. This can also be proven directly for the ordering on the rationals, by constructing a piecewise linear order automorphism with breakpoints at the formula_2 given points. This equivalence of all formula_2-element sets of points is summarized by saying that the group of symmetries of a countable dense linear order is "highly homogeneous". However, there is no order automorphism that maps an ordered pair of points to its reverse, so these symmetries do not form a 2-transitive group. The isomorphism theorem can be extended to colorings of an unbounded dense countable linear ordering, with a finite or countable set of colors, such that each color is dense, in the sense that a point of that color exists between any other two points of the whole ordering. The subsets of points with each color partition the order into a family of unbounded dense countable linear orderings. Any partition of an unbounded dense countable linear ordering into subsets, with the property that each subset is unbounded (within the whole set, not just in itself) and dense (again, within the whole set) comes from a coloring in this way. Each two colorings with the same number of colors are order-isomorphic, under any permutation of their colors. An example is the partition of the rational numbers into the dyadic rationals and their complement; these two sets are dense in each other, and their union has an order isomorphism to any other pair of unbounded linear orders that are countable and dense in each other. Unlike Cantor's isomorphism theorem, the proof needs the full back-and-forth argument, and not just the "going forth" argument. Cantor used the isomorphism theorem to characterize the ordering of the real numbers, an uncountable set. Unlike the rational numbers, the real numbers are Dedekind-complete, meaning that every subset of the reals that has a finite upper bound has a real least upper bound. They contain the rational numbers, which are dense in the real numbers. By applying the isomorphism theorem, Cantor proved that whenever a linear ordering has the same properties of being Dedekind-complete and containing a countable dense unbounded subset, it must be order-isomorphic to the real numbers.
Suslin's problem asks whether orders having certain other properties of the order on the real numbers, including unboundedness, density, and completeness, must be order-isomorphic to the reals; the truth of this statement is independent of Zermelo–Fraenkel set theory with the axiom of choice (ZFC). Although uncountable unbounded dense orderings may not be order-isomorphic, it follows from the back-and-forth method that any two such orderings are elementarily equivalent. Another consequence of Cantor's proof is that every finite or countable linear order can be embedded into the rationals, or into any unbounded dense ordering. Calling this a "well known" result of Cantor, Wacław Sierpiński proved an analogous result for higher cardinality: assuming the continuum hypothesis, there exists a linear ordering of cardinality formula_3 into which all other linear orderings of cardinality formula_3 can be embedded. Baumgartner's axiom, formulated by James Earl Baumgartner in 1973 to study the continuum hypothesis, concerns formula_3-dense sets of real numbers, unbounded sets with the property that every two elements are separated by exactly formula_3 other elements. It states that each two such sets are order-isomorphic, providing in this way another higher-cardinality analogue of Cantor's isomorphism theorem (formula_3 is defined as the cardinality of the set of all countable ordinals). Baumgartner's axiom is consistent with ZFC and the negation of the continuum hypothesis, and implied by the proper forcing axiom, but independent of Martin's axiom. In temporal logic, various formalizations of the concept of an interval of time can be shown to be equivalent to defining an interval by a pair of distinct elements of a dense unbounded linear order. This connection implies that these theories are also countably categorical, and can be uniquely modeled by intervals of rational numbers. Sierpiński's theorem stating that any two countable metric spaces without isolated points are homeomorphic can be seen as a topological analogue of Cantor's isomorphism theorem, and can be proved using a similar back-and-forth argument. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
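The greedy back-and-forth construction described in the Proofs section can be illustrated computationally. In the sketch below, the two orders are the rationals strictly between 0 and 1 and the dyadic rationals; the particular enumerations, the cut-offs and the number of rounds are arbitrary choices of this illustration (the actual proof works with infinite enumerations), and the code is only a demonstration, not a formalization of the theorem.

```python
from fractions import Fraction

def below_set(x, pairs, side):
    """Indices of already-matched pairs whose element on the given side is smaller
    than x.  Because the matched pairs form a partial isomorphism, this set
    identifies the gap that x occupies."""
    return frozenset(i for i, p in enumerate(pairs) if p[side] < x)

def extend(x, pairs, src, dst, enum_dst):
    """Match the unmatched element x of one order with the first unused element
    of the other order's enumeration that lies in the corresponding gap."""
    gap = below_set(x, pairs, src)
    used = {p[dst] for p in pairs}
    for y in enum_dst:
        if y not in used and below_set(y, pairs, dst) == gap:
            pair = [None, None]
            pair[src], pair[dst] = x, y
            pairs.append(tuple(pair))
            return
    raise ValueError("finite enumeration exhausted; enlarge it for more rounds")

# Finite prefixes of enumerations of two countable dense unbounded linear orders:
# the rationals strictly between 0 and 1, and the dyadic rationals.
enum_a = sorted({Fraction(p, q) for q in range(2, 64) for p in range(1, q)},
                key=lambda r: (r.denominator, r))
enum_b = sorted({Fraction(p, 2 ** j) for j in range(7) for p in range(-128, 129)},
                key=lambda r: (r.denominator, abs(r), r))

pairs = []                      # the growing partial isomorphism
for _ in range(8):              # back and forth: the two orders take turns leading
    a = next(x for x in enum_a if all(x != p[0] for p in pairs))
    extend(a, pairs, 0, 1, enum_b)
    b = next(y for y in enum_b if all(y != p[1] for p in pairs))
    extend(b, pairs, 1, 0, enum_a)

pairs.sort()                    # sorted by the first order ...
assert [p[1] for p in pairs] == sorted(p[1] for p in pairs)   # ... the second agrees
print(pairs)
```

Running more rounds extends the partial isomorphism further; in the limit, every element of both enumerations is matched, which is exactly the content of the proof.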
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "B" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "\\aleph_1" } ]
https://en.wikipedia.org/wiki?curid=68245955
68247395
List of aqueous ions by element
This table lists the ionic species that are most likely to be present, depending on pH, in aqueous solutions of binary salts of metal ions. Their existence must be inferred on the basis of indirect evidence provided by modelling with experimental data or by analogy with structures obtained by X-ray crystallography. Introduction. When a salt of a metal ion, with the generic formula MXn, is dissolved in water, it will dissociate into a cation and anions. formula_0 (aq) signifies that the ion is aquated, with cations having a chemical formula [M(H2O)p]q+ and anions whose state of aquation is generally unknown. For convenience (aq) is not shown in the rest of this article, as the "number" of water molecules that are attached to the ions is irrelevant in regard to hydrolysis. This reaction occurs quantitatively with salts of the alkali metals at low to moderate concentrations. With salts of divalent metal ions, the aqua-ion will be subject to a dissociation reaction, known as hydrolysis, a name derived from Greek words for water splitting. The first step in this process can be written as formula_1 ⇌ formula_2 When the pH of the solution is increased by adding an alkaline solution to it, the extent of hydrolysis increases. Measurements of pH or colour change are used to derive the equilibrium constant for the reaction. Further hydrolysis may occur, producing dimeric, trimeric or polymeric species containing hydroxy- or oxy-groups. The next step is to determine which model for the chemical processes best fits the experimental data. Model selection. The model is defined in terms of a list of those complex species which are present in solutions in significant amounts. In the present context the complex species have the general formula [MpOq(OH)r]n±, where p, q and r define the stoichiometry of the species and n± gives the electrical charge of the ion. The experimental data are fitted to those models which may represent the species that are formed in solution. The model which gives the best fit is selected for publication. However, the pH range in which data may be collected is limited by the fact that a hydroxide with formula M(OH)n will be formed at relatively low pH, as illustrated at the right. This will make the process of model selection difficult when monomers and dimers are formed, and virtually impossible when higher polymers are also formed. In those cases it "must be assumed" that the species found in solids are also present in solutions. The formation of a hydroxo-bridged species is enthalpically favoured over the monomers, countering the unfavourable entropic effect of aggregation. For this reason, it is difficult to establish models in which both types of species are present. Monomeric hydrolysis products. The extent of hydrolysis can be quantified when the values of the hydrolysis constants can be determined experimentally. The first hydrolysis constant refers to the equilibrium formula_3 ⇌ formula_4 The "association" constant for this reaction can be expressed as formula_5 (electrical charges are omitted from generic expressions) Numerical values for this equilibrium constant can be found in papers concerned only with metal ion hydrolysis. However, it is more useful, in general, to use the acid dissociation constant, Ka. formula_6 and to cite the cologarithm, pKa, of the value of this quantity in books and other publications.
The two values are constrained by the relationship log K(association) + pK(dissociation) = pKw. pKw refers to the self-ionization constant of water; in general, pK = log (1/K) = -log(K). Further monomeric complexes may be formed in a stepwise manner. formula_7 ⇌ formula_8 Dimeric species. Hydrolysed species containing two metal ions, with the general formula M2(OH)n, may be formed from pre-existing monomeric species. The stepwise reaction formula_9 ⇌ formula_10 illustrates the process. An alternative stepwise reaction formula_9 ⇌ formula_11 may also occur. Unfortunately it is not possible to distinguish between these two possibilities using data from potentiometric titrations, because neither of these reactions has any effect on the pH of the solution. The concentration of a dimeric species decreases more rapidly with decreasing metal ion concentration than does the concentration of the corresponding monomeric species. Therefore, when determining the stability constants of both species it is usually necessary to obtain data from 2 or more titrations, each with a different metal salt concentration. Otherwise the stability constant non-linear least squares refinement may fail without providing the desired values, due to there being 100% mathematical correlation between the refinement parameters for the monomeric and dimeric species. Trimeric and polymeric species. The principal problem when determining the stability constant for a polymeric species is how to select the "best" model to use from a number of possibilities. An example that illustrates the problem is shown in Baes and Mesmer, p. 119. A trimeric species must be formed from a chemical reaction of a dimer with a monomer, with the implication that the value of the stability constant of the dimer must be "known", having been determined using separate experimental data. In practice this is extremely difficult to achieve. Instead, it is generally assumed that the species in solution are the same as the species that have been identified in crystal structure determinations. There is no way to establish whether or not the assumption is justified. Furthermore, species that are required as intermediaries between the monomer and the polymer may have such low concentrations as to be "undetectable". An extreme example concerns the species with a cluster of 13 aluminium(III) ions, which can be isolated in the solid state; there must be at least 12 intermediate species in solution, which have not been characterized. It follows that the published stoichiometry of the polymeric species in solution may well be correct, but it is always possible that other species are actually present in solution. In general, the omission of intermediary species will affect the reliability of the published speciation schemes. Soluble hydroxides. Some hydroxides of non-metallic elements are soluble in water; they are not included in the following table. Examples cited by Baes and Mesmer (p. 413) include hydroxides of gallium(III), indium(III), thallium(III), arsenic(III), antimony(III) and bismuth(III). Most hydroxides of transition metals are classified as being "insoluble" in water. Some of them dissolve, with reaction, in alkaline solution. M(OH)n + OH− → [M(OH)n+1]− List. For some highly radioactive elements, such as astatine and radon, only tracer quantities have been experimented on. As such, unambiguous characterisation of the species they form is impossible, and so their species have been excluded from the table below.
Some theoretical speculations as to what they might be are present in the literature; more information can be found at the main articles of the elements involved. Periodic table distribution. The occurrence of the different kinds of ions of the elements is shown in this periodic table: Periodic table notes. Rather than the periodic table being the sum of its groups and periods, an examination of the image shows several patterns. Thus, there is largely a left-to-right transition in metallic character seen in the red-orange-sand-yellow colours for the metals, and the turquoise, blue and violet colours for the nonmetals. The dashed line seen in periods 1 to 4 corresponds to notions of a dividing line between metals and nonmetals. The mixed species in periods 5 and 6 show how much trouble chemists can have in assessing where to continue the dividing line. The separate dashed boundary around the Nb-Ta-W-Tc-Re-Os-Ir heptad is an exemplar for the reputation many transition metals have for nonmetallic chemistry. Ⓐ  Hydrogen is shown as being a cation former but most of its chemistry "can be explained in terms of its tendency to [eventually] acquire the electronic configuration of…helium", thereby behaving predominantly as a nonmetal. Ⓑ  Beryllium has an isodiagonal relationship with aluminium in group 13; such a relationship also occurs between B and Si, and between C and P. Ⓒ  Cation-only elements are shown as being limited to sixteen elements: all those in group 1, and the heavier actinides. Ⓓ  Rare earth metals are the group 3 metals scandium, yttrium, lutetium and the lanthanides; scandium is the only such metal shown as being capable of forming an oxyanion. Ⓔ  Radioactive elements, such as the actinides, are harder to study. The known species may not represent the whole of what is possible, and the identifications may sometimes be in doubt. Astatine, as another example, is highly radioactive, and determining its stable species is "clouded by the extremely low concentrations at which astatine experiments have been conducted, and the possibility of reactions with impurities, walls and filters, or radioactivity by-products, and other unwanted nano-scale interactions." Equally, as Kirby noted, "since the trace chemistry of I sometimes differs significantly from its own macroscopic chemistry, analogies drawn between At and I are likely to be questionable, at best." Ⓕ  The earlier actinides, up to uranium, show some superficial resemblance to their transition metal counterparts in groups 3 to 9. Ⓖ  Most of the transition metals are known for their nonmetallic chemistry, and this is particularly seen in the image for periods 5 and 6, groups 5 to 9. They nevertheless have the relatively high electrical conductivity values characteristic of metals. Ⓗ  The transition metals (or d-block metals) further show electrochemical character, in terms of their capacity to form positive or negative ions, that is in between that of (i) the s- and f-block metals and (ii) the p-block elements. Ⓘ  The p-block shows a relatively distinct cutoff in periods 1 to 4 between elements commonly recognised as metals and nonmetals. Periods 5 and 6 include elements commonly recognised as metalloids by authors who recognise such a class or subclass (antimony and tellurium), and elements less commonly recognised as such (polonium and astatine). Ⓙ  Stein, in 1987, showed the metalloid elements as occupying a zone in the p-block composed of B, Si, Ge, As, Sb, Po, Te, At and Rn.
In the periodic table image these elements are found on the right or upper side of the dashed line traversing the p-block. Ⓚ  Of the 103 elements shown in the image, just ten form anions, all of these being in the p-block: arsenic; the five chalcogens: oxygen, sulfur, selenium, tellurium, polonium; and the four halogens: fluorine, chlorine, bromine, and iodine. Ⓛ  Anion-only elements are confined to oxygen and fluorine. Further notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
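As a small numerical illustration of the first hydrolysis step defined in the Introduction (M ⇌ M(OH) + H+): the ratio [M(OH)]/[M] equals Ka/[H+], so the fraction of the metal present as the hydrolysed monomer follows directly from the pH. The sketch below ignores dimers, polymers and precipitation of the hydroxide, and the pKa used is a hypothetical placeholder of the same order as values often quoted for trivalent metal ions.

```python
import numpy as np

def first_hydrolysis_fraction(pH, pKa):
    """Fraction of the metal present as M(OH) for M <=> M(OH) + H+,
    ignoring polynuclear species and precipitation."""
    ratio = 10.0 ** (pH - pKa)      # [M(OH)]/[M] = Ka / [H+]
    return ratio / (1.0 + ratio)

pKa = 2.2                            # hypothetical value for illustration only
for p, f in zip(np.arange(0, 9), first_hydrolysis_fraction(np.arange(0, 9), pKa)):
    print(f"pH {p}: {100 * f:5.1f}% hydrolysed")
```

In real systems the curve is cut off at higher pH by the formation of polymeric species and of the solid hydroxide, as described above.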
[ { "math_id": 0, "text": " MX_n \\rarr M^{n+}(aq) +nX^-(aq)" }, { "math_id": 1, "text": " M^{n+} + OH^-" }, { "math_id": 2, "text": " M(OH)^{(n-1)+} " }, { "math_id": 3, "text": "M^{n+} (aq)" }, { "math_id": 4, "text": "M(OH)^{(n-1)+} (aq) + H^+" }, { "math_id": 5, "text": " K=\\frac{{[M(OH)]}} {{[M][OH]}}" }, { "math_id": 6, "text": " K_a=\\frac{{[M][H]}}{{[M(OH)]}}" }, { "math_id": 7, "text": "[M(OH)_n]^{m+} + OH^- " }, { "math_id": 8, "text": "[M(OH)_{n+1}]^{(m-1)+}" }, { "math_id": 9, "text": "2M(OH)^{n+}" }, { "math_id": 10, "text": "M_2(OH)_2^{2n+}" }, { "math_id": 11, "text": "M_2O^{2n+} + H_2O" } ]
https://en.wikipedia.org/wiki?curid=68247395
68249505
Super envy-freeness
Type of fair division of resources A super-envy-free division is a kind of fair division. It is a division of resources among "n" partners, in which each partner values his/her share at strictly "more" than his/her due share of 1/"n" of the total value, and, simultaneously, values the share of every other partner at strictly less than 1/"n". Formally, in a super-envy-free division of a resource "C" among "n" partners, each partner "i", with value measure "Vi", receives a share "Xi" such that: formula_0. This is a strong fairness requirement: it is stronger than both envy-freeness and super-proportionality. Existence. Super envy-freeness was introduced by Julius Barbanel in 1996. He proved that a super-envy-free cake-cutting exists if and only if the value measures of the "n" partners are "linearly independent". "Linearly independent" means that there is no vector of "n" real numbers formula_1, not all zero, for which formula_2. Computation. In 1999, William Webb presented an algorithm that finds a super-envy-free allocation in this case. His algorithm is based on a "witness" to the fact that the measures are independent. A witness is an "n"-by-"n" matrix, in which element ("i","j") is the value assigned by agent "i" to some piece "j" (where the pieces 1,...,"n" can be any partition of the cake, for example, a partition into equal-length intervals). The matrix should be invertible; this is a witness to the linear independence of the measures. Using such a matrix, the algorithm partitions each of the "n" pieces in a near-exact division. It can be shown that, if the matrix is invertible and the approximation factor is sufficiently small (w.r.t. the values in the inverse of the matrix), then the resulting allocation is indeed super-envy-free. The run-time of the algorithm depends on the properties of the matrix. However, if the value measures are drawn uniformly at random from the unit simplex, with high probability, the runtime is polynomial in "n". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
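A small sketch of the definition and of the witness idea follows; all numbers are made up for illustration. Given each agent's valuations of the n allocated shares (normalized so that each agent values the whole cake at 1), super envy-freeness is checked directly against the 1/n threshold, and a candidate witness matrix of piece values is checked for invertibility via its determinant.

```python
import numpy as np

def is_super_envy_free(V):
    """V[i][j] = value agent i assigns to the share given to agent j, with each
    row summing to 1.  Own share must be strictly above 1/n, every other share
    strictly below 1/n."""
    V = np.asarray(V, dtype=float)
    n = len(V)
    for i in range(n):
        if not V[i][i] > 1.0 / n:
            return False
        if any(V[i][j] >= 1.0 / n for j in range(n) if j != i):
            return False
    return True

# Hypothetical 3-agent allocation (rows sum to 1):
V = [[0.40, 0.30, 0.30],
     [0.32, 0.36, 0.32],
     [0.30, 0.30, 0.40]]
print(is_super_envy_free(V))          # True: diagonals > 1/3, off-diagonals < 1/3

# Witness of linearly independent value measures: an invertible matrix of
# piece values for some arbitrary partition of the cake into 3 pieces.
W = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])
print(abs(np.linalg.det(W)) > 1e-12)  # True: the witness matrix is invertible
```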
[ { "math_id": 0, "text": "V_i(X_i) > V_i(C)/n ~~ \\text{ and } ~~ \\forall j\\neq i:V_i(X_j) < V_i(C)/n " }, { "math_id": 1, "text": "c_1,\\ldots,c_n \\in \\mathbb{R}" }, { "math_id": 2, "text": "c_1\\cdot V_1 + \\cdots + c_n \\cdot V_n = 0" } ]
https://en.wikipedia.org/wiki?curid=68249505
68250997
Fixed-target experiment
A fixed-target experiment in particle physics is an experiment in which a beam of accelerated particles is collided with a stationary target. The moving beam (also known as a projectile) consists of charged particles such as electrons or protons and is accelerated to relativistic speed. The fixed target can be a solid block, a liquid, or a gaseous medium. These experiments are distinct from collider-type experiments, in which two moving particle beams are accelerated and collided. The famous Rutherford gold foil experiment, performed between 1908 and 1913, was one of the first fixed-target experiments, in which the alpha particles were targeted at a thin gold foil. Explanation. The energy involved in a fixed-target experiment is four times smaller than that in a collider with two beams of the same energy. Moreover, in collider experiments the energy of both beams is available to produce new particles, while in the fixed-target case a lot of energy is expended simply in giving velocity to the newly created particles. This implies that fixed-target experiments are not helpful when it comes to increasing the energy scale of experiments. The target also wears down with the number of strikes and usually requires regular replacement. Present-day fixed-target experiments try to use highly resistant materials, but the damage cannot be avoided entirely. Fixed-target experiments have a significant advantage, however, when higher luminosity (rate of interaction) is required. The High Luminosity Large Hadron Collider, which is an upcoming upgraded version of the Large Hadron Collider (LHC) at CERN, will attain a luminosity of around formula_0 in its run. Luminosities of about formula_1 were already approached by older fixed-target experiments such as E288, led by Leon Lederman at Fermilab. Another advantage of fixed-target experiments is that they are easier and cheaper to build than collider accelerators. Experimental facilities. Rutherford's gold foil experiment, which led to the discovery that the mass and positive charge of an atom are concentrated in a small nucleus, was probably the first fixed-target experiment. The latter half of the 20th century saw the rise of particle and nuclear physics facilities such as CERN's Super Proton Synchrotron (SPS) and Fermilab's Tevatron, where a number of fixed-target experiments led to new discoveries. Forty-three fixed-target experiments were conducted at the Tevatron during its run period from 1983 to 2000. Proton and other beams from the SPS are still used by fixed-target experiments such as NA61/SHINE and the COMPASS collaboration. A fixed-target facility at the LHC, called AFTER@LHC, is also being planned. Physics at fixed-target experiments. Fixed-target experiments are mainly used for intensive studies of rare processes, dynamics at high Bjorken x, diffractive physics, spin correlations, and numerous nuclear phenomena. The experiments at Fermilab's Tevatron facility covered a wide range of physics domains, such as testing the theoretical predictions of quantum chromodynamics, studies of the structure of the proton, neutron and mesons, and studies of heavy quarks such as charm and bottom. Several experiments looked into CP symmetry tests. A few collaborations also studied hyperons and neutrinos created at fixed-target setups. NA61/SHINE at the SPS is studying the phase transitions in strongly interacting matter and physics related to the onset of confinement.
The COMPASS experiment investigates the structure of hadrons. AFTER@LHC aims at studies of the gluon and quark distributions inside protons and neutrons using fixed-target facilities. There are possibilities to observe the W and Z bosons as well. Observation and studies of Drell–Yan pair production and quarkonium are also being looked into. Thus fixed-target experiments offer numerous options for exploring extreme and rare physics. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
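To make the energy comparison in the Explanation section concrete, here is a small sketch that is not taken from the article: it uses the standard relativistic invariant mass, for which two identical beams of energy E colliding head-on give a centre-of-mass energy of about 2E, while the same beam hitting a stationary proton gives only √(m₁²c⁴ + m₂²c⁴ + 2·E·m₂c²). The beam energy chosen below is roughly that of one LHC proton beam and is used only as an example.

```python
import math

M_PROTON = 0.938272  # GeV/c^2

def cm_energy_collider(E_beam):
    """sqrt(s) for two identical beams of energy E_beam colliding head-on
    (beam masses neglected, valid when E_beam >> m c^2)."""
    return 2.0 * E_beam

def cm_energy_fixed_target(E_beam, m_beam=M_PROTON, m_target=M_PROTON):
    """sqrt(s) for a beam of energy E_beam hitting a stationary target:
    s = m_beam^2 + m_target^2 + 2 * E_beam * m_target   (natural units, c = 1)."""
    return math.sqrt(m_beam ** 2 + m_target ** 2 + 2.0 * E_beam * m_target)

E = 6500.0  # GeV
print(f"collider     : sqrt(s) ~ {cm_energy_collider(E) / 1000:.1f} TeV")
print(f"fixed target : sqrt(s) ~ {cm_energy_fixed_target(E):.0f} GeV")
```

The large gap between the two printed values illustrates why colliders are preferred for pushing the energy frontier, while fixed-target setups are preferred for luminosity.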
[ { "math_id": 0, "text": "\\mathrm{5 \\times 10^{35} cm^{-2} s^{-1}}" }, { "math_id": 1, "text": "\\mathrm{5 \\times 10^{34} cm^{-2} s^{-1}}" } ]
https://en.wikipedia.org/wiki?curid=68250997
68252674
Time resolved microwave conductivity
Time resolved microwave conductivity (TRMC) is an experimental technique used to evaluate the electronic properties of semiconductors. Specifically, it is used to evaluate a proxy for charge carrier mobility and a representative carrier lifetime from light-induced changes in conductance. The technique works by photo-generating electrons and holes in a semiconductor, allowing these charge carriers to move under a microwave field, and detecting the resulting changes in the electric field. TRMC systems cannot be purchased as a single unit, and are generally "home-built" from individual components. One advantage of TRMC over alternative techniques is that it does not require direct physical contact with the material. History. While semiconductors have been studied using microwave radiation since the 1950s, it was not until the late 1970s and early 1980s that John Warman at the Delft University of Technology exploited microwaves for "time-resolved" measurements of photoconductivity. The first reports used electrons, and later photons, to generate charges in fluids. The technique was later refined to study semiconductors by Kunst and Beck at the Hahn-Meitner Institute in Berlin. Delft remains a significant center for TRMC; however, the technique is now used at a number of institutions around the world, notably the National Renewable Energy Laboratory and Kyoto University. Operating principles. The experiment relies upon the interaction between optically generated charge carriers and microwave-frequency electromagnetic radiation. The most common approach is to use a resonant cavity. An oscillating voltage is produced using a signal generator such as a voltage controlled oscillator or a Gunn diode. The oscillating current is incident on an antenna, resulting in the emission of microwaves of the same frequency. These microwaves are then directed into a resonant cavity. Because they can transmit microwaves with lower loss than cables, metallic waveguides are often used to form the circuit. With the appropriate cavity dimensions and microwave frequency, a standing wave can be formed with one full wavelength filling the cavity. The sample to be studied is placed at a maximum of the electric field component of the standing wave. Because metals act as cavity walls, the sample needs to have a relatively low free carrier concentration in the dark to be measurable. TRMC is hence best suited to the study of intrinsic or lightly doped semiconductors. Electrons and holes are generated by illuminating the sample with above-band-gap optical photons. Optical access to the sample is provided by a cavity wall which is both electrically conducting and optically transparent; for example, a metallic grating or a transparent conducting oxide. The photo-generated charge carriers move under the influence of the electric field component of the standing wave, resulting in a change in intensity of the microwaves that leave the cavity. The intensity of microwaves out of the cavity is measured as a function of time using an appropriate detector and an oscilloscope. Knowledge of the properties of the cavity can be used to evaluate photoconductance from changes in microwave intensity. Theory. The reflection coefficient is determined by the coupling between cavity and waveguide.
When the microwave frequency is equal to the resonant frequency, the reflectance, formula_0, of the cavity is expressed as follows: formula_1 Here formula_2 is the quality factor of the cavity including the sample, and formula_3 is the quality factor of the external coupling, which is generally adjusted by an iris. The total loaded quality factor of the cavity, formula_4, is defined as follows: formula_5 The photo-generated charge carriers reduce the quality factor of the cavity, formula_2. When the change of quality factor is very small, the change of reflected microwave power is approximately proportional to the change of the dissipation factor of the cavity. Furthermore, the dissipation factor of the cavity is mainly determined by the conductivity of the inside space, including the sample. Consequently, the change in the conductivity, formula_6, of the cavity contents is proportional to relative changes in microwave intensity: formula_7 Here formula_8 is the background (unperturbed) microwave power measured coming out of the cavity and formula_9 is the change in microwave power as a result of the change in cavity conductance. formula_10 is the sensitivity factor determined by the quality of the cavity, and formula_11 is the geometry factor of the sample. formula_10 can be derived by Taylor expansion of the reflectance equation: formula_12 Here formula_13 is the resonant frequency of the cavity in hertz, formula_14 is the vacuum permittivity, and formula_15 is the relative permittivity of the medium inside the cavity. The relative permittivity needs to be considered only when the cavity is filled with a solvent. When the sample is inserted into a dry cavity, only the vacuum permittivity should be used, because most of the inside space is filled by air. The sign of formula_16 depends on whether the cavity is in the under-coupled (lower) or over-coupled (upper) regime. A negative signal is therefore detected in the over-coupled regime, formula_17, whereas a positive signal is detected in the under-coupled regime, formula_18. No signal can be detected at the critical coupling condition, formula_19. formula_11 is determined by the overlap between the electric field and the sample position: formula_20 Here formula_21 is the electric field in the cavity. formula_22 and formula_23 denote the total inside volume of the cavity and the volume of photo-generated carriers, respectively. If the sample is sufficiently thin (below several μm), the electric field across the photo-generated carriers is effectively uniform. In this condition, formula_11 is approximately proportional to the thickness of the sample. The conductivity equation above can then be expressed as follows: formula_24 Here formula_25 is the elementary charge, formula_26 is the transmittance of the sample at the excitation wavelength, formula_27 is the incident laser fluence, formula_28 is the quantum yield of photo-carrier generation per absorbed photon, formula_29 is the sum of the electron and hole mobilities, and formula_30 is the thickness of the sample. Because formula_11 is linearly proportional to the thickness, only the fractional absorbance of the semiconductor (between 0 and 1) needs to be measured in addition, in order to determine the TRMC figure of merit formula_31 (e.g. using ultraviolet–visible spectroscopy): formula_32 Applications. Knowledge of charge carrier mobility in semiconductors is important for understanding the electronic and materials properties of a system. It is also valuable in device design and optimization.
This is particularly true for thin film solar cells and thin film transistors, where charge extraction and amplification, respectively, are highly dependent upon mobility. TRMC has been used to study electron and hole dynamics in hydrogenated amorphous silicon, organic semiconductors, metal halide perovskites, metal oxides, dye-sensitized systems, quantum dots, carbon nanotubes, chalcogenides, metal organic frameworks, and the interfaces between various systems. Because charges are normally generated using a green (~2.3 eV) or ultraviolet (~3 eV) laser, this restricts materials to those with comparable or smaller bandgaps. The technique is hence well suited to the study of solar absorbers, but not to wide bandgap semiconductors such as metal oxides. While it is very similar, and has the same dimensions, the parameter formula_31 is not the same as charge carrier mobility. formula_31 contains contributions from both holes and electrons, which cannot conventionally be resolved using TRMC. This is in contrast to Hall measurements or transistor measurements, where hole and electron mobility can easily be separated. Additionally, the mobility is not directly extracted from the measurements; rather, what is measured is the mobility multiplied by the carrier generation yield, formula_28. The carrier generation yield is the number of electron-hole pairs generated per absorbed photon. Because some absorbed photons can lead to bound neutral excitons, not all absorbed photons will lead to detectable free carriers. This can make the interpretation of formula_31 more complicated than that of a mobility alone. However, generally both mobility and formula_28 are parameters which one wishes to maximize when developing solar cells. As a time-resolved technique, TRMC also provides information on the timescale of carrier recombination in solar cells. Unlike time-resolved photoluminescence measurements, TRMC is not sensitive to the lifetime of excitons. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
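A minimal sketch of how the figure of merit might be evaluated from the measured quantities defined in the Theory section is given below. The arithmetic follows the final expression for formula_31; every numerical value (sensitivity factor, fluence, absorbance, relative power change) is a hypothetical placeholder chosen only to show the order of magnitude of the calculation.

```python
E_CHARGE = 1.602176634e-19   # C

def phi_sigma_mu(dP_over_P, A, I0, F_A):
    """phi * sum(mu), in cm^2 V^-1 s^-1, from the relative microwave power change.

    dP_over_P : peak fractional change in microwave power out of the cavity
    A         : cavity sensitivity factor (from Q, coupling and resonant frequency)
    I0        : incident photon fluence per pulse, photons cm^-2
    F_A       : fractional absorbance of the film at the excitation wavelength
    """
    delta_G = dP_over_P / A                  # photoconductance change, in siemens
    return delta_G / (E_CHARGE * I0 * F_A)

# Hypothetical numbers for a single laser pulse:
value = phi_sigma_mu(dP_over_P=2e-3, A=25_000, I0=1e13, F_A=0.6)
print(f"phi * sum(mu) ~ {value:.2e} cm^2 V^-1 s^-1")
```

Evaluating the same expression at each point of the measured transient gives the time dependence of the photoconductance, from which a representative carrier lifetime can be read off.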
[ { "math_id": 0, "text": "R_0" }, { "math_id": 1, "text": "R_0=\\frac{\\Bigl(\\frac{1}{Q_0}-\\frac{1}{Q_{ex}}\\Bigr)^2}{\\Bigl(\\frac{1}{Q_0}+\\frac{1}{Q_{ex}}\\Bigr)^2}" }, { "math_id": 2, "text": "Q_0" }, { "math_id": 3, "text": "Q_{ex}" }, { "math_id": 4, "text": "Q" }, { "math_id": 5, "text": "\\frac{1}{Q}=\\frac{1}{Q_0}+\\frac{1}{Q_{ex}}" }, { "math_id": 6, "text": "\\Delta \\sigma" }, { "math_id": 7, "text": "F\\Delta \\sigma= \\frac{1}{A}\\frac{\\Delta P}{P}" }, { "math_id": 8, "text": "P" }, { "math_id": 9, "text": "\\Delta P" }, { "math_id": 10, "text": "A" }, { "math_id": 11, "text": "F" }, { "math_id": 12, "text": "A=\\mp\\frac{Q\\Bigl(\\frac{1}{\\sqrt{R_0}}\\pm1\\Bigr)}{\\pi f_{0}\\epsilon_{0}\\epsilon_{r}}" }, { "math_id": 13, "text": "f_{0}" }, { "math_id": 14, "text": "\\epsilon_{0}" }, { "math_id": 15, "text": "\\epsilon_{r}" }, { "math_id": 16, "text": "\\pm" }, { "math_id": 17, "text": "Q_0>Q_{ex}" }, { "math_id": 18, "text": "Q_0<Q_{ex}" }, { "math_id": 19, "text": "Q_0=Q_{ex}" }, { "math_id": 20, "text": "F=\\frac{\\iiint_{cavity} E^2\\Delta \\sigma dxdydz}{\\iiint_{cavity} E^2dxdydz}\\big/\\frac{\\iiint_{charge} \\Delta \\sigma dxdydz}{\\iiint_{charge} dxdydz}" }, { "math_id": 21, "text": "E" }, { "math_id": 22, "text": "cavity" }, { "math_id": 23, "text": "charge" }, { "math_id": 24, "text": "F\\Delta \\sigma=F\\frac{e(1-T)I_0\\phi\\Sigma\\mu}{d}= \\frac{1}{A}\\frac{\\Delta P}{P}" }, { "math_id": 25, "text": "e" }, { "math_id": 26, "text": "T" }, { "math_id": 27, "text": "I_0" }, { "math_id": 28, "text": "\\phi" }, { "math_id": 29, "text": "\\Sigma\\mu" }, { "math_id": 30, "text": "d" }, { "math_id": 31, "text": "\\phi\\Sigma\\mu" }, { "math_id": 32, "text": "\\phi\\Sigma\\mu= \\frac{1}{e I_0 F_A}\\frac{1}{A}\\frac{\\Delta P}{P}" } ]
https://en.wikipedia.org/wiki?curid=68252674
6825924
Root datum
In mathematical group theory, the root datum of a connected split reductive algebraic group over a field is a generalization of a root system that determines the group up to isomorphism. Root data were introduced by Michel Demazure in SGA III, published in 1970. Definition. A root datum consists of a quadruple formula_0, where: formula_1 and formula_2 are free abelian groups of finite rank together with a perfect pairing between them with values in formula_3 (so that each is identified with the dual of the other); formula_4 and formula_5 are finite subsets of formula_1 and formula_2 respectively, together with a bijection formula_6 from formula_4 onto formula_5; for each formula_7 in formula_4, formula_8; and for each formula_7, the map formula_9 induces an automorphism of formula_1 taking formula_4 to itself (and the corresponding dual map of formula_2 takes formula_5 to itself). The elements of formula_4 are called the roots of the root datum, and the elements of formula_5 are called the coroots. If formula_4 does not contain formula_10 for any formula_11, then the root datum is called reduced. The root datum of an algebraic group. If formula_12 is a reductive algebraic group over an algebraically closed field formula_13 with a split maximal torus formula_14, then its root datum is a quadruple formula_15, where formula_16 is the lattice of characters of the maximal torus, formula_17 is the dual lattice (given by the 1-parameter subgroups of the torus), formula_4 is the set of roots, and formula_18 is the corresponding set of coroots. A connected split reductive algebraic group over formula_13 is uniquely determined (up to isomorphism) by its root datum, which is always reduced. Conversely, for any reduced root datum there is a connected split reductive algebraic group. A root datum contains slightly more information than the Dynkin diagram, because it also determines the center of the group. For any root datum formula_15, we can define a dual root datum formula_19 by switching the characters with the 1-parameter subgroups, and switching the roots with the coroots. If formula_12 is a connected reductive algebraic group over the algebraically closed field formula_13, then its Langlands dual group formula_20 is the complex connected reductive group whose root datum is dual to that of formula_12.
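As a concrete illustration (a standard textbook example, not specific to this article), the root data of SL_2 and PGL_2 with respect to their diagonal maximal tori can be written down explicitly; switching the two lattices and the roots with the coroots interchanges them, so these two groups are Langlands dual to each other.

```latex
% Root datum of SL_2: the character lattice of the diagonal torus t = diag(a, a^{-1})
% is generated by t -> a, and the root alpha: t -> a^2 corresponds to 2 in this basis.
X^\ast \cong \mathbb{Z}, \quad \Phi = \{\pm 2\}, \qquad
X_\ast \cong \mathbb{Z}, \quad \Phi^\vee = \{\pm 1\}, \qquad (\alpha, \alpha^\vee) = 2 .
% Root datum of PGL_2: the same data with the roles of the two lattices exchanged,
X^\ast \cong \mathbb{Z}, \quad \Phi = \{\pm 1\}, \qquad
X_\ast \cong \mathbb{Z}, \quad \Phi^\vee = \{\pm 2\} .
```

By contrast, GL_n (with the diagonal torus) has X^* = X_* = Z^n with the standard pairing and Φ = Φ^∨ = {e_i − e_j : i ≠ j}; this root datum is isomorphic to its own dual, reflecting the fact that GL_n is its own Langlands dual.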
[ { "math_id": 0, "text": "(X^\\ast, \\Phi, X_\\ast, \\Phi^\\vee)" }, { "math_id": 1, "text": "X^\\ast" }, { "math_id": 2, "text": "X_\\ast" }, { "math_id": 3, "text": "\\mathbb{Z}" }, { "math_id": 4, "text": "\\Phi" }, { "math_id": 5, "text": "\\Phi^\\vee" }, { "math_id": 6, "text": "\\alpha\\mapsto\\alpha^\\vee" }, { "math_id": 7, "text": "\\alpha" }, { "math_id": 8, "text": "(\\alpha, \\alpha^\\vee)=2" }, { "math_id": 9, "text": "x\\mapsto x-(x,\\alpha^\\vee)\\alpha" }, { "math_id": 10, "text": "2\\alpha" }, { "math_id": 11, "text": "\\alpha\\in\\Phi" }, { "math_id": 12, "text": "G" }, { "math_id": 13, "text": "K" }, { "math_id": 14, "text": "T" }, { "math_id": 15, "text": "(X^*, \\Phi, X_*, \\Phi^{\\vee})" }, { "math_id": 16, "text": "X^*" }, { "math_id": 17, "text": "X_*" }, { "math_id": 18, "text": "\\Phi^{\\vee}" }, { "math_id": 19, "text": "(X_*, \\Phi^{\\vee},X^*, \\Phi)" }, { "math_id": 20, "text": "{}^L G" } ]
https://en.wikipedia.org/wiki?curid=6825924
68261023
Rope-burning puzzle
Mathematical puzzle In recreational mathematics, rope-burning puzzles are a class of mathematical puzzle in which one is given lengths of rope, fuse cord, or shoelace that each burn for a given amount of time, and matches to set them on fire, and must use them to measure a non-unit amount of time. The fusible numbers are defined as the amounts of time that can be measured in this way. As well as being of recreational interest, these puzzles are sometimes posed at job interviews as a test of candidates' problem-solving ability, and have been suggested as an activity for middle school mathematics students. Example. A common and simple version of this problem asks one to measure a time of 45 seconds using only two fuses that each burn for a minute. The assumptions of the problem are usually specified in a way that prevents measuring out 3/4 of the length of one fuse and burning it end-to-end, for instance by stating that the fuses burn unevenly along their length. One solution to this problem is to perform the following steps: light one of the fuses at both ends and, at the same moment, light the other fuse at one end only; when the first fuse burns out, 30 seconds have passed and 30 seconds of burning time remain on the second fuse; lighting the second fuse's other end at that moment makes it burn out 15 seconds later, for a total of 45 seconds. Many other variations are possible, in some cases using fuses that burn for different amounts of time from each other. Fusible numbers. In common versions of the problem, each fuse lasts for a unit length of time, and the only operations used or allowed in the solution are to light one or both ends of a fuse at known times, determined either as the start of the solution or as the time that another fuse burns out. If only one end of a fuse is lit at time formula_0, it will burn out at time formula_1. If both ends of a fuse are lit at times formula_0 and formula_2, it will burn out at time formula_3, because a portion of formula_4 is burnt at the original rate, and the remaining portion of formula_5 is burnt at twice the original rate, hence the fuse burns out at formula_6. A number formula_0 is a fusible number if it is possible to use unit-time fuses to measure out formula_0 units of time using only these operations. For instance, by the solution to the example problem, formula_7 is a fusible number. One may assume without loss of generality that every fuse is lit at both ends, by replacing a fuse that is lit only at one end at time formula_0 by two fuses, the first one lit at both ends at time formula_0 and the second one lit at both ends at time formula_8 when the first fuse burns out. In this way, the fusible numbers can be defined as the set of numbers that can be obtained from the number formula_9 by repeated application of the operation formula_10, applied to pairs formula_11 that have already been obtained and for which formula_12. The fusible numbers include all of the non-negative integers, and are a well-ordered subset of the dyadic rational numbers, the fractions whose denominators are powers of two. Being well-ordered means that, if one chooses a decreasing sequence of fusible numbers, the sequence must always be finite. Among the well-ordered sets, their ordering can be classified as formula_13, an epsilon number (a special case of the infinite ordinal numbers). Because they are well-ordered, for each integer formula_14 there is a unique smallest fusible number among the fusible numbers larger than formula_14; it has the form formula_15 for some formula_16. This number formula_16 grows very rapidly as a function of formula_14, so rapidly that for formula_17 it is (in Knuth's up-arrow notation for large numbers) already larger than formula_18. The existence of this number formula_16, for each formula_14, cannot be proven in Peano arithmetic.
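A small search sketch based on the recursive characterization just given: starting from 0, repeatedly combine already-obtained values x and y with |x − y| < 1 into (x + y + 1)/2. The number of closure rounds is an arbitrary cap chosen for this sketch; the full set of fusible numbers is the infinite closure.

```python
from fractions import Fraction
from itertools import product

def fusible_closure(rounds):
    """Close {0} under (x, y) -> (x + y + 1) / 2, applied only to pairs with
    |x - y| < 1, for a fixed number of rounds (a finite approximation of the
    full set of fusible numbers)."""
    found = {Fraction(0)}
    for _ in range(rounds):
        new = {(x + y + 1) / 2
               for x, y in product(found, repeat=2) if abs(x - y) < 1}
        found |= new
    return found

S = fusible_closure(3)
print([str(t) for t in sorted(S) if t <= 1])   # ['0', '1/2', '3/4', '7/8', '1']
print(Fraction(3, 4) in S)                     # True: the 45-second puzzle above
```

The output shows that the fusible numbers found below 1 have the form 1 − 1/2^k, and it confirms that 3/4, the answer to the example puzzle, appears after only a few rounds.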
Lighting more than two points of a fuse. If the rules of the fuse-burning puzzles are interpreted to allow fuses to be lit at more points than their ends, a larger set of amounts of time can be measured. For instance, if a fuse is lit in such a way that, while it burns, it always has three ends burning (for instance, by lighting one point in the middle and one end, and then lighting another end or another point in the middle whenever one or two of the current lit points burn out) then it will burn for 1/3 of a unit of time rather than a whole unit. By representing a given amount of time as a sum of unit fractions, and successively burning fuses with multiple lit points so that they last for each unit fraction amount of time, it is possible to measure any rational number of units of time. However, keeping the desired number of flames lit, even on a single fuse, may require an infinite number of re-lighting steps. The problem of representing a given rational number as a sum of unit fractions is closely related to the construction of Egyptian fractions, sums of distinct unit fractions; however, for fuse-burning problems there is no need for the fractions to be different from each other. Using known methods for Egyptian fractions one can prove that measuring a fractional amount of time formula_19, with formula_20, needs only formula_21 fuses (expressed in big O notation). An unproven conjecture of Paul Erdős on Egyptian fractions suggests that fewer fuses, formula_22, may always be enough. History. In a booklet on these puzzles titled "Shoelace Clock Puzzles", created by Dick Hess for a 1998 Gathering 4 Gardner conference, Hess credits Harvard statistician Carl Morris as his original source for these puzzles. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "x+1" }, { "math_id": 2, "text": "y" }, { "math_id": 3, "text": "(x+y+1)/2" }, { "math_id": 4, "text": "y-x" }, { "math_id": 5, "text": "1-(y-x)" }, { "math_id": 6, "text": "x+(y-x)+[1-(y-x)]/2=(x+y+1)/2" }, { "math_id": 7, "text": "\\tfrac34" }, { "math_id": 8, "text": "x+1/2" }, { "math_id": 9, "text": "0" }, { "math_id": 10, "text": "x,y\\mapsto (x+y+1)/2" }, { "math_id": 11, "text": "x,y" }, { "math_id": 12, "text": "|x-y|<1" }, { "math_id": 13, "text": "\\varepsilon_0" }, { "math_id": 14, "text": "n" }, { "math_id": 15, "text": "n+1/2^k" }, { "math_id": 16, "text": "k" }, { "math_id": 17, "text": "n=3" }, { "math_id": 18, "text": "2\\uparrow^9 16" }, { "math_id": 19, "text": "x/y" }, { "math_id": 20, "text": "x<y" }, { "math_id": 21, "text": "O(\\sqrt{\\log y})" }, { "math_id": 22, "text": "O(\\log\\log y)" } ]
https://en.wikipedia.org/wiki?curid=68261023
682629
Element (mathematics)
Any one of the distinct objects that make up a set in set theory In mathematics, an element (or member) of a set is any one of the distinct objects that belong to that set. Sets. Writing formula_0 means that the elements of the set A are the numbers 1, 2, 3 and 4. Sets of elements of A, for example formula_1, are subsets of A. Sets can themselves be elements. For example, consider the set formula_2. The elements of B are "not" 1, 2, 3, and 4. Rather, there are only three elements of B, namely the numbers 1 and 2, and the set formula_3. The elements of a set can be anything. For example, formula_4 is the set whose elements are the colors red, green and blue. In logical terms, ("x" ∈ "y") ↔ (∀"x"[P"x" = "y"] : "x" ∈ 𝔇"y"). Notation and terminology. The relation "is an element of", also called set membership, is denoted by the symbol "∈". Writing formula_5 means that ""x" is an element of "A". Equivalent expressions are "x" is a member of "A", "x" belongs to "A", "x" is in "A" and "x" lies in "A". The expressions "A" includes "x" and "A" contains "x" are also used to mean set membership, although some authors use them to mean instead ""x" is a subset of "A"". Logician George Boolos strongly urged that "contains" be used for membership only, and "includes" for the subset relation only. For the relation ∈ , the converse relation ∈T may be written formula_6, meaning ""A" contains or includes "x"". The negation of set membership is denoted by the symbol "∉". Writing formula_7 means that ""x" is not an element of "A"". The symbol ∈ was first used by Giuseppe Peano in his 1889 work "Arithmetices principia, nova methodo exposita". Here he wrote on page X a remark which means: The symbol ∈ means "is". So "a" ∈ "b" is read as a "is a certain" b; … The symbol itself is a stylized lowercase Greek letter epsilon ("ϵ"), the first letter of the Greek word ἐστί, which means "is". Examples. Using the sets defined above, namely "A" = {1, 2, 3, 4}, "B" = {1, 2, {3, 4}} and "C" = {red, green, blue}, the following statements are true: 2 ∈ "A", {3, 4} ∈ "B" and green ∈ "C", while 5 ∉ "A", 3 ∉ "B" and yellow ∉ "C". Cardinality of sets. The number of elements in a particular set is a property known as cardinality; informally, this is the size of a set. In the above examples, the cardinality of the set "A" is 4, while the cardinality of set "B" and set "C" are both 3. An infinite set is a set with an infinite number of elements, while a finite set is a set with a finite number of elements. The above examples are examples of finite sets. An example of an infinite set is the set of positive integers {1, 2, 3, 4, ...}. Formal relation. As a relation, set membership must have a domain and a range. Conventionally the domain is called the universe denoted "U". The range is the set of subsets of "U", called the power set of "U" and denoted P("U"). Thus the relation formula_8 is a subset of "U" × P("U"). The converse relation formula_9 is a subset of P("U") × "U". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
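The difference between membership and inclusion can be mirrored in a few lines of Python. This is only an informal illustration of the notation; note that Python requires the inner set to be a frozenset, because ordinary sets are not hashable and so cannot themselves be elements of a set.

A = {1, 2, 3, 4}
B = {1, 2, frozenset({3, 4})}          # the set {3, 4} is itself an element of B
C = {"red", "green", "blue"}

print(2 in A)                          # True:  2 is an element of A
print({1, 2} <= A)                     # True:  {1, 2} is a subset of A ...
print(frozenset({1, 2}) in A)          # False: ... but not an element of A
print(frozenset({3, 4}) in B)          # True:  the set {3, 4} is an element of B
print(3 in B)                          # False: 3 is not an element of B
print(len(A), len(B), len(C))          # 4 3 3 (the cardinalities)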
[ { "math_id": 0, "text": "A = \\{1, 2, 3, 4\\}" }, { "math_id": 1, "text": "\\{1, 2\\}" }, { "math_id": 2, "text": "B = \\{1, 2, \\{3, 4\\}\\}" }, { "math_id": 3, "text": "\\{3, 4\\}" }, { "math_id": 4, "text": "C = \\{\\mathrm{\\color{Red}red}, \\mathrm{\\color{green}green}, \\mathrm{\\color{blue}blue}\\}" }, { "math_id": 5, "text": "x \\in A " }, { "math_id": 6, "text": "A \\ni x" }, { "math_id": 7, "text": "x \\notin A" }, { "math_id": 8, "text": "\\in" }, { "math_id": 9, "text": "\\ni" } ]
https://en.wikipedia.org/wiki?curid=682629
68265416
Armourstone
Broken stone (very course aggregate) used in hydraulic engineering Armourstone is a generic term for broken stone with stone masses between (very coarse aggregate) that is suitable for use in hydraulic engineering. Dimensions and characteristics for armourstone are laid down in European Standard EN13383. In the United States, there are a number of different standards and publications setting out different methodologies for classifying armourstone, ranging from weight-based classifications to gradation curves and size-based classifications. Stone Classes. European Practice to EN13383. Armourstone is available in standardised stone classes, defined by both a lower and upper value of the stone mass within these classes. For instance, Class 60-300 signifies that up to 10% of the stones weigh less than and up to 30% weigh more than . The standard also mentions values which shouldn't be exceeded by 5% or 3%. For particular applications like a top layer for a breakwater or bank protection, the median stone mass size, known as "M"50, is frequently required. This pertains to a category A stone. It doesn't relate to category B stone. There are two main groups: HM and LM, standing for "Heavy" and "Light" respectively. A stone class might be defined according to EN 13383 as, for instance, HMA300-1000. The accompanying graphs offer an overview of all stone classes. A distribution between the two curves in the graph fulfils the criteria for category B. Furthermore, for category A compliance, the "MEM" should intersect the short horizontal line. "MEM" represents the average stone mass, meaning the total sample mass divided by the count of stones in that sample. It's worth noting that in wider ranges, notably 15-300 and 40-400, there's a considerable difference; for the 15-300 class, "M"50 is 1.57 times the "MEM". Additionally, there's a defined stone class called CP ("Coarse"). Despite its name suggesting otherwise, the class CP is smaller than LM. This naming convention exists because this class corresponds to the coarse category in the standard for fractional stone used as supplemental material (aggregate). For the CP stone class, size isn't denoted in kg, but in mm. Based on the primary data from standard EN13383, the following table is presented: Practice in the United States. Several standards and guidelines are identified for classifying armourstone used in coastal and river engineering in the United States, some of which are summarised in the following table: These standards provide different methodologies for classifying armourstone, ranging from weight-based classifications to gradation curves and size-based classifications. Guidance for the use of large armourstone is given in various USACE publications including the Coastal Engineering Manual. Median Stone Mass "M"50. For fine-grained materials, such as sand, the size is typically represented by the median diameter. This measurement is ascertained by sieving the sand. However, for armourstone, producing a sieve curve isn't feasible because the stones are too large for sieving. Therefore, the "M"50 measurement is employed. It is calculated by obtaining a sample of stones, determining the mass of each stone, arranging these masses by size, and then creating a cumulative mass curve. Within this curve, one can identify the "M"50 value. It's essential to note that the term median stone mass is technically inaccurate, as the stone with mass "M"50 doesn't necessarily represent the median stone in the sample. 
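The computation of "M"50 described above can be written down directly. The following Python sketch is illustrative only: the sample masses are invented, and a real test to EN13383 would need a much larger sample. It builds the cumulative mass curve, interpolates the mass at which half of the total sample mass is reached, and also returns the "MEM".

import numpy as np

def m50(masses_kg):
    """M50: stone mass at which the cumulative mass curve crosses 50 %
    of the total sample mass (stones sorted by mass first)."""
    m = np.sort(np.asarray(masses_kg, dtype=float))
    cum_frac = np.cumsum(m) / m.sum()          # cumulative mass fraction
    return float(np.interp(0.5, cum_frac, m))  # interpolate at the 50 % level

def mem(masses_kg):
    """MEM: average stone mass, i.e. total sample mass divided by number of stones."""
    return float(np.asarray(masses_kg, dtype=float).mean())

sample = [4.1, 5.8, 7.3, 9.0, 11.5, 14.2, 18.0, 22.7, 28.4, 35.9]  # invented masses in kg
print(m50(sample), mem(sample))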
To illustrate, consider a sample of 50 stones sourced from a quarry in Bulgaria. The blue rectangle is of A4 size. Every stone's weight is individually recorded, and their masses are illustrated in the attached graph. The horizontal axis represents the individual stone mass, while the vertical axis denotes the cumulative mass as a percentage of the entire sample's mass. At the 50% mark, the "M"50 value is discerned to be 24kg. The true median for this sample is the mean mass of the 25th and 26th stones. In this specific instance, the "M"50 closely matches the median mass, which is 26kg. This sample meets the criteria for LMA5-40. However, it's important to note that the sample size is insufficient. According to EN13383, such a sample should comprise at least 200 stones. Nominal Diameter. Many design formulas do not account for stone mass but rather for diameter. As a result, a method for conversion is required. This method is identified as the nominal diameter. Essentially, it represents the size of a cube's edge that weighs the same as the stone. The formula for this is: formula_0 Often, the median value is utilised for this purpose, represented as "d""n"50. Typically, the following relationship can be used for conversion: formula_1 Here, "Fs" represents the shape factor. The shape factor can vary substantially, typically ranging between 0.7 and 0.9. Referring to the aforementioned example from Bulgaria, the "d""n"50 was also determined. Given the local stone's density (which is limestone) stands at 2284 kg/m³, the "d""n"50 is calculated to be 22cm. It may be observed that the stones in the sample appear much larger to the eye. This visual misperception can be attributed to a few particularly large stones within the sample, which distort the overall impression. Additional Parameters. The EN13383 standard elaborates on numerous parameters that define the quality of armourstone. This includes attributes like the shape parameter (measured as Length/Thickness), resistance to fracturing, and the capacity for water absorption. It's pivotal to understand that while the standard delineates how to characterise the quality of armourstone, it doesn't specify the requisite quality for a given application. Such specifics are typically found in design manuals and guidelines, including the Rock Manual. Establishing the Necessary Stone Weight. When determining the weight of stone required under the influence of waves, one might utilise the (now dated) Hudson Formula or the Van der Meer formula. For computations pertaining to stone weight in flows, the Izbash formula is advisable. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
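The nominal diameter conversion above is a one-line computation. The sketch below uses the numbers quoted for the Bulgarian sample ("M"50 = 24 kg, density 2284 kg/m³) and reproduces a "d""n"50 of roughly 0.22 m; the final line applies the approximate shape-factor relation to estimate "d"50.

def nominal_diameter(mass_kg, density_kg_m3):
    """d_n = (M / rho)**(1/3): edge length of a cube with the same mass as the stone."""
    return (mass_kg / density_kg_m3) ** (1.0 / 3.0)

M50 = 24.0          # kg, from the Bulgarian sample above
rho = 2284.0        # kg/m3, limestone density quoted above
dn50 = nominal_diameter(M50, rho)
print(round(dn50, 3))        # about 0.219 m, i.e. roughly 22 cm

# Approximate conversion to the sieve-type diameter d50 using d_n50 = 0.84 * d50:
d50 = dn50 / 0.84
print(round(d50, 3))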
[ { "math_id": 0, "text": "d_n = \\sqrt[3]{M/\\rho}" }, { "math_id": 1, "text": "d_{n50} = F_s d_{50} = 0.84 d_{50}" } ]
https://en.wikipedia.org/wiki?curid=68265416
682693
Cohomotopy set
In mathematics, particularly algebraic topology, cohomotopy sets are particular contravariant functors from the category of pointed topological spaces and basepoint-preserving continuous maps to the category of sets and functions. They are dual to the homotopy groups, but less studied. Overview. The "p"-th cohomotopy set of a pointed topological space "X" is defined by formula_0 the set of pointed homotopy classes of continuous mappings from formula_1 to the "p"-sphere formula_2. For "p" = 1 this set has an abelian group structure, and is called the Bruschlinsky group. Provided formula_1 is a CW-complex, it is isomorphic to the first cohomology group formula_3, since the circle formula_4 is an Eilenberg–MacLane space of type formula_5. A theorem of Heinz Hopf states that if formula_1 is a CW-complex of dimension at most "p", then formula_6 is in bijection with the "p"-th cohomology group formula_7. The set formula_6 also has a natural group structure if formula_1 is a suspension formula_8, such as a sphere formula_9 for formula_10. If "X" is not homotopy equivalent to a CW-complex, then formula_3 might not be isomorphic to formula_11. A counterexample is given by the Warsaw circle, whose first cohomology group vanishes, but admits a map to formula_4 which is not homotopic to a constant map. Properties. Some basic facts about cohomotopy sets, some more obvious than others: formula_27 which is an abelian group. History. Cohomotopy sets were introduced by Karol Borsuk in 1936. A systematic examination was given by Edwin Spanier in 1949. The stable cohomotopy groups were defined by Franklin P. Peterson in 1956. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\pi^p(X) = [X,S^p]" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "S^p" }, { "math_id": 3, "text": "H^1(X)" }, { "math_id": 4, "text": "S^1" }, { "math_id": 5, "text": "K(\\mathbb{Z},1)" }, { "math_id": 6, "text": "[X,S^p]" }, { "math_id": 7, "text": "H^p(X)" }, { "math_id": 8, "text": "\\Sigma Y" }, { "math_id": 9, "text": "S^q" }, { "math_id": 10, "text": "q \\ge 1" }, { "math_id": 11, "text": "[X,S^1]" }, { "math_id": 12, "text": "\\pi^p(S^q) = \\pi_q(S^p)" }, { "math_id": 13, "text": "q= p + 1" }, { "math_id": 14, "text": "p > 2" }, { "math_id": 15, "text": "\\pi^p(S^q)" }, { "math_id": 16, "text": "\\mathbb{Z}_2" }, { "math_id": 17, "text": "f,g\\colon X \\to S^p" }, { "math_id": 18, "text": "\\|f(x) - g(x)\\| < 2" }, { "math_id": 19, "text": "[f] = [g]" }, { "math_id": 20, "text": "\\pi^p(X)" }, { "math_id": 21, "text": "X \\to S^p" }, { "math_id": 22, "text": "m" }, { "math_id": 23, "text": "\\pi^p(X)=0" }, { "math_id": 24, "text": "p > m" }, { "math_id": 25, "text": "\\pi^p(X,\\partial X)" }, { "math_id": 26, "text": "X \\setminus \\partial X" }, { "math_id": 27, "text": "\\pi^p_s(X) = \\varinjlim_k{[\\Sigma^k X, S^{p+k}]}" } ]
https://en.wikipedia.org/wiki?curid=682693
68270636
Linearized augmented-plane-wave method
The linearized augmented-plane-wave method (LAPW) is an implementation of Kohn-Sham density functional theory (DFT) adapted to periodic materials. It typically goes along with the treatment of both valence and core electrons on the same footing in the context of DFT and the treatment of the full potential and charge density without any shape approximation. This is often referred to as the all-electron full-potential linearized augmented-plane-wave method (FLAPW). It does not rely on the pseudopotential approximation and employs a systematically extendable basis set. These features make it one of the most precise implementations of DFT, applicable to all crystalline materials, regardless of their chemical composition. It can be used as a reference for evaluating other approaches. Introduction. At the core of density functional theory the Hohenberg-Kohn theorems state that every observable of an interacting many-electron system is a functional of its ground-state charge density and that this density minimizes the total energy of the system. The theorems do not answer the question how to obtain such a ground-state density. A recipe for this is given by Walter Kohn and Lu Jeu Sham who introduce an auxiliary system of noninteracting particles constructed such that it shares the same ground-state density with the interacting particle system. The Schrödinger-like equations describing this system are the Kohn-Sham equations. With these equations one can calculate the eigenstates of the system and with these the density. One contribution to the Kohn-Sham equations is the effective potential which itself depends on the density. As the ground-state density is not known before a Kohn-Sham DFT calculation and it is an input as well as an output of such a calculation, the Kohn-Sham equations are solved in an iterative procedure together with a recalculation of the density and the potential in every iteration. It starts with an initial guess for the density and after every iteration a new density is constructed as a mixture from the output density and previous densities. The calculation finishes as soon as a fixpoint of a self-consistent density is found, i.e., input and output density are identical. This is the ground-state density. A method implementing Kohn-Sham DFT has to realize these different steps of the sketched iterative algorithm. The LAPW method is based on a partitioning of the material's unit cell into non-overlapping but nearly touching so-called muffin-tin (MT) spheres, centered at the atomic nuclei, and an interstitial region (IR) in between the spheres. The physical description and the representation of the Kohn-Sham orbitals, the charge density, and the potential is adapted to this partitioning. In the following this method design and the extraction of quantities from it are sketched in more detail. Variations and extensions are indicated. Solving the Kohn-Sham equations. The central aspect of practical DFT implementations is the question how to solve the Kohn-Sham equations formula_0 with the single-electron kinetic energy operator formula_1, the effective potential formula_2, Kohn-Sham states formula_3, energy eigenvalues formula_4, and position and Bloch vectors formula_5 and formula_6. While in abstract evaluations of Kohn-Sham DFT the model for the exchange-correlation contribution to the effective potential is the only fundamental approximation, in practice solving the Kohn-Sham equations is accompanied by the introduction of many additional approximations. 
These include the incompleteness of the basis set used to represent the Kohn-Sham orbitals, the choice of whether to use the pseudopotential approximation or to consider all electrons in the DFT scheme, the treatment of relativistic effects, and possible shape approximations to the potential. Beyond the partitioning of the unit cell, for the LAPW method the central design aspect is the use of the LAPW basis set formula_7 to represent the valence electron orbitals as formula_8 where formula_9 are the expansion coefficients. The LAPW basis is designed to enable a precise representation of the orbitals and an accurate modelling of the physics in each region of the unit cell. Considering a unit cell of volume formula_10 covering atoms formula_11 at positions formula_12, an LAPW basis function is characterized by a reciprocal lattice vector formula_13 and the considered Bloch vector formula_6. It is given as formula_14 where formula_15 is the position vector relative to the position of atom nucleus formula_11. An LAPW basis function is thus a plane wave in the IR and a linear combination of the radial functions formula_16 and formula_17 multiplied by spherical harmonics formula_18 in each MT sphere. The radial function formula_16 is hereby the solution of the Kohn-Sham Hamiltonian for the spherically averaged potential with regular behavior at the nucleus for the given energy parameter formula_19. Together with its energy derivative formula_17 these augmentations of the plane wave in each MT sphere enable a representation of the Kohn-Sham orbitals at arbitrary eigenenergies linearized around the energy parameters. The coefficients formula_20 and formula_21 are automatically determined by enforcing the basis function to be continuously differentiable for the respective formula_22 channel. The set of LAPW basis functions is defined by specifying a cutoff parameter formula_23. In each MT sphere, the expansion into spherical harmonics is limited to a maximum number of angular momenta formula_24, where formula_25 is the muffin-tin radius of atom formula_11. The choice of this cutoff is connected to the decay of expansion coefficients for growing formula_26 in the Rayleigh expansion of plane waves into spherical harmonics. While the LAPW basis functions are used to represent the valence states, core electron states, which are completely confined within a MT sphere, are calculated for the spherically averaged potential on radial grids, for each atom separately applying atomic boundary conditions. Semicore states, which are still localized but slightly extended beyond the MT sphere boundary, may either be treated as core electron states or as valence electron states. For the latter choice the linearized representation is not sufficient because the related eigenenergy is typically far away from the energy parameters. To resolve this problem the LAPW basis can be extended by additional basis functions in the respective MT sphere, so called local orbitals (LOs). These are tailored to provide a precise representation of the semicore states. The plane-wave form of the basis functions in the interstitial region makes setting up the Hamiltonian matrix formula_27 for that region simple. In the MT spheres this setup is also simple and computationally inexpensive for the kinetic energy and the spherically averaged potential, e.g., in the muffin-tin approximation. 
The simplicity hereby stems from the connection of the radial functions to the spherical Hamiltonian in the spheres formula_28, i.e., formula_29 and formula_30. In comparison to the MT approximation, for the full-potential description (FLAPW) contributions from the non-spherical part of the potential are added to the Hamiltonian matrix in the MT spheres and in the IR contributions related to deviations from the constant potential. After the Hamiltonian matrix formula_31 together with the overlap matrix formula_32 is set up, the Kohn-Sham orbitals are obtained as eigenfunctions from the algebraic generalized dense Hermitian eigenvalue problem formula_33 where formula_34 is the energy eigenvalue of the j-th Kohn-Sham state at Bloch vector formula_35 and the state is given as indicated above by the expansion coefficients formula_9. The considered degree of relativistic physics differs for core and valence electrons. The strong localization of core electrons due to the singularity of the effective potential at the atomic nucleus is connected to large kinetic energy contributions and thus a fully relativistic treatment is desirable and common. For the determination of the radial functions formula_16 and formula_17 the common approach is to make an approximation to the fully relativistic description. This may be the scalar-relativistic approximation (SRA) or similar approaches. The dominant effect neglected by these approximations is the spin-orbit coupling. As indicated above the construction of the Hamiltonian matrix within such an approximation is trivial. Spin-orbit coupling can additionally be included, though this leads to a more complex Hamiltonian matrix setup or a second variation scheme, connected to increased computational demands. In the interstitial region it is reasonable and common to describe the valence electrons without considering relativistic effects. Representation of the charge density and the potential. After calculating the Kohn-Sham eigenfunctions, the next step is to construct the electron charge density by occupying the lowest energy eigenstates up to the Fermi level with electrons. The Fermi level itself is determined in this process by keeping charge neutrality in the unit cell. The resulting charge density formula_36 then has a region-specific form formula_37 i.e., it is given as a plane-wave expansion in the interstitial region and as an expansion into radial functions times spherical harmonics in each MT sphere. The radial functions hereby are numerically given on a mesh. The representation of the effective potential follows the same scheme. In its construction a common approach is to employ Weinert's method for solving the Poisson equation. It efficiently and accurately provides a solution of the Poisson equation without shape approximation for an arbitrary periodic charge density based on the concept of multipole potentials and the boundary value problem for a sphere. Postprocessing and extracting results. Because they are based on the same theoretical framework, different DFT implementations offer access to very similar sets of material properties. However, the variations in the implementations result in differences in the ease of extracting certain quantities and also in differences in their interpretation. In the following, these circumstances are sketched for some examples. The most basic quantity provided by DFT is the ground-state total energy of an investigated system. 
To avoid the calculation of derivatives of the eigenfunctions in its evaluation, the common implementation replaces the expectation value of the kinetic energy operator by the sum of the band energies of occupied Kohn-Sham states minus the energy due to the effective potential. The force exerted on an atom, which is given by the change of the total energy due to an infinitesimal displacement, has two major contributions. The first contribution is due to the displacement of the potential. It is known as Hellmann-Feynman force. The other, computationally more elaborate contribution, is due to the related change in the atom-position-dependent basis functions. It is often called Pulay force and requires a method-specific implementation. Beyond forces, similar method-specific implementations are also needed for further quantities derived from the total energy functional. For the LAPW method, formulations for the stress tensor and for phonons have been realized. Independent of the actual size of an atom, evaluating atom-dependent quantities in LAPW is often interpreted as calculating the quantity in the respective MT sphere. This applies to quantities like charges at atoms, magnetic moments, or projections of the density of states or the band structure onto a certain orbital character at a given atom. Deviating interpretations of such quantities from experiments or other DFT implementations may lead to differences when comparing results. On a side note also some atom-specific LAPW inputs relate directly to the respective MT region. For example, in the DFT+U approach the Hubbard U only affects the MT sphere. A strength of the LAPW approach is the inclusion of all electrons in the DFT calculation, which is crucial for the evaluation of certain quantities. One of which are hyperfine interaction parameters like electric field gradients whose calculation involves the evaluation of the curvature of the all-electron Coulomb potential near the nuclei. The prediction of such quantities with LAPW is very accurate. Kohn-Sham DFT does not give direct access to all quantities one may be interested in. For example, most energy eigenvalues of the Kohn-Sham states are not directly related to the real interacting many-electron system. For the prediction of optical properties one therefore often uses DFT codes in combination with software implementing the GW approximation (GWA) to many-body perturbation theory and optionally the Bethe-Salpeter equation (BSE) to describe excitons. Such software has to be adapted to the representation used in the DFT implementation. Both the GWA and the BSE have been formulated in the LAPW context and several implementations of such tools are in use. In other postprocessing situations it may be useful to project Kohn-Sham states onto Wannier functions. For the LAPW method such projections have also been implemented and are in common use. Software implementations. There are various software projects implementing the LAPW method and/or its variants. Examples for such codes are References. &lt;templatestyles src="Reflist/styles.css" /&gt;
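To make the algebraic eigenvalue step and the band filling concrete, the following Python sketch solves a generalized Hermitian eigenvalue problem with SciPy and then occupies the lowest levels up to a Fermi level fixed by charge neutrality. The matrices are random stand-ins, not an actual LAPW Hamiltonian and overlap matrix, and the single-k-point, spin-degenerate filling is a deliberate simplification of the procedure described above.

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 8

# Random Hermitian H and positive-definite S as stand-ins for the LAPW matrices.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (A + A.conj().T) / 2
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = B @ B.conj().T + n * np.eye(n)         # well-conditioned overlap matrix

# Generalized dense Hermitian eigenvalue problem H c = eps S c.
eps, C = eigh(H, S)                        # eigenvalues in ascending order, columns of C are c_j

# Fill the lowest states with n_electrons electrons (two per band, single k point).
n_electrons = 6
occ = np.zeros(n)
occ[: n_electrons // 2] = 2.0
e_fermi = eps[n_electrons // 2 - 1]        # highest occupied level
print(eps.round(3), occ, round(float(e_fermi), 3))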
[ { "math_id": 0, "text": "\\left[ \\hat{T}_\\text{s} + V_\\text{eff}(\\mathbf{r}) \\right] \\left| \\Psi_j^\\mathbf{k}(\\mathbf{r}) \\right\\rangle = \\epsilon_j^\\mathbf{k} \\left| \\Psi_j^\\mathbf{k}(\\mathbf{r}) \\right\\rangle" }, { "math_id": 1, "text": "\\hat{T}_\\text{s}" }, { "math_id": 2, "text": "V_\\text{eff}(\\mathbf{r})" }, { "math_id": 3, "text": "\\Psi_j^\\mathbf{k}(\\mathbf{r})" }, { "math_id": 4, "text": "\\epsilon_j^\\mathbf{k}" }, { "math_id": 5, "text": "\\mathbf{r}" }, { "math_id": 6, "text": "\\mathbf{k}" }, { "math_id": 7, "text": "\\left\\lbrace \\phi_{\\mathbf{k},\\mathbf{G}}(\\mathbf{r}) \\right\\rbrace" }, { "math_id": 8, "text": "\\left| \\Psi_j^\\mathbf{k}(\\mathbf{r}) \\right\\rangle = \\sum\\limits_\\mathbf{G} c_j^{\\mathbf{k},\\mathbf{G}} \\left| \\phi_{\\mathbf{k},\\mathbf{G}}(\\mathbf{r}) \\right\\rangle," }, { "math_id": 9, "text": "c_j^{\\mathbf{k},\\mathbf{G}}" }, { "math_id": 10, "text": "\\Omega" }, { "math_id": 11, "text": "\\alpha" }, { "math_id": 12, "text": "\\mathbf{\\tau}_\\alpha" }, { "math_id": 13, "text": "\\mathbf{G}" }, { "math_id": 14, "text": "\\phi_{\\mathbf{k},\\mathbf{G}}(\\mathbf{r}) = \\left\\lbrace \\begin{array}{l l}\\frac{1}{\\sqrt{\\Omega}} e^{i(\\mathbf{k}+\\mathbf{G})\\mathbf{r}} & \\text{for } \\mathbf{r} \\text{ in IR} \\\\ \\sum\\limits_{l=0}^{l_{\\text{max},\\alpha}} \\sum\\limits_{m=-l}^{l} \\left[ a_{l,m}^{\\mathbf{k},\\mathbf{G},\\alpha} u_{l,\\alpha}(r_\\alpha, E_{l,\\alpha}) + b_{l,m}^{\\mathbf{k},\\mathbf{G},\\alpha} \\dot{u}_{l,\\alpha}(r_\\alpha, E_{l,\\alpha}) \\right] Y_{l,m}(\\mathbf{\\hat{r}}_\\alpha) & \\text{for } \\mathbf{r} \\text{ in MT}_\\alpha \\end{array} \\right.," }, { "math_id": 15, "text": "\\mathbf{r}_\\alpha = \\mathbf{r} - \\mathbf{\\tau}_\\alpha" }, { "math_id": 16, "text": "u_{l,\\alpha}(r_\\alpha, E_{l,\\alpha})" }, { "math_id": 17, "text": "\\dot{u}_{l,\\alpha}(r_\\alpha, E_{l,\\alpha})" }, { "math_id": 18, "text": "Y_{l,m}" }, { "math_id": 19, "text": "E_{l,\\alpha}" }, { "math_id": 20, "text": "a_{l,m}^{\\mathbf{k},\\mathbf{G},\\alpha}" }, { "math_id": 21, "text": "b_{l,m}^{\\mathbf{k},\\mathbf{G},\\alpha}" }, { "math_id": 22, "text": "(l,m)" }, { "math_id": 23, "text": "K_\\text{max} = |\\mathbf{k}+\\mathbf{G}|_\\text{max}" }, { "math_id": 24, "text": "l_{\\text{max},\\alpha} \\approx K_\\text{max} R_{\\text{MT}_\\alpha}" }, { "math_id": 25, "text": "R_{\\text{MT}_\\alpha}" }, { "math_id": 26, "text": "l" }, { "math_id": 27, "text": "H_{\\mathbf{G'},\\mathbf{G}}^{\\mathbf{k}} = \\left\\langle \\phi_{\\mathbf{k},\\mathbf{G'}} \\Big| \\hat{H} \\Big| \\phi_{\\mathbf{k},\\mathbf{G}} \\right\\rangle = \\left\\langle \\phi_{\\mathbf{k},\\mathbf{G'}} \\Big| \\hat{T}_\\text{s} + V_\\text{eff}(\\mathbf{r}) \\Big| \\phi_{\\mathbf{k},\\mathbf{G}} \\right\\rangle" }, { "math_id": 28, "text": "\\hat{H}_\\text{sphr}^\\alpha" }, { "math_id": 29, "text": "\\hat{H}_\\text{sphr}^\\alpha \\left| u_{l,\\alpha}(r_\\alpha, E_{l,\\alpha}) \\right\\rangle = E_{l,\\alpha} \\left| u_{l,\\alpha}(r_\\alpha, E_{l,\\alpha}) \\right\\rangle" }, { "math_id": 30, "text": "\\hat{H}_\\text{sphr}^\\alpha \\left| \\dot{u}_{l,\\alpha}(r_\\alpha, E_{l,\\alpha}) \\right\\rangle = E_{l,\\alpha} \\left| \\dot{u}_{l,\\alpha}(r_\\alpha, E_{l,\\alpha}) \\right\\rangle + \\left| u_{l,\\alpha}(r_\\alpha, E_{l,\\alpha}) \\right\\rangle" }, { "math_id": 31, "text": "H_{\\mathbf{G'},\\mathbf{G}}^{\\mathbf{k}}" }, { "math_id": 32, "text": "S_{\\mathbf{G'},\\mathbf{G}}^{\\mathbf{k}} = \\left\\langle \\phi_{\\mathbf{k},\\mathbf{G'}} \\Big| 
\\phi_{\\mathbf{k},\\mathbf{G}} \\right\\rangle" }, { "math_id": 33, "text": "\\sum\\limits_\\mathbf{G} H_{\\mathbf{G'},\\mathbf{G}}^{\\mathbf{k}} c_j^{\\mathbf{k},\\mathbf{G}} = \\epsilon_j^{\\mathbf{k}} \\sum\\limits_\\mathbf{G} S_{\\mathbf{G'},\\mathbf{G}}^{\\mathbf{k}} c_j^{\\mathbf{k},\\mathbf{G}}~~," }, { "math_id": 34, "text": "\\epsilon_j^{\\mathbf{k}}" }, { "math_id": 35, "text": "{\\mathbf{k}}" }, { "math_id": 36, "text": "\\rho(\\mathbf{r})" }, { "math_id": 37, "text": "\\rho(\\mathbf{r})=\\left\\{ \\begin{array}{l l} \\sum\\limits_{\\mathbf{G}} \\rho_{\\mathbf{G}} e^{i\\mathbf{G}\\mathbf{r}} & \\text{for } \\mathbf{r} \\text{ in IR} \\\\ \\sum\\limits_{l=0}^{l_{\\text{max},\\alpha}} \\sum\\limits_{m=-l}^{l} \\rho_{l,m}^\\alpha(r_\\alpha) Y_{l,m}(\\mathbf{\\hat{r}}_\\alpha) & \\text{for } \\mathbf{r} \\text{ in MT}_\\alpha \\end{array} \\right.," }, { "math_id": 38, "text": "u_{l,\\alpha}(r_\\alpha, E_{l,\\alpha}^\\text{lo})" }, { "math_id": 39, "text": "\\dot{u}_{l,\\alpha}(r_\\alpha, E_{l,\\alpha}^\\text{lo})" }, { "math_id": 40, "text": "E_{l,\\alpha}^\\text{lo}" }, { "math_id": 41, "text": "\\ddot{u}_{l,\\alpha}(r_\\alpha, E_{l,\\alpha})" }, { "math_id": 42, "text": "u_{l,\\alpha}" }, { "math_id": 43, "text": "\\dot{u}_{l,\\alpha}" } ]
https://en.wikipedia.org/wiki?curid=68270636
68274287
Moessner's theorem
Theorem in number theory In number theory, Moessner's theorem or Moessner's magic is related to an arithmetical algorithm that produces the infinite sequence of the powers of the positive integers formula_0 with formula_1 by recursively manipulating the sequence of integers algebraically. The algorithm was first published by Alfred Moessner in 1951; the first proof of its validity was given by Oskar Perron that same year. For example, for formula_2, one can remove every even number, resulting in formula_3, and then add each odd number to the sum of all previous elements, providing formula_4. Construction. Write down every positive integer and remove every formula_5-th element, with formula_5 a positive integer. Build a new sequence of partial sums with the remaining numbers. Continue by removing every formula_6-st element in the new sequence and producing a new sequence of partial sums. In general, for the formula_7-th sequence, remove every formula_8-st element and produce a new sequence of partial sums. The procedure stops at the formula_5-th sequence. The remaining sequence will correspond to formula_9 Example. The initial sequence is the sequence of positive integers, formula_10 For formula_11, we remove every fourth number from the sequence of integers and add up each element to the sum of the previous elements formula_12 Now we remove every third element and continue to add up the partial sums formula_13 Remove every second element and continue to add up the partial sums formula_14, which recovers formula_15. Variants. If the triangular numbers are removed instead, a similar procedure leads to the sequence of factorials formula_16 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
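The construction is straightforward to implement and check. The following Python sketch, an illustration of ours, performs the deletion and partial-sum rounds for a given exponent and verifies that the surviving sequence consists of fourth powers when n = 4.

def moessner(n, length=8):
    """Return the first `length` terms produced by Moessner's construction for exponent n."""
    # Work on a comfortably long initial segment of the positive integers.
    seq = list(range(1, n * (length + 1) ** 2))
    for step in range(n - 1):
        # Remove every (n - step)-th element ...
        seq = [x for i, x in enumerate(seq, start=1) if i % (n - step) != 0]
        # ... and replace the remaining numbers by their partial sums.
        total, sums = 0, []
        for x in seq:
            total += x
            sums.append(total)
        seq = sums
    return seq[:length]

print(moessner(4))                                   # [1, 16, 81, 256, 625, 1296, 2401, 4096]
print(moessner(4) == [k**4 for k in range(1, 9)])    # True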
[ { "math_id": 0, "text": " 1^n, 2^n, 3^n, 4^n, \\cdots ~," }, { "math_id": 1, "text": "n \\geq 1 ~," }, { "math_id": 2, "text": "n=2" }, { "math_id": 3, "text": "(1,3,5,7\\cdots)" }, { "math_id": 4, "text": "(1,4,9,16,\\cdots)=(1^2,2^2,3^2,4^2\\cdots)" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "(n-1)" }, { "math_id": 7, "text": "k" }, { "math_id": 8, "text": "(n-k+1)" }, { "math_id": 9, "text": "1^n, 2^n, 3^n, 4^n \\cdots~." }, { "math_id": 10, "text": "1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 \\cdots ~." }, { "math_id": 11, "text": "n=4" }, { "math_id": 12, "text": "1,2,3,5,6,7,9,10,11,13,14,15 \\cdots \\to 1,3,6,11,17,24,33,43,54,67,81,96 \\cdots" }, { "math_id": 13, "text": "1,3,11,17,33,43,67,81 \\cdots \\to 1,4,15,32,65,108,175,256 \\cdots" }, { "math_id": 14, "text": "1,15,65,175 \\cdots \\to 1,16,81,256 \\cdots " }, { "math_id": 15, "text": "1^4, 2^4,3^4,4^4, \\cdots" }, { "math_id": 16, "text": "1!, 2!,3!,4!,\\cdots~." } ]
https://en.wikipedia.org/wiki?curid=68274287
68274488
Van der Meer formula
Formula to calculate the stability of armourstone under wave action The Van der Meer formula is a formula for calculating the required stone weight for armourstone under the influence of (wind) waves. This is necessary for the design of breakwaters and shoreline protection. Around 1985 it was found that the Hudson formula in use at that time had considerable limitations (only valid for permeable breakwaters and steep (storm) waves). That is why the Dutch government agency Rijkswaterstaat commissioned Deltares to start research for a more complete formula. This research, conducted by Jentsje van der Meer, resulted in the Van der Meer formula in 1988, as described in his dissertation. This formula reads formula_0 and formula_1 In this formula: "Hs" = significant wave height at the toe of the structure, Δ = relative density of the stone (= ("ρs" -"ρw")/"ρw") where "ρs" is the density of the stone and "ρw" is the density of the water, "d""n"50 = nominal stone diameter, α = breakwater slope, "P" = notional permeability, "S" = damage number, "N" = number of waves in the storm, and "ξm" = the Iribarren number calculated with the mean wave period "Tm". For design purposes, the value 5.2 is recommended for the coefficient "cp" and the value 0.87 for "cs". The value of "P" can be read from the attached graph. Until now, there is no good method for determining "P" other than by comparison with the accompanying pictures. Research is under way to try to determine the value of "P" using calculation models that can simulate the water movement in the breakwater (OpenFOAM models). The value of the damage number "S" is defined as formula_2 where "A" is the cross-sectional area of the erosion zone. Permissible values for "S" are: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
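Once the wave and structure parameters are chosen, the formula can be evaluated directly. The Python sketch below is a minimal illustration: the input values (permeability, damage number, number of waves, slope, wave height, densities) are invented example figures, while the coefficients 5.2 and 0.87 are the design values quoted above.

import math

def van_der_meer_stability(xi_m, P, S, N, cot_alpha, cp=5.2, cs=0.87):
    """Stability number Hs / (Delta * dn50) according to the Van der Meer formula."""
    tan_alpha = 1.0 / cot_alpha
    xi_cr = ((cp / cs) * P**0.31 * math.sqrt(tan_alpha)) ** (1.0 / (P + 0.5))
    if xi_m < xi_cr:   # plunging waves
        return cp * P**0.18 * (S / math.sqrt(N))**0.2 * xi_m**-0.5
    else:              # surging waves
        return cs * P**-0.13 * (S / math.sqrt(N))**0.2 * xi_m**P * math.sqrt(cot_alpha)

# Invented example: notional permeability 0.4, damage number 2, 3000 waves, slope 1:3.
Ns = van_der_meer_stability(xi_m=2.5, P=0.4, S=2.0, N=3000, cot_alpha=3.0)

# Required nominal stone diameter for Hs = 3 m, rock density 2650 kg/m3, sea water 1025 kg/m3.
Hs, rho_s, rho_w = 3.0, 2650.0, 1025.0
delta = (rho_s - rho_w) / rho_w
dn50 = Hs / (delta * Ns)
print(round(Ns, 2), round(dn50, 2))    # stability number and required dn50 in metres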
[ { "math_id": 0, "text": "\\frac{H_s}{\\Delta \\cdot d_{n50}}= \n\\begin{cases} c_{p} P^{0.18} \\left (\\frac{S}{\\sqrt{N}} \\right)^{0.2} \\xi_m^{-0.5} & \\mbox{when } \\xi_m < \\xi{cr} \\quad [\\mathrm{plunging}]\n\\\\ c_{s} P^{-0.13} \\left (\\frac{S}{\\sqrt{N}} \\right)^{0.2} \\xi_m^{P} \\sqrt{\\cot \\alpha} & \\mbox{when } \\xi_m \\ge \\xi{cr} \\quad [\\mathrm{surging}]\n\\end{cases} " }, { "math_id": 1, "text": " \\xi_{cr} = \\left[ \\frac{c_p}{c_s}P^{0.31} \\sqrt{\\tan \\alpha} \\right]^{\\frac{1}{P+0.5}}" }, { "math_id": 2, "text": "S=\\frac{A}{d_{n50}^2}" } ]
https://en.wikipedia.org/wiki?curid=68274488
68281326
Urbach energy
The Urbach Energy, or Urbach Edge, is a parameter typically denoted formula_0, with dimensions of energy, used to quantify energetic disorder in the band edges of a semiconductor. It is evaluated by fitting the absorption coefficient as a function of energy to an exponential function. It is often used to describe electron transport in structurally disordered semiconductors such as hydrogenated amorphous silicon. Introduction. In the simplest description of a semiconductor, a single parameter is used to quantify the onset of optical absorption: the band gap, formula_1. In this description, semiconductors are described as being able to absorb photons above formula_1, but are transparent to photons below formula_1. However, the density of states in 3 dimensional semiconductors increases further from the band gap (this is not generally true in lower dimensional semiconductors however). For this reason, the absorption coefficient, formula_2, increases with energy. The Urbach Energy quantifies the "steepness" of the onset of absorption near the band edge, and hence the broadness of the density of states. A sharper onset of absorption represents a lower Urbach Energy. History and name. The Urbach Energy is defined by an exponential increase in absorbance with energy. While an exponential dependence of absorbance had been observed previously in photographic materials, it was Franz Urbach that evaluated this property systematically in crystals. He used silver bromide for his study while working at the Kodak Company in 1953. Definition. Absorption in semiconductors is known to increase exponentially near the onset of absorption, spanning several orders of magnitude. Absorption as a function of energy can be described by the following equation: formula_3 where formula_4 and formula_5 are fitting parameters with dimensions of inverse length and energy, respectively, and formula_0 is the Urbach Energy. This equation is only valid when formula_6. The Urbach Energy is temperature-dependent. Room temperature values of formula_0 for hydrogenated amorphous silicon are typically between 50 meV and 150 meV. Relationship to charge transport. The Urbach Energy is often evaluated to make statements on the energetic disorder of band edges in structurally disordered semiconductors. The Urbach Energy has been shown to increase with dangling bond density in hydrogenated amorphous silicon and has been shown to be strongly correlated with the slope of band tails evaluated using transistor measurements. For this reason, it can be used as a proxy for activation energy, formula_7, in semiconductors governed by multiple trapping and release. It is important to state that formula_0 is not the same as formula_7, since formula_7 describes the disorder associated with one band, not both. Measurement. To evaluate the Urbach Energy, the absorption coefficient needs to be measured over several orders of magnitude. For this reason, high precision techniques such as the constant photocurrent method (CPM) or photothermal deflection spectroscopy are used. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
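In practice the Urbach energy is obtained by fitting the exponential part of the absorption edge. The sketch below works on synthetic data, with the prefactor, reference energy and noise level invented for the example; it fits ln α against photon energy with a straight line, whose inverse slope gives formula_0.

import numpy as np

# Synthetic sub-gap absorption data following alpha = alpha0 * exp((E - E1) / E0).
rng = np.random.default_rng(1)
E0_true = 0.050                      # eV, assumed Urbach energy
E = np.linspace(1.2, 1.7, 40)        # photon energies in eV
alpha = 1e4 * np.exp((E - 1.8) / E0_true) * rng.lognormal(0.0, 0.05, E.size)

# ln(alpha) = const + E / E0, so a linear fit of ln(alpha) vs E gives 1/E0 as the slope.
slope, intercept = np.polyfit(E, np.log(alpha), 1)
E0_fit = 1.0 / slope
print(round(E0_fit * 1000, 1), "meV")   # close to the assumed 50 meV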
[ { "math_id": 0, "text": "E_{0}" }, { "math_id": 1, "text": "E_{G}" }, { "math_id": 2, "text": "\\alpha" }, { "math_id": 3, "text": "\\alpha(E)=\\alpha_{0}\\exp \\biggl(\\frac{E-E_{1}}{E_{0}}\\biggr)" }, { "math_id": 4, "text": "\\alpha_{0}" }, { "math_id": 5, "text": "E_{1}" }, { "math_id": 6, "text": "\\alpha\\propto\\exp(E)" }, { "math_id": 7, "text": "E_{A}" } ]
https://en.wikipedia.org/wiki?curid=68281326
68283515
BIO-LGCA
In computational and mathematical biology, a biological lattice-gas cellular automaton (BIO-LGCA) is a discrete model for moving and interacting biological agents, a type of cellular automaton. The BIO-LGCA is based on the lattice-gas cellular automaton (LGCA) model used in fluid dynamics. A BIO-LGCA model describes cells and other motile biological agents as point particles moving on a discrete lattice, thereby interacting with nearby particles. Contrary to classic cellular automaton models, particles in BIO-LGCA are defined by their position and velocity. This allows to model and analyze active fluids and collective migration mediated primarily through changes in momentum, rather than density. BIO-LGCA applications include cancer invasion and cancer progression. Model definition. As are all cellular automaton models, a BIO-LGCA model is defined by a lattice formula_0, a state space formula_1, a neighborhood formula_2, and a rule formula_3. State space. For modeling particle velocities explicitly, lattice sites are assumed to have a specific substructure. Each lattice site formula_6 is connected to its neighboring lattice sites through vectors called "velocity channels", formula_8, formula_9, where the number of velocity channels formula_10 is equal to the number of nearest neighbors, and thus depends on the lattice geometry (formula_11 for a one-dimensional lattice, formula_7 for a two-dimensional hexagonal lattice, and so on). In two dimensions, velocity channels are defined as formula_12. Additionally, an arbitrary number formula_13 of so-called "rest channels" may be defined, such that formula_14, formula_15. A channel is said to be occupied if there is a particle in the lattice site with a velocity equal to the velocity channel. The occupation of channel formula_8 is indicated by the occupation number formula_16. Typically, particles are assumed to obey an exclusion principle, such that no more than one particle may occupy a single velocity channel at a lattice site simultaneously. In this case, occupation numbers are Boolean variables, i.e. formula_17, and thus, every site has a maximum carrying capacity formula_18. Since the collection of all channel occupation numbers defines the number of particles and their velocities in each lattice site, the vector formula_19 describes the state of a lattice site, and the state space is given by formula_20. Rule and model dynamics. The states of every site in the lattice are updated synchronously in discrete time steps to simulate the model dynamics. The rule is divided into two steps. The probabilistic interaction step simulates particle interaction, while the deterministic transport step simulates particle movement. Interaction step. Depending on the specific application, the interaction step may be composed of reaction and/or reorientation operators. The reaction operator formula_21 replaces the state of a node formula_22 with a new state formula_23 following a transition probability formula_24, which depends on the state of the neighboring lattice sites formula_25to simulate the influence of neighboring particles on the reactive process. The reaction operator does not conserve particle number, thus allowing to simulate birth and death of individuals. The reaction operator's transition probability is usually defined "ad hoc" form phenomenological observations. The reorientation operator formula_26 also replaces a state formula_27 with a new state formula_28 with probability formula_29. 
However, this operator conserves particle number and therefore only models changes in particle velocity by redistributing particles among velocity channels. The transition probability for this operator can be determined from statistical observations (by using the maximum caliber principle) or from known single-particle dynamics (using the discretized, steady-state angular probability distribution given by the Fokker-Planck equation associated to a Langevin equation describing the reorientation dynamics), and typically takes the form formula_30 where formula_31 is a normalization constant (also known as the partition function), formula_32 is an energy-like function which particles will likely minimize when changing their direction of motion, formula_33 is a free parameter inversely proportional to the randomness of particle reorientation (analogous to the inverse temperature in thermodynamics), and formula_34 is a Kronecker delta which ensures that particle number before formula_35 and after reorientation formula_36 is unchanged. The state resulting form applying the reaction and reorientation operator formula_37 is known as the post-interaction configuration and denoted by formula_38. Transport step. After the interaction step, the deterministic transport step is applied synchronously to all lattice sites. The transport step simulates the movement of agents according to their velocity, due to the self-propulsion of living organisms. During this step, the occupation numbers of post-interaction states will be defined as the new occupation states of the same channel of the neighboring lattice site in the direction of the velocity channel, i.e. formula_39. A new time step begins when both interaction and transport steps have occurred. Therefore, the dynamics of the BIO-LGCA can be summarized as the stochastic finite-difference microdynamical equation formula_40 Example interaction dynamics. The transition probability for the reaction and/or reorientation operator must be defined to appropriately simulate the modeled system. Some elementary interactions and the corresponding transition probabilities are listed below. Random walk. In the absence of any external or internal stimuli, cells may move randomly without any directional preference. In this case, the reorientation operator may be defined through a transition probabilityformula_41 where formula_42. Such transition probability allows any post-reorientation configuration formula_43 with the same number of particles as the pre-reorientation configuration formula_27, to be picked uniformly. Simple birth and death process. If organisms reproduce and die independently of other individuals (with the exception of the finite carrying capacity), then a simple birth/death process can be simulated with a transition probability given byformula_44 where formula_45, formula_46 are constant birth and death probabilities, respectively, formula_47 is the Kronecker delta which ensures only one birth/death event happens every time step, and formula_48 is the Heaviside function, which makes sure particle numbers are positive and bounded by the carrying capacity formula_49. Adhesive interactions. Cells may adhere to one another by cadherin molecules on the cell surface. Cadherin interactions allow cells to form aggregates. 
The formation of cell aggregates via adhesive biomolecules can be modeled by a reorientation operator with transition probabilities defined asformula_50 where formula_51 is a vector pointing in the direction of maximum cell density, defined as formula_52, where formula_53is the configuration of the lattice site formula_54 within the neighborhood formula_2, and formula_55 is the momentum of the post-reorientation configuration, defined as formula_56. This transition probability favors post-reorientation configurations with cells moving towards the cell density gradient. Mathematical analysis. Since an exact treatment of a stochastic agent-based model quickly becomes unfeasible due to high-order correlations between all agents, the general method of analyzing a BIO-LGCA model is to cast it into an approximate, deterministic finite difference equation (FDE) describing the mean dynamics of the population, then performing the mathematical analysis of this approximate model, and comparing the results to the original BIO-LGCA model. First, the expected value of the microdynamical equation formula_57 is obtainedformula_58 where formula_59 denotes the expected value, and formula_60 is the expected value of the formula_61-th channel occupation number of the lattice site at formula_62 at time step formula_63. However, the term on the right, formula_64 is highly nonlinear on the occupation numbers of both the lattice site formula_62 and the lattice sites within the interaction neighborhood formula_2, due to the form of the transition probability formula_65 and the statistics of particle placement within velocity channels (for example, arising from an exclusion principle imposed on channel occupations). This non-linearity would result in high-order correlations and moments among all channel occupations involved. Instead, a mean-field approximation is usually assumed, wherein all correlations and high order moments are neglected, such that direct particle-particle interactions are substituted by interactions with the respective expected values. In other words, if formula_66 are random variables, and formula_67 is a function, thenformula_68 under this approximation. Thus, we can simplify the equation toformula_69 where formula_70 is a nonlinear function of the expected lattice site configuration formula_71 and the expected neighborhood configuration formula_72 dependent on the transition probabilities and in-node particle statistics. From this nonlinear FDE, one may identify several homogeneous steady states, or constants formula_73 independent of formula_62 and formula_63 which are solutions to the FDE. To study the stability conditions of these steady states and the pattern formation potential of the model, a linear stability analysis can be performed. To do so, the nonlinear FDE is linearized asformula_74 where formula_75 denotes the homogeneous steady state formula_76, and a von Neumann neighborhood was assumed. In order to cast it into a more familiar finite difference equation with temporal increments only, a discrete Fourier transform can be applied on both sides of the equation. After applying the shift theorem and isolating the term with a temporal increment on the left, one obtains the lattice-Boltzmann equationformula_77 where formula_78 is the imaginary unit, formula_79 is the size of the lattice along one dimension, formula_80 is the Fourier wave number, and formula_81 denotes the discrete Fourier transform. 
In matrix notation, this equation is simplified to formula_82, where the matrix formula_83 is called the "Boltzmann propagator" and is defined as formula_84 The eigenvalues formula_85 of the Boltzmann propagator dictate the stability properties of the steady state: if formula_86 for some wave number formula_88, where formula_87 denotes the complex modulus, then perturbations with that wave number grow in time and the steady state is unstable; if the dominant wave number formula_91, defined by formula_90, satisfies formula_89, then this wave number sets the length scale of the emerging pattern; and if additionally formula_92, where formula_93 denotes the complex argument, the growing perturbations travel through the lattice, giving rise to moving patterns. Applications. Constructing a BIO-LGCA for the study of biological phenomena mainly involves defining appropriate transition probabilities for the interaction operator, though precise definitions of the state space (to consider several cellular phenotypes, for example), boundary conditions (for modeling phenomena in confined conditions), neighborhood (to match experimental interaction ranges quantitatively), and carrying capacity (to simulate crowding effects for given cell sizes) may be important for specific applications. While the distribution of the reorientation operator can be obtained through the aforementioned statistical and biophysical methods, the distribution of the reaction operators can be estimated from the statistics of "in vitro" experiments, for example. BIO-LGCA models have been used to study several cellular, biophysical and medical phenomena, including cancer invasion and cancer progression. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
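A minimal simulation makes the two-step update rule concrete. The Python sketch below is our own illustration, not a published implementation: a one-dimensional BIO-LGCA with two velocity channels and one rest channel, a random-walk reorientation step that redistributes the particles of each node uniformly over its channels, and the deterministic transport step with periodic boundaries. Particle number is conserved throughout.

import numpy as np

rng = np.random.default_rng(0)
L, steps = 50, 100

# Occupation numbers s[r, i] in {0, 1}: channels i = 0 (velocity +1), 1 (velocity -1), 2 (rest).
s = (rng.random((L, 3)) < 0.2).astype(int)

def reorientation(s):
    """Random-walk interaction: redistribute the particles of every node
    uniformly over its three channels (particle number is conserved)."""
    out = np.zeros_like(s)
    for r in range(L):
        n = s[r].sum()
        channels = rng.choice(3, size=n, replace=False)   # exclusion principle
        out[r, channels] = 1
    return out

def transport(s):
    """Move each particle to the neighbouring node in the direction of its
    velocity channel (periodic boundaries); rest particles stay put."""
    out = np.zeros_like(s)
    out[:, 0] = np.roll(s[:, 0], 1)    # velocity +1 moves one site to the right
    out[:, 1] = np.roll(s[:, 1], -1)   # velocity -1 moves one site to the left
    out[:, 2] = s[:, 2]
    return out

n0 = s.sum()
for _ in range(steps):
    s = transport(reorientation(s))
print(n0, s.sum())    # particle number before and after: identical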
[ { "math_id": 0, "text": "\\mathcal{L}" }, { "math_id": 1, "text": "\\mathcal{E}" }, { "math_id": 2, "text": "\\mathcal{N}" }, { "math_id": 3, "text": "\\mathcal{R}" }, { "math_id": 4, "text": "\\mathcal{L}\\subset\\mathbb{R}^d" }, { "math_id": 5, "text": "d" }, { "math_id": 6, "text": "\\mathbf{r}\\in\\mathcal{L}" }, { "math_id": 7, "text": "b=6" }, { "math_id": 8, "text": "\\mathbf{c}_i" }, { "math_id": 9, "text": "i\\in\\{1,2,\\ldots,b\\}" }, { "math_id": 10, "text": "b" }, { "math_id": 11, "text": "b=2" }, { "math_id": 12, "text": "\\mathbf{c}_i=\\left(\\cos\\frac{2\\pi i}{b},\\sin\\frac{2\\pi i}{b}\\right)" }, { "math_id": 13, "text": "a" }, { "math_id": 14, "text": "\\mathbf{c}_i=(0,0)" }, { "math_id": 15, "text": "i\\in\\{b+1,b+2,\\ldots,b+a\\}" }, { "math_id": 16, "text": "s_i" }, { "math_id": 17, "text": "s_i\\in\\mathcal{S}=\\{0,1\\}" }, { "math_id": 18, "text": "K=a+b" }, { "math_id": 19, "text": "\\mathbf{s}=\\left(s_1,s_2,\\ldots,s_{K}\\right)" }, { "math_id": 20, "text": "\\mathcal{E}=\\mathcal{S}^K" }, { "math_id": 21, "text": "\\mathcal{A}" }, { "math_id": 22, "text": " \\mathbf{s}" }, { "math_id": 23, "text": "\\mathbf{s}^{\\mathcal{A}}" }, { "math_id": 24, "text": "P\\left(\\left. \\mathbf{s}\\rightarrow \\mathbf{s}^{\\mathcal{A}}\\right| \\mathbf{s}_{\\mathcal{N}} \\right)" }, { "math_id": 25, "text": "\\mathbf{s}_{\\mathcal{N}}" }, { "math_id": 26, "text": "\\mathcal{O}" }, { "math_id": 27, "text": "\\mathbf{s}" }, { "math_id": 28, "text": "\\mathbf{s}^{\\mathcal{O}}" }, { "math_id": 29, "text": "P\\left(\\left. \\mathbf{s}\\rightarrow \\mathbf{s}^{\\mathcal{O}}\\right| \\mathbf{s}_{\\mathcal{N}} \\right)" }, { "math_id": 30, "text": "P\\left(\\left. \\mathbf{s}\\rightarrow \\mathbf{s}^{\\mathcal{O}}\\right| \\mathbf{s}_{\\mathcal{N}} \\right)\n=\\frac{1}{Z}e^{-\\beta H\\left(\\mathbf{s}_{\\mathcal{N}}\\right)}\n\\delta_{n\\left(\\mathbf{s}\\right),n\\left(\\mathbf{s}^{\\mathcal{O}}\\right)}" }, { "math_id": 31, "text": "Z" }, { "math_id": 32, "text": "H\\left(\\mathbf{s}_{\\mathcal{N}}\\right)" }, { "math_id": 33, "text": "\\beta" }, { "math_id": 34, "text": "\\delta_{n\\left(\\mathbf{s}\\right),n\\left(\\mathbf{s}^{\\mathcal{O}}\\right)}" }, { "math_id": 35, "text": "n\\left(\\mathbf{s}\\right)" }, { "math_id": 36, "text": "n\\left(\\mathbf{s}^{\\mathcal{O}}\\right)" }, { "math_id": 37, "text": "\\mathbf{s}^{\\mathcal{O}\\circ\\mathcal{A}}" }, { "math_id": 38, "text": "\\mathbf{s}^{\\mathcal{I}}:=\\mathbf{s}^{\\mathcal{O}\\circ\\mathcal{A}}" }, { "math_id": 39, "text": "s_i(\\mathbf{r}+\\mathbf{c}_i)=s_i^{\\mathcal{I}}(\\mathbf{r})" }, { "math_id": 40, "text": "s_i(\\mathbf{r}+\\mathbf{c}_i,k+1)=s_i^{\\mathcal{I}}(\\mathbf{r},k)" }, { "math_id": 41, "text": "P\\left(\\left.\\mathbf{s}\\rightarrow\\mathbf{s}^{\\mathcal{O}}\\right|\\mathbf{s}_{\\mathcal{N}}\\right)\n=\\frac{\\delta_{n(\\mathbf{s}),n\\left(\\mathbf{s}^{\\mathcal{O}}\\right)}}{Z}" }, { "math_id": 42, "text": "Z=\\sum_{\\mathbf{s}^{\\mathcal{O}}}\\delta_{n\\left(\\mathbf{s}\\right),n\\left(\\mathbf{s}^{\\mathcal{O}}\\right)}" }, { "math_id": 43, "text": "\\mathbf{s}^\\mathcal{O}" }, { "math_id": 44, "text": "P\\left(\\left.\\mathbf{s}\\rightarrow\\mathbf{s}^\\mathcal{A}\\right|\\mathbf{s}_{\\mathcal{N}}\\right)=\n\\left[r_b\\delta_{n\\left(\\mathbf{s}^{\\mathcal{A}}\\right),n\\left(\\mathbf{s}\\right)+1}\n+r_d\\delta_{n\\left(\\mathbf{s}^{\\mathcal{A}}\\right),n\\left(\\mathbf{s}\\right) - 1}\\right]\n\\Theta\\left[n\\left(\\mathbf{s}^{\\mathcal{A}}\\right)\\right]\n\\Theta\\left[n\\left( 
K-\\mathbf{s}^{\\mathcal{A}}\\right)\\right]" }, { "math_id": 45, "text": "r_b,r_d\\in[0,1]" }, { "math_id": 46, "text": "r_b+r_d\\leq 1" }, { "math_id": 47, "text": "\\delta_{i,j}" }, { "math_id": 48, "text": "\\Theta(x)" }, { "math_id": 49, "text": "K" }, { "math_id": 50, "text": "P\\left(\\left.\\mathbf{s}\\rightarrow\\mathbf{s}^{\\mathcal{O}}\\right|\\mathbf{s}_{\\mathcal{N}}\\right)\n=\\frac{1}{Z}\\exp\\left[\\beta\\mathbf{G}\\left(\\mathbf{s}_{\\mathcal{N}}\\right)\\cdot\\mathbf{J}\\left(\\mathbf{s}^{\\mathcal{O}}\\right)\\right]" }, { "math_id": 51, "text": "\\mathbf{G}\\left(\\mathbf{s}_{\\mathcal{N}}\\right)" }, { "math_id": 52, "text": "\\mathbf{G}\\left(\\mathbf{s}_{\\mathcal{N}}\\right)=\n\\sum_{\\mathbf{r}'\\in\\mathcal{N}}\\left(\\mathbf{r}'-\\mathbf{r}\\right)n\\left(\\mathbf{s}_{\\mathcal{N}}^{\\mathbf{r}'}\\right)" }, { "math_id": 53, "text": "\\mathbf{s}_{\\mathcal{N}}^{\\mathbf{r}'}" }, { "math_id": 54, "text": "\\mathbf{r}'" }, { "math_id": 55, "text": "\\mathbf{J}\\left(\\mathbf{s}^{\\mathcal{O}}\\right)" }, { "math_id": 56, "text": "\\mathbf{J}\\left(\\mathbf{s}^{\\mathcal{O}}\\right)=\\sum_{j=1}^bs_j^{\\mathcal{O}}\\mathbf{c}_j" }, { "math_id": 57, "text": "s_m(\\mathbf{r}+\\mathbf{c}_m,k+1)=s_m^{\\mathcal{I}}(\\mathbf{r},k)" }, { "math_id": 58, "text": "f_m\\left(\\mathbf{r}+\\mathbf{c}_m,k+1\\right)=\n\\left\\langle s_m^{\\mathcal{I}}\\left(\\mathbf{r},k\\right)\\right\\rangle" }, { "math_id": 59, "text": "\\langle\\cdot\\rangle" }, { "math_id": 60, "text": "f_m\\left(\\mathbf{r},k\\right):=\\left\\langle s_m\\left(\\mathbf{r},k\\right)\\right\\rangle" }, { "math_id": 61, "text": "m" }, { "math_id": 62, "text": "\\mathbf{r}" }, { "math_id": 63, "text": "k" }, { "math_id": 64, "text": "\\left\\langle s_m^{\\mathcal{I}}\\left(\\mathbf{r},k\\right)\\right\\rangle" }, { "math_id": 65, "text": "P\\left(\\left.\\mathbf{s}\\rightarrow\\mathbf{s}^{\\mathcal{I}}\\right|\\mathbf{s}_{\\mathcal{N}}\\right)" }, { "math_id": 66, "text": "X_1,X_2,\\ldots,X_n" }, { "math_id": 67, "text": "F:\\mathbb{R}^n\\mapsto\\mathbb{R}" }, { "math_id": 68, "text": "\\left\\langle F\\left(X_1,X_2,\\ldots,X_n\\right)\\right\\rangle\\approx\nF\\left(\\left\\langle X_1\\right\\rangle,\\left\\langle X_2\\right\\rangle,\\ldots,\\left\\langle X_n\\right\\rangle\\right)" }, { "math_id": 69, "text": "f_m\\left(\\mathbf{r}+\\mathbf{c}_m,k+1\\right)=\n\\mathcal{C}\\left(\\mathbf{f}\\left(\\mathbf{r},k\\right),\\mathbf{f}_{\\mathcal{N}}\\left(\\mathbf{r},k\\right)\\right)" }, { "math_id": 70, "text": "\\mathcal{C}\\left(\\mathbf{f}\\left(\\mathbf{r},k\\right),\\mathbf{f}_{\\mathcal{N}}\\left(\\mathbf{r},k\\right)\\right)" }, { "math_id": 71, "text": "\\mathbf{f}\\left(\\mathbf{r},k\\right)" }, { "math_id": 72, "text": "\\mathbf{f}_{\\mathcal{N}}\\left(\\mathbf{r},k\\right)" }, { "math_id": 73, "text": "\\bar{f}_m" }, { "math_id": 74, "text": "f_m\\left(\\mathbf{r}+\\mathbf{c}_m,k+1\\right)=\n\\sum_{j=1}^K\\left.\\frac{\\partial\\mathcal{C}}{\\partial f_j\\left(\\mathbf{r},k\\right)}\\right|_{\\mathrm{ss}}f_j\\left(\\mathbf{r},k\\right)+\n\\sum_{j=1}^K\\sum_{p=1}^K\\left.\\frac{\\partial\\mathcal{C}}{\\partial f_j\\left(\\mathbf{r}+\\mathbf{c}_p,k\\right)}\\right|_{\\mathrm{ss}}f_j\\left(\\mathbf{r}+\\mathbf{c}_p,k\\right)" }, { "math_id": 75, "text": "\\mathrm{ss}" }, { "math_id": 76, "text": "f_m\\left(\\mathbf{r},k\\right)=\\bar{f}_m,m\\in\\{1,\\ldots,K\\}" }, { "math_id": 77, "text": "\\hat{f}_m\\left(\\mathbf{q},k+1\\right)=e^{-\\frac{2 \\pi 
i}{L}\\mathbf{q}\\cdot\\mathbf{c}_m}\n\\left\\{\\sum_{j=1}^K\\left[\\left.\\frac{\\partial\\mathcal{C}}{\\partial f_j\\left(\\mathbf{r},k\\right)}\\right|_{\\mathrm{ss}}+\\sum_{p=1}^K\\left.\\frac{\\partial\\mathcal{C}}{\\partial f_j\\left(\\mathbf{r}+\\mathbf{c}_p,k\\right)}\\right|_{\\mathrm{ss}}e^{\\frac{2\\pi i}{L}\\mathbf{q}\\cdot\\mathbf{c}_p}\\right]\\hat{f}_j\\left(\\mathbf{q},k\\right)\\right\\}" }, { "math_id": 78, "text": "i=\\sqrt{-1}" }, { "math_id": 79, "text": "L" }, { "math_id": 80, "text": "\\mathbf{q}\\in\\{1,2,\\ldots,L\\}^d" }, { "math_id": 81, "text": "\\hat{\\cdot}=\\mathcal{F}\\{\\cdot\\}" }, { "math_id": 82, "text": "\\hat{\\mathbf{f}}\\left(\\mathbf{q},k+1\\right)=\\Gamma\\hat{\\mathbf{f}}\\left(\\mathbf{q},k\\right)" }, { "math_id": 83, "text": "\\Gamma" }, { "math_id": 84, "text": "\\Gamma_{m,j}=e^{-\\frac{2\\pi i}{L}\\mathbf{q}\\cdot\\mathbf{c}_m}\n\\left[\\left.\\frac{\\partial\\mathcal{C}}{\\partial f_j\\left(\\mathbf{r},k\\right)}\\right|_{\\mathrm{ss}}+\\sum_{p=1}^K\\left.\\frac{\\partial\\mathcal{C}}{\\partial f_j\\left(\\mathbf{r}+\\mathbf{c}_p,k\\right)}\\right|_{\\mathrm{ss}}e^{\\frac{2\\pi i}{L}\\mathbf{q}\\cdot \\mathbf{c}_p}\\right]." }, { "math_id": 85, "text": "\\lambda\\left(\\mathbf{q}\\right)" }, { "math_id": 86, "text": "\\left|\\lambda\\left(\\mathbf{q}\\right)\\right|>1" }, { "math_id": 87, "text": "|\\cdot|" }, { "math_id": 88, "text": "\\mathbf{q}" }, { "math_id": 89, "text": "\\left|\\lambda\\left(\\mathbf{q}_{\\mathrm{max}}\\right)\\right|>1" }, { "math_id": 90, "text": "\\left|\\lambda\\left(\\mathbf{q}_{\\mathrm{max}}\\right)\\right|\\geq\\left|\\lambda\\left(\\mathbf{q}\\right)\\right|\\forall\\mathbf{q}\\in\\{1,2,\\ldots,L\\}^d" }, { "math_id": 91, "text": "\\mathbf{q}_{\\mathrm{max}}" }, { "math_id": 92, "text": "\\mathrm{arg}\\left[\\lambda\\left(q\\right)\\right]\\neq 0" }, { "math_id": 93, "text": "\\mathrm{arg}(\\cdot)" } ]
https://en.wikipedia.org/wiki?curid=68283515
68283876
Edward George Effros
American mathematician (1935–2019) Edward George Effros (December 10, 1935, Queens, New York City – December 21, 2019, Portland, Oregon) was an American mathematician, specializing in operator algebras and representation theory. His research included "C*-algebras theory and operator algebras, descriptive set theory, Banach space theory, and quantum information." Biography. Edward Effros grew up in Great Neck, New York. He finished his undergraduate study in three years at Massachusetts Institute of Technology and received his Ph.D. from Harvard University in 1962. His thesis "On Representations of formula_0-algebras" was supervised by George Mackey. Effros was a postdoc at Columbia University and then became a faculty member at the University of Pennsylvania. Effros married Rita Brickman in 1967. Their two children, Rachel and Stephen, were born in Philadelphia. In 1979 the family relocated to Los Angeles, and in 1980 Edward Effros became a full professor at the University of California at Los Angeles (UCLA). Rita Brickman Effros received her Ph.D. in immunology from the University of Pennsylvania. Eventually, she became a professor of pathology and laboratory medicine at the David Geffen School of Medicine at UCLA. In 2013 Edward Effros retired from UCLA as professor emeritus. He was a Guggenheim Fellow for the academic year 1982–1983. In 1986 he was an invited speaker at the International Congress of Mathematicians in Berkeley, California. He was the author or coauthor of over 80 publications and supervised the doctoral dissertations of 16 students, including Patricia Clark Kenschaft. He was elected to the 2014 Class of Fellows of the American Mathematical Society. According to Masamichi Takesaki, his mathematical achievements can be divided into several areas. Edward's older brother, Robert Carlton Effros (born 1933), became a lawyer and member of the legal department of the International Monetary Fund. Edward's identical twin, Richard M. Effros, graduated from NYU School of Medicine and became a pulmonologist. Edward was married to Rita "née" Brickman for 52 years. Their daughter Rachel Marian Effros (born 1969) became a pediatrician. Their son Stephen David Effros (born 1972) became a senior project manager for Portland Public Schools in Portland, Oregon. In June 2019 Edward and Rita relocated to Portland, but Edward died 6 months later. Upon his death he was survived by his wife, daughter, son, and two granddaughters.
[ { "math_id": 0, "text": "C^*" } ]
https://en.wikipedia.org/wiki?curid=68283876
682847
Multivariate analysis of variance
Procedure for comparing multivariate sample means In statistics, multivariate analysis of variance (MANOVA) is a procedure for comparing multivariate sample means. As a multivariate procedure, it is used when there are two or more dependent variables, and is often followed by significance tests involving individual dependent variables separately. For example, the dependent variables may be k life satisfaction scores measured at sequential time points and p job satisfaction scores measured at sequential time points. In this case there are k+p dependent variables whose linear combinations are assumed to follow a multivariate normal distribution, with homogeneity of the variance-covariance matrices across groups, linear relationships among the dependent variables, no multicollinearity, and no outliers. Model. Assume formula_0 formula_1-dimensional observations, where the formula_2’th observation formula_3 is assigned to the group formula_4 and is distributed around the group center formula_5 with multivariate Gaussian noise: formula_6 where formula_7 is the covariance matrix. Then we formulate our null hypothesis as formula_8 Relationship with ANOVA. MANOVA is a generalized form of univariate analysis of variance (ANOVA), although, unlike univariate ANOVA, it uses the covariance between outcome variables in testing the statistical significance of the mean differences. Where sums of squares appear in univariate analysis of variance, in multivariate analysis of variance certain positive-definite matrices appear. The diagonal entries are the same kinds of sums of squares that appear in univariate ANOVA. The off-diagonal entries are corresponding sums of products. Under normality assumptions about error distributions, the counterpart of the sum of squares due to error has a Wishart distribution. Hypothesis Testing. First, define the following formula_9 matrices: formula_10, whose formula_2’th row is the observation formula_3; formula_11, whose formula_2’th row is the prediction given the group membership formula_12, that is, the mean over all observations in group formula_12: formula_13; and formula_14, whose formula_2’th row is the prediction given no group information, that is, the overall empirical mean formula_15. Then the matrix formula_16 is a generalization of the sum of squares explained by the group, and formula_17 is a generalization of the residual sum of squares. Note that alternatively one could also speak about covariances when the abovementioned matrices are scaled by 1/(n-1) since the subsequent test statistics do not change by multiplying formula_18 and formula_19 by the same non-zero constant. The most common statistics are summaries based on the roots (or eigenvalues) formula_20 of the matrix formula_21: Samuel Stanley Wilks' lambda, formula_22; the Pillai–Bartlett trace, formula_23; the Lawley–Hotelling trace, formula_24; and Roy's greatest root, formula_25. Discussion continues over the merits of each, although the greatest root leads only to a bound on significance which is not generally of practical interest. A further complication is that, except for Roy's greatest root, the distribution of these statistics under the null hypothesis is not straightforward and can only be approximated except in a few low-dimensional cases. An algorithm for the distribution of Roy's largest root under the null hypothesis was derived in while the distribution under the alternative is studied in. The best-known approximation for Wilks' lambda was derived by C. R. Rao. In the case of two groups, all the statistics are equivalent and the test reduces to Hotelling's T-square. Introducing covariates (MANCOVA). One can also test if there is a group effect after adjusting for covariates. For this, follow the procedure above but substitute formula_11 with the predictions of the general linear model, containing the group and the covariates, and substitute formula_14 with the predictions of the general linear model containing only the covariates (and an intercept).
Then formula_18 are the additional sum of squares explained by adding the grouping information and formula_19 is the residual sum of squares of the model containing the grouping and the covariates. Note that in case of unbalanced data, the order of adding the covariates matter. Correlation of dependent variables. MANOVA's power is affected by the correlations of the dependent variables and by the effect sizes associated with those variables. For example, when there are two groups and two dependent variables, MANOVA's power is lowest when the correlation equals the ratio of the smaller to the larger standardized effect size. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
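To make the hypothesis-testing recipe above concrete, here is a minimal Python/NumPy sketch (not part of the original article) that builds the model and residual sum-of-squares-and-products matrices from synthetic grouped data and evaluates the four classical statistics; the group means, group sizes and random seed are arbitrary illustrative assumptions.

```python
# Minimal sketch (synthetic data, arbitrary group means): computes the MANOVA
# sum-of-squares-and-products matrices and the four classical test statistics.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 3 groups of 40 observations, q = 2 dependent variables.
group_means = np.array([[0.0, 0.0], [0.5, 0.2], [1.0, 0.4]])   # assumed effects
groups = np.repeat(np.arange(3), 40)
Y = group_means[groups] + rng.normal(size=(120, 2))

# Row-wise predictions from group membership (Y_hat) and from no information (Y_bar).
Y_hat = np.vstack([Y[groups == g].mean(axis=0) for g in groups])
Y_bar = np.tile(Y.mean(axis=0), (len(Y), 1))

# Generalizations of the explained and residual sums of squares.
S_model = (Y_hat - Y_bar).T @ (Y_hat - Y_bar)
S_res = (Y - Y_hat).T @ (Y - Y_hat)

# The test statistics are built from the eigenvalues of A = S_model S_res^{-1}.
lam = np.linalg.eigvals(S_model @ np.linalg.inv(S_res)).real

print("Wilks' lambda:         ", np.prod(1.0 / (1.0 + lam)))
print("Pillai-Bartlett trace: ", np.sum(lam / (1.0 + lam)))
print("Lawley-Hotelling trace:", np.sum(lam))
print("Roy's greatest root:   ", lam.max())
```

In practice one would convert these statistics into p-values via the approximations mentioned above (for example Rao's approximation for Wilks' lambda), which established statistical packages implement.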
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "q" }, { "math_id": 2, "text": "i" }, { "math_id": 3, "text": "y_i" }, { "math_id": 4, "text": "g(i)\\in \\{1,\\dots,m\\}" }, { "math_id": 5, "text": "\\mu^{(g(i))}\\in \\mathbb R^q" }, { "math_id": 6, "text": "\ny_i = \\mu^{(g(i))} + \\varepsilon_i\\quad \\varepsilon_i \\overset{\\text{i.i.d.}}{\\sim} \\mathcal N_q (0, \\Sigma) \\quad \\text{ for } i=1,\\dots, n,\n" }, { "math_id": 7, "text": "\\Sigma" }, { "math_id": 8, "text": "H_0\\!:\\;\\mu^{(1)}=\\mu^{(2)}=\\dots =\\mu^{(m)}." }, { "math_id": 9, "text": "n\\times q" }, { "math_id": 10, "text": "Y" }, { "math_id": 11, "text": "\\hat Y" }, { "math_id": 12, "text": "g(i)" }, { "math_id": 13, "text": "\\frac{1}{\\text{size of group }g(i)}\\sum_{k: g(k)=g(i)}y_k" }, { "math_id": 14, "text": "\\bar Y" }, { "math_id": 15, "text": "\\frac{1}{n}\\sum_{k=1}^n y_k" }, { "math_id": 16, "text": "S_{\\text{model}} := (\\hat Y - \\bar Y)^T(\\hat Y - \\bar Y)" }, { "math_id": 17, "text": "S_{\\text{res}} := (Y - \\hat Y)^T(Y - \\hat Y)" }, { "math_id": 18, "text": "S_{\\text{model}}" }, { "math_id": 19, "text": "S_{\\text{res}}" }, { "math_id": 20, "text": "\\lambda_p" }, { "math_id": 21, "text": "A:= S_{\\text{model}}S_{\\text{res}}^{-1}" }, { "math_id": 22, "text": "\\Lambda_\\text{Wilks} = \\prod_{1,\\ldots,p}(1/(1 + \\lambda_{p})) = \\det(I + A)^{-1} = \\det(S_\\text{res})/\\det(S_\\text{res} + S_\\text{model})" }, { "math_id": 23, "text": "\\Lambda_\\text{Pillai} = \\sum_{1,\\ldots,p}(\\lambda_p/(1 + \\lambda_p)) = \\operatorname{tr}(A(I + A)^{-1})" }, { "math_id": 24, "text": "\\Lambda_\\text{LH} = \\sum_{1,\\ldots,p}(\\lambda_{p}) = \\operatorname{tr}(A)" }, { "math_id": 25, "text": "\\Lambda_\\text{Roy} = \\max_p(\\lambda_p) " } ]
https://en.wikipedia.org/wiki?curid=682847
682853
Sorgenfrey plane
Frequently-cited counterexample in topology In topology, the Sorgenfrey plane is a frequently-cited counterexample to many otherwise plausible-sounding conjectures. It consists of the product of two copies of the Sorgenfrey line, which is the real line formula_0 under the half-open interval topology. The Sorgenfrey line and plane are named for the American mathematician Robert Sorgenfrey. A basis for the Sorgenfrey plane, denoted formula_1 from now on, is therefore the set of rectangles that include the west edge, southwest corner, and south edge, and omit the southeast corner, east edge, northeast corner, north edge, and northwest corner. Open sets in formula_1 are unions of such rectangles. formula_1 is an example of a space that is a product of Lindelöf spaces that is not itself a Lindelöf space. The so-called anti-diagonal formula_2 is an uncountable discrete subset of this space, and this is a non-separable subset of the separable space formula_1. It shows that separability is not necessarily inherited by closed subspaces. Note that formula_3 and formula_4 are closed sets; it can be proved that they cannot be separated by open sets, showing that formula_1 is not normal. Thus it serves as a counterexample to the notion that the product of normal spaces is normal; in fact, it shows that even the finite product of perfectly normal spaces need not be normal.
[ { "math_id": 0, "text": "\\mathbb{R}" }, { "math_id": 1, "text": "\\mathbb{S}" }, { "math_id": 2, "text": "\\Delta = \\{(x, -x) \\mid x \\in \\mathbb{R}\\}" }, { "math_id": 3, "text": "K = \\{(x, -x) \\mid x \\in \\mathbb{Q}\\}" }, { "math_id": 4, "text": "\\Delta \\setminus K" } ]
https://en.wikipedia.org/wiki?curid=682853
682937
Quantum phase transition
Transition between different phases of matter at zero temperature In physics, a quantum phase transition (QPT) is a phase transition between different quantum phases (phases of matter at zero temperature). Contrary to classical phase transitions, quantum phase transitions can only be accessed by varying a physical parameter—such as magnetic field or pressure—at absolute zero temperature. The transition describes an abrupt change in the ground state of a many-body system due to its quantum fluctuations. Such a quantum phase transition can be a second-order phase transition. Quantum phase transitions can also be represented by the topological fermion condensation quantum phase transition, see e.g. strongly correlated quantum spin liquid. In case of three dimensional Fermi liquid, this transition transforms the Fermi surface into a Fermi volume. Such a transition can be a first-order phase transition, for it transforms two dimensional structure (Fermi surface) into three dimensional. As a result, the topological charge of Fermi liquid changes abruptly, since it takes only one of a discrete set of values. Classical description. To understand quantum phase transitions, it is useful to contrast them to classical phase transitions (CPT) (also called thermal phase transitions). A CPT describes a cusp in the thermodynamic properties of a system. It signals a reorganization of the particles; A typical example is the freezing transition of water describing the transition between liquid and solid. The classical phase transitions are driven by a competition between the energy of a system and the entropy of its thermal fluctuations. A classical system does not have entropy at zero temperature and therefore no phase transition can occur. Their order is determined by the first discontinuous derivative of a thermodynamic potential. A phase transition from water to ice, for example, involves latent heat (a discontinuity of the internal energy formula_0) and is of first order. A phase transition from a ferromagnet to a paramagnet is continuous and is of second order. (See phase transition for Ehrenfest's classification of phase transitions by the derivative of free energy which is discontinuous at the transition). These continuous transitions from an ordered to a disordered phase are described by an order parameter, which is zero in the disordered and nonzero in the ordered phase. For the aforementioned ferromagnetic transition, the order parameter would represent the total magnetization of the system. Although the thermodynamic average of the order parameter is zero in the disordered state, its fluctuations can be nonzero and become long-ranged in the vicinity of the critical point, where their typical length scale "ξ" (correlation length) and typical fluctuation decay time scale "τc" (correlation time) diverge: formula_1 formula_2 where formula_3 is defined as the relative deviation from the critical temperature "Tc". We call "ν" the (correlation length) "critical exponent" and "z" the "dynamical critical exponent". Critical behavior of nonzero temperature phase transitions is fully described by classical thermodynamics; quantum mechanics does not play any role even if the actual phases require a quantum mechanical description (e.g. superconductivity). Quantum description. Talking about "quantum" phase transitions means talking about transitions at "T" = 0: by tuning a non-temperature parameter like pressure, chemical composition or magnetic field, one could suppress e.g. 
some transition temperature like the Curie or Néel temperature to 0 K. As a system in equilibrium at zero temperature is always in its lowest-energy state (or an equally weighted superposition if the lowest-energy is degenerate), a QPT cannot be explained by thermal fluctuations. Instead, quantum fluctuations, arising from Heisenberg's uncertainty principle, drive the loss of order characteristic of a QPT. The QPT occurs at the quantum critical point (QCP), where quantum fluctuations driving the transition diverge and become scale invariant in space and time. Although absolute zero is not physically realizable, characteristics of the transition can be detected in the system's low-temperature behavior near the critical point. At nonzero temperatures, classical fluctuations with an energy scale of "kBT" compete with the quantum fluctuations of energy scale "ħω." Here "ω" is the characteristic frequency of the quantum oscillation and is inversely proportional to the correlation time. Quantum fluctuations dominate the system's behavior in the region where "ħω" &gt; "kBT", known as the quantum critical region. This quantum critical behavior manifests itself in unconventional and unexpected physical behavior like novel non Fermi liquid phases. From a theoretical point of view, a phase diagram like the one shown on the right is expected: the QPT separates an ordered from a disordered phase (often, the low temperature disordered phase is referred to as 'quantum' disordered). At high enough temperatures, the system is disordered and purely classical. Around the classical phase transition, the system is governed by classical thermal fluctuations (light blue area). This region becomes narrower with decreasing energies and converges towards the quantum critical point (QCP). Experimentally, the 'quantum critical' phase, which is still governed by quantum fluctuations, is the most interesting one. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
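As a small numerical illustration of the classical scaling relations quoted above (a sketch with assumed exponent values, not data for any particular material), the following evaluates how the correlation length and correlation time grow as the temperature approaches the critical temperature:

```python
# Illustrative sketch: classical critical scaling xi ~ |eps|^(-nu), tau_c ~ xi^z.
# The critical temperature and exponent values below are arbitrary assumptions.
import numpy as np

T_c = 100.0          # assumed critical temperature (arbitrary units)
nu, z = 0.63, 1.0    # assumed critical exponents (a 3D-Ising-like nu)

T = np.array([110.0, 105.0, 102.0, 101.0, 100.5, 100.1])   # approach T_c from above
eps = (T - T_c) / T_c                 # reduced temperature
xi = np.abs(eps) ** (-nu)             # correlation length, up to a prefactor
tau_c = xi ** z                       # correlation time, up to a prefactor

for Ti, e, x, t in zip(T, eps, xi, tau_c):
    print(f"T = {Ti:6.1f}  eps = {e:6.3f}  xi ~ {x:8.2f}  tau_c ~ {t:8.2f}")
```

Both quantities diverge as the reduced temperature goes to zero, which is the behavior the critical exponents are defined to capture.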
[ { "math_id": 0, "text": "U" }, { "math_id": 1, "text": " \\xi \\propto |\\epsilon |^{-\\nu}\\,\\,= \\left (\\frac{|T-T_c|}{T_c}\\right )^{-\\nu} " }, { "math_id": 2, "text": " \\tau_c \\propto \\xi^{z} \\propto |\\epsilon |^{-\\nu z}, " }, { "math_id": 3, "text": "\\epsilon = \\frac{T-T_c}{T_c} " } ]
https://en.wikipedia.org/wiki?curid=682937
68305758
Urey–Bigeleisen–Mayer equation
Isotope geochemistry modelling method In stable isotope geochemistry, the Urey–Bigeleisen–Mayer equation, also known as the Bigeleisen–Mayer equation or the Urey model, is a model describing the approximate equilibrium isotope fractionation in an isotope exchange reaction. While the equation itself can be written in numerous forms, it is generally presented as a ratio of partition functions of the isotopic molecules involved in a given reaction. The Urey–Bigeleisen–Mayer equation is widely applied in the fields of quantum chemistry and geochemistry and is often modified or paired with other quantum chemical modelling methods (such as density functional theory) to improve accuracy and precision and reduce the computational cost of calculations. The equation was first introduced by Harold Urey and, independently, by Jacob Bigeleisen and Maria Goeppert Mayer in 1947. Description. Since its original descriptions, the Urey–Bigeleisen–Mayer equation has taken many forms. Given an isotopic exchange reaction formula_0, such that formula_1 designates a molecule containing an isotope of interest, the equation can be expressed by relating the equilibrium constant, formula_2, to the product of partition function ratios, namely the translational, rotational, vibrational, and sometimes electronic partition functions. Thus the equation can be written as: formula_3 where formula_4 and formula_5 is each respective partition function of molecule or atom formula_6. It is typical to approximate the rotational partition function ratio as quantized rotational energies in a rigid rotor system. The Urey model also treats molecular vibrations as simplified harmonic oscillators and follows the Born–Oppenheimer approximation. Isotope partitioning behavior is often reported as a reduced partition function ratio, a simplified form of the Bigeleisen–Mayer equation notated mathematically as formula_7 or formula_8. The reduced partition function ratio can be derived from power series expansion of the function and allows the partition functions to be expressed in terms of frequency. It can be used to relate molecular vibrations and intermolecular forces to equilibrium isotope effects. As the model is an approximation, many applications append corrections for improved accuracy. Some common, significant modifications to the equation include accounting for pressure effects, nuclear geometry, and corrections for anharmonicity and quantum mechanical effects. For example, hydrogen isotope exchange reactions have been shown to disagree with the requisite assumptions for the model but correction techniques using path integral methods have been suggested. History of discovery. One aim of the Manhattan Project was increasing the availability of concentrated radioactive and stable isotopes, in particular 14C, 35S, 32P, and deuterium for heavy water. Harold Urey, Nobel laureate physical chemist known for his discovery of deuterium, became its head of isotope separation research while a professor at Columbia University. In 1945, he joined The Institute for Nuclear Studies at the University of Chicago, where he continued to work with chemist Jacob Bigeleisen and physicist Maria Mayer, both also veterans of isotopic research in the Manhattan Project. In 1946, Urey delivered the Liversidge lecture at the then-Royal Institute of Chemistry, where he outlined his proposed model of stable isotope fractionation. 
Bigeleisen and Mayer had been working on similar work since at least 1944 and, in 1947, published their model independently from Urey. Their calculations were mathematically equivalent to a 1943 derivation of the reduced partition function by German physicist Ludwig Waldmann." Applications. Initially used to approximate chemical reaction rates, models of isotope fractionation are used throughout the physical sciences. In chemistry, the Urey–Bigeleisen–Mayer equation has been used to predict equilibrium isotope effects and interpret the distributions of isotopes and isotopologues within systems, especially as deviations from their natural abundance. The model is also used to explain isotopic shifts in spectroscopy, such as those from nuclear field effects or mass independent effects. In biochemistry, it is used to model enzymatic kinetic isotope effects. Simulation testing in computational systems biology often uses the Bigeleisen–Mayer model as a baseline in the development of more complex models of biological systems. Isotope fractionation modeling is a critical component of isotope geochemistry and can be used to reconstruct past Earth environments as well as examine surface processes. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
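The harmonic-oscillator treatment described above can be evaluated with a short script. The sketch below assumes the standard rigid-rotor/harmonic-oscillator expression for the reduced partition function ratio and uses placeholder vibrational frequencies rather than measured values for any real molecule:

```python
# Illustrative sketch: the harmonic (rigid-rotor/harmonic-oscillator) reduced
# partition function ratio (s/s')f of a heavy relative to a light isotopologue.
# The vibrational frequencies below are placeholder values, not measured data.
import numpy as np

H = 6.62607015e-34      # Planck constant, J s
KB = 1.380649e-23       # Boltzmann constant, J/K
C = 2.99792458e10       # speed of light, cm/s

def reduced_partition_function_ratio(freqs_light_cm, freqs_heavy_cm, temperature_k):
    """(s/s')f from harmonic frequencies (cm^-1) of the light and heavy isotopologues."""
    u = H * C * np.asarray(freqs_light_cm) / (KB * temperature_k)
    up = H * C * np.asarray(freqs_heavy_cm) / (KB * temperature_k)
    terms = ((up / u)
             * (np.exp(-up / 2) / (1 - np.exp(-up)))
             * ((1 - np.exp(-u)) / np.exp(-u / 2)))
    return np.prod(terms)

light = [1600.0, 3650.0, 3750.0]   # placeholder modes of the light isotopologue
heavy = [1590.0, 3640.0, 3740.0]   # the heavy substitution lowers each mode
for t in (273.15, 298.15, 373.15):
    f = reduced_partition_function_ratio(light, heavy, t)
    print(f"T = {t:6.2f} K   (s/s')f = {f:.5f}   1000 ln f = {1000 * np.log(f):.3f}")
```

Differences in 1000 ln f between two substances approximate their equilibrium isotope fractionation in per mil, which is how such results are commonly reported.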
[ { "math_id": 0, "text": "A+B^*=A^*+B" }, { "math_id": 1, "text": "^*" }, { "math_id": 2, "text": "K_{{eq}}" }, { "math_id": 3, "text": "K_{eq} = \\frac{[A^*][B]}{[A][B^*]}" }, { "math_id": 4, "text": "[A]=\\prod^n Q_{n,A}" }, { "math_id": 5, "text": "Q_n" }, { "math_id": 6, "text": "A" }, { "math_id": 7, "text": "\\frac{s}{s'}f" }, { "math_id": 8, "text": "(\\frac{Q^*}{Q})_r" } ]
https://en.wikipedia.org/wiki?curid=68305758
683109
Triality
Relationship between certain vector spaces In mathematics, triality is a relationship among three vector spaces, analogous to the duality relation between dual vector spaces. Most commonly, it describes those special features of the Dynkin diagram D4 and the associated Lie group Spin(8), the double cover of 8-dimensional rotation group SO(8), arising because the group has an outer automorphism of order three. There is a geometrical version of triality, analogous to duality in projective geometry. Of all simple Lie groups, Spin(8) has the most symmetrical Dynkin diagram, D4. The diagram has four nodes with one node located at the center, and the other three attached symmetrically. The symmetry group of the diagram is the symmetric group "S"3 which acts by permuting the three legs. This gives rise to an "S"3 group of outer automorphisms of Spin(8). This automorphism group permutes the three 8-dimensional irreducible representations of Spin(8); these being the "vector" representation and two chiral "spin" representations. These automorphisms do not project to automorphisms of SO(8). The vector representation—the natural action of SO(8) (hence Spin(8)) on "F"8—consists over the real numbers of Euclidean 8-vectors and is generally known as the "defining module", while the chiral spin representations are also known as "half-spin representations", and all three of these are fundamental representations. No other connected Dynkin diagram has an automorphism group of order greater than 2; for other D"n" (corresponding to other even Spin groups, Spin(2"n")), there is still the automorphism corresponding to switching the two half-spin representations, but these are not isomorphic to the vector representation. Roughly speaking, symmetries of the Dynkin diagram lead to automorphisms of the Tits building associated with the group. For special linear groups, one obtains projective duality. For Spin(8), one finds a curious phenomenon involving 1-, 2-, and 4-dimensional subspaces of 8-dimensional space, historically known as "geometric triality". The exceptional 3-fold symmetry of the D4 diagram also gives rise to the Steinberg group 3D4. General formulation. A duality between two vector spaces over a field F is a non-degenerate bilinear form formula_0 i.e., for each non-zero vector v in one of the two vector spaces, the pairing with v is a non-zero linear functional on the other. Similarly, a triality between three vector spaces over a field F is a non-degenerate trilinear form formula_1 i.e., each non-zero vector in one of the three vector spaces induces a duality between the other two. By choosing vectors "e""i" in each "V""i" on which the trilinear form evaluates to 1, we find that the three vector spaces are all isomorphic to each other, and to their duals. Denoting this common vector space by V, the triality may be re-expressed as a bilinear multiplication formula_2 where each "e""i" corresponds to the identity element in V. The non-degeneracy condition now implies that V is a composition algebra. It follows that V has dimension 1, 2, 4 or 8. If further "F" = R and the form used to identify V with its dual is positively definite, then V is a Euclidean Hurwitz algebra, and is therefore isomorphic to R, C, H or O. Conversely, composition algebras immediately give rise to trialities by taking each "V""i" equal to the algebra, and contracting the multiplication with the inner product on the algebra to make a trilinear form. 
An alternative construction of trialities uses spinors in dimensions 1, 2, 4 and 8. The eight-dimensional case corresponds to the triality property of Spin(8).
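The claim in the general formulation that a composition algebra yields a non-degenerate trilinear form can be checked numerically in a small case. The sketch below (an illustration, not taken from the source) uses the quaternions, the 4-dimensional composition algebra, and verifies that contracting any non-zero vector into any one slot of t(x, y, z) = <xy, z> leaves an invertible bilinear form; the octonions would give the 8-dimensional case relevant to Spin(8).

```python
# Illustrative sketch: non-degeneracy of the trilinear form t(x, y, z) = <xy, z>
# built from quaternion multiplication and the Euclidean inner product.
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions given as coefficient arrays (1, i, j, k)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# Trilinear form tensor T[i, j, k] = <e_i e_j, e_k> in the standard basis.
basis = np.eye(4)
T = np.array([[quat_mul(basis[i], basis[j]) for j in range(4)] for i in range(4)])

rng = np.random.default_rng(1)
for trial in range(5):
    v = rng.normal(size=4)  # a random non-zero vector
    # Contract v into each of the three slots; non-degeneracy of the triality
    # means every resulting 4x4 bilinear form is invertible.
    slot_matrices = [
        np.einsum('i,ijk->jk', v, T),   # fix the first argument
        np.einsum('j,ijk->ik', v, T),   # fix the second argument
        np.einsum('k,ijk->ij', v, T),   # fix the third argument
    ]
    assert all(abs(np.linalg.det(M)) > 1e-9 for M in slot_matrices)
print("t(x, y, z) = <xy, z> is non-degenerate in each argument (quaternion case).")
```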
[ { "math_id": 0, "text": " V_1\\times V_2\\to F," }, { "math_id": 1, "text": " V_1\\times V_2\\times V_3\\to F," }, { "math_id": 2, "text": " V \\times V \\to V" } ]
https://en.wikipedia.org/wiki?curid=683109
683116
SO(8)
Rotation group in 8-dimensional Euclidean space In mathematics, SO(8) is the special orthogonal group acting on eight-dimensional Euclidean space. It could be either a real or complex simple Lie group of rank 4 and dimension 28. Spin(8). Like all special orthogonal groups of formula_0, SO(8) is not simply connected, having a fundamental group isomorphic to Z2. The universal cover of SO(8) is the spin group Spin(8). Center. The center of SO(8) is Z2, the diagonal matrices {±I} (as for all SO(2"n") with 2"n" ≥ 4), while the center of Spin(8) is Z2×Z2 (as for all Spin(4"n"), 4"n" ≥ 4). Triality. SO(8) is unique among the simple Lie groups in that its Dynkin diagram, (D4 under the Dynkin classification), possesses a three-fold symmetry. This gives rise to peculiar feature of Spin(8) known as triality. Related to this is the fact that the two spinor representations, as well as the fundamental vector representation, of Spin(8) are all eight-dimensional (for all other spin groups the spinor representation is either smaller or larger than the vector representation). The triality automorphism of Spin(8) lives in the outer automorphism group of Spin(8) which is isomorphic to the symmetric group S3 that permutes these three representations. The automorphism group acts on the center Z2 x Z2 (which also has automorphism group isomorphic to "S"3 which may also be considered as the general linear group over the finite field with two elements, "S"3 ≅GL(2,2)). When one quotients Spin(8) by one central Z2, breaking this symmetry and obtaining SO(8), the remaining outer automorphism group is only Z2. The triality symmetry acts again on the further quotient SO(8)/Z2. Sometimes Spin(8) appears naturally in an "enlarged" form, as the automorphism group of Spin(8), which breaks up as a semidirect product: Aut(Spin(8)) ≅ PSO (8) ⋊ "S"3. Unit octonions. Elements of SO(8) can be described with unit octonions, analogously to how elements of SO(2) can be described with unit complex numbers and elements of SO(4) can be described with unit quaternions. However the relationship is more complicated, partly due to the non-associativity of the octonions. A general element in SO(8) can be described as the product of 7 left-multiplications, 7 right-multiplications and also 7 bimultiplications by unit octonions (a bimultiplication being the composition of a left-multiplication and a right-multiplication by the same octonion and is unambiguously defined due to octonions obeying the Moufang identities). It can be shown that an element of SO(8) can be constructed with bimultiplications, by first showing that pairs of reflections through the origin in 8-dimensional space correspond to pairs of bimultiplications by unit octonions. The triality automorphism of Spin(8) described below provides similar constructions with left multiplications and right multiplications. Octonions and triality. If formula_1 and formula_2, it can be shown that this is equivalent to formula_3, meaning that formula_4 without ambiguity. A triple of maps formula_5 that preserve this identity, so that formula_6 is called an isotopy. If the three maps of an isotopy are in formula_7, the isotopy is called an orthogonal isotopy. If formula_8, then following the above formula_9 can be described as the product of bimultiplications of unit octonions, say formula_10. Let formula_11 be the corresponding products of left and right multiplications by the conjugates (i.e., the multiplicative inverses) of the same unit octonions, so formula_12, formula_13. 
A simple calculation shows that formula_5 is an isotopy. As a result of the non-associativity of the octonions, the only other orthogonal isotopy for formula_9 is formula_14. As the group of orthogonal isotopies is a 2-to-1 cover of formula_15, it must in fact be formula_16. Multiplicative inverses of octonions are two-sided, which means that formula_4 is equivalent to formula_17. This means that a given isotopy formula_5 can be permuted cyclically to give two further isotopies formula_18 and formula_19. This produces an order 3 outer automorphism of formula_16. This "triality" automorphism is exceptional among spin groups. There is no triality automorphism of formula_15, as for a given formula_9 the corresponding maps formula_20 are only uniquely determined up to sign. Root system. Its root system consists of the 24 vectors obtained by choosing the signs independently in formula_21 formula_22 formula_23 formula_24 formula_25 formula_26 Weyl group. Its Weyl/Coxeter group has 4! × 8 = 192 elements. Cartan matrix. formula_27 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
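The order of the Weyl group and the Cartan matrix formula_27 quoted above can be reproduced with a short computation. The following sketch (illustrative, not part of the article) closes the group generated by the four simple reflections of D4 and rebuilds the Cartan matrix, with the simple roots ordered so that the central node of the Dynkin diagram comes first:

```python
# Small numerical sketch: enumerates the Weyl group of D4 = so(8) by closing
# the group generated by its four simple reflections, confirming the order
# 4! x 8 = 192, and rebuilds the Cartan matrix A_ij = 2 (a_i, a_j) / (a_j, a_j).
import numpy as np

# Simple roots of D4, ordered so the central node of the diagram comes first:
# e2-e3 (center), then e1-e2, e3-e4, e3+e4.
simple_roots = np.array([
    [0, 1, -1, 0],
    [1, -1, 0, 0],
    [0, 0, 1, -1],
    [0, 0, 1, 1],
], dtype=float)

def reflection(alpha):
    """Matrix of the reflection through the hyperplane orthogonal to alpha."""
    alpha = np.asarray(alpha, dtype=float)
    return np.eye(4) - 2.0 * np.outer(alpha, alpha) / alpha.dot(alpha)

generators = [reflection(a) for a in simple_roots]

# Breadth-first closure of the generated matrix group (entries stay in {0, +-1}).
seen = {tuple(np.round(np.eye(4).ravel(), 6))}
frontier = [np.eye(4)]
while frontier:
    nxt = []
    for g in frontier:
        for s in generators:
            h = s @ g
            key = tuple(np.round(h.ravel(), 6))
            if key not in seen:
                seen.add(key)
                nxt.append(h)
    frontier = nxt
print("Weyl group order:", len(seen))   # expected: 192 = 4! * 8

cartan = np.array([[2 * a.dot(b) / b.dot(b) for b in simple_roots] for a in simple_roots])
print(cartan.astype(int))               # matches the Cartan matrix shown above
```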
[ { "math_id": 0, "text": "n > 2" }, { "math_id": 1, "text": "x,y,z\\in\\mathbb{O}" }, { "math_id": 2, "text": "(xy)z=1" }, { "math_id": 3, "text": "x(yz)=1" }, { "math_id": 4, "text": "xyz=1" }, { "math_id": 5, "text": "(\\alpha,\\beta,\\gamma)" }, { "math_id": 6, "text": "x^\\alpha y^\\beta z^\\gamma=1" }, { "math_id": 7, "text": "\\operatorname{SO(8)}" }, { "math_id": 8, "text": "\\gamma\\in \\operatorname{SO(8)}" }, { "math_id": 9, "text": "\\gamma" }, { "math_id": 10, "text": "\\gamma=B_{u_1}...B_{u_n}" }, { "math_id": 11, "text": "\\alpha,\\beta \\in \\operatorname{SO(8)}" }, { "math_id": 12, "text": "\\alpha=L_{\\overline{u_1}}...L_{\\overline{u_n}}" }, { "math_id": 13, "text": "\\beta=R_{\\overline{u_1}}...R_{\\overline{u_n}}" }, { "math_id": 14, "text": "(-\\alpha,-\\beta,\\gamma)" }, { "math_id": 15, "text": "\\operatorname{SO}(8)" }, { "math_id": 16, "text": "\\operatorname{Spin}(8)" }, { "math_id": 17, "text": "yzx=1" }, { "math_id": 18, "text": "(\\beta,\\gamma,\\alpha)" }, { "math_id": 19, "text": "(\\gamma,\\alpha,\\beta)" }, { "math_id": 20, "text": "\\alpha,\\beta" }, { "math_id": 21, "text": "(\\pm 1,\\pm 1,0,0)" }, { "math_id": 22, "text": "(\\pm 1,0,\\pm 1,0)" }, { "math_id": 23, "text": "(\\pm 1,0,0,\\pm 1)" }, { "math_id": 24, "text": "(0,\\pm 1,\\pm 1,0)" }, { "math_id": 25, "text": "(0,\\pm 1,0,\\pm 1)" }, { "math_id": 26, "text": "(0,0,\\pm 1,\\pm 1)" }, { "math_id": 27, "text": "\n\\begin{pmatrix}\n2 & -1 & -1 & -1\\\\\n-1 & 2 & 0 & 0\\\\\n-1 & 0 & 2 & 0\\\\\n-1 & 0 & 0 & 2\n\\end{pmatrix}\n" } ]
https://en.wikipedia.org/wiki?curid=683116
6831289
Swinging Atwood's machine
Variation of Atwood's machine incorporating a pendulum The swinging Atwood's machine (SAM) is a mechanism that resembles a simple Atwood's machine except that one of the masses is allowed to swing in a two-dimensional plane, producing a dynamical system that is chaotic for some system parameters and initial conditions. Specifically, it comprises two masses (the pendulum, mass m and counterweight, mass M) connected by an inextensible, massless string suspended on two frictionless pulleys of zero radius such that the pendulum can swing freely around its pulley without colliding with the counterweight. The conventional Atwood's machine allows only "runaway" solutions ("i.e." either the pendulum or counterweight eventually collides with its pulley), except for formula_0. However, the swinging Atwood's machine with formula_1 has a large parameter space of conditions that lead to a variety of motions that can be classified as terminating or non-terminating, periodic, quasiperiodic or chaotic, bounded or unbounded, singular or non-singular due to the pendulum's reactive centrifugal force counteracting the counterweight's weight. Research on the SAM started as part of a 1982 senior thesis entitled "Smiles and Teardrops" (referring to the shape of some trajectories of the system) by Nicholas Tufillaro at Reed College, directed by David J. Griffiths. Equations of motion. The swinging Atwood's machine is a system with two degrees of freedom. We may derive its equations of motion using either Hamiltonian mechanics or Lagrangian mechanics. Let the swinging mass be formula_2 and the non-swinging mass be formula_3. The kinetic energy of the system, formula_4, is: formula_5 where formula_6 is the distance of the swinging mass to its pivot, and formula_7 is the angle of the swinging mass relative to pointing straight downwards. The potential energy formula_8 is solely due to the acceleration due to gravity: formula_9 We may then write down the Lagrangian, formula_10, and the Hamiltonian, formula_11 of the system: formula_12 We can then express the Hamiltonian in terms of the canonical momenta, formula_13, formula_14: formula_15 Lagrange analysis can be applied to obtain two second-order coupled ordinary differential equations in formula_6 and formula_7. First, the formula_7 equation: formula_16 And the formula_6 equation: formula_17 We simplify the equations by defining the mass ratio formula_18. The above then becomes: formula_19 Hamiltonian analysis may also be applied to determine four first order ODEs in terms of formula_6, formula_7 and their corresponding canonical momenta formula_13 and formula_14: formula_20 Notice that in both of these derivations, if one sets formula_7 and angular velocity formula_21 to zero, the resulting special case is the regular non-swinging Atwood machine: formula_22 The swinging Atwood's machine has a four-dimensional phase space defined by formula_6, formula_7 and their corresponding canonical momenta formula_13 and formula_14. However, due to energy conservation, the phase space is constrained to three dimensions. System with massive pulleys. If the pulleys in the system are taken to have moment of inertia formula_23 and radius formula_24, the Hamiltonian of the SAM is then: formula_25 Where Mt is the effective total mass of the system, formula_26 This reduces to the version above when formula_24 and formula_23 become zero. The equations of motion are now: formula_27 where formula_28. Integrability. Hamiltonian systems can be classified as integrable and nonintegrable. 
SAM is integrable when the mass ratio formula_29. The system also looks pretty regular for formula_30, but the formula_31 case is the only known integrable mass ratio. It has been shown that the system is not integrable for formula_32. For many other values of the mass ratio (and initial conditions) SAM displays chaotic motion. Numerical studies indicate that when the orbit is singular (initial conditions: formula_33), the pendulum executes a single symmetrical loop and returns to the origin, regardless of the value of formula_34. When formula_34 is small (near vertical), the trajectory describes a "teardrop", when it is large, it describes a "heart". These trajectories can be exactly solved algebraically, which is unusual for a system with a non-linear Hamiltonian. Trajectories. The swinging mass of the swinging Atwood's machine undergoes interesting trajectories or orbits when subject to different initial conditions, and for different mass ratios. These include periodic orbits and collision orbits. Nonsingular orbits. For certain conditions, system exhibits complex harmonic motion. The orbit is called nonsingular if the swinging mass does not touch the pulley. Periodic orbits. When the different harmonic components in the system are in phase, the resulting trajectory is simple and periodic, such as the "smile" trajectory, which resembles that of an ordinary pendulum, and various loops. In general a periodic orbit exists when the following is satisfied: formula_35 The simplest case of periodic orbits is the "smile" orbit, which Tufillaro termed Type A orbits in his 1984 paper. Singular orbits. The motion is singular if at some point, the swinging mass passes through the origin. Since the system is invariant under time reversal and translation, it is equivalent to say that the pendulum starts at the origin and is fired outwards: formula_36 The region close to the pivot is singular, since formula_6 is close to zero and the equations of motion require dividing by formula_6. As such, special techniques must be used to rigorously analyze these cases. The following are plots of arbitrarily selected singular orbits. Collision orbits. Collision (or terminating singular) orbits are subset of singular orbits formed when the swinging mass is ejected from its pivot with an initial velocity, such that it returns to the pivot (i.e. it collides with the pivot): formula_37 The simplest case of collision orbits are the ones with a mass ratio of 3, which will always return symmetrically to the origin after being ejected from the origin, and were termed Type B orbits in Tufillaro's initial paper. They were also referred to as teardrop, heart, or rabbit-ear orbits because of their appearance. When the swinging mass returns to the origin, the counterweight mass, formula_3 must instantaneously change direction, causing an infinite tension in the connecting string. Thus we may consider the motion to terminate at this time. Boundedness. For any initial position, it can be shown that the swinging mass is bounded by a curve that is a conic section. The pivot is always a focus of this bounding curve. The equation for this curve can be derived by analyzing the energy of the system, and using conservation of energy. Let us suppose that formula_2 is released from rest at formula_38 and formula_39. The total energy of the system is therefore: formula_40 However, notice that in the boundary case, the velocity of the swinging mass is zero. 
Hence we have: formula_41 To see that it is the equation of a conic section, we isolate for formula_6: formula_42 Note that the numerator is a constant dependent only on the initial position in this case, as we have assumed the initial condition to be at rest. However, the energy constant formula_43 can also be calculated for nonzero initial velocity, and the equation still holds in all cases. The eccentricity of the conic section is formula_44. For formula_45, this is an ellipse, and the system is bounded and the swinging mass always stays within the ellipse. For formula_46, it is a parabola and for formula_47 it is a hyperbola; in either of these cases, it is not bounded. As formula_48 gets arbitrarily large, the bounding curve approaches a circle. The region enclosed by the curve is known as the Hill's region. Recent three dimensional extension. A new integrable case for the problem of three dimensional Swinging Atwood Machine (3D-SAM) was announced in 2016. Like the 2D version, the problem is integrable when formula_49.
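The coupled equations of motion derived above are straightforward to integrate numerically. The following sketch (the initial conditions are arbitrary illustrative choices) uses SciPy to evolve the system for the integrable mass ratio formula_29 and stops early if the pendulum approaches the pivot, where the equations become singular:

```python
# Numerical sketch (illustrative initial conditions): integrates the coupled
# equations of motion derived above,
#   (mu + 1) r'' = r theta'^2 - g (mu - cos theta)
#   r theta''    = -(2 r' theta' + g sin theta)
# for the integrable mass ratio mu = M/m = 3.
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81
mu = 3.0   # mass ratio M/m; mu = 3 is the known integrable case

def sam_rhs(t, y):
    r, r_dot, theta, theta_dot = y
    r_ddot = (r * theta_dot**2 - g * (mu - np.cos(theta))) / (mu + 1.0)
    theta_ddot = -(2.0 * r_dot * theta_dot + g * np.sin(theta)) / r
    return [r_dot, r_ddot, theta_dot, theta_ddot]

def near_pivot(t, y):
    return y[0] - 1e-3          # event: r has shrunk essentially to zero
near_pivot.terminal = True      # stop before the r-division becomes singular

# Released from rest at r = 1 m, theta = pi/2 (arbitrary illustrative choice).
y0 = [1.0, 0.0, np.pi / 2, 0.0]
sol = solve_ivp(sam_rhs, (0.0, 10.0), y0, events=near_pivot, rtol=1e-9, atol=1e-9)

for i in np.linspace(0, len(sol.t) - 1, 6).astype(int):
    print(f"t = {sol.t[i]:7.4f} s   r = {sol.y[0, i]:7.4f} m   theta = {sol.y[2, i]:8.4f} rad")
```

Plotting the swinging mass in the plane from such a run is how the "smile", "teardrop" and "heart" shaped trajectories mentioned above are usually visualized.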
[ { "math_id": 0, "text": "M=m" }, { "math_id": 1, "text": "M>m" }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "M" }, { "math_id": 4, "text": "T" }, { "math_id": 5, "text": "\n\\begin{align}\nT &= \\frac{1}{2} M v^2_M + \\frac{1}{2} mv^2_m \\\\\n&= \\frac{1}{2}M \\dot{r}^2+\\frac{1}{2} m \\left(\\dot{r}^2+r^2\\dot{\\theta}^2\\right)\n\\end{align}\n" }, { "math_id": 6, "text": "r" }, { "math_id": 7, "text": "\\theta" }, { "math_id": 8, "text": "U" }, { "math_id": 9, "text": "\n\\begin{align}\nU &= Mgr - mgr \\cos{\\theta}\n\\end{align}\n" }, { "math_id": 10, "text": "\\mathcal{L}" }, { "math_id": 11, "text": "\\mathcal{H}" }, { "math_id": 12, "text": "\n\\begin{align}\n\\mathcal{L} &= T-U\\\\\n&= \\frac{1}{2}M \\dot{r}^2+\\frac{1}{2} m \\left(\\dot{r}^2+r^2\\dot{\\theta}^2\\right) - Mgr + mgr \\cos{\\theta}\\\\\n\\mathcal{H} &= T+U\\\\\n&= \\frac{1}{2}M \\dot{r}^2+\\frac{1}{2} m \\left(\\dot{r}^2+r^2\\dot{\\theta}^2\\right) + Mgr - mgr \\cos{\\theta}\n\\end{align}\n" }, { "math_id": 13, "text": "p_r" }, { "math_id": 14, "text": "p_\\theta" }, { "math_id": 15, "text": "\n\\begin{align}\np_r &= \\frac{\\partial{\\mathcal{L}}}{\\partial \\dot{r}} = \\frac{\\partial T}{\\partial \\dot{r}} = (M+m)\\dot{r}\\\\\np_\\theta &= \\frac{\\partial {\\mathcal{L}}}{\\partial \\dot{\\theta}} = \\frac{\\partial T}{\\partial \\dot{\\theta}} = mr^2 \\dot{\\theta}\\\\\n\\therefore \\mathcal{H} &= \\frac{p_r^2}{2(M+m)} + \\frac{p_\\theta^2}{2mr^2} + Mgr - mgr \\cos{\\theta}\n\\end{align}\n" }, { "math_id": 16, "text": "\n\\begin{align}\n\\frac{\\partial {\\mathcal{L}}}{\\partial \\theta} &= \\frac{d}{dt} \\left(\\frac{\\partial {\\mathcal{L}}}{\\partial \\dot{\\theta}}\\right)\\\\\n-mgr \\sin{\\theta} &= 2mr \\dot{r}\\dot{\\theta} + mr^2 \\ddot{\\theta}\\\\\nr\\ddot{\\theta} + 2\\dot{r}\\dot{\\theta} + g\\sin{\\theta} &= 0\n\\end{align}\n" }, { "math_id": 17, "text": "\n\\begin{align}\n\\frac{\\partial {\\mathcal{L}}}{\\partial r} &= \\frac{d}{dt} \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\dot{r}}\\right)\\\\\nmr\\dot{\\theta}^2 - Mg + mg\\cos{\\theta} &= (M+m) \\ddot{r}\n\\end{align}\n" }, { "math_id": 18, "text": "\\mu = \\frac{M}{m}" }, { "math_id": 19, "text": "(\\mu+1)\\ddot{r} - r\\dot{\\theta}^2 + g(\\mu - \\cos{\\theta}) = 0" }, { "math_id": 20, "text": "\n\\begin{align}\n\\dot{r}&=\\frac {\\partial{\\mathcal{H}}} {\\partial{p_r}} = \\frac {p_r}{M+m} \\\\\n\\dot{p_r} &= - \\frac {\\partial{\\mathcal{H}}} {\\partial{r}} = \\frac {p_\\theta ^2} {mr^3} - Mg + mg\\cos{\\theta} \\\\\n\\dot{\\theta}&=\\frac {\\partial{\\mathcal{H}}} {\\partial{p_\\theta}} = \\frac {p_\\theta} {mr^2} \\\\\n\\dot{p_\\theta} &= - \\frac {\\partial{\\mathcal{H}}} {\\partial{\\theta}} = -mgr\\sin{\\theta}\n\\end{align}\n" }, { "math_id": 21, "text": "\\dot{\\theta}" }, { "math_id": 22, "text": "\\ddot{r} = g \\frac{1-\\mu}{1+\\mu}=g\\frac{m-M}{m+M}" }, { "math_id": 23, "text": "I" }, { "math_id": 24, "text": "R" }, { "math_id": 25, "text": "\\mathcal{H}\\left(r, \\theta, \\dot{r}, \\dot{\\theta} \\right) =\n \\underbrace{ \\frac{1}{2} M_t \\left( R \\dot{\\theta} - \\dot{r} \\right) ^2\n + \\frac{1}{2} m r^2 \\dot{\\theta}^2 }_{T}\n + \\underbrace{ gr \\left(M - m \\cos{\\theta} \\right)\n + gR \\left( m \\sin{\\theta} - M \\theta \\right)}_{U},\n" }, { "math_id": 26, "text": "M_t = M + m + \\frac{I}{R^2}" }, { "math_id": 27, "text": "\\begin{align}\n \\mu_t ( \\ddot{r} - R \\ddot{\\theta}) & = r \\dot{\\theta}^2 + g (\\cos {\\theta} - \\mu ) \\\\\n r \\ddot{\\theta} & = - 2 \\dot{r} \\dot{\\theta} + R 
\\dot{\\theta}^2 - g \\sin {\\theta} \\\\\n\\end{align}\n" }, { "math_id": 28, "text": "\\mu_t = M_t / m" }, { "math_id": 29, "text": "\\mu = M/m = 3" }, { "math_id": 30, "text": "\\mu = 4 n^2 - 1 = 3, 15, 35, ..." }, { "math_id": 31, "text": "\\mu = 3" }, { "math_id": 32, "text": "\\mu \\in (0,1) \\cup (3,\\infty)" }, { "math_id": 33, "text": "r=0, \\dot{r}=v, \\theta=\\theta_0, \\dot{\\theta}=0" }, { "math_id": 34, "text": "\\theta_0" }, { "math_id": 35, "text": "r(t+\\tau) = r(t),\\, \\theta(t+\\tau) = \\theta(t)" }, { "math_id": 36, "text": "r(0) = 0" }, { "math_id": 37, "text": "r(\\tau) = r(0) = 0, \\, \\tau > 0" }, { "math_id": 38, "text": "r=r_0" }, { "math_id": 39, "text": "\\theta=\\theta_0" }, { "math_id": 40, "text": "\nE = \\frac{1}{2}M \\dot{r}^2+\\frac{1}{2} m \\left(\\dot{r}^2+r^2\\dot{\\theta}^2\\right) + Mgr - mgr \\cos{\\theta} = Mgr_0 - mgr_0 \\cos{\\theta_0}\n" }, { "math_id": 41, "text": "\nMgr - mgr \\cos{\\theta}=Mgr_0 - mgr_0 \\cos{\\theta_0}\n" }, { "math_id": 42, "text": "\n\\begin{align}\nr&=\\frac{h}{1-\\frac{\\cos{\\theta}}{\\mu}}\\\\\nh&=r_0\\left(1-\\frac{\\cos{\\theta_0}}{\\mu}\\right)\n\\end{align}\n" }, { "math_id": 43, "text": "h" }, { "math_id": 44, "text": "\\frac{1}{\\mu}" }, { "math_id": 45, "text": "\\mu>1" }, { "math_id": 46, "text": "\\mu=1" }, { "math_id": 47, "text": "\\mu<1" }, { "math_id": 48, "text": "\\mu" }, { "math_id": 49, "text": "M = 3m" } ]
https://en.wikipedia.org/wiki?curid=6831289
68316
Heat pump
System that transfers heat from one space to another A heat pump is a device that consumes energy (usually electricity) to transfer heat from a cold heat sink to a hot heat sink. Specifically, the heat pump transfers thermal energy using a refrigeration cycle, cooling the cool space and warming the warm space. In cold weather, a heat pump can move heat from the cool outdoors to warm a house (e.g. winter); the pump may also be designed to move heat from the house to the warmer outdoors in warm weather (e.g. summer). As they transfer heat rather than generating heat, they are more energy-efficient than other ways of heating or cooling a home. A gaseous refrigerant is compressed so its pressure and temperature rise. When operating as a heater in cold weather, the warmed gas flows to a heat exchanger in the indoor space where some of its thermal energy is transferred to that indoor space, causing the gas to condense to its liquid state. The liquified refrigerant flows to a heat exchanger in the outdoor space where the pressure falls, the liquid evaporates and the temperature of the gas falls. It is now colder than the temperature of the outdoor space being used as a heat source. It can again take up energy from the heat source, be compressed and repeat the cycle. Air source heat pumps are the most common models, while other types include ground source heat pumps, water source heat pumps and exhaust air heat pumps. Large-scale heat pumps are also used in district heating systems. The efficiency of a heat pump is expressed as a coefficient of performance (COP), or seasonal coefficient of performance (SCOP). The higher the number, the more efficient a heat pump is. For example, an air-to-water heat pump that produces 6kW at a SCOP of 4.62 will give over 4kW of energy into a heating system for every kilowatt of energy that the heat pump uses itself to operate. When used for space heating, heat pumps are typically more energy-efficient than electric resistance and other heaters. Because of their high efficiency and the increasing share of fossil-free sources in electrical grids, heat pumps are playing a key role in climate change mitigation. Consuming 1 kWh of electricity, they can transfer 1 to 4.5 kWh of thermal energy into a building. The carbon footprint of heat pumps depends on how electricity is generated, but they usually reduce emissions. Heat pumps could satisfy over 80% of global space and water heating needs with a lower carbon footprint than gas-fired condensing boilers: however, in 2021 they only met 10%. Principle of operation. Heat flows spontaneously from a region of higher temperature to a region of lower temperature. Heat does not flow spontaneously from lower temperature to higher, but it can be made to flow in this direction if work is performed. The work required to transfer a given amount of heat is usually much less than the amount of heat; this is the motivation for using heat pumps in applications such as the heating of water and the interior of buildings. The amount of work required to drive an amount of heat Q from a lower-temperature reservoir such as ambient air to a higher-temperature reservoir such as the interior of a building is: formula_0 where The coefficient of performance of a heat pump is greater than one so the work required is less than the heat transferred, making a heat pump a more efficient form of heating than electrical resistance heating. 
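The relation just stated, that the work input equals the heat moved divided by the coefficient of performance, together with the reversed-Carnot limit considered in the next paragraph, can be illustrated with a short sketch; the 10 kWh heat demand, the 20 °C indoor temperature and the outdoor temperatures below are arbitrary example values rather than figures from the article:

```python
# Illustrative sketch: work needed to deliver a given amount of heat, W = Q / COP,
# and the ideal (reversed-Carnot) heating COP, T_hot / (T_hot - T_cold), which
# shrinks as the temperature lift between source and sink grows.
def work_required(heat_delivered_kwh, cop):
    """Electrical work (kWh) needed to deliver the given heat at the given COP."""
    return heat_delivered_kwh / cop

def carnot_heating_cop(t_cold_c, t_hot_c):
    """Ideal upper bound on the heating COP, for temperatures given in Celsius."""
    t_cold, t_hot = t_cold_c + 273.15, t_hot_c + 273.15
    return t_hot / (t_hot - t_cold)

# Work needed for 10 kWh of delivered heat at typical real-world COPs of 3 to 5;
# COP 1.0 corresponds to electric resistance heating.
for cop in (1.0, 3.0, 5.0):
    print(f"COP {cop:.1f}: {work_required(10.0, cop):.1f} kWh of work per 10 kWh of heat")

# Ideal Carnot bound for an assumed 20 C indoor temperature: the achievable COP
# falls as the outdoor source gets colder (larger temperature lift).
for t_outside in (10, 0, -10, -20):
    print(f"outdoor {t_outside:>3} C -> Carnot heating COP limit "
          f"{carnot_heating_cop(t_outside, 20):.1f}")
```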
As the temperature of the higher-temperature reservoir increases in response to the heat flowing into it, the coefficient of performance decreases, causing an increasing amount of work to be required for each unit of heat being transferred. The coefficient of performance, and the work required by a heat pump can be calculated easily by considering an ideal heat pump operating on the reversed Carnot cycle: This is the theoretical amount of heat pumped but in practice it will be less for various reasons, for example if the outside unit has been installed where there is not enough airflow. More data sharing with owners and academics—perhaps from heat meters—could improve efficiency in the long run. History. Milestones: William Cullen demonstrates artificial refrigeration. Jacob Perkins patents a design for a practical refrigerator using dimethyl ether. Lord Kelvin describes the theory underlying heat pumps. Peter von Rittinger develops and builds the first heat pump. In the period before 1875, heat pumps were for the time being pursued for vapour compression evaporation (open heat pump process) in salt works with their obvious advantages for saving wood and coal. In 1857, Peter von Rittinger was the first to try to implement the idea of vapor compression in a small pilot plant. Presumably inspired by Rittinger's experiments in Ebensee, Antoine-Paul Piccard from the University of Lausanne and the engineer J. H. Weibel from the Weibel–Briquet company in Geneva built the world's first really functioning vapor compression system with a two-stage piston compressor. In 1877 this first heat pump in Switzerland was installed in the Bex salt works. Aurel Stodola constructs a closed-loop heat pump (water source from Lake Geneva) which provides heating for the Geneva city hall to this day. During the First World War, fuel prices were very high in Switzerland but it had plenty of hydropower.18 In the period before and especially during the Second World War, when neutral Switzerland was completely surrounded by fascist-ruled countries, the coal shortage became alarming again. Thanks to their leading position in energy technology, the Swiss companies Sulzer, Escher Wyss and Brown Boveri built and put in operation around 35 heat pumps between 1937 and 1945. The main heat sources were lake water, river water, groundwater, and waste heat. Particularly noteworthy are the six historic heat pumps from the city of Zurich with heat outputs from 100 kW to 6 MW. An international milestone is the heat pump built by Escher Wyss in 1937/38 to replace the wood stoves in the City Hall of Zurich. To avoid noise and vibrations, a recently developed rotary piston compressor was used. This historic heat pump heated the town hall for 63 years until 2001. Only then was it replaced by a new, more efficient heat pump. John Sumner, City Electrical Engineer for Norwich, installs an experimental water-source heat pump fed central heating system, using a nearby river to heat new Council administrative buildings. It had a seasonal efficiency ratio of 3.42, average thermal delivery of 147 kW, and peak output of 234 kW. Robert C. Webber is credited as developing and building the first ground-source heat pump. First large scale installation—the Royal Festival Hall in London is opened with a town gas-powered reversible water-source heat pump, fed by the Thames, for both winter heating and summer cooling needs. The Kigali Amendment to phase out harmful refrigerants takes effect. Types. Heat recovery ventilation. 
Exhaust air heat pumps extract heat from the exhaust air of a building and require mechanical ventilation. Two classes exist: Water-source. A water-source heat pump works in a similar manner to a ground-source heat pump, except that it takes heat from a body of water rather than the ground. The body of water does, however, need to be large enough to be able to withstand the cooling effect of the unit without freezing or creating an adverse effect for wildlife. The largest water-source heat pump was installed in the Danish town of Esbjerg in 2023. Others. A thermoacoustic heat pump operates as a thermoacoustic heat engine without refrigerant but instead uses a standing wave in a sealed chamber driven by a loudspeaker to achieve a temperature difference across the chamber. Electrocaloric heat pumps are solid state. Applications. The International Energy Agency estimated that, as of 2021, heat pumps installed in buildings have a combined capacity of more than 1000 GW. They are used for heating, ventilation, and air conditioning (HVAC) and may also provide domestic hot water and tumble clothes drying. The purchase costs are supported in various countries by consumer rebates. Space heating and sometimes also cooling. In HVAC applications, a heat pump is typically a vapor-compression refrigeration device that includes a reversing valve and optimized heat exchangers so that the direction of "heat flow" (thermal energy movement) may be reversed. The reversing valve switches the direction of refrigerant through the cycle and therefore the heat pump may deliver either heating or cooling to a building. Because the two heat exchangers, the condenser and evaporator, must swap functions, they are optimized to perform adequately in both modes. Therefore, the Seasonal Energy Efficiency Rating (SEER in the US) or European seasonal energy efficiency ratio of a reversible heat pump is typically slightly less than those of two separately optimized machines. For equipment to receive the US Energy Star rating, it must have a rating of at least 14 SEER. Pumps with ratings of 18 SEER or above are considered highly efficient. The highest efficiency heat pumps manufactured are up to 24 SEER. Heating seasonal performance factor (in the US) or Seasonal Performance Factor (in Europe) are ratings of heating performance. The SPF is Total heat output per annum / Total electricity consumed per annum in other words the average heating COP over the year. Window mounted heat pump. Window mounted heat pumps run on standard 120v AC outlets and provide heating, cooling, and humidity control. They are more efficient with lower noise levels, condensation management, and a smaller footprint than window mounted air conditioners that just do cooling. Water heating. In water heating applications, heat pumps may be used to heat or preheat water for swimming pools, homes or industry. Usually heat is extracted from outdoor air and transferred to an indoor water tank. District heating. Large (megawatt-scale) heat pumps are used for district heating. However as of 2022[ [update]] about 90% of district heat is from fossil fuels. In Europe, heat pumps account for a mere 1% of heat supply in district heating networks but several countries have targets to decarbonise their networks between 2030 and 2040. Possible sources of heat for such applications are sewage water, ambient water (e.g. 
sea, lake and river water), industrial waste heat, geothermal energy, flue gas, waste heat from district cooling and heat from solar seasonal thermal energy storage. Large-scale heat pumps for district heating combined with thermal energy storage offer high flexibility for the integration of variable renewable energy. Therefore, they are regarded as a key technology for limiting climate change by phasing out fossil fuels. They are also a crucial element of systems which can both heat and cool districts. Industrial heating. There is great potential to reduce the energy consumption and related greenhouse gas emissions in industry by application of industrial heat pumps, for example for process heat. Short payback periods of less than 2 years are possible, while achieving a high reduction of CO2 emissions (in some cases more than 50%). Industrial heat pumps can heat up to 200 °C, and can meet the heating demands of many light industries. In Europe alone, 15 GW of heat pumps could be installed in 3,000 facilities in the paper, food and chemicals industries. Performance. The performance of a heat pump is determined by the ability of the pump to extract heat from a low temperature environment (the "source") and deliver it to a higher temperature environment (the "sink"). Performance varies, depending on installation details, temperature differences, site elevation, location on site, pipe runs, flow rates, and maintenance. In general, heat pumps work most efficiently (that is, they produce more heat output for a given energy input) when the difference in temperature between the heat source and the heat sink is small. When using a heat pump for space or water heating, therefore, the heat pump will be most efficient in mild conditions, and decline in efficiency on very cold days. Performance metrics supplied to consumers attempt to take this variation into account. Common performance metrics are the SEER (in cooling mode) and seasonal coefficient of performance (SCOP) (commonly used just for heating), although SCOP can be used for both modes of operation. Larger values of either metric indicate better performance. When comparing the performance of heat pumps, the term "performance" is preferred to "efficiency", with coefficient of performance (COP) being used to describe the ratio of useful heat movement per work input. An electrical resistance heater has a COP of 1.0, which is considerably lower than that of a well-designed heat pump, which will typically have a COP of 3 to 5 with an external temperature of 10 °C and an internal temperature of 20 °C. Because the ground is a constant temperature source, a ground-source heat pump is not subjected to large temperature fluctuations, and therefore is the most energy-efficient type of heat pump. The "seasonal coefficient of performance" (SCOP) is a measure of aggregate energy efficiency over a period of one year, and is dependent on regional climate. One framework for this calculation is given by the Commission Regulation (EU) No. 813/2013. A heat pump's operating performance in cooling mode is characterized in the US by either its energy efficiency ratio (EER) or seasonal energy efficiency ratio (SEER), both of which have units of BTU/(h·W) (note that 1 BTU/(h·W) = 0.293 W/W) and larger values indicate better performance. Carbon footprint. The carbon footprint of heat pumps depends on their individual efficiency and how electricity is produced. An increasing share of low-carbon energy sources such as wind and solar will lower the impact on the climate. 
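The quantities described under Performance above can be made concrete with a short calculation. The following Python sketch is a minimal illustration written for this article (the function names and example figures are illustrative assumptions, not from any cited source): it computes the ideal reversed-Carnot heating COP for a given source and sink temperature and converts a cooling EER rating to a dimensionless COP using the 1 BTU/(h·W) = 0.293 W/W factor quoted above.

```python
def carnot_heating_cop(sink_c: float, source_c: float) -> float:
    """Theoretical maximum heating COP of an ideal (reversed Carnot) heat pump.

    Temperatures are supplied in degrees Celsius and converted to kelvins;
    the ideal heating COP is T_sink / (T_sink - T_source).
    """
    sink_k = sink_c + 273.15
    source_k = source_c + 273.15
    return sink_k / (sink_k - source_k)


def eer_to_cop(eer_btu_per_wh: float) -> float:
    """Convert a cooling EER in BTU/(h*W) to a dimensionless COP (1 BTU/(h*W) = 0.293 W/W)."""
    return eer_btu_per_wh * 0.293


# Example: delivering heat at 35 C from a 0 C outdoor source gives an ideal COP of
# about 8.8, well above the 3 to 5 achieved by real machines in similar conditions.
print(round(carnot_heating_cop(35.0, 0.0), 1))

# An EER rating of 14 BTU/(h*W) corresponds to a cooling COP of about 4.1.
print(round(eer_to_cop(14.0), 1))
```

The gap between the ideal figure and the real-world COP of 3 to 5 reflects practical losses such as compressor inefficiency, heat exchanger temperature differences and defrost cycles, which are discussed elsewhere in this article.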
In most settings, heat pumps will reduce CO2 emissions compared to heating systems powered by fossil fuels. In regions accounting for 70% of world energy consumption, the emissions savings of heat pumps compared with a high-efficiency gas boiler are on average above 45% and reach 80% in countries with cleaner electricity mixes. These values can be improved by 10 percentage points, respectively, with alternative refrigerants. In the United States, 70% of houses could reduce emissions by installing a heat pump. The rising share of renewable electricity generation in many countries is set to increase the emissions savings from heat pumps over time. Heating systems powered by green hydrogen are also low-carbon and may become competitors, but are much less efficient due to the energy loss associated with hydrogen conversion, transport and use. In addition, not enough green hydrogen is expected to be available before the 2030s or 2040s. Operation. Vapor-compression uses a circulating refrigerant as the medium that absorbs heat from one space, is compressed to raise its temperature, and then releases that heat in another space. The system normally has seven main components: a compressor, a reservoir, a reversing valve which selects between heating and cooling mode, two thermal expansion valves (one used when in heating mode and the other when used in cooling mode) and two heat exchangers, one associated with the external heat source/sink and the other with the interior. In heating mode the external heat exchanger is the evaporator and the internal one is the condenser; in cooling mode the roles are reversed. Circulating refrigerant enters the compressor in the thermodynamic state known as a saturated vapor and is compressed to a higher pressure, resulting in a higher temperature as well. The hot, compressed vapor is then in the thermodynamic state known as a superheated vapor and it is at a temperature and pressure at which it can be condensed with either cooling water or cooling air flowing across the coil or tubes. In heating mode this heat is used to heat the building using the internal heat exchanger, and in cooling mode this heat is rejected via the external heat exchanger. The condensed, liquid refrigerant, in the thermodynamic state known as a saturated liquid, is next routed through an expansion valve where it undergoes an abrupt reduction in pressure. That pressure reduction results in the adiabatic flash evaporation of a part of the liquid refrigerant. The auto-refrigeration effect of the adiabatic flash evaporation lowers the temperature of the liquid-and-vapor refrigerant mixture to below the temperature of the enclosed space to be refrigerated. The cold mixture is then routed through the coil or tubes in the evaporator. A fan circulates the warm air in the enclosed space across the coil or tubes carrying the cold refrigerant liquid and vapor mixture. That warm air evaporates the liquid part of the cold refrigerant mixture. At the same time, the circulating air is cooled and thus lowers the temperature of the enclosed space to the desired temperature. The evaporator is where the circulating refrigerant absorbs and removes heat which is subsequently rejected in the condenser and transferred elsewhere by the water or air used in the condenser. To complete the refrigeration cycle, the refrigerant vapor from the evaporator is again a saturated vapor and is routed back into the compressor. Over time, the evaporator may collect ice or water from ambient humidity. 
The ice is melted through a defrosting cycle. An internal heat exchanger is either used to heat or cool the interior air directly or to heat water that is then circulated through radiators or an underfloor heating circuit to heat or cool the building. Improvement of coefficient of performance by subcooling. Heat input can be improved if the refrigerant enters the evaporator with a lower vapor content. This can be achieved by cooling the liquid refrigerant after condensation. The gaseous refrigerant condenses on the heat exchange surface of the condenser. To achieve a heat flow from the gaseous flow center to the wall of the condenser, the temperature of the liquid refrigerant must be lower than the condensation temperature. Additional subcooling can be achieved by heat exchange between relatively warm liquid refrigerant leaving the condenser and the cooler refrigerant vapor emerging from the evaporator. The enthalpy difference required for the subcooling leads to the superheating of the vapor drawn into the compressor. When the increase in cooling achieved by subcooling is greater than the compressor drive input required to overcome the additional pressure losses, such a heat exchange improves the coefficient of performance. One disadvantage of the subcooling of liquids is that the difference between the condensing temperature and the heat-sink temperature must be larger. This leads to a moderately high pressure difference between condensing and evaporating pressure, which increases the energy required by the compressor. Refrigerant choice. Pure refrigerants can be divided into organic substances (hydrocarbons (HCs), chlorofluorocarbons (CFCs), hydrochlorofluorocarbons (HCFCs), hydrofluorocarbons (HFCs), hydrofluoroolefins (HFOs), and HCFOs), and inorganic substances (ammonia (NH3), carbon dioxide (CO2), and water (H2O)). Their boiling points are usually below −25 °C. In the past 200 years, the standards and requirements for new refrigerants have changed. Nowadays low global warming potential (GWP) is required, in addition to all the previous requirements for safety, practicality, material compatibility, appropriate atmospheric life, and compatibility with high-efficiency products. By 2022, devices using refrigerants with a very low GWP still had a small market share but are expected to play an increasing role due to enforced regulations, as most countries have now ratified the Kigali Amendment to ban HFCs. Isobutane (R600A) and propane (R290) are far less harmful to the environment than conventional hydrofluorocarbons (HFC) and are already being used in air-source heat pumps. Propane may be the most suitable for high temperature heat pumps. Ammonia (R717) and carbon dioxide (R-744) also have a low GWP. As of 2023, smaller CO2 heat pumps are not widely available and research and development of them continues. A 2024 report said that refrigerants with a high GWP are vulnerable to further international restrictions. Until the 1990s, heat pumps, along with fridges and other related products, used chlorofluorocarbons (CFCs) as refrigerants, which caused major damage to the ozone layer when released into the atmosphere. Use of these chemicals was banned or severely restricted by the Montreal Protocol of August 1987. Replacements, including R-134a and R-410A, are hydrofluorocarbons (HFC) with similar thermodynamic properties and insignificant ozone depletion potential (ODP), but problematic GWP. HFCs are powerful greenhouse gases which contribute to climate change. 
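To make the GWP comparison concrete, the short Python sketch below multiplies a refrigerant charge by a global warming potential to obtain a CO2-equivalent figure. It is only an illustration: the charge size and the GWP values are assumptions chosen for this example (GWP figures differ between IPCC assessment reports and time horizons and should be checked against current tables), and the calculation itself is simply charge mass times GWP.

```python
def charge_co2_equivalent_kg(charge_kg: float, gwp: float) -> float:
    """CO2-equivalent mass (kg) of releasing a full refrigerant charge: charge mass x GWP."""
    return charge_kg * gwp


# Assumed, illustrative 20-year GWP values; consult current IPCC tables for real figures.
ASSUMED_GWP20 = {"R-32": 2400, "R-290 (propane)": 0.1}

for name, gwp in ASSUMED_GWP20.items():
    tonnes_co2e = charge_co2_equivalent_kg(3.0, gwp) / 1000  # a typical ~3 kg charge
    print(f"{name}: about {tonnes_co2e:.1f} t CO2e if the whole charge is released")
# With these assumed figures R-32 comes out on the order of 7 t CO2e, consistent with
# the 20-year figure quoted for a 3 kg charge below, while propane is negligible.
```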
Dimethyl ether (DME) also gained in popularity as a refrigerant in combination with R404a. More recent refrigerants include difluoromethane (R32) with a lower GWP, but still over 600. Devices with R-290 refrigerant (propane) are expected to play a key role in the future. The 100-year GWP of propane, at 0.02, is extremely low and is approximately 7000 times less than R-32. However, the flammability of propane requires additional safety measures: the maximum safe charges have been set significantly lower than for lower flammability refrigerants (only allowing approximately 13.5 times less refrigerant in the system than R-32). This means that R-290 is not suitable for all situations or locations. Nonetheless, by 2022, an increasing number of devices with R-290 were offered for domestic use, especially in Europe. At the same time, HFC refrigerants still dominate the market. Recent government mandates have seen the phase-out of R-22 refrigerant. Replacements such as R-32 and R-410A are being promoted as environmentally friendly but still have a high GWP. A heat pump typically uses 3 kg of refrigerant. With R-32 this amount still has a 20-year impact equivalent to 7 tons of CO2, which corresponds to two years of natural gas heating in an average household. Refrigerants with a high ODP have already been phased out. Government incentives. Financial incentives aim to protect consumers from high fossil gas costs and to reduce greenhouse gas emissions, and are currently available in more than 30 countries around the world, covering more than 70% of global heating demand in 2021. Australia. Food processors, brewers, petfood producers and other industrial energy users are exploring whether it is feasible to use renewable energy to produce industrial-grade heat. Process heating accounts for the largest share of onsite energy use in Australian manufacturing, with lower-temperature operations like food production particularly well-suited to transition to renewables. To help producers understand how they could benefit from making the switch, the Australian Renewable Energy Agency (ARENA) provided funding to the Australian Alliance for Energy Productivity (A2EP) to undertake pre-feasibility studies at a range of sites around Australia, with the most promising locations advancing to full feasibility studies. In an effort to incentivize energy efficiency and reduce environmental impact, the Australian states of Victoria, New South Wales, and Queensland have implemented rebate programs targeting the upgrade of existing hot water systems. These programs specifically encourage the transition from traditional gas or electric systems to heat pump based systems. Canada. In 2022, the Canada Greener Homes Grant provides up to $5000 for upgrades (including certain heat pumps), and $600 for energy efficiency evaluations. China. Purchase subsidies in rural areas in the 2010s reduced burning coal for heating, which had been causing ill health. In the 2024 report by the International Energy Agency (IEA) titled "The Future of Heat Pumps in China," it is highlighted that China, as the world's largest market for heat pumps in buildings, plays a critical role in the global industry. The country accounts for over one-quarter of global sales, with a 12% increase in 2023 alone, despite a global sales dip of 3% the same year. 
Heat pumps now account for approximately 8% of all heating equipment sales for buildings in China as of 2022, and they are increasingly becoming the norm in central and southern regions for both heating and cooling. Despite their higher upfront costs and relatively low awareness, heat pumps are favored for their energy efficiency, consuming three to five times less energy than electric heaters or fossil fuel-based solutions. Currently, decentralized heat pumps installed in Chinese buildings represent a quarter of the global installed capacity, with a total capacity exceeding 250 GW, which covers around 4% of the heating needs in buildings. Under the Announced Pledges Scenario (APS), which aligns with China's carbon neutrality goals, the capacity is expected to reach 1,400 GW by 2050, meeting 25% of heating needs. This scenario would require the installation of about 100 GW of heat pumps annually until 2050. Furthermore, the heat pump sector in China employs over 300,000 people, with employment numbers expected to double by 2050, underscoring the importance of vocational training for industry growth. This robust development in the heat pump market is set to play a significant role in reducing direct emissions in buildings by 30% and cutting PM2.5 emissions from residential heating by nearly 80% by 2030. United Kingdom. As of 2022, heat pumps have no Value Added Tax (VAT), although in Northern Ireland they are taxed at the reduced rate of 5% instead of the usual level of VAT of 20% for most other products. As of 2022, the installation cost of a heat pump is more than that of a gas boiler, but with the "Boiler Upgrade Scheme" government grant and assuming electricity/gas costs remain similar, their lifetime costs would be similar on average. However, lifetime cost relative to a gas boiler varies considerably depending on several factors, such as the quality of the heat pump installation and the tariff used. In 2024 England was criticised for still allowing new homes to be built with gas boilers, unlike some other countries where this is banned. United States. The High-efficiency Electric Home Rebate Program was created in 2022 to award grants to State energy offices and Indian Tribes in order to establish state-wide high-efficiency electric-home rebates. Effective immediately, American households are eligible for a tax credit to cover the costs of buying and installing a heat pump, up to $2,000. Starting in 2023, low- and moderate-level income households will be eligible for a heat-pump rebate of up to $8,000. In 2022, more heat pumps were sold in the United States than natural gas furnaces. In November 2023 the Biden administration allocated 169 million dollars from the Inflation Reduction Act to speed production of heat pumps. It used the Defense Production Act to do so because, according to the administration, energy that is better for the climate is also better for national security.
[ { "math_id": 0, "text": "W = \\frac{ Q}{\\mathrm{COP}}" }, { "math_id": 1, "text": "W " }, { "math_id": 2, "text": " Q " }, { "math_id": 3, "text": "\\mathrm{COP}" } ]
https://en.wikipedia.org/wiki?curid=68316
68326
Extended periodic table
Periodic table of the elements with eight or more periods An extended periodic table theorizes about chemical elements beyond those currently known and proven. The element with the highest atomic number known is oganesson ("Z" = 118), which completes the seventh period (row) in the periodic table. All elements in the eighth period and beyond thus remain purely hypothetical. Elements beyond 118 will be placed in additional periods when discovered, laid out (as with the existing periods) to illustrate periodically recurring trends in the properties of the elements. Any additional periods are expected to contain more elements than the seventh period, as they are calculated to have an additional so-called "g-block", containing at least 18 elements with partially filled g-orbitals in each period. An "eight-period table" containing this block was suggested by Glenn T. Seaborg in 1969. The first element of the g-block may have atomic number 121, and thus would have the systematic name unbiunium. Despite many searches, no elements in this region have been synthesized or discovered in nature. According to the orbital approximation in quantum mechanical descriptions of atomic structure, the g-block would correspond to elements with partially filled g-orbitals, but spin–orbit coupling effects reduce the validity of the orbital approximation substantially for elements of high atomic number. Seaborg's version of the extended periodic table had the heavier elements following the pattern set by lighter elements, as it did not take into account relativistic effects. Models that take relativistic effects into account predict that the pattern will be broken. Pekka Pyykkö and Burkhard Fricke used computer modeling to calculate the positions of elements up to "Z" = 172, and found that several were displaced from the Madelung rule. As a result of uncertainty and variability in predictions of chemical and physical properties of elements beyond 120, there is currently no consensus on their placement in the extended periodic table. Elements in this region are likely to be highly unstable with respect to radioactive decay and undergo alpha decay or spontaneous fission with extremely short half-lives, though element 126 is hypothesized to be within an island of stability that is resistant to fission but not to alpha decay. Other islands of stability beyond the known elements may also be possible, including one theorised around element 164, though the extent of stabilizing effects from closed nuclear shells is uncertain. It is not clear how many elements beyond the expected island of stability are physically possible, whether period 8 is complete, or if there is a period 9. The International Union of Pure and Applied Chemistry (IUPAC) defines an element to exist if its lifetime is longer than 10−14 seconds (0.01 picoseconds, or 10 femtoseconds), which is the time it takes for the nucleus to form an electron cloud. As early as 1940, it was noted that a simplistic interpretation of the relativistic Dirac equation runs into problems with electron orbitals at "Z" > 1/α ≈ 137, suggesting that neutral atoms cannot exist beyond element 137, and that a periodic table of elements based on electron orbitals therefore breaks down at this point. 
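The "Z" > 1/α ≈ 137 limit mentioned above follows from the Dirac 1s ground-state energy for a point-charge nucleus, which in units of the electron rest energy equals √(1 − (Zα)²) and stops being real once Zα exceeds 1. The short Python sketch below is a minimal illustration of that simplistic interpretation only (the constant and helper names are illustrative; the more rigorous analysis described in the next paragraph reaches a different conclusion):

```python
ALPHA = 1 / 137.035999  # fine-structure constant (dimensionless)

def dirac_1s_energy(z: int) -> complex:
    """1s energy of a point-nucleus Dirac atom, in units of the electron rest energy.

    The value is sqrt(1 - (Z*alpha)^2); for Z*alpha > 1 it becomes imaginary,
    which is the breakdown at Z > 1/alpha ~ 137 referred to in the text.
    """
    return complex(1 - (z * ALPHA) ** 2) ** 0.5

for z in (118, 137, 138):
    print(z, dirac_1s_energy(z))
# Z = 137 still gives a real (if very small) energy; Z = 138 gives an imaginary value.
```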
On the other hand, a more rigorous analysis calculates the analogous limit to be "Z" ≈ 168–172 where the 1s subshell dives into the Dirac sea, and that it is instead not neutral atoms that cannot exist beyond this point, but bare nuclei, thus posing no obstacle to the further extension of the periodic system. Atoms beyond this critical atomic number are called "supercritical" atoms. History. Elements beyond the actinides were first proposed to exist as early as 1895, when Danish chemist Hans Peter Jørgen Julius Thomsen predicted that thorium and uranium formed part of a 32-element period which would end at a chemically inactive element with atomic weight 292 (not far from the 294 for the only known isotope of oganesson). In 1913, Swedish physicist Johannes Rydberg similarly predicted that the next noble gas after radon would have atomic number 118, and purely formally derived even heavier congeners of radon at "Z" = 168, 218, 290, 362, and 460, exactly where the Aufbau principle would predict them to be. In 1922, Niels Bohr predicted the electronic structure of this next noble gas at "Z" = 118, and suggested that the reason why elements beyond uranium were not seen in nature was because they were too unstable. The German physicist and engineer Richard Swinne published a review paper in 1926 containing predictions on the transuranic elements (he may have coined the term) in which he anticipated modern predictions of an island of stability: he first hypothesised in 1914 that half-lives should not decrease strictly with atomic number, but suggested instead that there might be some longer-lived elements at "Z" = 98–102 and "Z" = 108–110, and speculated that such elements might exist in the Earth's core, in iron meteorites, or in the ice caps of Greenland where they had been locked up from their supposed cosmic origin. By 1955, these elements were called "superheavy" elements. The first predictions on properties of undiscovered superheavy elements were made in 1957, when the concept of nuclear shells was first explored and an island of stability was theorized to exist around element 126. In 1967, more rigorous calculations were performed, and the island of stability was theorized to be centered at the then-undiscovered flerovium (element 114); this and other subsequent studies motivated many researchers to search for superheavy elements in nature or attempt to synthesize them at accelerators. Many searches for superheavy elements were conducted in the 1970s, all with negative results. As of 2022, synthesis has been attempted for every element up to and including unbiseptium ("Z" = 127), except unbitrium ("Z" = 123), with the heaviest successfully synthesized element being oganesson in 2002 and the most recent discovery being that of tennessine in 2010. As some superheavy elements were predicted to lie beyond the seven-period periodic table, an additional eighth period containing these elements was first proposed by Glenn T. Seaborg in 1969. This model continued the pattern in established elements and introduced a new g-block and superactinide series beginning at element 121, raising the number of elements in period 8 compared to known periods. These early calculations failed to consider relativistic effects that break down periodic trends and render simple extrapolation impossible, however. 
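The "pattern set by lighter elements" that these early extrapolations followed is essentially the n + l (Madelung) ordering of subshells. As a reference point for the relativistic results discussed next, the small Python sketch below (an illustrative helper written for this article, not code from any of the cited papers) generates that naive ordering and recovers the 8s, 5g, 6f, 7d, 8p sequence expected for the eighth row:

```python
def madelung_order(max_n: int = 10) -> list[str]:
    """Subshell filling order under the naive Madelung (n + l) rule:
    lower n + l fills first, and ties are broken by lower n."""
    letters = "spdfghiklm"  # spectroscopic labels for l = 0..9 (the letter j is skipped)
    shells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    shells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{letters[l]}" for n, l in shells]

order = madelung_order()
start = order.index("8s")
print(order[start:start + 5])  # ['8s', '5g', '6f', '7d', '8p'] -- the naive period 8 order
```

The relativistic calculations by Fricke and by Pyykkö described below find that the actual filling deviates from this simple sequence.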
In 1971, Fricke calculated the periodic table up to "Z" = 172, and discovered that some elements indeed had different properties that break the established pattern, and a 2010 calculation by Pekka Pyykkö also noted that several elements might behave differently than expected. It is unknown how far the periodic table might extend beyond the known 118 elements, as heavier elements are predicted to be increasingly unstable. Glenn T. Seaborg suggested that practically speaking, the end of the periodic table might come as early as around "Z" = 120 due to nuclear instability. Predicted structures of an extended periodic table. There is currently no consensus on the placement of elements beyond atomic number 120 in the periodic table. All hypothetical elements are given an International Union of Pure and Applied Chemistry (IUPAC) systematic element name, for use until the element has been discovered, confirmed, and an official name is approved. These names are typically not used in the literature, and the elements are instead referred to by their atomic numbers; hence, element 164 is usually not called "unhexquadium" or "Uhq" (the systematic name and symbol), but rather "element 164" with symbol "164", "(164)", or "E164". Aufbau principle. At element 118, the orbitals 1s, 2s, 2p, 3s, 3p, 3d, 4s, 4p, 4d, 4f, 5s, 5p, 5d, 5f, 6s, 6p, 6d, 7s and 7p are assumed to be filled, with the remaining orbitals unfilled. A simple extrapolation from the Aufbau principle would predict the eighth row to fill orbitals in the order 8s, 5g, 6f, 7d, 8p; but after element 120, the proximity of the electron shells makes placement in a simple table problematic. Fricke. Not all models show the higher elements following the pattern established by lighter elements. Burkhard Fricke et al., who carried out calculations up to element 184 in an article published in 1971, also found some elements to be displaced from the Madelung energy-ordering rule as a result of overlapping orbitals; this is caused by the increasing role of relativistic effects in heavy elements. (They describe chemical properties up to element 184, but only draw a table to element 172.) Fricke et al.'s format is more focused on formal electron configurations than likely chemical behaviour. They place elements 156–164 in groups 4–12 because formally their configurations should be 7d2 through 7d10. However, they differ from the previous d-elements in that the 8s shell is not available for chemical bonding: instead, the 9s shell is. Thus element 164 with 7d109s0 is noted by Fricke et al. to be analogous to palladium with 4d105s0, and they consider elements 157–172 to have chemical analogies to groups 3–18 (though they are ambivalent on whether elements 165 and 166 are more like group 1 and 2 elements or more like group 11 and 12 elements, respectively). Thus, elements 157–164 are placed in their table in a group that the authors do not think is chemically most analogous. Nefedov. Nefedov, Trzhaskovskaya, and Yarzhemskii carried out calculations up to 164 (results published in 2006). They considered elements 158 through 164 to be homologues of groups 4 through 10, and not 6 through 12, noting similarities of electron configurations to the period 5 transition metals (e.g. element 159 7d49s1 vs Nb 4d45s1, element 160 7d59s1 vs Mo 4d55s1, element 162 7d79s1 vs Ru 4d75s1, element 163 7d89s1 vs Rh 4d85s1, element 164 7d109s0 vs Pd 4d105s0). They thus agree with Fricke et al. 
on the chemically most analogous groups, but differ from them in that Nefedov et al. actually place elements in the chemically most analogous groups. Rg and Cn are given an asterisk to reflect differing configurations from Au and Hg (in the original publication they are drawn as being displaced in the third dimension). In fact Cn probably has an analogous configuration to Hg, and the difference in configuration between Pt and Ds is not marked. Pyykkö. Pekka Pyykkö used computer modeling to calculate the positions of elements up to "Z" = 172 and their possible chemical properties in an article published in 2011. He reproduced the orbital order of Fricke et al., and proposed a refinement of their table by formally assigning slots to elements 121–164 based on ionic configurations. In order to bookkeep the electrons, Pyykkö places some elements out of order: thus 139 and 140 are placed in groups 13 and 14 to reflect that the 8p1/2 shell needs to fill, and he distinguishes separate 5g, 8p1/2, and 6f series. Fricke et al. and Nefedov et al. do not attempt to break up these series. Kulsha. Computational chemist Andrey Kulsha has suggested two forms of the extended periodic table up to 172 that build on and refine Nefedov et al.'s versions up to 164 with reference to Pyykkö's calculations. Based on their likely chemical properties, elements 157–172 are placed by both forms as eighth-period congeners of yttrium through xenon in the fifth period; this extends Nefedov et al.'s placement of 157–164 under yttrium through palladium, and agrees with the chemical analogies given by Fricke et al. Kulsha suggested two ways to deal with elements 121–156, which lack precise analogues among earlier elements. In his first form (2011, after Pyykkö's paper was published), elements 121–138 and 139–156 are placed as two separate rows (together called "ultransition elements"), related by the addition of a 5g18 subshell into the core, as according to Pyykkö's calculations of oxidation states, they should, respectively, mimic lanthanides and actinides. In his second suggestion (2016), elements 121–142 form a g-block (as they have 5g activity), while elements 143–156 form an f-block placed under actinium through nobelium. Thus, period 8 emerges with 54 elements, and the next noble element after 118 is 172. Smits et al.. In 2023 Smits, Düllmann, Indelicato, Nazarewicz, and Schwerdtfeger made another attempt to place elements from 119 to 170 in the periodic table based on their electron configurations. The configurations of a few elements (121–124 and 168) did not allow them to be placed unambiguously. Element 145 appears twice, some places have double occupancy, and others are empty. Searches for undiscovered elements. Synthesis attempts. Attempts to synthesise the period 8 elements up to unbiseptium, except unbitrium, have been unsuccessful. Attempts to synthesise ununennium, the first period 8 element, are ongoing as of 2024. Ununennium (E119). The synthesis of element 119 (ununennium) was first attempted in 1985 by bombarding a target of einsteinium-254 with calcium-48 ions at the SuperHILAC accelerator at Berkeley, California: 254Es + 48Ca → 302119* → no atoms No atoms were identified, leading to a limiting cross section of 300 nb. Later calculations suggest that the cross section of the 3n reaction (which would result in 299119 and three neutrons as products) would actually be six hundred thousand times lower than this upper bound, at 0.5 pb. 
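Cross sections in this section are quoted in nanobarns (nb), picobarns (pb) and femtobarns (fb), where 1 nb = 1,000 pb and 1 pb = 1,000 fb. The short Python check below, written for this article using the 1985 figures quoted above, verifies the "six hundred thousand times lower" comparison and restates the prediction in femtobarns:

```python
NB_TO_PB = 1_000   # 1 nanobarn = 1,000 picobarns
PB_TO_FB = 1_000   # 1 picobarn = 1,000 femtobarns

limit_1985_nb = 300.0   # experimental upper limit from the 1985 SuperHILAC attempt
predicted_pb = 0.5      # later calculated cross section for the 3n channel

ratio = (limit_1985_nb * NB_TO_PB) / predicted_pb
print(f"{ratio:,.0f} times lower")    # 600,000 times lower, as quoted in the text

print(predicted_pb * PB_TO_FB, "fb")  # 500.0 fb, for comparison with the 70 fb and
                                      # 40 fb figures in the later experiments below
```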
From April to September 2012, an attempt to synthesize the isotopes 295119 and 296119 was made by bombarding a target of berkelium-249 with titanium-50 at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany. Based on the theoretically predicted cross section, it was expected that an ununennium atom would be synthesized within five months of the beginning of the experiment. Moreover, as berkelium-249 decays to californium-249 (the next element) with a short half-life of 327 days, this allowed elements 119 and 120 to be searched for simultaneously. 249Bk + 50Ti → 299119* → no atoms The experiment was originally planned to continue to November 2012, but was stopped early to make use of the 249Bk target to confirm the synthesis of tennessine (thus changing the projectiles to 48Ca). This reaction between 249Bk and 50Ti was predicted to be the most favorable practical reaction for formation of element 119, as it is rather asymmetrical, though also somewhat cold. (The reaction between 254Es and 48Ca would be superior, but preparing milligram quantities of 254Es for a target is difficult.) Nevertheless, the necessary change from the "silver bullet" 48Ca to 50Ti divides the expected yield of element 119 by about twenty, as the yield is strongly dependent on the asymmetry of the fusion reaction. Due to the predicted short half-lives, the GSI team used new "fast" electronics capable of registering decay events within microseconds. No atoms of element 119 were identified, implying a limiting cross section of 70 fb. The predicted actual cross section is around 40 fb, which is at the limits of current technology. The team at RIKEN in Wakō, Japan began bombarding curium-248 targets with a vanadium-51 beam in January 2018 to search for element 119. Curium was chosen as a target, rather than heavier berkelium or californium, as these heavier targets are difficult to prepare. The 248Cm targets were provided by Oak Ridge National Laboratory. RIKEN developed a high-intensity vanadium beam. The experiment began at a cyclotron while RIKEN upgraded its linear accelerators; the upgrade was completed in 2020. Bombardment may be continued with both machines until the first event is observed; the experiment is currently running intermittently for at least 100 days per year. The RIKEN team's efforts are being financed by the Emperor of Japan. The team at the JINR plans to attempt synthesis of element 119 in the future, probably using the 243Am + 54Cr reaction, but a precise timeframe has not been publicly released. Unbinilium (E120). Following their success in obtaining oganesson by the reaction between 249Cf and 48Ca in 2006, the team at the Joint Institute for Nuclear Research (JINR) in Dubna started similar experiments in March–April 2007, in hope of creating element 120 (unbinilium) from nuclei of 58Fe and 244Pu. Isotopes of unbinilium are predicted to have alpha decay half-lives of the order of microseconds. Initial analysis revealed that no atoms of element 120 were produced, providing a limit of 400 fb for the cross section at the energy studied. 244Pu + 58Fe → 302120* → no atoms The Russian team planned to upgrade their facilities before attempting the reaction again. In April 2007, the team at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany, attempted to create element 120 using uranium-238 and nickel-64: 238U + 64Ni → 302120* → no atoms No atoms were detected, providing a limit of 1.6 pb for the cross section at the energy provided. 
The GSI repeated the experiment with higher sensitivity in three separate runs in April–May 2007, January–March 2008, and September–October 2008, all with negative results, reaching a cross section limit of 90 fb. In June–July 2010, and again in 2011, after upgrading their equipment to allow the use of more radioactive targets, scientists at the GSI attempted the more asymmetrical fusion reaction: 248Cm + 54Cr → 302120* → no atoms It was expected that the change in reaction would quintuple the probability of synthesizing element 120, as the yield of such reactions is strongly dependent on their asymmetry. Three correlated signals were observed that matched the predicted alpha decay energies of 299120 and its daughter 295Og, as well as the experimentally known decay energy of its granddaughter 291Lv. However, the lifetimes of these possible decays were much longer than expected, and the results could not be confirmed. In August–October 2011, a different team at the GSI using the TASCA facility tried a new, even more asymmetrical reaction: 249Cf + 50Ti → 299120* → no atoms This was also tried unsuccessfully the next year during the aforementioned attempt to make element 119 in the 249Bk+50Ti reaction, as 249Bk decays to 249Cf. Because of its asymmetry, the reaction between 249Cf and 50Ti was predicted to be the most favorable practical reaction for synthesizing unbinilium, although it is also somewhat cold. No unbinilium atoms were identified, implying a limiting cross-section of 200 fb. Jens Volker Kratz predicted the actual maximum cross-section for producing element 120 by any of these reactions to be around 0.1 fb; in comparison, the world record for the smallest cross section of a successful reaction was 30 fb for the reaction 209Bi(70Zn,n)278Nh, and Kratz predicted a maximum cross-section of 20 fb for producing the neighbouring element 119. If these predictions are accurate, then synthesizing element 119 would be at the limits of current technology, and synthesizing element 120 would require new methods. In May 2021, the JINR announced plans to investigate the 249Cf+50Ti reaction in their new facility. However, the 249Cf target would have had to be made by the Oak Ridge National Laboratory in the United States, and after the Russian invasion of Ukraine began in February 2022, collaboration between the JINR and other institutes completely ceased due to sanctions. Consequently, the JINR now plans to try the 248Cm+54Cr reaction instead. A preparatory experiment for the use of 54Cr projectiles was conducted in late 2023, successfully synthesising 288Lv in the 238U+54Cr reaction, and the hope is for experiments to synthesise element 120 to begin by 2025. Starting from 2022, plans have also been made to use the 88-inch cyclotron at the Lawrence Berkeley National Laboratory (LBNL) in Berkeley, California, United States to attempt to make new elements using 50Ti projectiles. The plan is to first test them on a plutonium target to create livermorium (element 116) in late 2023. If that is successful, an attempt to make element 120 in the 249Cf+50Ti reaction will begin, probably in 2024 at the earliest. Unbiunium (E121). The synthesis of element 121 (unbiunium) was first attempted in 1977 by bombarding a target of uranium-238 with copper-65 ions at the Gesellschaft für Schwerionenforschung in Darmstadt, Germany: 238U + 65Cu → 303121* → no atoms No atoms were identified. Unbibium (E122). The first attempts to synthesize element 122 (unbibium) were performed in 1972 by Flerov et al. 
at the Joint Institute for Nuclear Research (JINR), using the heavy-ion induced hot fusion reactions: 238U + 66,68Zn → 304, 306122* → no atoms These experiments were motivated by early predictions on the existence of an island of stability at "N" = 184 and "Z" > 120. No atoms were detected and a yield limit of 5 nb (5,000 pb) was measured. Current results (see flerovium) have shown that the sensitivity of these experiments was too low by at least 3 orders of magnitude. In 2000, the Gesellschaft für Schwerionenforschung (GSI) Helmholtz Center for Heavy Ion Research performed a very similar experiment with much higher sensitivity: 238U + 70Zn → 308122* → no atoms These results indicate that the synthesis of such heavier elements remains a significant challenge and further improvements of beam intensity and experimental efficiency are required. The sensitivity should be increased to 1 fb in the future for better quality results. Another unsuccessful attempt to synthesize element 122 was carried out in 1978 at the GSI Helmholtz Center, where a natural erbium target was bombarded with xenon-136 ions: natEr + 136Xe → 298, 300, 302, 303, 304, 306122* → no atoms In particular, the reaction between 170Er and 136Xe was expected to yield alpha-emitters with half-lives of microseconds that would decay down to isotopes of flerovium with half-lives perhaps increasing up to several hours, as flerovium is predicted to lie near the center of the island of stability. After twelve hours of irradiation, nothing was found in this reaction. Following a similar unsuccessful attempt to synthesize element 121 from 238U and 65Cu, it was concluded that half-lives of superheavy nuclei must be less than one microsecond or the cross sections are very small. More recent research into synthesis of superheavy elements suggests that both conclusions are true. The two attempts in the 1970s to synthesize element 122 were both prompted by research investigating whether superheavy elements could potentially be naturally occurring. Several experiments studying the fission characteristics of various superheavy compound nuclei such as 306122* were performed between 2000 and 2004 at the Flerov Laboratory of Nuclear Reactions. Two nuclear reactions were used, namely 248Cm + 58Fe and 242Pu + 64Ni. The results reveal how superheavy nuclei fission predominantly by expelling closed shell nuclei such as 132Sn ("Z" = 50, "N" = 82). It was also found that the yield for the fusion-fission pathway was similar between 48Ca and 58Fe projectiles, suggesting a possible future use of 58Fe projectiles in superheavy element formation. Unbiquadium (E124). Scientists at GANIL (Grand Accélérateur National d'Ions Lourds) attempted to measure the direct and delayed fission of compound nuclei of elements with "Z" = 114, 120, and 124 in order to probe shell effects in this region and to pinpoint the next spherical proton shell. This is because having complete nuclear shells (or, equivalently, having a magic number of protons or neutrons) would confer more stability on the nuclei of such superheavy elements, thus moving closer to the island of stability. In 2006, with full results published in 2008, the team provided results from a reaction involving the bombardment of a natural germanium target with uranium ions: 238U + natGe → 308, 310, 311, 312, 314124* → "fission" The team reported that they had been able to identify compound nuclei fissioning with half-lives > 10−18 s. 
This result suggests a strong stabilizing effect at "Z" = 124 and points to the next proton shell at "Z" > 120, not at "Z" = 114 as previously thought. A compound nucleus is a loose combination of nucleons that have not arranged themselves into nuclear shells yet. It has no internal structure and is held together only by the collision forces between the target and projectile nuclei. It is estimated that it requires around 10−14 s for the nucleons to arrange themselves into nuclear shells, at which point the compound nucleus becomes a nuclide, and this number is used by IUPAC as the minimum half-life a claimed isotope must have to potentially be recognised as being discovered. Thus, the GANIL experiments do not count as a discovery of element 124. The fission of the compound nucleus 312124 was also studied in 2006 at the tandem ALPI heavy-ion accelerator at the Laboratori Nazionali di Legnaro (Legnaro National Laboratories) in Italy: 232Th + 80Se → 312124* → "fission" Similarly to previous experiments conducted at the JINR (Joint Institute for Nuclear Research), fission fragments clustered around doubly magic nuclei such as 132Sn ("Z" = 50, "N" = 82), revealing a tendency for superheavy nuclei to expel such doubly magic nuclei in fission. The average number of neutrons per fission from the 312124 compound nucleus (relative to lighter systems) was also found to increase, confirming that the trend of heavier nuclei emitting more neutrons during fission continues into the superheavy mass region. Unbipentium (E125). The first and only attempt to synthesize element 125 (unbipentium) was conducted in Dubna in 1970–1971 using zinc ions and an americium-243 target: 243Am + 66,68Zn → 309, 311125* → no atoms No atoms were detected, and a cross section limit of 5 nb was determined. This experiment was motivated by the possibility of greater stability for nuclei around "Z" ~ 126 and "N" ~ 184, though more recent research suggests the island of stability may instead lie at a lower atomic number (such as copernicium, "Z" = 112), and the synthesis of heavier elements such as element 125 will require more sensitive experiments. Unbihexium (E126). The first and only attempt to synthesize element 126 (unbihexium), which was unsuccessful, was performed in 1971 at CERN (European Organization for Nuclear Research) by René Bimbot and John M. Alexander using the hot fusion reaction: 232Th + 84Kr → 316126* → no atoms High-energy (13–15 MeV) alpha particles were observed and taken as possible evidence for the synthesis of element 126. Subsequent unsuccessful experiments with higher sensitivity suggest that the 10 mb sensitivity of this experiment was too low; hence, the formation of element 126 nuclei in this reaction is highly unlikely. Unbiseptium (E127). The first and only attempt to synthesize element 127 (unbiseptium), which was unsuccessful, was performed in 1978 at the UNILAC accelerator at the GSI Helmholtz Center, where a natural tantalum target was bombarded with xenon-136 ions: natTa + 136Xe → 316, 317127* → no atoms Searches in nature. A study in 1976 by a group of American researchers from several universities proposed that primordial superheavy elements, mainly livermorium, elements 124, 126, and 127, could be a cause of unexplained radiation damage (particularly radiohalos) in minerals. This prompted many researchers to search for them in nature from 1976 to 1983. 
A group led by Tom Cahill, a professor at the University of California at Davis, claimed in 1976 that they had detected alpha particles and X-rays with the right energies to cause the damage observed, supporting the presence of these elements. In particular, the presence of long-lived (on the order of 109 years) nuclei of elements 124 and 126, along with their decay products, at an abundance of 10−11 relative to their possible congeners uranium and plutonium, was conjectured. Others claimed that none had been detected, and questioned the proposed characteristics of primordial superheavy nuclei. In particular, they cited that any such superheavy nuclei must have a closed neutron shell at "N" = 184 or "N" = 228, and this necessary condition for enhanced stability only exists in neutron deficient isotopes of livermorium or neutron rich isotopes of the other elements that would not be beta-stable unlike most naturally occurring isotopes. This activity was also proposed to be caused by nuclear transmutations in natural cerium, raising further ambiguity upon this claimed observation of superheavy elements. On April 24, 2008, a group led by Amnon Marinov at the Hebrew University of Jerusalem claimed to have found single atoms of 292122 in naturally occurring thorium deposits at an abundance of between 10−11 and 10−12 relative to thorium. The claim of Marinov et al. was criticized by a part of the scientific community. Marinov claimed that he had submitted the article to the journals "Nature" and "Nature Physics" but both turned it down without sending it for peer review. The 292122 atoms were claimed to be superdeformed or hyperdeformed isomers, with a half-life of at least 100 million years. A criticism of the technique, previously used in purportedly identifying lighter thorium isotopes by mass spectrometry, was published in "Physical Review C" in 2008. A rebuttal by the Marinov group was published in "Physical Review C" after the published comment. A repeat of the thorium experiment using the superior method of Accelerator Mass Spectrometry (AMS) failed to confirm the results, despite a 100-fold better sensitivity. This result throws considerable doubt on the results of the Marinov collaboration with regard to their claims of long-lived isotopes of thorium, roentgenium and element 122. It is still possible that traces of unbibium might only exist in some thorium samples, although this is unlikely. The possible extent of primordial superheavy elements on Earth today is uncertain. Even if they are confirmed to have caused the radiation damage long ago, they might now have decayed to mere traces, or even be completely gone. It is also uncertain if such superheavy nuclei may be produced naturally at all, as spontaneous fission is expected to terminate the r-process responsible for heavy element formation between mass number 270 and 290, well before elements beyond 120 may be formed. A recent hypothesis tries to explain the spectrum of Przybylski's Star by naturally occurring flerovium and element 120. Predicted properties of eighth-period elements. Element 118, oganesson, is the heaviest element that has been synthesized. The next two elements, elements 119 and 120, should form an 8s series and be an alkali and alkaline earth metal, respectively. Beyond element 120, the superactinide series is expected to begin, when the 8s electrons and the filling of the 8p1/2, 7d3/2, 6f, and 5g subshells determine the chemistry of these elements. 
Complete and accurate CCSD calculations are not available for elements beyond 122 because of the extreme complexity of the situation: the 5g, 6f, and 7d orbitals should have about the same energy level, and in the region of element 160, the 9s, 8p3/2, and 9p1/2 orbitals should also be about equal in energy. This will cause the electron shells to mix so that the block concept no longer applies very well, and will also result in novel chemical properties that will make positioning some of these elements in a periodic table very difficult. Chemical and physical properties. Elements 119 and 120. The first two elements of period 8 will be ununennium and unbinilium, elements 119 and 120. Their electron configurations should have the 8s orbital being filled. This orbital is relativistically stabilized and contracted; thus, elements 119 and 120 should be more like rubidium and strontium than their immediate neighbours above, francium and radium. Another effect of the relativistic contraction of the 8s orbital is that the atomic radii of these two elements should be about the same as those of francium and radium. They should behave like normal alkali and alkaline earth metals (albeit less reactive than their immediate vertical neighbours), normally forming +1 and +2 oxidation states, respectively, but the relativistic destabilization of the 7p3/2 subshell and the relatively low ionization energies of the 7p3/2 electrons should make higher oxidation states like +3 and +4 (respectively) possible as well. Superactinides. The superactinides may be considered to range from elements 121 through 157, which can be classified as the 5g and 6f elements of the eighth period, together with the first 7d element. In the superactinide series, the 7d3/2, 8p1/2, 6f5/2 and 5g7/2 shells should all fill simultaneously. This creates very complicated situations, so much so that complete and accurate CCSD calculations have been done only for elements 121 and 122. The first superactinide, unbiunium (element 121), should be similar to lanthanum and actinium: its main oxidation state should be +3, although the closeness of the valence subshells' energy levels may permit higher oxidation states, just as in elements 119 and 120. Relativistic stabilization of the 8p subshell should result in a ground-state 8s28p1 valence electron configuration for element 121, in contrast to the ds2 configurations of lanthanum and actinium; nevertheless, this anomalous configuration does not appear to affect its calculated chemistry, which remains similar to that of actinium. Its first ionization energy is predicted to be 429.4 kJ/mol, which would be lower than those of all known elements except for the alkali metals potassium, rubidium, caesium, and francium: this value is even lower than that of the period 8 alkali metal ununennium (463.1 kJ/mol). Similarly, the next superactinide, unbibium (element 122), may be similar to cerium and thorium, with a main oxidation state of +4, but would have a ground-state 7d18s28p1 or 8s28p2 valence electron configuration, unlike thorium's 6d27s2 configuration. Hence, its first ionization energy would be smaller than thorium's (Th: 6.3 eV; element 122: 5.6 eV) because of the greater ease of ionizing unbibium's 8p1/2 electron than thorium's 6d electron. 
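Ionization energies in this section are quoted both in kJ/mol (for elements 119 and 121) and in electronvolts (for thorium and element 122). For comparison on a common scale, 1 eV per atom corresponds to about 96.485 kJ/mol; the brief Python sketch below (the helper names are illustrative) restates the figures given above:

```python
EV_TO_KJ_PER_MOL = 96.485  # 1 eV per particle, scaled by Avogadro's number, in kJ/mol

def kj_per_mol_to_ev(kj_per_mol: float) -> float:
    return kj_per_mol / EV_TO_KJ_PER_MOL

def ev_to_kj_per_mol(ev: float) -> float:
    return ev * EV_TO_KJ_PER_MOL

# First ionization energies quoted in the text, restated in the other unit:
print(round(kj_per_mol_to_ev(429.4), 2))  # element 121: ~4.45 eV
print(round(kj_per_mol_to_ev(463.1), 2))  # element 119: ~4.80 eV
print(round(ev_to_kj_per_mol(6.3)))       # thorium:     ~608 kJ/mol
print(round(ev_to_kj_per_mol(5.6)))       # element 122: ~540 kJ/mol
```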
The collapse of the 5g orbital itself is delayed until around element 125; the electron configurations of the 119-electron isoelectronic series are expected to be [Og]8s1 for elements 119 through 122, [Og]6f1 for elements 123 and 124, and [Og]5g1 for element 125 onwards. In the first few superactinides, the binding energies of the added electrons are predicted to be small enough that they can lose all their valence electrons; for example, unbihexium (element 126) could easily form a +8 oxidation state, and even higher oxidation states for the next few elements may be possible. Element 126 is also predicted to display a variety of other oxidation states: recent calculations have suggested a stable monofluoride 126F may be possible, resulting from a bonding interaction between the 5g orbital on element 126 and the 2p orbital on fluorine. Other predicted oxidation states include +2, +4, and +6; +4 is expected to be the most usual oxidation state of unbihexium. The superactinides from unbipentium (element 125) to unbiennium (element 129) are predicted to exhibit a +6 oxidation state and form hexafluorides, though 125F6 and 126F6 are predicted to be relatively weakly bound. The bond dissociation energies are expected to greatly increase at element 127 and even more so at element 129. This suggests a shift from strong ionic character in fluorides of element 125 to more covalent character, involving the 8p orbital, in fluorides of element 129. The bonding in these superactinide hexafluorides is mostly between the highest 8p subshell of the superactinide and the 2p subshell of fluorine, unlike how uranium uses its 5f and 6d orbitals for bonding in uranium hexafluoride. Despite the ability of early superactinides to reach high oxidation states, it has been calculated that the 5g electrons will be most difficult to ionize; the 1256+ and 1267+ ions are expected to bear a 5g1 configuration, similar to the 5f1 configuration of the Np6+ ion. Similar behavior is observed in the low chemical activity of the 4f electrons in lanthanides; this is a consequence of the 5g orbitals being small and deeply buried in the electron cloud. The presence of electrons in g-orbitals, which do not exist in the ground state electron configuration of any currently known element, should allow presently unknown hybrid orbitals to form and influence the chemistry of the superactinides in new ways, although the absence of "g" electrons in known elements makes predicting superactinide chemistry more difficult. In the later superactinides, the oxidation states should become lower. By element 132, the predominant most stable oxidation state will be only +6; this is further reduced to +3 and +4 by element 144, and at the end of the superactinide series it will be only +2 (and possibly even 0) because the 6f shell, which is being filled at that point, is deep inside the electron cloud and the 8s and 8p1/2 electrons are bound too strongly to be chemically active. The 5g shell should be filled at element 144 and the 6f shell at around element 154, and at this region of the superactinides the 8p1/2 electrons are bound so strongly that they are no longer active chemically, so that only a few electrons can participate in chemical reactions. Calculations by Fricke et al. predict that at element 154, the 6f shell is full and there are no d- or other electron wave functions outside the chemically inactive 8s and 8p1/2 shells. This may cause element 154 to be rather unreactive with noble gas-like properties. 
Calculations by Pyykkö nonetheless expect that at element 155, the 6f shell is still chemically ionisable: 1553+ should have a full 6f shell, and the fourth ionisation potential should be between those of terbium and dysprosium, both of which are known in the +4 state. Similarly to the lanthanide and actinide contractions, there should be a superactinide contraction in the superactinide series where the ionic radii of the superactinides are smaller than expected. In the lanthanides, the contraction is about 4.4 pm per element; in the actinides, it is about 3 pm per element. The contraction is larger in the lanthanides than in the actinides due to the greater localization of the 4f wave function as compared to the 5f wave function. Comparisons with the wave functions of the outer electrons of the lanthanides, actinides, and superactinides lead to a prediction of a contraction of about 2 pm per element in the superactinides; although this is smaller than the contractions in the lanthanides and actinides, its total effect is larger due to the fact that 32 electrons are filled in the deeply buried 5g and 6f shells, instead of just 14 electrons being filled in the 4f and 5f shells in the lanthanides and actinides, respectively. Pekka Pyykkö divides these superactinides into three series: a 5g series (elements 121 to 138), an 8p1/2 series (elements 139 to 140), and a 6f series (elements 141 to 155), also noting that there would be a great deal of overlapping between energy levels and that the 6f, 7d, or 8p1/2 orbitals could well also be occupied in the early superactinide atoms or ions. He also expects that they would behave more like "superlanthanides", in the sense that the 5g electrons would mostly be chemically inactive, similarly to how only one or two 4f electrons in each lanthanide are ever ionized in chemical compounds. He also predicted that the possible oxidation states of the superactinides might rise very high in the 6f series, to values such as +12 in element 148. Andrey Kulsha has called the thirty-six elements 121 to 156 "ultransition" elements and has proposed to split them into two series of eighteen each, one from elements 121 to 138 and another from elements 139 to 156. The first would be analogous to the lanthanides, with oxidation states mainly ranging from +4 to +6, as the filling of the 5g shell dominates and neighbouring elements are very similar to each other, creating an analogy to uranium, neptunium, and plutonium. The second would be analogous to the actinides: at the beginning (around elements in the 140s) very high oxidation states would be expected as the 6f shell rises above the 7d one, but after that the typical oxidation states would lower and in elements in the 150s onwards the 8p1/2 electrons would stop being chemically active. Because the two rows are separated by the addition of a complete 5g18 subshell, they could be considered analogues of each other as well. As an example from the late superactinides, element 156 is expected to exhibit mainly the +2 oxidation state, on account of its electron configuration with easily removed 7d2 electrons over a stable [Og]5g186f148s28p core. It can thus be considered a heavier congener of nobelium, which likewise has a pair of easily removed 7s2 electrons over a stable [Rn]5f14 core, and is usually in the +2 state (strong oxidisers are required to obtain nobelium in the +3 state). Its first ionization energy should be about 400 kJ/mol and its metallic radius approximately 170 picometers. 
With a relative atomic mass of around 445 u, it should be a very heavy metal with a density of around 26 g/cm3. Elements 157 to 166. The 7d transition metals in period 8 are expected to be elements 157 to 166. Although the 8s and 8p1/2 electrons are bound so strongly in these elements that they should not be able to take part in any chemical reactions, the 9s and 9p1/2 levels are expected to be readily available for hybridization. These 7d elements should be similar to the 4d elements yttrium through cadmium. In particular, element 164 with a 7d109s0 electron configuration shows clear analogies with palladium with its 4d105s0 electron configuration. The noble metals of this series of transition metals are not expected to be as noble as their lighter homologues, due to the absence of an outer "s" shell for shielding and also because the 7d shell is strongly split into two subshells due to relativistic effects. This causes the first ionization energies of the 7d transition metals to be smaller than those of their lighter congeners. Theoretical interest in the chemistry of unhexquadium is largely motivated by theoretical predictions that it, especially the isotopes 472164 and 482164 (with 164 protons and 308 or 318 neutrons), would be at the center of a hypothetical second island of stability (the first being centered on copernicium, particularly the isotopes 291Cn, 293Cn, and 296Cn which are expected to have half-lives of centuries or millennia). Calculations predict that the 7d electrons of element 164 (unhexquadium) should participate very readily in chemical reactions, so that it should be able to show stable +6 and +4 oxidation states in addition to the normal +2 state in aqueous solutions with strong ligands. Element 164 should thus be able to form compounds like 164(CO)4, 164(PF3)4 (both tetrahedral like the corresponding palladium compounds), and 164(CN)22- (linear), which is very different behavior from that of lead, which element 164 would be a heavier homologue of if not for relativistic effects. Nevertheless, the divalent state would be the main one in aqueous solution (although the +4 and +6 states would be possible with stronger ligands), and unhexquadium(II) should behave more similarly to lead than unhexquadium(IV) and unhexquadium(VI). Element 164 is expected to be a soft Lewis acid and have Ahrlands softness parameter close to 4 eV. It should be at most moderately reactive, having a first ionization energy that should be around 685 kJ/mol, comparable to that of molybdenum. Due to the lanthanide, actinide, and superactinide contractions, element 164 should have a metallic radius of only 158 pm, very close to that of the much lighter magnesium, despite its expected atomic weight of around 474 u which is about 19.5 times the atomic weight of magnesium. This small radius and high weight cause it to be expected to have an extremely high density of around 46 g·cm−3, over twice that of osmium, currently the most dense element known, at 22.61 g·cm−3; element 164 should be the second most dense element in the first 172 elements in the periodic table, with only its neighbor unhextrium (element 163) being more dense (at 47 g·cm−3). Metallic element 164 should have a very large cohesive energy (enthalpy of crystallization) due to its covalent bonds, most probably resulting in a high melting point. In the metallic state, element 164 should be quite noble and analogous to palladium and platinum. Fricke et al. 
suggested some formal similarities to oganesson, as both elements have closed-shell configurations and similar ionisation energies, although they note that while oganesson would be a very bad noble gas, element 164 would be a good noble metal. Elements 165 (unhexpentium) and 166 (unhexhexium), the last two 7d metals, should behave similarly to alkali and alkaline earth metals when in the +1 and +2 oxidation states, respectively. The 9s electrons should have ionization energies comparable to those of the 3s electrons of sodium and magnesium, due to relativistic effects causing the 9s electrons to be much more strongly bound than non-relativistic calculations would predict. Elements 165 and 166 should normally exhibit the +1 and +2 oxidation states, respectively, although the ionization energies of the 7d electrons are low enough to allow higher oxidation states like +3 for element 165. The oxidation state +4 for element 166 is less likely, creating a situation similar to the lighter elements in groups 11 and 12 (particularly gold and mercury). As with mercury but not copernicium, ionization of element 166 to 1662+ is expected to result in a 7d10 configuration corresponding to the loss of the s-electrons but not the d-electrons, making it more analogous to the lighter "less relativistic" group 12 elements zinc, cadmium, and mercury. Elements 167 to 172. The next six elements on the periodic table are expected to be the last main-group elements in their period, and are likely to be similar to the 5p elements indium through xenon. In elements 167 to 172, the 9p1/2 and 8p3/2 shells will be filled. Their energy eigenvalues are so close together that they behave as one combined p-subshell, similar to the non-relativistic 2p and 3p subshells. Thus, the inert-pair effect does not occur and the most common oxidation states of elements 167 to 170 are expected to be +3, +4, +5, and +6, respectively. Element 171 (unseptunium) is expected to show some similarities to the halogens, showing various oxidation states ranging from −1 to +7, although its physical properties are expected to be closer to that of a metal. Its electron affinity is expected to be 3.0 eV, allowing it to form H171, analogous to a hydrogen halide. The 171− ion is expected to be a soft base, comparable to iodide (I−). Element 172 (unseptbium) is expected to be a noble gas with chemical behaviour similar to that of xenon, as their ionization energies should be very similar (Xe, 1170.4 kJ/mol; element 172, 1090 kJ/mol). The only main difference between them is that element 172, unlike xenon, is expected to be a liquid or a solid at standard temperature and pressure due to its much higher atomic weight. Unseptbium is expected to be a strong Lewis acid, forming fluorides and oxides, similarly to its lighter congener xenon. Because of some analogy of elements 165–172 to periods 2 and 3, Fricke et al. considered them to form a ninth period of the periodic table, while the eighth period was taken by them to end at the noble metal element 164. This ninth period would be similar to the second and third period in having no transition metals. That being said, the analogy is incomplete for elements 165 and 166; although they do start a new s-shell (9s), this is above a d-shell, making them chemically more similar to groups 11 and 12. Beyond element 172. Beyond element 172, there is the potential to fill the 6g, 7f, 8d, 10s, 10p1/2, and perhaps 6h11/2 shells. 
These electrons would be very loosely bound, potentially rendering extremely high oxidation states reachable, though the electrons would become more tightly bound as the ionic charge rises. Thus, there will probably be another very long transition series, like the superactinides. In element 173 (unsepttrium), the outermost electron might enter the 6g7/2, 9p3/2, or 10s subshells. Because spin–orbit interactions would create a very large energy gap between these and the 8p3/2 subshell, this outermost electron is expected to be very loosely bound and very easily lost to form a 173+ cation. As a result, element 173 is expected to behave chemically like an alkali metal, and one that might be far more reactive than even caesium (francium and element 119 being less reactive than caesium due to relativistic effects): the calculated ionisation energy for element 173 is 3.070 eV, compared to the experimentally known 3.894 eV for caesium. Element 174 (unseptquadium) may add an 8d electron and form a closed-shell 1742+ cation; its calculated ionisation energy is 3.614 eV. Element 184 (unoctquadium) was significantly targeted in early predictions, as it was originally speculated that 184 would be a proton magic number: it is predicted to have an electron configuration of [172] 6g5 7f4 8d3, with at least the 7f and 8d electrons chemically active. Its chemical behaviour is expected to be similar to uranium and neptunium, as further ionisation past the +6 state (corresponding to removal of the 6g electrons) is likely to be unprofitable; the +4 state should be most common in aqueous solution, with +5 and +6 reachable in solid compounds. End of the periodic table. The number of physically possible elements is unknown. A low estimate is that the periodic table may end soon after the island of stability, which is expected to center on "Z" = 126, as the extension of the periodic and nuclide tables is restricted by the proton and the neutron drip lines and stability toward alpha decay and spontaneous fission. One calculation by Y. Gambhir "et al.", analyzing nuclear binding energy and stability in various decay channels, suggests a limit to the existence of bound nuclei at "Z" = 146. Other predictions of an end to the periodic table include "Z" = 128 (John Emsley) and "Z" = 155 (Albert Khazan). Elements above the atomic number 137. It is a "folk legend" among physicists that Richard Feynman suggested that neutral atoms could not exist for atomic numbers greater than "Z" = 137, on the grounds that the relativistic Dirac equation predicts that the ground-state energy of the innermost electron in such an atom would be an imaginary number. Here, the number 137 arises as the inverse of the fine-structure constant. By this argument, neutral atoms cannot exist beyond atomic number 137, and therefore a periodic table of elements based on electron orbitals breaks down at this point. However, this argument presumes that the atomic nucleus is pointlike. A more accurate calculation must take into account the small, but nonzero, size of the nucleus, which is predicted to push the limit further to "Z" ≈ 173. Bohr model. The Bohr model exhibits difficulty for atoms with atomic number greater than 137, for the speed of an electron in a 1s electron orbital, "v", is given by formula_0 where "Z" is the atomic number, and "α" is the fine-structure constant, a measure of the strength of electromagnetic interactions. 
Under this approximation, any element with an atomic number of greater than 137 would require 1s electrons to be traveling faster than "c", the speed of light. Hence, the non-relativistic Bohr model is inaccurate when applied to such an element. Relativistic Dirac equation. The relativistic Dirac equation gives the ground state energy as formula_1 where "m" is the rest mass of the electron. For "Z" &gt; 137, the wave function of the Dirac ground state is oscillatory, rather than bound, and there is no gap between the positive and negative energy spectra, as in the Klein paradox. More accurate calculations taking into account the effects of the finite size of the nucleus indicate that the binding energy first exceeds 2"mc"2 for "Z" &gt; "Z"cr probably between 168 and 172. For "Z" &gt; "Z"cr, if the innermost orbital (1s) is not filled, the electric field of the nucleus will pull an electron out of the vacuum, resulting in the spontaneous emission of a positron. This diving of the 1s subshell into the negative continuum has often been taken to constitute an "end" to the periodic table, but in fact it does not impose such a limit, as such resonances can be interpreted as Gamow states. The accurate description of such states in a multi-electron system, needed to extend calculations and the periodic table past "Z"cr ≈ 172, are, however, still open problems. Atoms with atomic numbers above "Z"cr ≈ 172 have been termed "supercritical" atoms. Supercritical atoms cannot be totally ionised because their 1s subshell would be filled by spontaneous pair creation in which an electron-positron pair is created from the negative continuum, with the electron being bound and the positron escaping. However, the strong field around the atomic nucleus is restricted to a very small region of space, so that the Pauli exclusion principle forbids further spontaneous pair creation once the subshells that have dived into the negative continuum are filled. Elements 173–184 have been termed "weakly supercritical" atoms as for them only the 1s shell has dived into the negative continuum; the 2p1/2 shell is expected to join around element 185 and the 2s shell around element 245. Experiments have so far not succeeded in detecting spontaneous pair creation from assembling supercritical charges through the collision of heavy nuclei (e.g. colliding lead with uranium to momentarily give an effective "Z" of 174; uranium with uranium gives effective "Z" = 184 and uranium with californium gives effective "Z" = 190). Even though passing "Z"cr does not mean elements can no longer exist, the increasing concentration of the 1s density close to the nucleus would likely make these electrons more vulnerable to "K" electron capture as "Z"cr is approached. For such heavy elements, these 1s electrons would likely spend a significant fraction of time so close to the nucleus that they are actually inside it. This may pose another limit to the periodic table. Because of the factor of "m", muonic atoms become supercritical at a much larger atomic number of around 2200, as muons are about 207 times as heavy as electrons. Quark matter. It has also been posited that in the region beyond "A" &gt; 300, an entire "continent of stability" consisting of a hypothetical phase of stable quark matter, comprising freely flowing up and down quarks rather than quarks bound into protons and neutrons, may exist. 
Such a form of matter is theorized to be a ground state of baryonic matter with a greater binding energy per baryon than nuclear matter, favoring the decay of nuclear matter beyond this mass threshold into quark matter. If this state of matter exists, it could possibly be synthesized in the same fusion reactions leading to normal superheavy nuclei, and would be stabilized against fission as a consequence of its stronger binding that is enough to overcome Coulomb repulsion. Calculations published in 2020 suggest stability of up-down quark matter (udQM) nuggets against conventional nuclei beyond "A" ~ 266, and also show that udQM nuggets become supercritical earlier ("Z"cr ~ 163, "A" ~ 609) than conventional nuclei ("Z"cr ~ 177, "A" ~ 480). Nuclear properties. Magic numbers and the island of stability. The stability of nuclei decreases greatly with the increase in atomic number after curium, element 96, so that all isotopes with an atomic number above 101 decay radioactively with a half-life under a day. No elements with atomic numbers above 82 (after lead) have stable isotopes. Nevertheless, because of reasons not very well understood yet, there is a slight increased nuclear stability around atomic numbers 110–114, which leads to the appearance of what is known in nuclear physics as the "island of stability". This concept, proposed by University of California professor Glenn Seaborg, explains why superheavy elements last longer than predicted. Calculations according to the Hartree–Fock–Bogoliubov method using the non-relativistic Skyrme interaction have proposed "Z" = 126 as a closed proton shell. In this region of the periodic table, "N" = 184, "N" = 196, and "N" = 228 have been suggested as closed neutron shells. Therefore, the isotopes of most interest are 310126, 322126, and 354126, for these might be considerably longer-lived than other isotopes. Element 126, having a magic number of protons, is predicted to be more stable than other elements in this region, and may have nuclear isomers with very long half-lives. It is also possible that the island of stability is instead centered at 306122, which may be spherical and doubly magic. Probably, the island of stability occurs around "Z" = 114–126 and "N" = 184, with lifetimes probably around hours to days. Beyond the shell closure at "N" = 184, spontaneous fission lifetimes should drastically drop below 10−15 seconds – too short for a nucleus to obtain an electron cloud and participate in any chemistry. That being said, such lifetimes are very model-dependent, and predictions range across many orders of magnitude. Taking nuclear deformation and relativistic effects into account, an analysis of single-particle levels predicts new magic numbers for superheavy nuclei at "Z" = 126, 138, 154, and 164 and "N" = 228, 308, and 318. Therefore, in addition to the island of stability centered at 291Cn, 293Cn, and 298Fl, further islands of stability may exist around the doubly magic 354126 as well as 472164 or 482164. These nuclei are predicted to be beta-stable and decay by alpha emission or spontaneous fission with relatively long half-lives, and confer additional stability on neighboring "N" = 228 isotones and elements 152–168, respectively. On the other hand, the same analysis suggests that proton shell closures may be relatively weak or even nonexistent in some cases such as 354126, meaning that such nuclei might not be doubly magic and stability will instead be primarily determined by strong neutron shell closures. 
Additionally, due to the enormously greater forces of electromagnetic repulsion that must be overcome by the strong force at the second island ("Z" = 164), it is possible that nuclei around this region only exist as resonances and cannot stay together for a meaningful amount of time. It is also possible that some of the superactinides between these series may not actually exist because they are too far from both islands, in which case the periodic table might end around "Z" = 130. Interestingly, the area of elements 121–156 where periodicity is in abeyance is quite similar to the gap between the two islands. Beyond element 164, the fissility line defining the limit of stability with respect to spontaneous fission may converge with the neutron drip line, posing a limit to the existence of heavier elements. Nevertheless, further magic numbers have been predicted at "Z" = 210, 274, and 354 and "N" = 308, 406, 524, 644, and 772, with two beta-stable doubly magic nuclei found at 616210 and 798274; the same calculation method reproduced the predictions for 298Fl and 472164. (The doubly magic nuclei predicted for "Z" = 354 are beta-unstable, with 998354 being neutron-deficient and 1126354 being neutron-rich.) Although additional stability toward alpha decay and fission are predicted for 616210 and 798274, with half-lives up to hundreds of microseconds for 616210, there will not exist islands of stability as significant as those predicted at "Z" = 114 and 164. As the existence of superheavy elements is very strongly dependent on stabilizing effects from closed shells, nuclear instability and fission will likely determine the end of the periodic table beyond these islands of stability. The International Union of Pure and Applied Chemistry (IUPAC) defines an element to exist if its lifetime is longer than 10−14 seconds, which is the time it takes for the nucleus to form an electron cloud. However, a nuclide is generally considered to exist if its lifetime is longer than about 10−22 seconds, which is the time it takes for nuclear structure to form. Consequently, it is possible that some "Z" values can only be realised in nuclides and that the corresponding elements do not exist. It is also possible that no further islands actually exist beyond 126, as the nuclear shell structure gets smeared out (as the electron shell structure already is expected to be around oganesson) and low-energy decay modes become readily available. In some regions of the table of nuclides, there are expected to be additional regions of stability due to non-spherical nuclei that have different magic numbers than spherical nuclei do; the egg-shaped 270Hs ("Z" = 108, "N" = 162) is one such deformed doubly magic nucleus. In the superheavy region, the strong Coulomb repulsion of protons may cause some nuclei, including isotopes of oganesson, to assume a bubble shape in the ground state with a reduced central density of protons, unlike the roughly uniform distribution inside most smaller nuclei. Such a shape would have a very low fission barrier, however. Even heavier nuclei in some regions, such as 342136 and 466156, may instead become toroidal or red blood cell-like in shape, with their own magic numbers and islands of stability, but they would also fragment easily. Predicted decay properties of undiscovered elements. As the main island of stability is thought to lie around 291Cn and 293Cn, undiscovered elements beyond oganesson may be very unstable and undergo alpha decay or spontaneous fission in microseconds or less. 
The exact region in which half-lives exceed one microsecond is unknown, though various models suggest that isotopes of elements heavier than unbinilium that may be produced in fusion reactions with available targets and projectiles will have half-lives under one microsecond and therefore may not be detected. It is consistently predicted that there will exist regions of stability at "N" = 184 and "N" = 228, and possibly also at "Z" ~ 124 and "N" ~ 198. These nuclei may have half-lives of a few seconds and undergo predominantly alpha decay and spontaneous fission, though minor beta-plus decay (or electron capture) branches may also exist. Outside these regions of enhanced stability, fission barriers are expected to drop significantly due to loss of stabilization effects, resulting in fission half-lives below 10−18 seconds, especially in even–even nuclei for which hindrance is even lower due to nucleon pairing. In general, alpha decay half-lives are expected to increase with neutron number, from nanoseconds in the most neutron-deficient isotopes to seconds closer to the beta-stability line. For nuclei with only a few neutrons more than a magic number, binding energy substantially drops, resulting in a break in the trend and shorter half-lives. The most neutron deficient isotopes of these elements may also be unbound and undergo proton emission. Cluster decay (heavy particle emission) has also been proposed as an alternative decay mode for some isotopes, posing yet another hurdle to identification of these elements. Electron configurations. The following are expected electron configurations of elements 119–174 and 184. The symbol [Og] indicates the probable electron configuration of oganesson (Z = 118), which is currently the last known element. The configurations of the elements in this table are written starting with [Og] because oganesson is expected to be the last prior element with a closed-shell (inert gas) configuration, 1s2 2s2 2p6 3s2 3p6 3d10 4s2 4p6 4d10 4f14 5s2 5p6 5d10 5f14 6s2 6p6 6d10 7s2 7p6. Similarly, the [172] in the configurations for elements 173, 174, and 184 denotes the likely closed-shell configuration of element 172. Beyond element 123, no complete calculations are available and hence the data in this table must be taken as tentative. In the case of element 123, and perhaps also heavier elements, several possible electron configurations are predicted to have very similar energy levels, such that it is very difficult to predict the ground state. All configurations that have been proposed (since it was understood that the Madelung rule probably stops working here) are included. The predicted block assignments up to 172 are Kulsha's, following the expected available valence orbitals. There is, however, not a consensus in the literature as to how the blocks should work after element 138. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
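As a numerical illustration of the Bohr-model speed formula and the point-nucleus Dirac ground-state energy quoted in the sections above, here is a short Python sketch (the constants, the keV unit choice, and the function names are assumptions of this example, not values from the text); for the 1s state (n = 1, j = 1/2) the Dirac expression reduces to "mc"2√(1 − "Z"2"α"2), which ceases to be real just above "Z" = 137.

from math import sqrt

ALPHA = 1 / 137.035999  # fine-structure constant
MC2_KEV = 510.99895     # electron rest energy m*c^2 in keV

def bohr_speed_ratio(Z):
    # Bohr-model speed of a 1s electron as a fraction of c: v/c = Z * alpha
    return Z * ALPHA

def dirac_1s_energy(Z):
    # point-nucleus Dirac ground state (n = 1, j = 1/2): mc^2 * sqrt(1 - (Z*alpha)^2)
    x = 1 - (Z * ALPHA) ** 2
    return MC2_KEV * sqrt(x) if x >= 0 else None  # None: the energy becomes imaginary

for Z in (1, 82, 118, 137, 138, 173):
    print(Z, round(bohr_speed_ratio(Z), 3), dirac_1s_energy(Z))

The breakdown at Z = 138 in this sketch is an artifact of the pointlike-nucleus assumption; as described above, a finite nuclear size pushes the critical charge to roughly Z ≈ 173.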
[ { "math_id": 0, "text": "v = Z \\alpha c \\approx \\frac{Z c}{137.04}" }, { "math_id": 1, "text": "E=\\frac{m c^2}{\\sqrt{1+\\dfrac{Z^2 \\alpha^2}{\\bigg({n-\\left(j+\\frac12\\right)+\\sqrt{\\left(j+\\frac12\\right)^2-Z^ 2\\alpha^2}\\bigg)}^2}}}," } ]
https://en.wikipedia.org/wiki?curid=68326
683285
Conway's LUX method for magic squares
Algorithm for creating magic squares Conway's LUX method for magic squares is an algorithm by John Horton Conway for creating magic squares of order 4"n"+2, where "n" is a natural number. Method. Start by creating a (2"n"+1)-by-(2"n"+1) square array consisting of "n"+1 rows of Ls, 1 row of Us, and "n"−1 rows of Xs (in that order, from top to bottom), and then exchange the U in the middle with the L above it. Each letter represents a 2x2 block of numbers in the finished square. Now replace each letter by four consecutive numbers, starting with 1, 2, 3, 4 in the centre square of the top row, and moving from block to block in the manner of the Siamese method: move up and right, wrapping around the edges, and move down whenever you are obstructed. Fill each 2x2 block according to the order prescribed by the letter: formula_0 Example. Let "n" = 2, so that the array is 5x5 and the final square is 10x10. Start with the L in the middle of the top row, move to the 4th X in the bottom row, then to the U at the end of the 4th row, then the L at the beginning of the 3rd row, etc.
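As an illustration of the method just described, here is a minimal Python sketch (the function name, the block-offset encoding, and the occupancy test are implementation choices, not part of the original description); it builds the letter array, walks it in Siamese order, and expands each letter into its 2x2 block according to the L, U, X patterns given in formula_0.

def lux_magic_square(n):
    # magic square of order N = 4n + 2 (n >= 1) by Conway's LUX method
    N, m = 4 * n + 2, 2 * n + 1
    # n+1 rows of L, one row of U, n-1 rows of X; swap the middle U with the L above it
    letters = [['L'] * m for _ in range(n + 1)] + [['U'] * m] + [['X'] * m for _ in range(n - 1)]
    letters[n + 1][n], letters[n][n] = letters[n][n], letters[n + 1][n]
    # where the values 1, 2, 3, 4 go inside a 2x2 block, as (row, column) offsets
    patterns = {'L': [(0, 1), (1, 0), (1, 1), (0, 0)],
                'U': [(0, 0), (1, 0), (1, 1), (0, 1)],
                'X': [(0, 0), (1, 1), (1, 0), (0, 1)]}
    square = [[0] * N for _ in range(N)]
    i, j = 0, n  # start in the centre of the top row of the letter array
    for k in range(m * m):
        for v, (di, dj) in enumerate(patterns[letters[i][j]], start=1):
            square[2 * i + di][2 * j + dj] = 4 * k + v
        ni, nj = (i - 1) % m, (j + 1) % m   # Siamese step: up and right, wrapping
        if square[2 * ni][2 * nj]:          # obstructed: that block is already filled
            ni, nj = (i + 1) % m, j         # move down instead
        i, j = ni, nj
    return square

for row in lux_magic_square(1):  # the order-6 square; every row, column and diagonal sums to 111
    print(row)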
[ { "math_id": 0, "text": "\\mathrm{L}: \\quad \\begin{smallmatrix}4&&1\\\\&\\swarrow&\\\\2&\\rightarrow&3\\end{smallmatrix} \\qquad \\mathrm{U}: \\quad \\begin{smallmatrix}1&&4\\\\\\downarrow&&\\uparrow\\\\2&\\rightarrow&3\\end{smallmatrix} \\qquad \\mathrm{X}:\\quad \\begin{smallmatrix}1&&4\\\\&\\searrow\\!\\!\\!\\!\\!\\!\\nearrow&\\\\3&&2\\end{smallmatrix}" } ]
https://en.wikipedia.org/wiki?curid=683285
6832938
Non-integer base of numeration
Number systems with a non-integer radix (base), such as base 2.5 A non-integer representation uses non-integer numbers as the radix, or base, of a positional numeral system. For a non-integer radix "β" > 1, the value of formula_0 is formula_1 The numbers "d""i" are non-negative integers less than "β". This is also known as a "β"-expansion, a notion introduced by Rényi (1957) and first studied in detail by Parry (1960). Every real number has at least one (possibly infinite) "β"-expansion. The set of all "β"-expansions that have a finite representation is a subset of the ring Z["β", "β"−1]. There are applications of "β"-expansions in coding theory and models of quasicrystals. Construction. "β"-expansions are a generalization of decimal expansions. While infinite decimal expansions are not unique (for example, 1.000... = 0.999...), all finite decimal expansions are unique. However, even finite "β"-expansions are not necessarily unique, for example "φ" + 1 = "φ"2 for "β" = "φ", the golden ratio. A canonical choice for the "β"-expansion of a given real number can be determined by the following greedy algorithm, essentially due to Rényi. Let "β" > 1 be the base and "x" a non-negative real number. Denote by ⌊"x"⌋ the floor function of "x" (that is, the greatest integer less than or equal to "x") and let {"x"} = "x" − ⌊"x"⌋ be the fractional part of "x". There exists an integer "k" such that "β""k" ≤ "x" < "β""k"+1. Set formula_2 and formula_3 For "k" − 1 ≥ "j" > −∞, put formula_4 In other words, the canonical "β"-expansion of "x" is defined by choosing the largest "d""k" such that "β""k""d""k" ≤ "x", then choosing the largest "d""k"−1 such that "β""k""d""k" + β"k"−1"d""k"−1 ≤ "x", and so on. Thus it chooses the lexicographically largest string representing "x". With an integer base, this defines the usual radix expansion for the number "x". This construction extends the usual algorithm to possibly non-integer values of "β". Conversion. Following the steps above, we can create a "β"-expansion for a real number formula_5 (the steps are identical for an formula_6, although n must first be multiplied by −1 to make it positive, then the result must be multiplied by −1 to make it negative again). First, we must define our k value (the exponent of the smallest power of β greater than n, which is also the number of digits in formula_7, where formula_8 is n written in base β). The k value for n and β can be written as: formula_9 After a k value is found, formula_8 can be written as d, where formula_10 for "k" − 1 ≥ "j" > −∞. The first k values of d appear to the left of the decimal place. This can also be written in the following pseudocode:
function toBase(n, b) {
    k = floor(log(b, n)) + 1
    precision = 8
    result = ""
    for (i = k - 1; i > -precision - 1; i--) {
        if (result.length == k) result += "."
        digit = floor((n / b^i) mod b)
        n -= digit * b^i
        result += digit
    }
    return result
}
Note that the above code is only valid for formula_11 and formula_5, as it does not convert each digit to its correct symbol or handle negative numbers. For example, if a digit's value is 10, it will be represented as 10 instead of A. Example implementation code. To base π.
function toBasePI(num, precision = 8) {
    let k = Math.floor(Math.log(num)/Math.log(Math.PI)) + 1;
    if (k < 0) k = 0;
    let digits = [];
    for (let i = k-1; i > (-1*precision)-1; i--) {
        let digit = Math.floor((num / Math.pow(Math.PI, i)) % Math.PI);
        num -= digit * Math.pow(Math.PI, i);
        digits.push(digit);
        if (num < 0.1**(precision+1) && i <= 0) break;
    }
    if (digits.length > k) digits.splice(k, 0, ".");
    return digits.join("");
}
From base π.
function fromBasePI(num) {
    let numberSplit = num.split(/\./g);
    let numberLength = numberSplit[0].length;
    let output = 0;
    let digits = numberSplit.join("");
    for (let i = 0; i < digits.length; i++) {
        output += digits[i] * Math.pow(Math.PI, numberLength-i-1);
    }
    return output;
}
Examples. Base √2. Base √2 behaves in a very similar way to base 2 as all one has to do to convert a number from binary into base √2 is put a zero digit in between every binary digit; for example, 191110 = 111011101112 becomes 101010001010100010101√2 and 511810 = 10011111111102 becomes 1000001010101010101010100√2. This means that every integer can be expressed in base √2 without the need of a decimal point. The base can also be used to show the relationship between the side of a square and its diagonal as a square with a side length of 1√2 will have a diagonal of 10√2 and a square with a side length of 10√2 will have a diagonal of 100√2. Another use of the base is to show the silver ratio as its representation in base √2 is simply 11√2. In addition, the area of a regular octagon with side length 1√2 is 1100√2, the area of a regular octagon with side length 10√2 is 110000√2, the area of a regular octagon with side length 100√2 is 11000000√2, etc… Golden base. In the golden base, some numbers have more than one decimal base equivalent: they are ambiguous. For example: 11φ = 100φ. Base ψ. There are some numbers in base ψ that are also ambiguous. For example, 101ψ = 1000ψ. Base "e". With base "e" the natural logarithm behaves like the common logarithm as ln(1"e") = 0, ln(10"e") = 1, ln(100"e") = 2 and ln(1000"e") = 3. The base "e" is the most economical choice of radix "β" > 1, where the radix economy is measured as the product of the radix and the length of the string of symbols needed to express a given range of values. Base π. Base π can be used to more easily show the relationship between the diameter of a circle and its circumference, which corresponds to its perimeter; since circumference = diameter × π, a circle with a diameter 1π will have a circumference of 10π, a circle with a diameter 10π will have a circumference of 100π, etc. Furthermore, since the area = π × radius2, a circle with a radius of 1π will have an area of 10π, a circle with a radius of 10π will have an area of 1000π and a circle with a radius of 100π will have an area of 100000π. Properties. In no positional number system can every number be expressed uniquely. For example, in base ten, the number 1 has two representations: 1.000... and 0.999... The set of numbers with two different representations is dense in the reals, but the question of classifying real numbers with unique "β"-expansions is considerably more subtle than that of integer bases. Another problem is to classify the real numbers whose "β"-expansions are periodic. Let "β" > 1, and Q("β") be the smallest field extension of the rationals containing "β". Then any real number in [0,1) having a periodic "β"-expansion must lie in Q("β"). On the other hand, the converse need not be true.
The converse does hold if "β" is a Pisot number, although necessary and sufficient conditions are not known. References.
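Returning to the greedy construction described in the Construction section above, here is a minimal Python sketch of it (the function names, the digit layout, and the choice of example are illustrative); it produces the canonical digits of a number in a given non-integer base and then re-evaluates them to check the result.

from math import floor, log

def beta_expansion(x, beta, places=12):
    # canonical (greedy) beta-expansion of x > 0: digits for beta^k down to beta^(-places)
    k = max(floor(log(x, beta)), 0)
    digits, r = [], x / beta ** k
    for _ in range(k + 1 + places):
        d = floor(r)
        digits.append(d)
        r = (r - d) * beta
    return k, digits

def evaluate(k, digits, beta):
    # invert beta_expansion: the sum of d_i * beta^(k - i)
    return sum(d * beta ** (k - i) for i, d in enumerate(digits))

phi = (1 + 5 ** 0.5) / 2
k, ds = beta_expansion(2.0, phi)
print(ds[:k + 1], ds[k + 1:])  # integer and fractional digits of 2 in the golden base
print(evaluate(k, ds, phi))    # approximately 2.0, recovering the input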
[ { "math_id": 0, "text": "x = d_n \\dots d_2d_1d_0.d_{-1}d_{-2}\\dots d_{-m}" }, { "math_id": 1, "text": "\\begin{align}\nx &= \\beta^nd_n + \\cdots + \\beta^2d_2 + \\beta d_1 + d_0 \\\\\n &\\qquad + \\beta^{-1}d_{-1} + \\beta^{-2}d_{-2} + \\cdots + \\beta^{-m}d_{-m}.\n\\end{align}" }, { "math_id": 2, "text": "d_k = \\lfloor x/\\beta^k\\rfloor" }, { "math_id": 3, "text": "r_k = \\{x/\\beta^k\\}.\\," }, { "math_id": 4, "text": "d_j = \\lfloor\\beta r_{j+1}\\rfloor, \\quad r_j = \\{\\beta r_{j+1}\\}." }, { "math_id": 5, "text": "n \\geq 0" }, { "math_id": 6, "text": "n < 0" }, { "math_id": 7, "text": "\\lfloor n_\\beta \\rfloor" }, { "math_id": 8, "text": "n_\\beta" }, { "math_id": 9, "text": "k = \\lfloor \\log_\\beta(n) \\rfloor + 1" }, { "math_id": 10, "text": "d_j = \\lfloor (n/\\beta^j) \\bmod \\beta \\rfloor, \\quad n = n-d_j*\\beta^j " }, { "math_id": 11, "text": "1 < \\beta \\leq 10" } ]
https://en.wikipedia.org/wiki?curid=6832938
6832988
Wiener–Ikehara theorem
Tauberian theorem introduced by Shikao Ikehara (1931). The Wiener–Ikehara theorem is a Tauberian theorem, originally published by Shikao Ikehara, a student of Norbert Wiener's, in 1931. It is a special case of Wiener's Tauberian theorems, which were published by Wiener one year later. It can be used to prove the prime number theorem (Chandrasekharan, 1969), under the assumption that the Riemann zeta function has no zeros on the line of real part one. Statement. Let "A"("x") be a non-negative, monotonic nondecreasing function of "x", defined for 0 ≤ "x" < ∞. Suppose that formula_0 converges for ℜ("s") > 1 to the function "ƒ"("s") and that, for some non-negative number "c", formula_1 has an extension as a continuous function for ℜ("s") ≥ 1. Then the limit as "x" goes to infinity of "e"−"x" "A"("x") is equal to c. One Particular Application. An important number-theoretic application of the theorem is to Dirichlet series of the form formula_2 where "a"("n") is non-negative. If the series converges to an analytic function in formula_3 with a simple pole of residue "c" at "s" = "b", then formula_4 Applying this to the logarithmic derivative of the Riemann zeta function, where the coefficients in the Dirichlet series are values of the von Mangoldt function, it is possible to deduce the Prime number theorem from the fact that the zeta function has no zeroes on the line formula_5
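As a concrete check of the application just described, take "a"("n") to be the von Mangoldt function Λ("n"): the Dirichlet series is then −ζ′("s")/ζ("s"), which has a simple pole of residue 1 at "s" = 1, and the theorem gives Σ"n"≤"X" Λ("n") ~ "X". The short Python sketch below (helper names are illustrative) computes this sum directly.

from math import log

def von_mangoldt(n):
    # Lambda(n) = log p if n = p^k for a prime p, and 0 otherwise
    if n < 2:
        return 0.0
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
    return log(n)  # n itself is prime

for X in (10 ** 3, 10 ** 4, 10 ** 5):
    psi = sum(von_mangoldt(n) for n in range(1, X + 1))
    print(X, round(psi, 1), round(psi / X, 4))  # the ratio approaches 1, as the theorem predicts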
[ { "math_id": 0, "text": "f(s)=\\int_0^\\infty A(x) e^{-xs}\\,dx" }, { "math_id": 1, "text": "f(s) - \\frac{c}{s-1}" }, { "math_id": 2, "text": "\\sum_{n=1}^\\infty a(n) n^{-s}" }, { "math_id": 3, "text": "\\Re(s) \\ge b" }, { "math_id": 4, "text": "\\sum_{n\\le X}a(n) \\sim \\frac{c}{b} X^b." }, { "math_id": 5, "text": "\\Re(s)=1. " } ]
https://en.wikipedia.org/wiki?curid=6832988
683368
Young tableau
A combinatorial object in representation theory In mathematics, a Young tableau (; plural: tableaux) is a combinatorial object useful in representation theory and Schubert calculus. It provides a convenient way to describe the group representations of the symmetric and general linear groups and to study their properties. Young tableaux were introduced by Alfred Young, a mathematician at Cambridge University, in 1900. They were then applied to the study of the symmetric group by Georg Frobenius in 1903. Their theory was further developed by many mathematicians, including Percy MacMahon, W. V. D. Hodge, G. de B. Robinson, Gian-Carlo Rota, Alain Lascoux, Marcel-Paul Schützenberger and Richard P. Stanley. Definitions. "Note: this article uses the English convention for displaying Young diagrams and tableaux". Diagrams. A Young diagram (also called a Ferrers diagram, particularly when represented using dots) is a finite collection of boxes, or cells, arranged in left-justified rows, with the row lengths in non-increasing order. Listing the number of boxes in each row gives a partition "λ" of a non-negative integer "n", the total number of boxes of the diagram. The Young diagram is said to be of shape "λ", and it carries the same information as that partition. Containment of one Young diagram in another defines a partial ordering on the set of all partitions, which is in fact a lattice structure, known as Young's lattice. Listing the number of boxes of a Young diagram in each column gives another partition, the conjugate or "transpose" partition of "λ"; one obtains a Young diagram of that shape by reflecting the original diagram along its main diagonal. There is almost universal agreement that in labeling boxes of Young diagrams by pairs of integers, the first index selects the row of the diagram, and the second index selects the box within the row. Nevertheless, two distinct conventions exist to display these diagrams, and consequently tableaux: the first places each row below the previous one, the second stacks each row on top of the previous one. Since the former convention is mainly used by Anglophones while the latter is often preferred by Francophones, it is customary to refer to these conventions respectively as the "English notation" and the "French notation"; for instance, in his book on symmetric functions, Macdonald advises readers preferring the French convention to "read this book upside down in a mirror" (Macdonald 1979, p. 2). This nomenclature probably started out as jocular. The English notation corresponds to the one universally used for matrices, while the French notation is closer to the convention of Cartesian coordinates; however, French notation differs from that convention by placing the vertical coordinate first. The figure on the right shows, using the English notation, the Young diagram corresponding to the partition (5, 4, 1) of the number 10. The conjugate partition, measuring the column lengths, is (3, 2, 2, 2, 1). Arm and leg length. In many applications, for example when defining Jack functions, it is convenient to define the arm length "a"λ("s") of a box "s" as the number of boxes to the right of "s" in the diagram λ in English notation. Similarly, the leg length "l"λ("s") is the number of boxes below "s". The hook length of a box "s" is the number of boxes to the right of "s" or below "s" in English notation, including the box "s" itself; in other words, the hook length is "a"λ("s") + "l"λ("s") + 1. Tableaux. 
A Young tableau is obtained by filling in the boxes of the Young diagram with symbols taken from some "alphabet", which is usually required to be a totally ordered set. Originally that alphabet was a set of indexed variables "x"1, "x"2, "x"3..., but now one usually uses a set of numbers for brevity. In their original application to representations of the symmetric group, Young tableaux have "n" distinct entries, arbitrarily assigned to boxes of the diagram. A tableau is called standard if the entries in each row and each column are increasing. The number of distinct standard Young tableaux on "n" entries is given by the involution numbers 1, 1, 2, 4, 10, 26, 76, 232, 764, 2620, 9496, ... (sequence in the OEIS). In other applications, it is natural to allow the same number to appear more than once (or not at all) in a tableau. A tableau is called semistandard, or "column strict", if the entries weakly increase along each row and strictly increase down each column. Recording the number of times each number appears in a tableau gives a sequence known as the weight of the tableau. Thus the standard Young tableaux are precisely the semistandard tableaux of weight (1,1...,1), which requires every integer up to "n" to occur exactly once. In a standard Young tableau, the integer formula_0 is a descent if formula_1 appears in a row strictly below formula_0. The sum of the descents is called the major index of the tableau. Variations. There are several variations of this definition: for example, in a row-strict tableau the entries strictly increase along the rows and weakly increase down the columns. Also, tableaux with "decreasing" entries have been considered, notably, in the theory of plane partitions. There are also generalizations such as domino tableaux or ribbon tableaux, in which several boxes may be grouped together before assigning entries to them. Skew tableaux. A skew shape is a pair of partitions ("λ", "μ") such that the Young diagram of "λ" contains the Young diagram of "μ"; it is denoted by "λ"/"μ". If "λ" ("λ"1, "λ"2, ...) and "μ" ("μ"1, "μ"2, ...), then the containment of diagrams means that "μ""i" ≤ "λ""i" for all i. The skew diagram of a skew shape "λ"/"μ" is the set-theoretic difference of the Young diagrams of "λ" and "μ": the set of squares that belong to the diagram of "λ" but not to that of "μ". A skew tableau of shape "λ"/"μ" is obtained by filling the squares of the corresponding skew diagram; such a tableau is semistandard if entries increase weakly along each row, and increase strictly down each column, and it is standard if moreover all numbers from 1 to the number of squares of the skew diagram occur exactly once. While the map from partitions to their Young diagrams is injective, this is not the case for the map from skew shapes to skew diagrams; therefore the shape of a skew diagram cannot always be determined from the set of filled squares only. Although many properties of skew tableaux only depend on the filled squares, some operations defined on them do require explicit knowledge of "λ" and "μ", so it is important that skew tableaux do record this information: two distinct skew tableaux may differ only in their shape, while they occupy the same set of squares, each filled with the same entries. Young tableaux can be identified with skew tableaux in which "μ" is the empty partition (0) (the unique partition of 0). 
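As a small illustration of the "standard" condition and the major index defined above, here is a Python sketch (the function names and the example tableau are illustrative, and the rows are assumed to form a partition shape, longest row first).

def is_standard(tableau):
    # tableau given as a list of rows, e.g. [[1, 3, 4], [2, 5]]
    n = sum(len(row) for row in tableau)
    entries = sorted(x for row in tableau for x in row)
    if entries != list(range(1, n + 1)):
        return False
    rows_ok = all(row[j] < row[j + 1] for row in tableau for j in range(len(row) - 1))
    cols_ok = all(tableau[i][j] < tableau[i + 1][j]
                  for i in range(len(tableau) - 1)
                  for j in range(len(tableau[i + 1])))
    return rows_ok and cols_ok

def major_index(tableau):
    # sum of descents: k is a descent if k+1 appears in a row strictly below k
    row_of = {x: i for i, row in enumerate(tableau) for x in row}
    n = len(row_of)
    return sum(k for k in range(1, n) if row_of[k + 1] > row_of[k])

t = [[1, 3, 4], [2, 5]]
print(is_standard(t), major_index(t))  # True, and descents {1, 4} give major index 5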
Any skew semistandard tableau "T" of shape "λ"/"μ" with positive integer entries gives rise to a sequence of partitions (or Young diagrams), by starting with "μ", and taking for the partition "i" places further in the sequence the one whose diagram is obtained from that of "μ" by adding all the boxes that contain a value  ≤ "i" in "T"; this partition eventually becomes equal to "λ". Any pair of successive shapes in such a sequence is a skew shape whose diagram contains at most one box in each column; such shapes are called horizontal strips. This sequence of partitions completely determines "T", and it is in fact possible to define (skew) semistandard tableaux as such sequences, as is done by Macdonald (Macdonald 1979, p. 4). This definition incorporates the partitions "λ" and "μ" in the data comprising the skew tableau. Overview of applications. Young tableaux have numerous applications in combinatorics, representation theory, and algebraic geometry. Various ways of counting Young tableaux have been explored and lead to the definition of and identities for Schur functions. Many combinatorial algorithms on tableaux are known, including Schützenberger's jeu de taquin and the Robinson–Schensted–Knuth correspondence. Lascoux and Schützenberger studied an associative product on the set of all semistandard Young tableaux, giving it the structure called the "plactic monoid" (French: "le monoïde plaxique"). In representation theory, standard Young tableaux of size "k" describe bases in irreducible representations of the symmetric group on "k" letters. The standard monomial basis in a finite-dimensional irreducible representation of the general linear group "GL""n" are parametrized by the set of semistandard Young tableaux of a fixed shape over the alphabet {1, 2, ..., "n"}. This has important consequences for invariant theory, starting from the work of Hodge on the homogeneous coordinate ring of the Grassmannian and further explored by Gian-Carlo Rota with collaborators, de Concini and Procesi, and Eisenbud. The Littlewood–Richardson rule describing (among other things) the decomposition of tensor products of irreducible representations of "GL""n" into irreducible components is formulated in terms of certain skew semistandard tableaux. Applications to algebraic geometry center around Schubert calculus on Grassmannians and flag varieties. Certain important cohomology classes can be represented by Schubert polynomials and described in terms of Young tableaux. Applications in representation theory. Young diagrams are in one-to-one correspondence with irreducible representations of the symmetric group over the complex numbers. They provide a convenient way of specifying the Young symmetrizers from which the irreducible representations are built. Many facts about a representation can be deduced from the corresponding diagram. Below, we describe two examples: determining the dimension of a representation and restricted representations. In both cases, we will see that some properties of a representation can be determined by using just its diagram. Young tableaux are involved in the use of the symmetric group in quantum chemistry studies of atoms, molecules and solids. 
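Returning to the sequence-of-partitions description of skew semistandard tableaux given at the start of this section, here is a minimal Python sketch (the input convention and names are illustrative): each row lists only the entries of the boxes of "λ"/"μ" in that row, and the function returns the chain of partitions obtained by adding, at step "i", all boxes containing a value ≤ "i".

def partition_sequence(rows, mu):
    # rows[r]: entries of the skew tableau in row r (boxes of lambda/mu only)
    mu = list(mu) + [0] * (len(rows) - len(mu))
    top = max((x for row in rows for x in row), default=0)
    chain = []
    for i in range(top + 1):
        chain.append([mu[r] + sum(1 for x in rows[r] if x <= i) for r in range(len(rows))])
    return chain  # chain[0] is mu, chain[-1] is lambda; consecutive shapes differ by horizontal strips

# skew shape (3, 2)/(1) with a semistandard filling
print(partition_sequence([[1, 2], [1, 2]], (1,)))  # [[1, 0], [2, 1], [3, 2]]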
Young diagrams also parametrize the irreducible polynomial representations of the general linear group "GL""n" (when they have at most "n" nonempty rows), or the irreducible representations of the special linear group "SL""n" (when they have at most "n" − 1 nonempty rows), or the irreducible complex representations of the special unitary group "SU""n" (again when they have at most "n" − 1 nonempty rows). In these cases semistandard tableaux with entries up to "n" play a central role, rather than standard tableaux; in particular it is the number of those tableaux that determines the dimension of the representation. Dimension of a representation. &lt;templatestyles src="Plain image with caption/styles.css"/&gt; "Hook-lengths" of the boxes for the partition 10 = 5 + 4 + 1 The dimension of the irreducible representation of the symmetric group "S""n" corresponding to a partition "λ" of "n" is equal to the number of different standard Young tableaux that can be obtained from the diagram of the representation. This number can be calculated by the hook length formula. A hook length hook("x") of a box "x" in Young diagram "Y"("λ") of shape "λ" is the number of boxes that are in the same row to the right of it plus those boxes in the same column below it, plus one (for the box itself). By the hook-length formula, the dimension of an irreducible representation is "n"! divided by the product of the hook lengths of all boxes in the diagram of the representation: formula_2 The figure on the right shows hook-lengths for all boxes in the diagram of the partition 10 = 5 + 4 + 1. Thus formula_3 Similarly, the dimension of the irreducible representation "W"("λ") of GL"r" corresponding to the partition "λ" of "n" (with at most "r" parts) is the number of semistandard Young tableaux of shape "λ" (containing only the entries from 1 to "r"), which is given by the hook-length formula: formula_4 where the index "i" gives the row and "j" the column of a box. For instance, for the partition (5,4,1) we get as dimension of the corresponding irreducible representation of GL7 (traversing the boxes by rows): formula_5 Restricted representations. A representation of the symmetric group on "n" elements, "S""n" is also a representation of the symmetric group on "n" − 1 elements, "S""n"−1. However, an irreducible representation of "S""n" may not be irreducible for "S""n"−1. Instead, it may be a direct sum of several representations that are irreducible for "S""n"−1. These representations are then called the factors of the restricted representation (see also induced representation). The question of determining this decomposition of the restricted representation of a given irreducible representation of "S""n", corresponding to a partition "λ" of "n", is answered as follows. One forms the set of all Young diagrams that can be obtained from the diagram of shape "λ" by removing just one box (which must be at the end both of its row and of its column); the restricted representation then decomposes as a direct sum of the irreducible representations of "S""n"−1 corresponding to those diagrams, each occurring exactly once in the sum. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
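A short Python sketch of the two hook-length computations above, reproducing the dimensions 288 (for the symmetric group "S"10) and 66 528 (for GL7) obtained for the partition (5, 4, 1); the function names are illustrative.

from math import factorial

def hooks(shape):
    # hook length of box (i, j): boxes to its right, boxes below it, plus the box itself
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]  # column lengths
    return [[(shape[i] - j - 1) + (conj[j] - i - 1) + 1 for j in range(shape[i])]
            for i in range(len(shape))]

def dim_symmetric_group(shape):
    # n! divided by the product of all hook lengths
    n, prod = sum(shape), 1
    for row in hooks(shape):
        for h in row:
            prod *= h
    return factorial(n) // prod

def dim_gl(shape, r):
    # product over boxes of (r + j - i) / hook(i, j), with rows and columns 1-indexed
    num, den = 1, 1
    for i, row in enumerate(hooks(shape), start=1):
        for j, h in enumerate(row, start=1):
            num *= r + j - i
            den *= h
    return num // den

print(dim_symmetric_group((5, 4, 1)))  # 288
print(dim_gl((5, 4, 1), 7))            # 66528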
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "k+1" }, { "math_id": 2, "text": "\\dim\\pi_\\lambda = \\frac{n!}{\\prod_{x \\in Y(\\lambda)} \\operatorname{hook}(x)}." }, { "math_id": 3, "text": "\\dim\\pi_\\lambda = \\frac{10!}{7\\cdot5\\cdot 4 \\cdot 3\\cdot 1\\cdot 5\\cdot 3\\cdot 2\\cdot 1\\cdot1} = 288." }, { "math_id": 4, "text": "\\dim W(\\lambda) = \\prod_{(i,j) \\in Y(\\lambda)} \\frac{r+j-i}{\\operatorname{hook}(i,j)}," }, { "math_id": 5, "text": "\\dim W(\\lambda) = \\frac{7\\cdot 8\\cdot 9\\cdot 10\\cdot 11\\cdot 6\\cdot 7\\cdot 8\\cdot 9\\cdot 5}{7\\cdot5\\cdot 4 \\cdot 3\\cdot 1\\cdot 5\\cdot 3\\cdot 2\\cdot 1\\cdot1} = 66 528." } ]
https://en.wikipedia.org/wiki?curid=683368
6833695
Demihypercube
Polytope constructed from alternation of a hypercube In geometry, demihypercubes (also called "n-demicubes", "n-hemicubes", and "half measure polytopes") are a class of "n"-polytopes constructed from alternation of an "n"-hypercube, labeled as "hγn" for being "half" of the hypercube family, "γn". Half of the vertices are deleted and new facets are formed. The 2"n" facets become 2"n" ("n"−1)-demicubes, and 2"n"−1 ("n"−1)-simplex facets are formed in place of the deleted vertices. They have been named with a "demi-" prefix to each hypercube name: demicube, demitesseract, etc. The demicube is identical to the regular tetrahedron, and the demitesseract is identical to the regular 16-cell. The demipenteract is considered "semiregular" for having only regular facets. Higher forms do not have all regular facets but are all uniform polytopes. The vertices and edges of a demihypercube form two copies of the halved cube graph. An "n"-demicube has inversion symmetry if "n" is even. Discovery. Thorold Gosset described the demipenteract in his 1900 publication listing all of the regular and semiregular figures in "n"-dimensions above three. He called it a "5-ic semi-regular". It also exists within the semiregular "k"21 polytope family. The demihypercubes can be represented by extended Schläfli symbols of the form h{4,3...,3} as half the vertices of {4,3...,3}. The vertex figures of demihypercubes are rectified "n"-simplexes. Constructions. They are represented by Coxeter-Dynkin diagrams of three constructive forms: H.S.M. Coxeter also labeled the third bifurcating diagrams as 1"k"1 representing the lengths of the three branches and led by the ringed branch. An "n-demicube", "n" greater than 2, has "n"("n"−1)/2 edges meeting at each vertex. The graphs below show fewer edges at each vertex due to overlapping edges in the symmetry projection. In general, a demicube's elements can be determined from the original "n"-cube: (with C"n","m" = "mth"-face count in "n"-cube = 2"n"−"m" "n"!/("m"!("n"−"m")!)) Symmetry group. The stabilizer of the demihypercube in the hyperoctahedral group (the Coxeter group formula_0 [4,3"n"−1]) has index 2. It is the Coxeter group formula_1 [3"n"−3,1,1] of order formula_2, and is generated by permutations of the coordinate axes and reflections along "pairs" of coordinate axes. Orthotopic constructions. Constructions as alternated orthotopes have the same topology, but can be stretched with different lengths in "n"-axes of symmetry. The rhombic disphenoid is the three-dimensional example, as an alternated cuboid. It has three sets of edge lengths, and scalene triangle faces. References.
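A small Python sketch of the counts quoted above — the "m"-face count C"n","m" of the "n"-cube, the demicube's vertex count (half of the hypercube's 2"n" vertices), and the order 2"n"−1"n"! of its symmetry group; the function names are illustrative.

from math import comb, factorial

def cube_face_count(n, m):
    # C_{n,m} = 2^(n-m) * n! / (m! (n-m)!), the number of m-faces of the n-cube
    return 2 ** (n - m) * comb(n, m)

def demicube_data(n):
    vertices = cube_face_count(n, 0) // 2         # half of the n-cube's vertices survive
    symmetry_order = 2 ** (n - 1) * factorial(n)  # order of the Coxeter group [3^(n-3,1,1)]
    return vertices, symmetry_order

for n in range(3, 8):
    print(n, demicube_data(n))
# n = 3 gives (4, 24), the tetrahedron; n = 4 gives (8, 192), the 16-cell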
[ { "math_id": 0, "text": "BC_n" }, { "math_id": 1, "text": "D_n," }, { "math_id": 2, "text": "2^{n-1}n!" } ]
https://en.wikipedia.org/wiki?curid=6833695
68342104
International Axion Observatory
Axion helioscope The International Axion Observatory (IAXO) is a next-generation axion helioscope for the search of solar axions and Axion-Like Particles (ALPs). It is the follow-up of the CERN Axion Solar Telescope (CAST), which operated from 2003 to 2022. IAXO will be set up by implementing the helioscope concept bringing it to a larger size and longer observation times. The IAXO collaboration. The Letter of Intent for International Axion Observatory was submitted to the CERN in August 2013. IAXO formally founded in July 2017 and received an advanced grant from the European Research Council in October 2018. The near-term goal of the collaboration is to build a precursor version of the experiment, called BabyIAXO, which will be located at DESY, Germany. The IAXO Collaboration is formed by 21 institutes from 7 different countries. Principle of operation. The IAXO experiment is based on the helioscope principle. Axions can be produced in stars (like the sun) via the Primakoff effect and other mechanisms. These axions would reach the helioscope and would be converted into soft X-ray photons in the presence of a magnetic field. Then, these photons travel through a focusing X-ray optics, and are expected as an excess of signal in the detector when the magnet points to the Sun. The potential of the experiment can be estimated by means of the figure of merit (FOM), which can be defined as formula_0, where the first factor is related to the magnet and depends on the magnetic field ("B"), the length of the magnet ("L") and the area of the bore ("A"). The second part depends on the efficiency (formula_1) and background ("b") of the detector. The third is related to the optics, more specifically the efficiency (formula_2) and the area of the focused signal on the detector readout (formula_3). The last term is related to the time ("t") of operation. The objective is to maximise the value of the figure of merit in order to optimise the sensitivity of the experiment to axions. Sensitivity and physics potential. IAXO will primarily be searching for solar axions, along with the potential to observe the quantum chromodynamics (QCD) axion in the mass range of 1 meV to 1 eV. It is also expected to be capable of discovering ALPs. Therefore, IAXO will have the potential to solve both the strong CP problem and the dark matter problem.. It could also be later adapted to test models of hypothesized hidden photons or chameleons. Also, the magnet can be used as a haloscope to search for axion dark matter. IAXO will have a sensitivity to the axion-photon coupling 1–1.5 order of magnitude higher than that achieved by previous detectors. Axion sources accessible to IAXO. Any particle found by IAXO will be at the least a sub-dominant component of the dark matter. The observatory would be capable of observing from a wide range of sources given below. IAXO: The International Axion Observatory. IAXO will be a next-generation enhanced helioscope, with a signal to noise ratio five orders of magnitude higher compared to current-day detectors. The cross-sectional area of the magnet equipped with an X-ray focusing optics is meant to increase this signal to background ratio. When the solar axions interact with the magnetic field, some of them may convert into photons through the Primakoff effect. These photons would then be detected by the X-ray detectors of the helioscope. The magnet will be a purpose‐built large‐scale superconductor with a length of 20 m and an average field strength of 2.5 Tesla. 
The whole helioscope will feature 8 bores of 60 cm diameter. Each of the bores will be equipped with a focusing X-ray optic and a low-background X-ray detector. The helioscope will also be equipped with a mechanical system allowing it to follow the sun consistently throughout half of the day. Tracking data will be taken during the day and background data will be taken during the night, which is the ideal split of data and background for properly estimating the event rate in each case and determining the axion signal. BabyIAXO. BabyIAXO is a technological prototype of all the subsystems of the IAXO with 2 magnet bores (with 2 detection systems) in a magnet of 10 m length. The prototype is a testing version and will serve as an intermediate step to explore further possible improvements to the final IAXO. BabyIAXO will be set up in Hamburg, Germany by the CERN and DESY collaboration. CERN will be responsible for giving in the design reports of prototype magnets and cryostat, and DESY will design and construct the movable platform along with the other infrastructure. The data taking by BabyIAXO is scheduled to start in 2028. In addition to being a proof of concept for IAXO, BabyIAXO will have its own physics potential and a FOM around 100 times larger than CAST. BabyIAXO design. Magnet. The central magnetic systems will have a large superconducting magnet, configured in a toroidal multibore manner, in order to generate a strong magnetic field over a larger volume. It will be a 10 meters long magnet consisting of two different coils made out of 35 km Rutherford cable. This configuration is calculated to generate a 2.5 Tesla magnetic field within a 70 cm diameter. The magnetic subsystem is inspired by the ATLAS experiment. X-ray optics. Since BabyIAXO will have two bores in the magnet, two X-ray optics are required to operate in parallel. Both of them are Wolter optics (type I). One of the two BabyIAXO optics will be based on a mature technology developed for NASA's NuStar X-ray satellite. The signal from the 0.7 m diameter bore will be focused to 0.2 formula_4 area. The second BabyIAXO optics will be one of the flight models of the XMM-Newton space mission that belongs to the ESA. Detectors. IAXO and BabyIAXO will have multiple, diverse detectors working in parallel mounted to the different magnet bores. The detectors for this experiment need to meet certain technical requirements. They need a high detection efficiency in the ROI (1 – 10 keV) where the Primakoff axion signal is expected. They also need very low background in ROI of under formula_5 (less than 3 counts per year of data). To reach this background level, the detector relies on: Detector technologies. Based upon the experience from CAST, the baseline detector technology will be a TPC with a Micromegas readout. There are several other technologies under study: GridPix, Metallic Magnetic Calorimeters (MMC), Transition Edge Sensors (TES) and Silicon Drift Detectors (SDD). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " FOM \\sim B^2 L^2 A \\times \\epsilon_{d} b^{-1/2} \\times \\epsilon_{o} \\alpha^{-1/2} \\times \\epsilon^{1/2}_{t} t^{1/2} " }, { "math_id": 1, "text": "\\epsilon_d" }, { "math_id": 2, "text": "\\epsilon_o" }, { "math_id": 3, "text": "\\alpha" }, { "math_id": 4, "text": "\\mathrm{cm^2}" }, { "math_id": 5, "text": "\\mathrm{10^{-7} counts\\,keV^{-1} cm ^{-2} s ^{-1} } " } ]
https://en.wikipedia.org/wiki?curid=68342104
68342165
Scattering and Neutrino Detector
The Scattering and Neutrino Detector (SND) at the Large Hadron Collider (LHC), CERN, is an experiment built for the detection of collider neutrinos. The primary goal of SND is to measure the p + p → formula_0 + X process and to search for feebly interacting particles. It will be operational from 2022, during LHC Run 3 (2022–2024). SND will be installed in an empty tunnel, TI18, that links the LHC and the Super Proton Synchrotron, 480 m away from the ATLAS interaction point, in the far-forward region along the beam collision axis. In February 2020, the Search for Hidden Particles (SHiP) collaboration expressed its interest in neutrino measurements to the LHC Experiments Committee (LHCC). The Letter of Intent for SND was presented in August 2020. Based on the LHCC's recommendation, the Letter of Intent was followed by a Technical Design Report presented in February 2021. The experiment was approved in March 2021 by the CERN Research Board, becoming the ninth experiment at the LHC. In 2023, SND@LHC and FASER reported the first observation of collider neutrinos. Physics potential and goals. SND will cover a wide range of physics, such as detecting all three neutrino flavors in a pseudorapidity (angular) range that has never been explored before. Along with the FASERnu detector at the LHC, it will be among the first experiments to observe and study collider neutrinos. It will also search for beyond-Standard-Model particles such as feebly interacting particles and particles that could make up dark matter. Physics with neutrinos. SND will primarily observe neutrinos in the pseudorapidity range of 7.2 to 8.6. It will measure the scattering properties of neutrinos in this previously unexplored range and complement the observation range of FASERnu. The neutrinos in this range come from the decay of heavy quarks, such as charm decays (c → s + formula_1, a charm quark decaying into a strange quark and a W boson), and hence SND aims to give valuable insights into the physics of heavy-quark production. The charmed-hadron production studies will also provide data to constrain the gluon parton distribution function in the low Bjorken-x region. In its first operational run, i.e. the LHC's Run 3 between 2022 and 2025, SND is expected to detect and study about 2000 high-energy neutrinos. Physics with feebly interacting particles. Feebly interacting particles (FIPs) are theorized to be produced in proton–proton collisions. SND has the potential to detect two types of FIPs: stable FIPs, by observing their scattering off the atoms (mostly protons) in the detector target section, and unstable FIPs, which could decay inside the detector itself. Light dark matter particles, hypothesized to have scattering properties similar to those of neutrinos and to interact with Standard Model particles through 'portal mediators', could also be detected as FIPs, although they would have to be separated from the neutrino scattering background. One basic criterion for such a separation would be the ratio of inelastic to elastic collision events: neutrinos usually scatter inelastically due to the high mass of their mediators (the W and Z bosons), so an excess of elastic collisions over the predicted number would hint at light dark matter scattering events. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
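The pseudorapidity acceptance quoted above can be pictured geometrically using the standard relation θ = 2 arctan(e^(−η)) between pseudorapidity and the polar angle from the beam axis. The sketch below converts the 7.2–8.6 range into angles and into an approximate off-axis distance at the 480 m location; straight-line propagation from the interaction point is assumed, so the numbers are only illustrative and are not taken from the experiment's own geometry description.

```python
import math

# Convert the pseudorapidity acceptance quoted in the text (7.2 - 8.6)
# into polar angles and an approximate transverse distance from the beam
# axis at the 480 m position.  Straight-line propagation is assumed,
# which is only a rough approximation of the real geometry.

def polar_angle(eta):
    """Polar angle (rad) corresponding to pseudorapidity eta."""
    return 2.0 * math.atan(math.exp(-eta))

distance_m = 480.0
for eta in (7.2, 8.6):
    theta = polar_angle(eta)
    offset = distance_m * math.tan(theta)  # transverse offset from the axis
    print(f"eta = {eta}: theta = {theta * 1e3:.2f} mrad, "
          f"offset at 480 m ~ {offset * 100:.0f} cm")
```

This places the acceptance only a few tens of centimetres off the beam collision axis at 480 m, which is why the detector can sit slightly off-axis in the TI18 tunnel.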
[ { "math_id": 0, "text": "\\mathrm{\\nu}" }, { "math_id": 1, "text": "\\mathrm{W^{\\pm}}" } ]
https://en.wikipedia.org/wiki?curid=68342165
68343710
Klaus Wilhelm Roggenkamp
German mathematician (1940–2021) Klaus Wilhelm Roggenkamp (24 December 1940 – 23 July 2021) was a German mathematician, specializing in algebra. Education and career. As an undergraduate, Roggenkamp studied mathematics from 1960 to 1964 at the University of Giessen. There he received his PhD in 1967. His thesis "Darstellungen endlicher Gruppen in Polynombereichen" (Representations of finite groups in polynomial integral domains) was written under the supervision of Hermann Boerner. As a postdoc, Roggenkamp was at the University of Illinois at Urbana-Champaign, where he studied under Irving Reiner, and at the University of Montreal. After four years as a professor at Bielefeld University, he was appointed to the chair of algebra at the University of Stuttgart. Roggenkamp and Leonard Lewy Scott collaborated on a long series of papers on the groups of units of integral group rings, dealing with problems connected with the "integral isomorphism problem", which was proposed by Graham Higman in his 1940 doctoral dissertation at the University of Oxford. In 1986 Roggenkamp and Scott proved their most famous theorem (published in 1987 in the "Annals of Mathematics"). Their theorem states that given two finite groups formula_0 and formula_1, if Zformula_0 is isomorphic to Zformula_1 then formula_0 is isomorphic to formula_1, in the case where formula_0 and formula_1 are finite "p"-groups and the group rings are taken over the "p"-adic integers, and also in the case where formula_0 and formula_1 are finite nilpotent groups. Their 1987 paper also established a very strong form of a conjecture made by Hans Zassenhaus. The papers of Roggenkamp and Scott were the basis for most developments which followed in the study of finite groups of units of integral group rings. In 1988 Roggenkamp and Scott found a counterexample to another conjecture of Hans Zassenhaus, a somewhat strengthened form of the conjecture that the "integral isomorphism problem" always has an affirmative solution. Martin Hertweck, partly building on the techniques introduced by Roggenkamp and Scott for their counterexample, published a counterexample to the conjecture that the "integral isomorphism problem" can always be solved affirmatively. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;A series of joint papers of Klaus Roggenkamp and Karl Gruenberg centers around homological considerations of groups and connections to homological questions of group rings. In particular, the authors studied the relation module of a group, i.e. the abelianised kernel of a minimal presentation of a group. Various applications were given, among others, to questions about units in integral group rings. Klaus Roggenkamp managed to clarify completely the structure of blocks of "p"-adic group rings with cyclic defect group, thus establishing an integral analogue of the celebrated theory of Brauer tree algebras. Many applications are known and more are on the way, from equivalences between derived categories to the inverse problem of Galois theory.&lt;br&gt;A new branch of representation theory is created by Klaus Roggenkamp’s most recent research on higher-dimensional orders. Motivated by recent developments in the representation theory of algebraic groups, algebraic combinatorics, Hecke algebras and quantum groups, Klaus Roggenkamp had started to study orders over two and higher-dimensional coefficient domains. 
Roggenkamp was elected a member of the "Akademie gemeinnütziger Wissenschaften zu Erfurt" (Erfurt Academy of Useful Sciences) and was made an honorary member of Ovidius University of Constanța in Romania. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "H" } ]
https://en.wikipedia.org/wiki?curid=68343710
683561
Total variation
Measure of local oscillation behavior In mathematics, the total variation identifies several slightly different concepts, related to the (local or global) structure of the codomain of a function or a measure. For a real-valued continuous function "f", defined on an interval ["a", "b"] ⊂ R, its total variation on the interval of definition is a measure of the one-dimensional arclength of the curve with parametric equation "x" ↦ "f"("x"), for "x" ∈ ["a", "b"]. Functions whose total variation is finite are called "functions of bounded variation". Historical note. The concept of total variation for functions of one real variable was first introduced by Camille Jordan in 1881. He used the new concept in order to prove a convergence theorem for Fourier series of discontinuous periodic functions whose variation is bounded. The extension of the concept to functions of more than one variable, however, is not simple, for various reasons. Definitions. Total variation for functions of one real variable. Definition 1.1. The total variation of a real-valued (or more generally complex-valued) function formula_0, defined on an interval formula_1, is the quantity formula_2 where the supremum runs over the set of all partitions formula_3 of the given interval, which means that formula_4. Total variation for functions of "n" &gt; 1 real variables. Definition 1.2.&lt;ref name="10.1093/oso/9780198502456.001.0001"&gt;&lt;/ref&gt; Let Ω be an open subset of R"n". Given a function "f" belonging to "L"1(Ω), the total variation of "f" in Ω is defined as formula_5 where formula_6 is the set of continuously differentiable vector functions of compact support contained in formula_7, formula_8 is the essential supremum norm, and formula_9 is the divergence operator. This definition "does not require" that the domain formula_10 of the given function be a bounded set. Total variation in measure theory. Classical total variation definition. Following the classical approach, consider a signed measure formula_11 on a measurable space formula_12: then it is possible to define two set functions formula_13 and formula_14, respectively called upper variation and lower variation, as follows formula_15 formula_16 clearly formula_17 Definition 1.3. The variation (also called absolute variation) of the signed measure formula_11 is the set function formula_18 and its total variation is defined as the value of this measure on the whole space of definition, i.e. formula_19 Modern definition of total variation norm. A more modern approach uses upper and lower variations to prove the Hahn–Jordan decomposition: according to this version of the theorem, the upper and lower variation are respectively a non-negative and a non-positive measure. Using a more modern notation, define formula_20 formula_21 Then formula_22 and formula_23 are two non-negative measures such that formula_24 formula_25 The last measure is sometimes called, by abuse of notation, total variation measure. Total variation norm of complex measures. If the measure formula_11 is complex-valued, i.e. is a complex measure, its upper and lower variation cannot be defined and the Hahn–Jordan decomposition theorem can only be applied to its real and imaginary parts. However, it is possible to define the total variation of the complex-valued measure formula_11 as follows. Definition 1.4. The variation of the complex-valued measure formula_11 is the set function formula_26 where the supremum is taken over all partitions formula_27 of a measurable set formula_28 into a countable number of disjoint measurable subsets. This definition coincides with the above definition formula_25 for the case of real-valued signed measures. Total variation norm of vector-valued measures. 
The variation so defined is a positive measure and coincides with the one defined by 1.3 when formula_11 is a signed measure: its total variation is defined as above. This definition also works if formula_11 is a vector measure: the variation is then defined by the following formula formula_29 where the supremum is as above. This definition is slightly more general than the preceding one, since it requires only "finite partitions" of the space formula_30 to be considered: this implies that it can be used also to define the total variation of finitely additive measures. Total variation of probability measures. The total variation of any probability measure is exactly one, and therefore it is not interesting as a means of investigating the properties of such measures. However, when μ and ν are probability measures, the total variation distance of probability measures can be defined as formula_31 where the norm is the total variation norm of signed measures. Using the property that formula_32, we eventually arrive at the equivalent definition formula_33 and its values are non-trivial. The factor formula_34 above is usually dropped (as is the convention in the article total variation distance of probability measures). Informally, this is the largest possible difference between the probabilities that the two probability distributions can assign to the same event. For a categorical distribution it is possible to write the total variation distance as follows formula_35 It may also be normalized to values in formula_36 by halving the previous definition as follows formula_37 Basic properties. Total variation of differentiable functions. The total variation of a formula_38 function formula_0 can be expressed as an integral involving the given function instead of as the supremum of the functionals of definitions 1.1 and 1.2. The form of the total variation of a differentiable function of one variable. Theorem 1. The total variation of a differentiable function formula_0, defined on an interval formula_1, has the following expression if formula_39 is Riemann integrable: formula_40 If formula_41 is differentiable and monotonic, then the above simplifies to formula_42 For any differentiable function formula_0, we can decompose the domain interval formula_43 into subintervals formula_44 (with formula_45) in which formula_0 is locally monotonic; then the total variation of formula_41 over formula_43 can be written as the sum of local variations on those subintervals: formula_46 The form of the total variation of a differentiable function of several variables. Theorem 2. Given a formula_38 function formula_0 defined on a bounded open set formula_10, with formula_47 of class formula_48, the total variation of formula_0 has the following expression formula_49. Proof. The first step in the proof is to establish an equality which follows from the Gauss–Ostrogradsky theorem. Lemma. Under the conditions of the theorem, the following equality holds: formula_50 Proof of the lemma. From the Gauss–Ostrogradsky theorem: formula_51 by substituting formula_52, we have: formula_53 where formula_54 is zero on the border of formula_7 by definition: formula_55 formula_56 formula_57 formula_58 formula_59 Proof of the equality. Under the conditions of the theorem, from the lemma we have: formula_60 where in the last step formula_61 can be omitted, because by definition its essential supremum is at most one. 
On the other hand, consider formula_62 and let formula_63 be an approximation of formula_65 in formula_66, accurate up to formula_64 and with the same integral. We can do this since formula_66 is dense in formula_67. Now again substituting into the lemma: formula_68 This means we have a sequence of values of formula_69 that tends to formula_70, and we also know that formula_71. Q.E.D. It can be seen from the proof that the supremum is attained when formula_72 The function formula_0 is said to be of bounded variation precisely if its total variation is finite. Total variation of a measure. The total variation is a norm defined on the space of measures of bounded variation. The space of measures on a σ-algebra of sets is a Banach space, called the ca space, relative to this norm. It is contained in the larger Banach space, called the ba space, consisting of "finitely additive" (as opposed to countably additive) measures, also with the same norm. The distance function associated with the norm gives rise to the total variation distance between two measures "μ" and "ν". For finite measures on R, the link between the total variation of a measure "μ" and the total variation of a function, as described above, goes as follows. Given "μ", define a function formula_73 by formula_74 Then, the total variation of the signed measure "μ" is equal to the total variation, in the above sense, of the function formula_75. In general, the total variation of a signed measure can be defined using Jordan's decomposition theorem by formula_76 for any signed measure "μ" on a measurable space formula_12. Applications. Total variation can be seen as a non-negative real-valued functional defined on the space of real-valued functions (for the case of functions of one variable) or on the space of integrable functions (for the case of functions of several variables). As a functional, total variation finds applications in several branches of mathematics and engineering, such as optimal control, numerical analysis, and calculus of variations, where the solution to a certain problem has to minimize its value. For example, the total variation functional is commonly used in the numerical analysis of differential equations (as in total variation diminishing schemes) and in image denoising, where the total variation of the reconstructed image is penalized. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
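As a numerical illustration of Definition 1.1 and Theorem 1 above, the total variation of a smooth function can be approximated both by a partition sum and by the integral of |f′|. The sketch below does this for f(x) = sin(x) on [0, 2π], whose total variation is 4; a fine uniform partition is used here purely for convenience, since Definition 1.1 takes a supremum over all partitions.

```python
import math

# Approximate the total variation of f on [a, b] in two ways:
#  (1) the partition sum of Definition 1.1, on a fine uniform partition,
#  (2) the integral of |f'| from Theorem 1, via a midpoint Riemann sum.
# For f(x) = sin(x) on [0, 2*pi] the exact value is 4.

def tv_partition(f, a, b, n=100_000):
    xs = [a + (b - a) * k / n for k in range(n + 1)]
    return sum(abs(f(xs[k + 1]) - f(xs[k])) for k in range(n))

def tv_integral(fprime, a, b, n=100_000):
    h = (b - a) / n
    return sum(abs(fprime(a + (k + 0.5) * h)) for k in range(n)) * h

a, b = 0.0, 2.0 * math.pi
print("partition sum :", tv_partition(math.sin, a, b))   # ~ 4.0
print("integral |f'| :", tv_integral(math.cos, a, b))    # ~ 4.0
```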
[ { "math_id": 0, "text": "f" }, { "math_id": 1, "text": " [a , b] \\subset \\mathbb{R}" }, { "math_id": 2, "text": " V_a^b(f)=\\sup_{\\mathcal{P}} \\sum_{i=0}^{n_P-1} | f(x_{i+1})-f(x_i) |, " }, { "math_id": 3, "text": " \\mathcal{P} = \\left\\{P=\\{ x_0, \\dots , x_{n_P}\\} \\mid P\\text{ is a partition of } [a,b] \\right\\} " }, { "math_id": 4, "text": "a = x_{0} < x_{1} < ... < x_{n_{P}} = b" }, { "math_id": 5, "text": " V(f,\\Omega):=\\sup\\left\\{\\int_\\Omega f(x) \\operatorname{div} \\phi(x) \\, \\mathrm{d}x \\colon \\phi\\in C_c^1(\\Omega,\\mathbb{R}^n),\\ \\Vert \\phi\\Vert_{L^\\infty(\\Omega)}\\le 1\\right\\}, " }, { "math_id": 6, "text": " C_c^1(\\Omega,\\mathbb{R}^n)" }, { "math_id": 7, "text": "\\Omega" }, { "math_id": 8, "text": " \\Vert\\;\\Vert_{L^\\infty(\\Omega)}" }, { "math_id": 9, "text": "\\operatorname{div}" }, { "math_id": 10, "text": "\\Omega \\subseteq \\mathbb{R}^n" }, { "math_id": 11, "text": "\\mu" }, { "math_id": 12, "text": "(X,\\Sigma)" }, { "math_id": 13, "text": "\\overline{\\mathrm{W}}(\\mu,\\cdot)" }, { "math_id": 14, "text": "\\underline{\\mathrm{W}}(\\mu,\\cdot)" }, { "math_id": 15, "text": "\\overline{\\mathrm{W}}(\\mu,E)=\\sup\\left\\{\\mu(A)\\mid A\\in\\Sigma\\text{ and }A\\subset E \\right\\}\\qquad\\forall E\\in\\Sigma" }, { "math_id": 16, "text": "\\underline{\\mathrm{W}}(\\mu,E)=\\inf\\left\\{\\mu(A)\\mid A\\in\\Sigma\\text{ and }A\\subset E \\right\\}\\qquad\\forall E\\in\\Sigma" }, { "math_id": 17, "text": "\\overline{\\mathrm{W}}(\\mu,E)\\geq 0 \\geq \\underline{\\mathrm{W}}(\\mu,E)\\qquad\\forall E\\in\\Sigma" }, { "math_id": 18, "text": "|\\mu|(E)=\\overline{\\mathrm{W}}(\\mu,E)+\\left|\\underline{\\mathrm{W}}(\\mu,E)\\right|\\qquad\\forall E\\in\\Sigma" }, { "math_id": 19, "text": "\\|\\mu\\|=|\\mu|(X)" }, { "math_id": 20, "text": "\\mu^+(\\cdot)=\\overline{\\mathrm{W}}(\\mu,\\cdot)\\,," }, { "math_id": 21, "text": "\\mu^-(\\cdot)=-\\underline{\\mathrm{W}}(\\mu,\\cdot)\\,," }, { "math_id": 22, "text": "\\mu^+" }, { "math_id": 23, "text": "\\mu^-" }, { "math_id": 24, "text": "\\mu=\\mu^+-\\mu^-" }, { "math_id": 25, "text": "|\\mu|=\\mu^++\\mu^-" }, { "math_id": 26, "text": "|\\mu|(E)=\\sup_\\pi \\sum_{A\\isin\\pi} |\\mu(A)|\\qquad\\forall E\\in\\Sigma" }, { "math_id": 27, "text": "\\pi" }, { "math_id": 28, "text": "E" }, { "math_id": 29, "text": "|\\mu|(E) = \\sup_\\pi \\sum_{A\\isin\\pi} \\|\\mu(A)\\|\\qquad\\forall E\\in\\Sigma" }, { "math_id": 30, "text": "X" }, { "math_id": 31, "text": "\\| \\mu - \\nu \\|" }, { "math_id": 32, "text": "(\\mu-\\nu)(X)=0" }, { "math_id": 33, "text": "\\|\\mu-\\nu\\| = |\\mu-\\nu|(X)=2 \\sup\\left\\{\\,\\left|\\mu(A)-\\nu(A)\\right| : A\\in \\Sigma\\,\\right\\}" }, { "math_id": 34, "text": "2" }, { "math_id": 35, "text": "\\delta(\\mu,\\nu) = \\sum_x \\left| \\mu(x) - \\nu(x) \\right|\\;." 
}, { "math_id": 36, "text": "[0, 1]" }, { "math_id": 37, "text": "\\delta(\\mu,\\nu) = \\frac{1}{2}\\sum_x \\left| \\mu(x) - \\nu(x) \\right|" }, { "math_id": 38, "text": "C^1(\\overline{\\Omega})" }, { "math_id": 39, "text": "f'" }, { "math_id": 40, "text": " V_a^b(f) = \\int _a^b |f'(x)|\\mathrm{d}x" }, { "math_id": 41, "text": " f" }, { "math_id": 42, "text": " V_a^b(f) = |f(a) - f(b)|" }, { "math_id": 43, "text": "[a,b]" }, { "math_id": 44, "text": "[a,a_1], [a_1,a_2], \\dots, [a_N,b]" }, { "math_id": 45, "text": "a<a_1<a_2<\\cdots<a_N<b " }, { "math_id": 46, "text": "\n\\begin{align}\nV_a^b(f) &= V_a^{a_1}(f) + V_{a_1}^{a_2}(f) + \\, \\cdots \\, +V_{a_N}^b(f)\\\\[0.3em]\n&=|f(a)-f(a_1)|+|f(a_1)-f(a_2)|+ \\,\\cdots \\, + |f(a_N)-f(b)|\n\\end{align}" }, { "math_id": 47, "text": "\\partial \\Omega " }, { "math_id": 48, "text": "C^1" }, { "math_id": 49, "text": "V(f,\\Omega) = \\int_\\Omega \\left|\\nabla f(x) \\right| \\mathrm{d}x" }, { "math_id": 50, "text": " \\int_\\Omega f\\operatorname{div}\\varphi = -\\int_\\Omega\\nabla f\\cdot\\varphi " }, { "math_id": 51, "text": " \\int_\\Omega \\operatorname{div}\\mathbf R = \\int_{\\partial\\Omega}\\mathbf R\\cdot \\mathbf n " }, { "math_id": 52, "text": "\\mathbf R:= f\\mathbf\\varphi" }, { "math_id": 53, "text": " \\int_\\Omega\\operatorname{div}\\left(f\\mathbf\\varphi\\right) =\n\\int_{\\partial\\Omega}\\left(f\\mathbf\\varphi\\right)\\cdot\\mathbf n " }, { "math_id": 54, "text": "\\mathbf\\varphi " }, { "math_id": 55, "text": " \\int_\\Omega\\operatorname{div}\\left(f\\mathbf\\varphi\\right)=0" }, { "math_id": 56, "text": " \\int_\\Omega \\partial_{x_i} \\left(f\\mathbf\\varphi_i\\right)=0" }, { "math_id": 57, "text": " \\int_\\Omega \\mathbf\\varphi_i\\partial_{x_i} f + f\\partial_{x_i}\\mathbf\\varphi_i=0" }, { "math_id": 58, "text": " \\int_\\Omega f\\partial_{x_i}\\mathbf\\varphi_i = - \\int_\\Omega \\mathbf\\varphi_i\\partial_{x_i} f " }, { "math_id": 59, "text": " \\int_\\Omega f\\operatorname{div} \\mathbf\\varphi = - \\int_\\Omega \\mathbf\\varphi\\cdot\\nabla f " }, { "math_id": 60, "text": " \\int_\\Omega f\\operatorname{div} \\mathbf\\varphi\n= - \\int_\\Omega \\mathbf\\varphi\\cdot\\nabla f\n\\leq \\left| \\int_\\Omega \\mathbf\\varphi\\cdot\\nabla f \\right|\n\\leq \\int_\\Omega \\left|\\mathbf\\varphi\\right|\\cdot\\left|\\nabla f\\right|\n\\leq \\int_\\Omega \\left|\\nabla f\\right| " }, { "math_id": 61, "text": "\\mathbf\\varphi" }, { "math_id": 62, "text": "\\theta_N:=-\\mathbb I_{\\left[-N,N\\right]}\\mathbb I_{\\{\\nabla f\\ne 0\\}}\\frac{\\nabla f}{\\left|\\nabla f\\right|}" }, { "math_id": 63, "text": "\\theta^*_N" }, { "math_id": 64, "text": "\\varepsilon" }, { "math_id": 65, "text": "\\theta_N" }, { "math_id": 66, "text": " C^1_c" }, { "math_id": 67, "text": " L^1 " }, { "math_id": 68, "text": "\\begin{align}\n&\\lim_{N\\to\\infty}\\int_\\Omega f\\operatorname{div}\\theta^*_N \\\\[4pt]\n&= \\lim_{N\\to\\infty}\\int_{\\{\\nabla f\\ne 0\\}}\\mathbb I_{\\left[-N,N\\right]}\\nabla f\\cdot\\frac{\\nabla f}{\\left|\\nabla f\\right|} \\\\[4pt]\n&= \\lim_{N\\to\\infty}\\int_{\\left[-N,N\\right]\\cap{\\{\\nabla f\\ne 0\\}}} \\nabla f\\cdot\\frac{\\nabla f}{\\left|\\nabla f\\right|} \\\\[4pt]\n&= \\int_\\Omega\\left|\\nabla f\\right|\n\\end{align}" }, { "math_id": 69, "text": "\\int_\\Omega f \\operatorname{div} \\mathbf\\varphi" }, { "math_id": 70, "text": "\\int_\\Omega\\left|\\nabla f\\right|" }, { "math_id": 71, "text": "\\int_\\Omega f\\operatorname{div}\\mathbf\\varphi \\leq \\int_\\Omega\\left|\\nabla f\\right| " }, { 
"math_id": 72, "text": "\\varphi\\to \\frac{-\\nabla f}{\\left|\\nabla f\\right|}." }, { "math_id": 73, "text": "\\varphi\\colon \\mathbb{R}\\to \\mathbb{R}" }, { "math_id": 74, "text": "\\varphi(t) = \\mu((-\\infty,t])~." }, { "math_id": 75, "text": "\\varphi" }, { "math_id": 76, "text": "\\|\\mu\\|_{TV} = \\mu_+(X) + \\mu_-(X)~," } ]
https://en.wikipedia.org/wiki?curid=683561
683570
Total variation distance of probability measures
Concept in probability theory In probability theory, the total variation distance is a distance measure for probability distributions. It is an example of a statistical distance metric, and is sometimes called the statistical distance, statistical difference or variational distance. Definition. Consider a measurable space formula_0 and probability measures formula_1 and formula_2 defined on formula_0. The total variation distance between formula_1 and formula_2 is defined as formula_3 This is the largest absolute difference between the probabilities that the two probability distributions assign to the same event. Properties. The total variation distance is an "f"-divergence and an integral probability metric. Relation to other distances. The total variation distance is related to the Kullback–Leibler divergence by Pinsker's inequality: formula_4 One also has the following inequality, due to Bretagnolle and Huber, which has the advantage of providing a non-vacuous bound even when formula_5 formula_6 The total variation distance is half of the L1 distance between the probability functions: on discrete domains, this is the distance between the probability mass functions, formula_7 and when the distributions have standard probability density functions p and q, formula_8 (or the analogous distance between Radon–Nikodym derivatives with any common dominating measure). This result can be shown by noticing that the supremum in the definition is achieved exactly at the set where one distribution dominates the other. The total variation distance is related to the Hellinger distance formula_9 as follows: formula_10 These inequalities follow immediately from the inequalities between the 1-norm and the 2-norm. Connection to transportation theory. The total variation distance (or half the norm) arises as the optimal transportation cost, when the cost function is formula_11, that is, formula_12 where the expectation is taken with respect to the probability measure formula_13 on the space where formula_14 lives, and the infimum is taken over all such formula_13 with marginals formula_1 and formula_2, respectively. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
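For distributions on a finite sample space, the half-L1 formula above is straightforward to compute and can be checked directly against the supremum-over-events definition, since the supremum is attained on the set of outcomes where one distribution exceeds the other. The sketch below compares the two computations for a small, arbitrary pair of distributions (the specific probabilities are just example values).

```python
from itertools import chain, combinations

# Two arbitrary probability mass functions on the same finite sample space.
p = {"a": 0.5, "b": 0.3, "c": 0.2}
q = {"a": 0.2, "b": 0.4, "c": 0.4}

# Half-L1 formula for the total variation distance.
tv_l1 = 0.5 * sum(abs(p[x] - q[x]) for x in p)

# Supremum over all events A of |P(A) - Q(A)|, by brute force over subsets.
outcomes = list(p)
subsets = chain.from_iterable(
    combinations(outcomes, r) for r in range(len(outcomes) + 1)
)
tv_sup = max(abs(sum(p[x] for x in A) - sum(q[x] for x in A)) for A in subsets)

print(tv_l1, tv_sup)  # both 0.3
```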
[ { "math_id": 0, "text": "(\\Omega, \\mathcal{F})" }, { "math_id": 1, "text": "P" }, { "math_id": 2, "text": "Q" }, { "math_id": 3, "text": "\\delta(P,Q)=\\sup_{ A\\in \\mathcal{F}}\\left|P(A)-Q(A)\\right|." }, { "math_id": 4, "text": "\\delta(P,Q) \\le \\sqrt{\\frac{1}{2} D_{\\mathrm{KL}}(P\\parallel Q)}." }, { "math_id": 5, "text": "\\textstyle D_{\\mathrm{KL}}(P\\parallel Q)>2\\colon" }, { "math_id": 6, "text": "\\delta(P,Q) \\le \\sqrt{1-e^{ -D_{\\mathrm{KL}}(P\\parallel Q) }}." }, { "math_id": 7, "text": "\\delta(P, Q) = \\frac12 \\sum_{x} |P(x) - Q(x)|," }, { "math_id": 8, "text": "\\delta(P, Q) = \\frac12 \\int | p(x) - q(x) | \\, \\mathrm{d}x" }, { "math_id": 9, "text": "H(P,Q)" }, { "math_id": 10, "text": "H^2(P,Q) \\leq \\delta(P,Q) \\leq \\sqrt 2 H(P,Q)." }, { "math_id": 11, "text": "c(x,y) = {\\mathbf{1}}_{x \\neq y}" }, { "math_id": 12, "text": "\\frac{1}{2} \\| P - Q \\|_1 = \\delta(P,Q) = \\inf\\big\\{ \\mathbb{P}(X\\neq Y ) : \\text{Law}(X) = P , \\text{Law}(Y) = Q\\big\\} = \\inf_\\pi \\operatorname{E}_{\\pi}[{\\mathbf{1}}_{x\\neq y}]," }, { "math_id": 13, "text": "\\pi" }, { "math_id": 14, "text": "(x,y)" } ]
https://en.wikipedia.org/wiki?curid=683570
683591
F-statistics
Statistically expected level of heterozygosity in a population In population genetics, "F"-statistics (also known as fixation indices) describe the statistically expected level of heterozygosity in a population; more specifically, the expected degree of (usually) a reduction in heterozygosity when compared to the Hardy–Weinberg expectation. "F"-statistics can also be thought of as a measure of the correlation between genes drawn at different levels of a (hierarchically) subdivided population. This correlation is influenced by several evolutionary processes, such as genetic drift, founder effect, bottleneck, genetic hitchhiking, meiotic drive, mutation, gene flow, inbreeding, natural selection, or the Wahlund effect, but the statistic was originally designed to measure the amount of allelic fixation owing to genetic drift. The concept of "F"-statistics was developed during the 1920s by the American geneticist Sewall Wright, who was interested in inbreeding in cattle. However, because complete dominance causes the phenotypes of homozygote dominants and heterozygotes to be the same, it was not until the advent of molecular genetics from the 1960s onwards that heterozygosity in populations could be measured. "F" can be used to define effective population size. Definitions and equations. The measures FIS, FST, and FIT are related to the amounts of heterozygosity at various levels of population structure. Together, they are called "F"-statistics, and are derived from "F", the inbreeding coefficient. In a simple two-allele system with inbreeding, the genotypic frequencies are: formula_0 The value for formula_1 is found by solving the equation for formula_1 using heterozygotes in the above inbred population. This becomes one minus the observed frequency of heterozygotes in a population divided by the expected frequency of heterozygotes at Hardy–Weinberg equilibrium: formula_2 where the expected frequency at Hardy–Weinberg equilibrium is given by formula_3 where formula_4 and formula_5 are the allele frequencies of formula_6 and formula_7, respectively. It is also the probability that at any locus, two alleles from a random individual of the population are identical by descent. For example, consider the data from E.B. Ford (1971) on a single population of the scarlet tiger moth, in which 1469 homozygous dominant (AA), 138 heterozygous (Aa) and 5 homozygous recessive (aa) individuals were recorded, a total of 1612 moths. From this, the allele frequencies can be calculated, and the expectation of formula_8 derived: formula_9 formula_10 formula_11 The different F-statistics look at different levels of population structure. FIT is the inbreeding coefficient of an individual (I) relative to the total (T) population, as above; FIS is the inbreeding coefficient of an individual (I) relative to the subpopulation (S), using the above for subpopulations and averaging them; and FST is the effect of subpopulations (S) compared to the total population (T), and is calculated by solving the equation: formula_12 as shown in the next section. Partition due to population structure. Consider a population that has a population structure of two levels: one from the individual (I) to the subpopulation (S) and one from the subpopulation to the total (T). Then the total formula_1, known here as formula_13, can be partitioned into formula_15 and formula_14: formula_16 This may be further partitioned for population substructure, and it expands according to the rules of binomial expansion, so that for "I" partitions: formula_17 Fixation index. 
A reformulation of the definition of formula_1 would be the ratio of the average number of differences between pairs of chromosomes sampled within diploid individuals to the average number obtained when sampling chromosomes randomly from the population (excluding the grouping per individual). One can modify this definition and consider a grouping per sub-population instead of per individual. Population geneticists have used that idea to measure the degree of structure in a population. Unfortunately, there is a large number of definitions for formula_14, causing some confusion in the scientific literature. A common definition is the following: formula_18 where the variance of formula_19 is computed across sub-populations and formula_20 is the expected frequency of heterozygotes. Fixation index in human populations. It is well established that the genetic diversity among human populations is low, although the distribution of the genetic diversity has only been roughly estimated. Early studies argued that 85–90% of the genetic variation is found within individuals residing in the same populations within continents (intra-continental populations) and only an additional 10–15% is found between populations of different continents (continental populations). Later studies based on hundreds of thousands of single-nucleotide polymorphisms (SNPs) suggested that the genetic diversity between continental populations is even smaller, accounting for 3 to 7%. A later study based on three million SNPs found that 12% of the genetic variation is found between continental populations and only 1% within them. Most of these studies have used the "F"ST statistic or closely related statistics. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
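The inbreeding-coefficient calculation in the scarlet tiger moth example above is easy to reproduce directly from the genotype counts. The sketch below recomputes the allele frequency "p", the Hardy–Weinberg expected heterozygosity and "F" using the counts quoted in that example.

```python
# Inbreeding coefficient F from observed genotype counts, following the
# formulas in the text: p is the frequency of allele A, the expected
# heterozygosity under Hardy-Weinberg is 2pq, and F = 1 - observed/expected.
# Counts are the scarlet tiger moth data discussed above (Ford 1971).

n_AA, n_Aa, n_aa = 1469, 138, 5
n = n_AA + n_Aa + n_aa                      # 1612 moths in total

p = (2 * n_AA + n_Aa) / (2 * n)             # frequency of allele A
q = 1 - p                                   # frequency of allele a

observed_het = n_Aa / n                     # observed frequency of Aa
expected_het = 2 * p * q                    # Hardy-Weinberg expectation

F = 1 - observed_het / expected_het
print(f"p = {p:.3f}, q = {q:.3f}, F = {F:.3f}")   # p ~ 0.954, F ~ 0.023
```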
[ { "math_id": 0, "text": " p^2(1-F) + pF\\text{ for }\\mathbf{AA};\\ 2pq(1-F)\\text{ for }\\mathbf{Aa};\\text{ and }q^2(1-F) + qF\\text{ for }\\mathbf{aa}. " }, { "math_id": 1, "text": "F" }, { "math_id": 2, "text": " F = 1- \\frac{\\operatorname{O}(f(\\mathbf{Aa}))} {\\operatorname{E}(f(\\mathbf{Aa}))} = 1- \\frac{\\operatorname{ObservedFrequency}(\\mathbf{Aa})} {\\operatorname{ExpectedFrequency}(\\mathbf{Aa})}, \\!" }, { "math_id": 3, "text": " \\operatorname{E}(f(\\mathbf{Aa})) = 2pq, \\!" }, { "math_id": 4, "text": "p" }, { "math_id": 5, "text": "q" }, { "math_id": 6, "text": "\\mathbf{A}" }, { "math_id": 7, "text": "\\mathbf{a}" }, { "math_id": 8, "text": "f\\left(\\mathbf{Aa}\\right)" }, { "math_id": 9, "text": "p = {2 \\times \\mathrm{obs}(AA) + \\mathrm{obs}(Aa) \\over 2 \\times (\\mathrm{obs}(AA) + \\mathrm{obs}(Aa) + \\mathrm{obs}(aa))} = 0.954" }, { "math_id": 10, "text": "q = 1 - p = 0.046\\," }, { "math_id": 11, "text": "F = 1- \\frac{ \\mathrm{obs}(Aa) / n } { 2pq } = 1- {138 / 1612 \\over 2(0.954)(0.046)} = 0.023" }, { "math_id": 12, "text": "(1-F_{IS})(1-F_{ST}) = 1-F_{IT}, \\, " }, { "math_id": 13, "text": "F_{IT}" }, { "math_id": 14, "text": "F_{ST}" }, { "math_id": 15, "text": "F_{IS}" }, { "math_id": 16, "text": " 1 - F_{IT} = (1 - F_{IS})\\,(1 - F_{ST}). \\!" }, { "math_id": 17, "text": " 1 - F = \\prod_{i=0}^{i=I} (1 - F_{i,i+1}) \\!" }, { "math_id": 18, "text": " F_{ST} = \\frac{\\operatorname{var}(\\mathbf{p})}{p\\,(1 - p)} \\!" }, { "math_id": 19, "text": "\\mathbf{p}" }, { "math_id": 20, "text": "p\\,(1 - p)" } ]
https://en.wikipedia.org/wiki?curid=683591
683621
Duckworth–Lewis–Stern method
Mathematical cricket match scoring formulation The Duckworth–Lewis–Stern method (DLS) is a mathematical formulation designed to calculate the target score (number of runs needed to win) for the team batting second in a limited overs cricket match interrupted by weather or other circumstances. The method was devised by two English statisticians, Frank Duckworth and Tony Lewis, and was formerly known as the Duckworth–Lewis method (D/L). It was introduced in 1997, and adopted officially by the ICC in 1999. After the retirement of both Duckworth and Lewis, the Australian statistician Steven Stern became the custodian of the method, which was renamed to its current title in November 2014. Stern refined the model to better fit modern scoring trends, especially in T20 cricket, and the resulting Duckworth–Lewis–Stern method remains the standard for handling rain-affected matches in international cricket today. The target score in cricket matches without interruptions is one more than the number of runs scored by the team that batted first. When overs are lost, setting an adjusted target for the team batting second is not as simple as reducing the run target proportionally to the loss in overs, because a team with ten wickets in hand and 25 overs to bat can play more aggressively than if they had ten wickets and a full 50 overs, for example, and can consequently achieve a higher run rate. The DLS method is an attempt to set a statistically fair target for the second team's innings, one that is of the same difficulty as the original target. The basic principle is that each team in a limited-overs match has two resources available with which to score runs (overs to play and wickets remaining), and the target is adjusted proportionally to the change in the combination of these two resources. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; History and creation. Various methods had previously been used to resolve rain-affected cricket matches, with the most common being the Average Run Rate method, and later, the Most Productive Overs method. While simple in nature, these methods had intrinsic flaws and were easily exploitable. The D/L method was devised by two British statisticians, Frank Duckworth and Tony Lewis, as a result of the outcome of the semi-final in the 1992 World Cup between England and South Africa, where the Most Productive Overs method was used. When rain stopped play for 12 minutes, South Africa needed 22 runs from 13 balls, but when play resumed, the revised target left South Africa needing 21 runs from one ball, a reduction of only one run compared to a reduction of two overs, and a virtually impossible target given that the maximum score from one ball is generally six runs. Duckworth said, "I recall hearing Christopher Martin-Jenkins on radio saying 'surely someone, somewhere could come up with something better' and I soon realised that it was a mathematical problem that required a mathematical solution." The D/L method avoids this flaw: in this match, the revised D/L target of 236 would have left South Africa needing four to tie or five to win from the final ball. The D/L method was first used in international cricket on 1 January 1997 in the second match of the Zimbabwe versus England ODI series, which Zimbabwe won by seven runs. The D/L method was formally adopted by the ICC in 1999 as the standard method of calculating target scores in rain-shortened one-day matches. Theory. Calculation summary. 
The essence of the D/L method is 'resources'. Each team is taken to have two 'resources' to use to score as many runs as possible: the number of overs they have to receive; and the number of wickets they have in hand. At any point in any innings, a team's ability to score more runs depends on the combination of these two resources they have left. Looking at historical scores, there is a very close correspondence between the availability of these resources and a team's final score, a correspondence which D/L exploits. The D/L method converts all possible combinations of overs (or, more accurately, balls) and wickets left into a combined resources remaining percentage figure (with 50 overs and 10 wickets = 100%), and these are all stored in a published table or computer. The target score for the team batting second ('Team 2') can be adjusted up or down from the total the team batting first ('Team 1') achieved using these resource percentages, to reflect the loss of resources to one or both teams when a match is shortened one or more times. In the version of D/L most commonly in use in international and first-class matches (the 'Professional Edition'), the target for Team 2 is adjusted simply in proportion to the two teams' resources, i.e. formula_0 If, as usually occurs, this 'par score' is a non-integer number of runs, then Team 2's target to win is this number rounded up to the next integer, and the score to tie (also called the par score) is this number rounded down to the preceding integer. If Team 2 reaches or passes the target score, then they have won the match. If the match ends when Team 2 has exactly met (but not passed) the par score then the match is a tie. If Team 2 fail to reach the par score then they have lost. For example, if a rain delay means that Team 2 only has 90% of resources available, and Team 1 scored 254 with 100% of resources available, then 254 × 90% / 100% = 228.6, so Team 2's target is 229, and the score to tie is 228. The actual resource values used in the Professional Edition are not publicly available, so a computer which has this software loaded must be used. If it is a 50-over match and Team 1 completed its innings uninterrupted, then they had 100% resource available to them, so the formula simplifies to: formula_1 For example, if Team 1 batted for 20 overs before rain came, thinking they would have 50 overs in total, but at the restart there was only time for Team 2 to bat for 20 overs, it would clearly be unfair to give Team 2 the target that Team 1 achieved, as Team 1 would have batted less conservatively and scored more runs if they had known they would only have the 20 overs. Mathematical theory. The original D/L model started by assuming that the number of runs that can still be scored (called formula_2), for a given number of overs remaining (called formula_3) and wickets lost (called formula_4), takes the following exponential decay relationship: formula_5 where the constant formula_6 is the asymptotic average total score in unlimited overs (under one-day rules), and formula_7 is the exponential decay constant. Both vary with formula_4 (only). The values of these two parameters for each formula_4 from 0 to 9 were estimated from scores from 'hundreds of one-day internationals' and 'extensive research and experimentation', though were not disclosed due to 'commercial confidentiality'. 
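Although the fitted constants have never been published, the shape of this model is easy to illustrate. The sketch below evaluates the exponential decay relationship for a few combinations of overs remaining and wickets lost; the values of formula_6 and formula_7 used here are made-up placeholders chosen only to show the qualitative behaviour, and are not the real D/L parameters.

```python
import math

# Illustration of the exponential-decay model Z(u, w) = Z0(w) * (1 - exp(-b(w) * u)),
# where u is the number of overs remaining and w the number of wickets lost.
# The Z0 and b values below are invented placeholders; the real constants
# estimated by Duckworth and Lewis were never disclosed.

Z0 = {0: 280.0, 3: 230.0, 6: 150.0, 9: 50.0}   # asymptotic average further runs
b  = {0: 0.035, 3: 0.045, 6: 0.07, 9: 0.17}    # decay constants

def runs_still_expected(overs_left, wickets_lost):
    return Z0[wickets_lost] * (1.0 - math.exp(-b[wickets_lost] * overs_left))

for w in (0, 3, 6, 9):
    for u in (50, 30, 10):
        print(f"wickets lost {w}, overs left {u:2d}: "
              f"expected further runs ~ {runs_still_expected(u, w):5.1f}")
```

The point of the model is visible in the output: the expected further runs shrink both as overs run out and as wickets fall, which is exactly the combined "resources" idea described above.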
Finding the value of formula_2 for a particular combination of formula_3 and formula_4 (by putting in formula_3 and the values of these constants for the particular formula_4), and dividing this by the score achievable at the start of the innings, i.e. finding formula_8, gives the proportion of the combined run scoring resources of the innings remaining when formula_3 overs are left and formula_4 wickets are down. These proportions can be plotted as a graph or collected in a single table. This became the Standard Edition. When it was introduced, it was necessary that D/L could be implemented with a single table of resource percentages, as it could not be guaranteed that computers would be present. Therefore, this single formula was used, giving average resources. This method relies on the assumption that average performance is proportional to the mean, irrespective of the actual score. This was good enough in 95 per cent of matches, but in the 5 per cent of matches with very high scores, the simple approach started to break down. To overcome the problem, an upgraded formula was proposed with an additional parameter whose value depends on the Team 1 innings. This became the Professional Edition. Examples. Stoppage in first innings. Increased target. In the 4th India–England ODI in the 2008 series, the first innings was interrupted by rain on two occasions, reducing the match to 22 overs each. India (batting first) made 166/4. The D/L method increased England's target to 198 from 22 overs. As England "knew" they had only 22 overs, the expectation is that they could score more runs from those overs than India had from their (interrupted) innings. England made 178/8 from 22 overs, and so the match was listed as "India won by 19 runs (D/L method)". During the 5th ODI between India and South Africa in January 2011, rain halted play twice during the first innings. The match was reduced to 46 overs each. South Africa scored 250/9. The D/L method increased India's target to 268. As the number of overs was reduced during South Africa's innings, this method takes into account what South Africa were likely to have scored if they had known throughout their innings that it would only be 46 overs long. The match was listed as "South Africa won by 33 runs (D/L method)". Decreased target. On 3 December 2014, Sri Lanka played England and batted first, but play was interrupted when Sri Lanka had scored 6/1 from 2 overs. At the restart, both innings were reduced to 35 overs, and Sri Lanka finished on 242/8. D/L reduced England's target to 236 from 35 overs. Although Sri Lanka had less resource remaining after the interruption than England would have for their whole innings (about 7% less), they had used up 8% of their resource (2 overs and 1 wicket) before the interruption, so the total resource used by Sri Lanka was still slightly more than England had available, hence the slightly decreased target for England. Stoppage in second innings. A simple example of the D/L method being applied was the 1st ODI between India and Pakistan in their 2006 ODI series. India batted first, and were all out for 328. Pakistan, batting second, were 311/7 when bad light stopped play after the 47th over. Pakistan's target, had the match continued, was 18 runs in 18 balls, with three wickets in hand. Considering the overall scoring rate throughout the match, this is a target most teams would be favoured to achieve. 
And indeed, application of the D/L method resulted in a retrospective target score of 305 (or par score of 304) at the end of the 47th over, with the result therefore officially listed as "Pakistan won by 7 runs (D/L Method)". The D/L method was used in the group stage match between Sri Lanka and Zimbabwe at the T20 World Cup in 2010. Sri Lanka scored 173/7 in 20 overs batting first, and in reply Zimbabwe were 4/0 from 1 over when rain interrupted play. At the restart Zimbabwe's target was reduced to 108 from 12 overs, but rain stopped the match when they had scored 29/1 from 5 overs. The retrospective D/L target from 5 overs was a further reduction to 44, or a par score of 43, and hence Sri Lanka won the match by 14 runs. The DLS method was also used after the rain disruption in the 2023 Indian Premier League final, when Chennai Super Kings had scored 4/0 (0.3 overs) in reply to the Gujarat Titans' 214/4 (20 overs). Chennai Super Kings' target was reduced to 171 runs from 15 overs, from the earlier target of 215 runs from 20 overs. Chennai Super Kings won by 5 wickets under the DLS method, reaching 171/5 from 15 overs. An example of a D/L tied match was the ODI between England and India on 11 September 2011. This match was frequently interrupted by rain in the final overs, and a ball-by-ball calculation of the Duckworth–Lewis 'par' score played a key role in tactical decisions during those overs. At one point, India were leading under D/L during one rain delay, and would have won if play had not resumed. At a second rain interval, England, who had scored some quick runs (knowing they needed to get ahead in D/L terms), would correspondingly have won if play had not resumed. Play was finally called off with just 7 balls of the match remaining and England's score equal to the Duckworth–Lewis 'par' score, therefore resulting in a tie. This example does show how crucial (and difficult) the decisions of the umpires can be, in assessing when rain is heavy enough to justify ceasing play. If the umpires of that match had halted play one ball earlier, England would have been ahead on D/L, and so would have won the match. Equally, if play had stopped one ball later, India could have won the match with a dot ball – indicating how finely-tuned D/L calculations can be in such situations. Stoppages in both innings. During the 2012/13 KFC Big Bash League, D/L was used in the 2nd semi-final played between the Melbourne Stars and the Perth Scorchers. After rain delayed the start of the match, further rain interrupted Melbourne's innings when they had scored 159/1 off 15.2 overs; both innings were reduced by 2 overs to 18, and Melbourne finished on 183/2. After a further rain delay reduced Perth's innings to 17 overs, Perth returned to the field to face 13 overs, with a revised target of 139. Perth won the game by 8 wickets with a boundary off the final ball. Use and updates. The published table that underpins the D/L method is regularly updated, using source data from more recent matches; this is done on 1 July annually. For 50-over matches decided by D/L, each team must face at least 20 overs for the result to be valid, and for Twenty20 games decided by D/L, each side must face at least five overs, unless one or both teams are bowled out and/or the second team reaches its target in fewer overs. If the conditions prevent a match from reaching this minimum length, it is declared a no result. 1996–2003 – Single version. Until 2003, a single version of D/L was in use. 
This used a single published reference table of total resource percentages remaining for all possible combinations of overs and wickets, and some simple mathematical calculations, and was relatively transparent and straightforward to implement. However, a flaw in how it handled very high first innings scores (350+) became apparent from the 1999 Cricket World Cup match in Bristol between India and Kenya. Tony Lewis noticed that there was an inherent weakness in the formula that would give a noticeable advantage to the side chasing a total in excess of 350. A correction was built into the formula and the software, but was not fully adopted until 2004. One-day matches were achieving significantly higher scores than in previous decades, affecting the historical relationship between resources and runs. The second version uses more sophisticated statistical modelling, but does not use a single table of resource percentages. Instead, the percentages also vary with score, so a computer is required. Therefore, it loses some of the previous advantages of transparency and simplicity. In 2002 the resource percentages were revised, following an extensive analysis of limited overs matches, and there was a change to the G50 for ODIs. (G50 is the average score expected from the team batting first in an uninterrupted 50 overs-per-innings match.) G50 was changed to 235 for ODIs. These changes came into effect on 1 September 2002. As of 2014, these resource percentages are the ones still in use in the Standard Edition, though G50 has subsequently changed. Compared with the percentages used in 1999 and 2001, the revised 2002 values were mostly reduced. 2004 – Adoption of second version. The original version was named the Standard Edition, and the new version was named the Professional Edition. Tony Lewis said, "We were then [at the time of the 2003 World Cup Final] using what is now known as the Standard Edition. ... Australia got 359 and that showed up the flaws and straight away the next edition was introduced which handled high scores much better. The par score for India is likely to be much higher now." Duckworth and Lewis wrote, "When the side batting first score at or below the average for top level cricket ..., the results of applying the Professional Edition are generally similar to those from the Standard Edition. For higher scoring matches, the results start to diverge and the difference increases the higher the first innings total. In effect there is now a different table of resource percentages for every total score in the Team 1 innings." The Professional Edition has been in use in all international one-day cricket matches since early 2004. This edition also removed the use of the G50 constant when dealing with interruptions in the first innings. The decision on which edition should be used is for the cricket authority which runs the particular competition. The ICC Playing Handbook requires the use of the Professional Edition for internationals. This also applies to most countries' national competitions. At lower levels of the game, where use of a computer cannot always be guaranteed, the Standard Edition is used. 2009 – Twenty20 updates. In June 2009, it was reported that the D/L method would be reviewed for the Twenty20 format after its appropriateness was questioned in the quickest version of the game. 
Lewis was quoted admitting that "Certainly, people have suggested that we need to look very carefully and see whether in fact the numbers in our formula are totally appropriate for the Twenty20 game." 2015 – Becomes DLS. For the 2015 World Cup, the ICC implemented the Duckworth–Lewis–Stern formula, which included work by the new custodian of the method, Professor Steven Stern, from the Department of Statistics at Queensland University of Technology. These changes recognised that teams need to start out with a higher scoring rate when chasing high targets rather than keep wickets in hand. Target score calculations. Using the notation of the ICC Playing Handbook, the team that bats first is called Team 1, their final score is called S, the total resources available to Team 1 for their innings is called R1, the team that bats second is called Team 2, and the total resources available to Team 2 for their innings is called R2. Step 1. Find the batting resources available to each team. After each reduction in overs, the new total batting resources available to the two teams are found, using figures for the total amount of batting resources "remaining" for any combination of overs and wickets. While the process for converting these resources remaining figures into total resource available figures is the same in the two Editions, this can be done manually in the Standard Edition, as the resource remaining figures are published in a reference table. However, the resource remaining figures used in the Professional Edition are not publicly available, so a computer must be used which has the software loaded. A single interruption can occur in several different ways (a delayed start, a break in play, or an innings cut short). With multiple interruptions possible, it may seem as if finding the total resource percentage requires a different calculation for each different scenario. However, the formula is actually the same each time; it is just that different scenarios, with more or fewer interruptions and restarts, use more or less of the same formula. The total resources available to a team are given by: Total resources available = 100% − Resources remaining at 1st interruption + Resources remaining at 1st restart − Resources remaining at 2nd interruption + Resources remaining at 2nd restart − (and so on, for each further interruption and restart), which can alternatively be written as: Total resources available = 100% − (sum of the resources remaining at each interruption) + (sum of the resources remaining at each restart). Each time there's an interruption or a restart after an interruption, the resource remaining percentages at those times (obtained from a reference table for the Standard Edition, or from a computer for the Professional Edition) can be entered into the formula, with the rest left blank. Note that a delay at the start of an innings counts as the 1st interruption. Step 2. Convert the two teams' batting resources into Team 2's target score. Standard Edition. If R2 is less than R1, Team 2's par score is S × R2/R1. If R2 is equal to R1, the par score is simply S. If R2 is greater than R1, the par score is S + G50 × (R2 − R1)/100, i.e. Team 2 must match Team 1's performance with the resources the two sides had in common and score at an average rate with its extra resources; this is the rule applied in the West Indies v Zimbabwe and Australia v Netherlands examples below. G50. G50 is the average score expected from the team batting first in an uninterrupted 50 overs-per-innings match. This will vary with the level of competition and over time. The annual ICC Playing Handbook gives the values of G50 to be used each year when the D/L Standard Edition is applied. Professional Edition. Team 2's par score is S × R2/R1 in all cases; G50 is not used. Example Standard Edition target score calculations. As the resource percentages used in the Professional Edition are not publicly available, it is difficult to give examples of the D/L calculation for the Professional Edition. Therefore, examples are given from when the Standard Edition was widely used, which was up to early 2004. Reduced target: Team 1's innings completed; Team 2's innings delayed (resources lost at start of innings). On 18 May 2003, Lancashire played Hampshire in the 2003 ECB National League. Rain before play reduced the match to 30 overs each. 
Lancashire batted first and scored 231–4 from their 30 overs. Before Hampshire began their innings, it was further reduced to 28 overs. Hampshire's target was therefore 221 to win (in 28 overs), or 220 to tie. They were all out for 150, giving Lancashire victory by 220 − 150 = 70 runs. If Hampshire's target had been set by the Average Run Rate method (simply in proportion to the reduction in overs), their par score would have been 231 x 28/30 = 215.6, giving 216 to win or 215 to tie. While this would have kept the required run rate the same as Lancashire achieved (7.7 runs per over), this would have given an unfair advantage to Hampshire as it's easier to achieve and maintain a run rate for a shorter period. Increasing Hampshire's target from 216 overcomes this flaw. As Lancashire's innings was interrupted once (before it started), and then restarted, their resource can be found from the general formula above as follows (Hampshire's is similar): Total resources = 100% − Resources remaining at 1st interruption + Resources remaining at 1st restart = 100% − 100% + 75.1% = 75.1%. Reduced target: Team 1's innings completed; Team 2's innings cut short (resources lost at end of innings). On 3 March 2003, Sri Lanka played South Africa in World Cup Pool B. Sri Lanka batted first and scored 268–9 from their 50 overs. Chasing a target of 269, South Africa had reached 229–6 from 45 overs when play was abandoned. Therefore, South Africa's retrospective target from their 45 overs was 230 runs to win, or 229 to tie. In the event, as they had scored exactly 229, the match was declared a tie. South Africa scored no runs off the very last ball. If play had been abandoned without that ball having been bowled, the resource available to South Africa at the abandonment would have been 14.7%, giving them a par score of 228.6, and hence victory. As South Africa's innings was interrupted once (and not restarted), their resource is given by the general formula above as follows: Total resources available = 100% − Resources remaining at 1st interruption = 100% − 14.3% = 85.7%. Reduced target: Team 1's innings completed; Team 2's innings interrupted (resources lost in middle of innings). On 16 February 2003, New South Wales played South Australia in the ING Cup. New South Wales batted first and scored 273 all out (from 49.4 overs). Chasing a target of 274, rain interrupted play when South Australia had reached 70–2 from 19 overs, and at the restart their innings was reduced to 36 overs (i.e. 17 remaining). South Australia's new target was therefore 214 to win (in 36 overs), or 213 to tie. In the event, they were all out for 174, so New South Wales won by 213 − 174 = 39 runs. As South Australia's innings was interrupted once and restarted once, their resource is given by the general formula above as follows: Total resources available = 100% − Resources remaining at 1st interruption + Resources remaining at 1st restart = 100% − 68.6% + 46.7% = 78.1%. Increased target: Team 1's innings cut short (resources lost at end of innings); Team 2's innings completed. On 25 January 2001, West Indies played Zimbabwe. West Indies batted first and had reached 235–6 from 47 overs (of a scheduled 50) when rain halted play for two hours. At the restart, both innings were reduced to 47 overs, i.e. West Indies' innings was closed immediately, and Zimbabwe began their innings. Zimbabwe's target was therefore 253 to win (in 47 overs), or 252 to tie. 
It is fair that their target was increased, even though they had the same number of overs to bat as West Indies, as West Indies would have batted more aggressively in their last few overs, and scored more runs, if they had known that their innings would be cut short at 47 overs. Zimbabwe were all out for 175, giving West Indies victory by 252 − 175 = 77 runs. These resource percentages are the ones which were in use back in 2001, before the 2002 revision, and so do not match the currently used percentages for the Standard Edition, which are slightly different. Also, the formula for Zimbabwe's par score comes from the Standard Edition of D/L, which was used at the time. Currently the Professional Edition is used, which has a different formula when R2 > R1. The formula required Zimbabwe to match West Indies' performance with their overlapping 89.8% of resource (i.e. score 235 runs), and achieve average performance with their extra 97.4% − 89.8% = 7.6% of resource (i.e. score 7.6% of G50 (225 at the time) = 17.1 runs). As West Indies' innings was interrupted once (and not restarted), their resource is given by the general formula above as follows: Total resources available = 100% − Resources remaining at 1st interruption = 100% − 10.2% = 89.8%. Increased target: Multiple interruptions in Team 1's innings (resources lost in middle of innings); Team 2's innings completed. On 20 February 2003, Australia played Netherlands in the 2003 Cricket World Cup Pool A. Rain before play reduced the match to 47 overs each, and Australia batted first. Two further rain interruptions during Australia's innings eventually reduced it to 36 overs, in which they scored 170, and the Netherlands were likewise given 36 overs to bat. The Netherlands' target was therefore 198 to win (in 36 overs), or 197 to tie. It is fair that their target was increased, even though they had the same number of overs to bat as Australia, as Australia would have batted less conservatively in their first 28 overs, and scored more runs at the expense of more wickets, if they had known that their innings would only be 36 overs long. Increasing the Netherlands' target score neutralises the injustice done to Australia when they were denied some of the overs to bat they thought they would get. The Netherlands were all out for 122, giving Australia victory by 197 − 122 = 75 runs. This formula for the Netherlands' par score comes from the Standard Edition of D/L, which was used at the time. Currently the Professional Edition is used, which has a different formula when R2 > R1. The formula required the Netherlands to match Australia's performance with their overlapping 72.6% of resource (i.e. score 170 runs), and achieve average performance with their extra 84.1% − 72.6% = 11.5% of resource (i.e. score 11.5% of G50 (235 at the time) = 27.025 runs). After the match there were reports in the media that Australia had batted conservatively in their final 8 overs after the final restart, to avoid losing wickets rather than maximising their numbers of runs, in the belief that this would further increase the Netherlands' par score. However, if this is true, this belief was mistaken, in the same way that conserving wickets rather than maximising runs in the final 8 overs of a full 50-over innings would be a mistake. At that point the amount of resource available to each team was fixed (as long as there were no further rain interruptions), so the only undetermined number in the formula for the Netherlands' par score was Australia's final score, which they should simply have tried to maximise.
As Australia's innings was interrupted three times (once before it started) and restarted three times, their resource is given by the general formula above as follows: Total resources available = 100% − Resources remaining at 1st interruption + Resources remaining at 1st restart − Resources remaining at 2nd interruption + Resources remaining at 2nd restart − Resources remaining at 3rd interruption + Resources remaining at 3rd restart = 100% − 100% + 97.1% − 55.8% + 50.5% − 44.7% + 25.5% = 72.6%. In-game strategy. During team 1's innings. Strategy for team 1. During Team 1's innings, the target score calculations (as described above), have not yet been made. The objective of the team batting first is to maximise the target score which will be calculated for the team batting second, which (in the Professional Edition) will be determined by the formula: formula_0 For these three terms: If there will not be any future interruptions to Team 1's innings, then the amount of resource available to them is now fixed (whether there have been interruptions so far or not), so the only thing Team 1 can do to increase Team 2's target is increase their own score, ignoring how many wickets they lose (as in a normal unaffected match). However, if there will be future interruptions to Team 1's innings, then an alternative strategy to scoring more runs is minimising the amount of resource they use before the coming interruption (i.e. preserving wickets). While the best overall strategy is obviously to both score more runs "and" preserve resources, if a choice has to be made between the two, sometimes preserving wickets at the expense of scoring runs ('conservative' batting) is a more effective way of increasing Team 2's target, and sometimes the reverse ('aggressive' batting) is true. For example, suppose Team 1 has been batting without interruptions, but thinks the innings will be cut short at 40 overs, i.e. with 10 overs left. (Then Team 2 will have 40 overs to bat, so Team 2's resource will be 89.3%.) Team 1 thinks by batting conservatively it can reach 200–6, or by batting aggressively it can reach 220–8: Therefore, in this case, the conservative strategy achieves a higher target for Team 2. However, suppose instead that the difference between the two strategies is scoring 200–2 or 220–4: In this case, the aggressive strategy is better. Therefore, the best batting strategy for Team 1 ahead of a coming interruption is not always the same, but varies with the facts of the match situation to date (runs scored, wickets lost, overs used, and whether there have been interruptions), and also with the opinions about what will happen with each strategy (how many further runs will be scored, further wickets will be lost, and further overs will be used? How likely are the coming interruptions, when will they happen, and how long will they last – will Team 1's innings be restarted?). This example shows just two possible batting strategies, but in reality there could be a range of others, e.g. 'neutral', 'semi-aggressive', 'super-aggressive', or timewasting to minimise the amount of resource used by slowing the over rate. Finding which strategy is the best can only be found by inputting the facts and one's opinions into the calculations and seeing what emerges. Of course, a chosen strategy may backfire. For example, if Team 1 chooses to bat conservatively, Team 2 may see this and decide to attack (rather than focus on saving runs), and Team 1 may both fail to score many more runs "and" lose wickets. 
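The resource and par-score arithmetic used in the worked examples above can be reproduced with a short script. The sketch below is only an illustration of the Standard Edition arithmetic, not the official D/L software: the function names are invented, and the resource-remaining percentages are not computed here but taken directly from the figures quoted in the examples (Australia v Netherlands 2003 and Sri Lanka v South Africa 2003).

def total_resources(events):
    """Standard Edition resource accumulation.
    events is a list of (remaining_at_interruption, remaining_at_restart) pairs,
    in percent; use None for the restart if the innings was not resumed.
    A delayed start counts as the 1st interruption."""
    total = 100.0
    for at_interruption, at_restart in events:
        total -= at_interruption
        if at_restart is not None:
            total += at_restart
    return total

def standard_edition_target(team1_score, r1, r2, g50=235):
    """Team 2's target to win under the Standard Edition.
    If R2 < R1 the par score is scaled down in proportion; if R2 > R1 the
    excess resource is valued at G50 runs per 100%, as in the Zimbabwe and
    Netherlands examples above."""
    if r2 <= r1:
        par = team1_score * r2 / r1
    else:
        par = team1_score + g50 * (r2 - r1) / 100.0
    return int(par) + 1   # target to win = par rounded down, plus one

# Australia v Netherlands, 2003: three interruptions, three restarts.
r1 = total_resources([(100.0, 97.1), (55.8, 50.5), (44.7, 25.5)])
print(round(r1, 1))                                     # 72.6
print(standard_edition_target(170, r1, 84.1, g50=235))  # 198

# Sri Lanka v South Africa, 2003: Team 2's innings cut short at 85.7% used.
print(standard_edition_target(268, 100.0, 85.7))        # 230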
If there have already been interruptions to Team 1's innings, the calculation of total resource they use will be more complicated than this example. Strategy for team 2. During Team 1's innings, Team 2's objective is to minimise the target score they will be set. This is achieved by minimising Team 1's score, or (as above), if there will be future interruptions to Team 1's innings, alternatively by maximising the resource used by Team 1 (i.e. wickets lost or overs bowled) before that happens. Team 2 can vary their bowling strategy (between conservative and aggressive) to try to achieve either of these objectives, so this means doing the same calculations as above, inputting their opinions of future runs conceded, wickets taken and overs bowled in each bowling strategy, to see which one is best. Also, Team 2 can encourage Team 1 to bat particularly conservatively or aggressively (e.g. through field settings). During team 2's innings. A target (from a given number of overs) is set for Team 2 at the start of its innings. If there will not be any future interruptions, then both sides can play to a finish in the normal way. However, if there are likely to be interruptions to Team 2's innings, then Team 2 will aim to keep itself ahead of the D/L par score, and Team 1 will aim to keep them behind it. This is because, if a match is abandoned before the given number of overs is complete, Team 2 is declared the winner if they're ahead of the par score, and Team 1 is declared the winner if Team 2 are behind the par score. A tie is declared if Team 2 are exactly on the par score. (This is provided a minimum number of overs has been bowled in Team 2's innings.) The par score increases with every ball bowled and every wicket lost, as the amount of resource used increases. As an example, in the 2003 Cricket World Cup Final Australia batted first and scored 359 from 50 overs. As Australia completed their 50 overs, their total resources used R1=100%, so India's par score throughout their innings was: 359 x R2/100%, where R2 is the amount of resource used to that point. As shown in the first line of the table below, after 9 overs India were 57-1, and 41 overs and 9 wickets remaining equates to 85.3% of resources, so 100% − 85.3% = 14.7% had been used. India's par score after 9 overs was therefore 359 x 14.7%/100% = 52.773, which is rounded down to 52. During the six balls of the 10th over India scored 0, 0, 0, 1 (from a no ball), loss of wicket, 0. At the start of the over India were ahead of the par score, but the loss of the wicket caused their par score to jump from 55 to 79, which put them behind the par score. Other uses. There are uses of the D/L method other than finding the current official final target score for the team batting second in a match that has already been reduced by the weather. Ball-by-ball par score. During the second team's innings, the number of runs a chasing side would expect to have scored on average with this number of overs used and wickets lost, if they were going to successfully match the first team's score, called the D/L par score, may be shown on a computer printout, the scoreboard and/or TV alongside the actual score, and updated after every ball. This can happen in matches which look like they're about to be shortened by the weather, and so D/L is about to be brought into play, or even in matches completely unaffected by the weather. This is: Net run rate calculation. 
It has been suggested that when a side batting second successfully completes the run chase, the D/L method could be used to predict how many runs they would have scored with a full innings (i.e. 50 overs in a One Day International), and use this prediction in the net run rate calculation. This suggestion is in response to the criticisms of NRR that it does not take into account wickets lost, and that it unfairly penalises teams which bat second and win, as those innings are shorter and therefore have less weight in the NRR calculation than other innings which go the full distance. Criticism. The D/L method has been criticised on the grounds that wickets are a much more heavily weighted resource than overs, leading to the suggestion that if teams are chasing large targets and there is the prospect of rain, a winning strategy could be to not lose wickets and score at what would seem to be a "losing" rate (e.g. if the required rate was 6.1, it could be enough to score at 4.75 an over for the first 20–25 overs). The 2015 update to DLS recognised this flaw, and changed the rate at which teams needed to score at the start of the second innings in response to a large first innings. Another criticism is that the D/L method does not account for changes in proportion of the innings for which field restrictions are in place compared to a completed match. More recent efforts have used ball-by-ball ODI databases of actually completed matches to evaluate the accuracy of the method. Those efforts have concluded that the DLS par score can have accuracies as low as 50 to 60% at predicting the eventual winner of the match when the team batting second bats between 20 and 24 overs and loses between 0 and 2 wickets. More common informal criticism from cricket fans and journalists of the D/L method is that it is unduly complex and can be misunderstood. For example, in a one-day match against England on 20 March 2009, the West Indies coach (John Dyson) called his players in for bad light, believing that his team would win by one run under the D/L method, but not realising that the loss of a wicket with the last ball had altered the Duckworth–Lewis score. In fact Javagal Srinath, the match referee, confirmed that the West Indies were two runs short of their target, giving the victory to England. Concerns have also been raised as to its suitability for Twenty20 matches, where a high scoring over can drastically alter the situation of the game, and variability of the run-rate is higher over matches with a shorter number of overs. Cultural influence. "The Duckworth Lewis Method" is the name of a pop group, formed by Neil Hannon of The Divine Comedy and Thomas Walsh of Pugwash. Their first release was an eponymous album, which features cricket-themed songs. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{Team 2's par score }=\\text{ Team 1's score} \\times \\frac{\\text{Team 2's resources}}{\\text{Team 1's resources}}." }, { "math_id": 1, "text": "\\text{Team 2's par score }=\\text{ Team 1's score} \\times \\text{Team 2's resources}. " }, { "math_id": 2, "text": "Z" }, { "math_id": 3, "text": "u" }, { "math_id": 4, "text": "w" }, { "math_id": 5, "text": "Z(u,w) = Z_0(w)\\left({1 - e^{-b(w)u} } \\right)," }, { "math_id": 6, "text": "Z_0" }, { "math_id": 7, "text": "b" }, { "math_id": 8, "text": "P(u,w) = \\frac{Z(u,w)}{Z(u=50,w=0)},\\," } ]
https://en.wikipedia.org/wiki?curid=683621
683628
Accelerator effect
The accelerator effect in economics is a positive effect on private fixed investment of the growth of the market economy (measured e.g. by a change in gross domestic product). Rising GDP (an economic boom or prosperity) implies that businesses in general see rising profits, increased sales and cash flow, and greater use of existing capacity. This usually implies that profit expectations and business confidence rise, encouraging businesses to build more factories and other buildings and to install more machinery. (This expenditure is called "fixed investment".) This may lead to further growth of the economy through the stimulation of consumer incomes and purchases, i.e., via the multiplier effect. In essence, the accelerator effect proposes that investment levels are contingent on the pace of change in GDP rather than its absolute level. In simpler terms, it is the acceleration or deceleration of economic growth that shapes businesses' choices regarding investments. The accelerator effect operates in reverse as well: when the GDP declines (entering a recession), it negatively impacts business profits, sales, cash flow, capacity utilization, and expectations. Consequently, these factors discourage businesses from making fixed investments, which further intensifies the recession due to the multiplier effect. The accelerator effect fits the behavior of an economy best when either the economy is moving away from full employment or when it is already below that level of production. This is because high levels of aggregate demand hit against the limits set by the existing labour force, the existing stock of capital goods, the availability of natural resources, and the technical ability of an economy to convert inputs into products. History. The accelerator theory concept was mainly given by Thomas Nixon Carver and Albert Aftalion before Keynesian economics came into force, but it came into public knowledge more and more as Keynesian theory began to dominate the field of economics. John Maynard Keynes first introduced the idea in his seminal work "The General Theory of Employment, Interest, and Money," published in 1936. Keynes recognized that changes in investment are not solely driven by interest rates but are also influenced by the level of demand for goods and services. In the 1940s, American economist Alvin Hansen further developed the accelerator principle. He extended Keynes's ideas and introduced the concept of the "principle of acceleration." Hansen emphasized the role of the accelerator effect in business cycles, showing that fluctuations in investment are a significant driver of economic fluctuations. He proposed that investment decisions are not just influenced by current income or demand levels but are also sensitive to changes in the rate of income growth. Over time, economists have further refined and expanded upon the accelerator effect in various ways. Some have incorporated additional factors such as technological change, expectations, and financial constraints to enhance the accuracy of investment predictions. However, the accelerator effect has also faced criticism. Some people argued against it because it was thought to remove all the possibility of the demand control through the price control mechanism. Multiplier effect vs. acceleration effect. The acceleration effect is the phenomenon that a variable moves toward its desired value faster and faster with respect to time. Usually, the variable is the capital stock. 
In Keynesian models, fixed capital is not in consideration, so the accelerator coefficient becomes the reciprocal of the multiplier and the capital decision degenerates to investment decision. In more general theory, where the capital decision determines the desired level of capital stock (which includes fixed capital and working capital), and the investment decision determines the change of capital stock in a sequences of periods, the acceleration effect emerges as only the current period gap affects the current investment, so do the previous gaps. The Aftalion-Clark accelerator v has such a form formula_0, while the Keynesian multiplier m has such a form formula_1 where "MPC" is the marginal propensity to consume. The idea of the accelerator has been very well explained by Hayek. Business cycles vs. acceleration effect. As the acceleration effect dictates that the increase of income accelerates capital accumulation, and the decrease of income accelerates capital depletion (in a simple model), this might cause the system to become unstable or cyclical, and hence many kinds of business cycle models are of this kind (the multiplier-accelerator cycle models). Accelerator models. The accelerator effect is shown in the simple accelerator model. This model assumes that the stock of capital goods (K) is proportional to the level of production ("Y"): "K" = "k"×"Y" This implies that if "k" (the capital-output ratio) is constant, an increase in "Y" requires an increase in "K". That is, net investment, "In" equals: "In" = "k"×Δ"Y" Suppose that "k" = 2 (usually, k is assumed to be in (0,1)). This equation implies that if "Y" rises by 10, then net investment will equal 10×2 = 20, as suggested by the accelerator effect. If "Y" then rises by only 5, the equation implies that the level of investment will be 5×2 = 10. This means that the simple accelerator model implies that fixed investment will "fall" if the growth of production "slows". "An actual fall in production is not needed to cause investment to fall." However, such a fall in output will result if slowing growth of production causes investment to fall, since that reduces aggregate demand. Thus, the simple accelerator model implies an endogenous explanation of the business-cycle downturn, the transition to a recession. Modern economists have described the accelerator effect in terms of the more sophisticated flexible accelerator model of investment. Businesses are described as engaging in net investment in fixed capital goods in order to close the gap between the "desired" stock of capital goods ("K"d) and the "existing" stock of capital goods left over from the past ("K"−1): formula_2 where "x" is a coefficient representing the speed of adjustment (1 ≥ "x" ≥ 0). formula_0 The desired stock of capital goods is determined by such variables as the expected profit rate, the expected level of output, the interest rate (the cost of finance), and technology. Because the expected level of output plays a role, this model exhibits behavior described by the accelerator effect but less extreme than that of the simple accelerator. Because the existing capital stock grows over time due to past net investment, a "slowing" of the growth of output (GDP) can cause the gap between the desired "K" and the existing "K" to narrow, close, or even become negative, causing current net investment to fall. Obviously, ceteris paribus, an actual fall in output depresses the desired stock of capital goods and thus net investment. 
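A minimal numerical sketch of the two investment rules just described may help. The values below are invented for illustration (with k = 2 chosen to match the numerical example above), and the function names are not from any standard library.

def simple_accelerator(k, delta_Y):
    """Net investment implied by the simple accelerator: I_n = k * (change in output)."""
    return k * delta_Y

def flexible_accelerator(x, K_desired, K_previous):
    """Net investment closing part of the gap between desired and existing capital:
    I_n = x * (K_desired - K_previous), with 0 <= x <= 1 the speed of adjustment."""
    return x * (K_desired - K_previous)

print(simple_accelerator(2, 10))   # 20, as in the example above
print(simple_accelerator(2, 5))    # 10: slower output growth implies lower investment
print(flexible_accelerator(0.5, K_desired=2 * 110, K_previous=200))  # 10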
Similarly, a rise in output causes a rise in investment. Finally, if the desired capital stock is less than the actual stock, then net investment may be depressed for a long time. In the neoclassical accelerator model of Jorgenson, the desired capital stock is derived from the aggregate production function assuming profit maximization and perfect competition. In Jorgenson's original model (1963), there is no acceleration effect, since the investment is instantaneous, so the capital stock can jump. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "I_{t}=\\mu v\\sum_{i=1}^{\\infty}\\left(1-\\mu\\right)^{i}\\left(Y_{t-i}-Y_{t-i-1}\\right)" }, { "math_id": 1, "text": "Y_{t}=mI_{t}=\\frac{1}{1-MPC}I_{t}" }, { "math_id": 2, "text": "I_{n} = x(K^{d} - K_{-1})" } ]
https://en.wikipedia.org/wiki?curid=683628
6836612
Autoencoder
Neural network that learns efficient data encoding in an unsupervised manner &lt;templatestyles src="Machine learning/styles.css"/&gt; An autoencoder is a type of artificial neural network used to learn efficient codings of unlabeled data (unsupervised learning). An autoencoder learns two functions: an encoding function that transforms the input data, and a decoding function that recreates the input data from the encoded representation. The autoencoder learns an efficient representation (encoding) for a set of data, typically for dimensionality reduction. Variants exist, aiming to force the learned representations to assume useful properties. Examples are regularized autoencoders ("Sparse", "Denoising" and "Contractive"), which are effective in learning representations for subsequent classification tasks, and "Variational" autoencoders, with applications as generative models. Autoencoders are applied to many problems, including facial recognition, feature detection, anomaly detection and acquiring the meaning of words. Autoencoders are also generative models which can randomly generate new data that is similar to the input data (training data). &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; Mathematical principles. Definition. An autoencoder is defined by the following components: Two sets: the space of decoded messages formula_0; the space of encoded messages formula_1. Typically formula_0 and formula_1 are Euclidean spaces, that is, formula_2 with formula_3 Two parametrized families of functions: the encoder family formula_4, parametrized by formula_5; the decoder family formula_6, parametrized by formula_7.For any formula_8, we usually write formula_9, and refer to it as the code, the latent variable, latent representation, latent vector, etc. Conversely, for any formula_10, we usually write formula_11, and refer to it as the (decoded) message. Usually, both the encoder and the decoder are defined as multilayer perceptrons. For example, a one-layer-MLP encoder formula_12 is: formula_13 where formula_14 is an element-wise activation function, formula_15 is a "weight" matrix, and formula_16 is a "bias" vector. Training an autoencoder. An autoencoder, by itself, is simply a tuple of two functions. To judge its "quality", we need a "task". A task is defined by a reference probability distribution formula_17 over formula_0, and a "reconstruction quality" function formula_18, such that formula_19 measures how much formula_20 differs from formula_21. With those, we can define the loss function for the autoencoder asformula_22The "optimal" autoencoder for the given task formula_23 is then formula_24. The search for the optimal autoencoder can be accomplished by any mathematical optimization technique, but usually by gradient descent. This search process is referred to as "training the autoencoder". In most situations, the reference distribution is just the empirical distribution given by a dataset formula_25, so thatformula_26 where formula_27 is the Dirac measure, the quality function is just L2 loss: formula_28, and formula_29 is the Euclidean norm. Then the problem of searching for the optimal autoencoder is just a least-squares optimization:formula_30 Interpretation. An autoencoder has two main parts: an encoder that maps the message to a code, and a decoder that reconstructs the message from the code. An optimal autoencoder would perform as close to perfect reconstruction as possible, with "close to perfect" defined by the reconstruction quality function formula_31. 
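The least-squares training problem above can be written out concretely. The following is a minimal sketch rather than a reference implementation: it assumes PyTorch is available, and the layer sizes, activation functions, optimiser settings and the random stand-in dataset are arbitrary illustrative choices.

import torch
from torch import nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        # One-layer MLP encoder and decoder, as in the definition above.
        self.encoder = nn.Sequential(nn.Linear(input_dim, code_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(code_dim, input_dim), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)        # E_phi(x): the code / latent representation
        return self.decoder(z)     # D_theta(z): the reconstructed message

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()             # the L2 reconstruction quality d(x, x')

data = torch.rand(256, 784)        # stand-in dataset; replace with real samples
for epoch in range(10):
    reconstruction = model(data)
    loss = loss_fn(reconstruction, data)   # empirical least-squares objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()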
The simplest way to perform the copying task perfectly would be to duplicate the signal. To suppress this behavior, the code space formula_1 usually has fewer dimensions than the message space formula_32. Such an autoencoder is called "undercomplete". It can be interpreted as compressing the message, or reducing its dimensionality. At the limit of an ideal undercomplete autoencoder, every possible code formula_33 in the code space is used to encode a message formula_21 that really appears in the distribution formula_17, and the decoder is also perfect: formula_34. This ideal autoencoder can then be used to generate messages indistinguishable from real messages, by feeding its decoder arbitrary code formula_33 and obtaining formula_35, which is a message that really appears in the distribution formula_17. If the code space formula_1 has dimension larger than ("overcomplete"), or equal to, the message space formula_32, or the hidden units are given enough capacity, an autoencoder can learn the identity function and become useless. However, experimental results found that overcomplete autoencoders might still learn useful features. In the ideal setting, the code dimension and the model capacity could be set on the basis of the complexity of the data distribution to be modeled. A standard way to do so is to add modifications to the basic autoencoder, to be detailed below. History. The autoencoder was first proposed as a nonlinear generalization of principal components analysis (PCA) by Kramer. The autoencoder has also been called the autoassociator, or Diabolo network. Its first applications date to early 1990s. Their most traditional application was dimensionality reduction or feature learning, but the concept became widely used for learning generative models of data. Some of the most powerful AIs in the 2010s involved autoencoders stacked inside deep neural networks. Variations. Regularized autoencoders. Various techniques exist to prevent autoencoders from learning the identity function and to improve their ability to capture important information and learn richer representations. Sparse autoencoder. Inspired by the sparse coding hypothesis in neuroscience, sparse autoencoders (SAE) are variants of autoencoders, such that the codes formula_36 for messages tend to be "sparse codes", that is, formula_36 is close to zero in most entries. Sparse autoencoders may include more (rather than fewer) hidden units than inputs, but only a small number of the hidden units are allowed to be active at the same time. Encouraging sparsity improves performance on classification tasks. There are two main ways to enforce sparsity. One way is to simply clamp all but the highest-k activations of the latent code to zero. This is the k-sparse autoencoder. The k-sparse autoencoder inserts the following "k-sparse function" in the latent layer of a standard autoencoder:formula_37where formula_38 if formula_39 ranks in the top k, and 0 otherwise. Backpropagating through formula_40 is simple: set gradient to 0 for formula_41 entries, and keep gradient for formula_42 entries. This is essentially a generalized ReLU function. The other way is a relaxed version of the k-sparse autoencoder. Instead of forcing sparsity, we add a sparsity regularization loss, then optimize forformula_43where formula_44 measures how much sparsity we want to enforce. Let the autoencoder architecture have formula_45 layers. 
To define a sparsity regularization loss, we need a "desired" sparsity formula_46 for each layer, a weight formula_47 for how much to enforce each sparsity, and a function formula_48 to measure how much two sparsities differ. For each input formula_21, let the actual sparsity of activation in each layer formula_49 beformula_50where formula_51 is the activation in the formula_52 -th neuron of the formula_49 -th layer upon input formula_21. The sparsity loss upon input formula_21 for one layer is formula_53, and the sparsity regularization loss for the entire autoencoder is the expected weighted sum of sparsity losses:formula_54Typically, the function formula_55 is either the Kullback-Leibler (KL) divergence, as formula_56 or the L1 loss, as formula_57, or the L2 loss, as formula_58. Alternatively, the sparsity regularization loss may be defined without reference to any "desired sparsity", but simply force as much sparsity as possible. In this case, one can define the sparsity regularization loss as formula_59where formula_60 is the activation vector in the formula_49-th layer of the autoencoder. The norm formula_61 is usually the L1 norm (giving the L1 sparse autoencoder) or the L2 norm (giving the L2 sparse autoencoder). Denoising autoencoder. Denoising autoencoders (DAE) try to achieve a "good" representation by changing the "reconstruction criterion". A DAE, originally called a "robust autoassociative network", is trained by intentionally corrupting the inputs of a standard autoencoder during training. A noise process is defined by a probability distribution formula_62 over functions formula_63. That is, the function formula_64 takes a message formula_8, and corrupts it to a noisy version formula_65. The function formula_64 is selected randomly, with a probability distribution formula_62. Given a task formula_23, the problem of training a DAE is the optimization problem:formula_66That is, the optimal DAE should take any noisy message and attempt to recover the original message without noise, thus the name "denoising""." Usually, the noise process formula_64 is applied only during training and testing, not during downstream use. The use of DAE depends on two assumptions: Example noise processes include: Contractive autoencoder (CAE). A contractive autoencoder adds the contractive regularization loss to the standard autoencoder loss:formula_67where formula_44 measures how much contractive-ness we want to enforce. The contractive regularization loss itself is defined as the expected Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input:formula_68To understand what formula_69 measures, note the factformula_70for any message formula_8, and small variation formula_71 in it. Thus, if formula_72 is small, it means that a small neighborhood of the message maps to a small neighborhood of its code. This is a desired property, as it means small variation in the message leads to small, perhaps even zero, variation in its code, like how two pictures may look the same even if they are not exactly the same. The DAE can be understood as an infinitesimal limit of CAE: in the limit of small Gaussian input noise, DAEs make the reconstruction function resist small but finite-sized input perturbations, while CAEs make the extracted features resist infinitesimal input perturbations. Concrete autoencoder. The concrete autoencoder is designed for discrete feature selection. A concrete autoencoder forces the latent space to consist only of a user-specified number of features. 
The concrete autoencoder uses a continuous relaxation of the categorical distribution to allow gradients to pass through the feature selector layer, which makes it possible to use standard backpropagation to learn an optimal subset of input features that minimize reconstruction loss. Variational autoencoder (VAE). Variational autoencoders (VAEs) belong to the families of variational Bayesian methods. Despite the architectural similarities with basic autoencoders, VAEs are architected with different goals and have a different mathematical formulation. The latent space is, in this case, composed of a mixture of distributions instead of fixed vectors. Given an input dataset formula_21 characterized by an unknown probability function formula_73 and a multivariate latent encoding vector formula_33, the objective is to model the data as a distribution formula_74, with formula_7 defined as the set of the network parameters so that formula_75. Advantages of depth. Autoencoders are often trained with a single-layer encoder and a single-layer decoder, but using many-layered (deep) encoders and decoders offers many advantages. Training. Geoffrey Hinton developed the deep belief network technique for training many-layered deep autoencoders. His method involves treating each neighboring set of two layers as a restricted Boltzmann machine so that pretraining approximates a good solution, then using backpropagation to fine-tune the results. Researchers have debated whether joint training (i.e. training the whole architecture together with a single global reconstruction objective to optimize) would be better for deep auto-encoders. A 2015 study showed that joint training learns better data models along with more representative features for classification as compared to the layerwise method. However, their experiments showed that the success of joint training depends heavily on the regularization strategies adopted. Applications. The two main applications of autoencoders are dimensionality reduction and information retrieval, but modern variations have been applied to other tasks. Dimensionality reduction. Dimensionality reduction was one of the first deep learning applications. For Hinton's 2006 study, he pretrained a multi-layer autoencoder with a stack of RBMs and then used their weights to initialize a deep autoencoder with gradually smaller hidden layers until hitting a bottleneck of 30 neurons. The resulting 30 dimensions of the code yielded a smaller reconstruction error compared to the first 30 components of a principal component analysis (PCA), and learned a representation that was qualitatively easier to interpret, clearly separating data clusters. Representing dimensions can improve performance on tasks such as classification. Indeed, the hallmark of dimensionality reduction is to place semantically related examples near each other. Principal component analysis. If linear activations are used, or only a single sigmoid hidden layer, then the optimal solution to an autoencoder is strongly related to principal component analysis (PCA). The weights of an autoencoder with a single hidden layer of size formula_76 (where formula_76 is less than the size of the input) span the same vector subspace as the one spanned by the first formula_76 principal components, and the output of the autoencoder is an orthogonal projection onto this subspace. 
The autoencoder weights are not equal to the principal components, and are generally not orthogonal, yet the principal components may be recovered from them using the singular value decomposition. However, the potential of autoencoders resides in their non-linearity, allowing the model to learn more powerful generalizations compared to PCA, and to reconstruct the input with significantly lower information loss. Information retrieval and Search engine optimization. Information retrieval benefits particularly from dimensionality reduction in that search can become more efficient in certain kinds of low dimensional spaces. Autoencoders were indeed applied to semantic hashing, proposed by Salakhutdinov and Hinton in 2007. By training the algorithm to produce a low-dimensional binary code, all database entries could be stored in a hash table mapping binary code vectors to entries. This table would then support information retrieval by returning all entries with the same binary code as the query, or slightly less similar entries by flipping some bits from the query encoding. The encoder-decoder architecture, often used in natural language processing and neural networks, can be scientifically applied in the field of SEO (Search Engine Optimization) in various ways: In essence, the encoder-decoder architecture or autoencoders can be leveraged in SEO to optimize web page content, improve their indexing, and enhance their appeal to both search engines and users. Anomaly detection. Another application for autoencoders is anomaly detection. By learning to replicate the most salient features in the training data under some of the constraints described previously, the model is encouraged to learn to precisely reproduce the most frequently observed characteristics. When facing anomalies, the model should worsen its reconstruction performance. In most cases, only data with normal instances are used to train the autoencoder; in others, the frequency of anomalies is small compared to the observation set so that its contribution to the learned representation could be ignored. After training, the autoencoder will accurately reconstruct "normal" data, while failing to do so with unfamiliar anomalous data. Reconstruction error (the error between the original data and its low dimensional reconstruction) is used as an anomaly score to detect anomalies. Recent literature has however shown that certain autoencoding models can, counterintuitively, be very good at reconstructing anomalous examples and consequently not able to reliably perform anomaly detection. Image processing. The characteristics of autoencoders are useful in image processing. One example can be found in lossy image compression, where autoencoders outperformed other approaches and proved competitive against JPEG 2000. Another useful application of autoencoders in image preprocessing is image denoising. Autoencoders found use in more demanding contexts such as medical imaging where they have been used for image denoising as well as super-resolution. In image-assisted diagnosis, experiments have applied autoencoders for breast cancer detection and for modelling the relation between the cognitive decline of Alzheimer's disease and the latent features of an autoencoder trained with MRI. Drug discovery. In 2019 molecules generated with variational autoencoders were validated experimentally in mice. Popularity prediction. 
Recently, a stacked autoencoder framework produced promising results in predicting popularity of social media posts, which is helpful for online advertising strategies. Machine translation. Autoencoders have been applied to machine translation, which is usually referred to as neural machine translation (NMT). Unlike traditional autoencoders, the output does not match the input - it is in another language. In NMT, texts are treated as sequences to be encoded into the learning procedure, while on the decoder side sequences in the target language(s) are generated. Language-specific autoencoders incorporate further linguistic features into the learning procedure, such as Chinese decomposition features. Machine translation is rarely still done with autoencoders, due to the availability of more effective transformer networks. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal X" }, { "math_id": 1, "text": "\\mathcal Z" }, { "math_id": 2, "text": "\\mathcal X = \\R^m, \\mathcal Z = \\R^n" }, { "math_id": 3, "text": "m > n." }, { "math_id": 4, "text": "E_\\phi:\\mathcal{X} \\rightarrow \\mathcal{Z}" }, { "math_id": 5, "text": "\\phi" }, { "math_id": 6, "text": "D_\\theta:\\mathcal{Z} \\rightarrow \\mathcal{X}" }, { "math_id": 7, "text": "\\theta" }, { "math_id": 8, "text": "x\\in \\mathcal X" }, { "math_id": 9, "text": "z = E_\\phi(x)" }, { "math_id": 10, "text": "z\\in \\mathcal Z" }, { "math_id": 11, "text": "x' = D_\\theta(z)" }, { "math_id": 12, "text": "E_\\phi" }, { "math_id": 13, "text": "E_\\phi(\\mathbf x) = \\sigma(Wx+b)" }, { "math_id": 14, "text": "\\sigma" }, { "math_id": 15, "text": "W" }, { "math_id": 16, "text": "b" }, { "math_id": 17, "text": "\\mu_{ref}" }, { "math_id": 18, "text": "d: \\mathcal X \\times \\mathcal X \\to [0, \\infty]" }, { "math_id": 19, "text": "d(x, x')" }, { "math_id": 20, "text": "x'" }, { "math_id": 21, "text": "x" }, { "math_id": 22, "text": "L(\\theta, \\phi) := \\mathbb \\mathbb E_{x\\sim \\mu_{ref}}[d(x, D_\\theta(E_\\phi(x)))]" }, { "math_id": 23, "text": "(\\mu_{ref}, d)" }, { "math_id": 24, "text": "\\arg\\min_{\\theta, \\phi}L(\\theta, \\phi)" }, { "math_id": 25, "text": "\\{x_1, ..., x_N\\} \\subset \\mathcal X" }, { "math_id": 26, "text": "\\mu_{ref} = \\frac{1}{N}\\sum_{i=1}^N \\delta_{x_i}" }, { "math_id": 27, "text": "\\delta_{x_i}" }, { "math_id": 28, "text": "d(x, x') = \\|x - x'\\|_2^2" }, { "math_id": 29, "text": "\\|\\cdot\\|_2" }, { "math_id": 30, "text": "\\min_{\\theta, \\phi} L(\\theta, \\phi), \\text{where } L(\\theta, \\phi) = \\frac{1}{N}\\sum_{i=1}^N \\|x_i - D_\\theta(E_\\phi(x_i))\\|_2^2" }, { "math_id": 31, "text": "d" }, { "math_id": 32, "text": "\\mathcal{X}" }, { "math_id": 33, "text": "z" }, { "math_id": 34, "text": "D_\\theta(E_\\phi(x)) = x" }, { "math_id": 35, "text": "D_\\theta(z)" }, { "math_id": 36, "text": "E_\\phi(x)" }, { "math_id": 37, "text": "f_k(x_1, ..., x_n) = (x_1 b_1, ..., x_n b_n)" }, { "math_id": 38, "text": "b_i = 1" }, { "math_id": 39, "text": "|x_i|" }, { "math_id": 40, "text": "f_k" }, { "math_id": 41, "text": "b_i = 0" }, { "math_id": 42, "text": "b_i=1" }, { "math_id": 43, "text": "\\min_{\\theta, \\phi}L(\\theta, \\phi) + \\lambda L_{sparsity} (\\theta, \\phi)" }, { "math_id": 44, "text": "\\lambda > 0" }, { "math_id": 45, "text": "K" }, { "math_id": 46, "text": "\\hat \\rho_k" }, { "math_id": 47, "text": "w_k" }, { "math_id": 48, "text": "s: [0, 1]\\times [0, 1] \\to [0, \\infty]" }, { "math_id": 49, "text": "k" }, { "math_id": 50, "text": "\\rho_k(x) = \\frac 1n \\sum_{i=1}^n a_{k, i}(x)" }, { "math_id": 51, "text": "a_{k, i}(x)" }, { "math_id": 52, "text": "i" }, { "math_id": 53, "text": "s(\\hat\\rho_k, \\rho_k(x))" }, { "math_id": 54, "text": "L_{sparsity}(\\theta, \\phi) = \\mathbb \\mathbb E_{x\\sim\\mu_X}\\left[\\sum_{k\\in 1:K} w_k s(\\hat\\rho_k, \\rho_k(x)) \\right]" }, { "math_id": 55, "text": "s" }, { "math_id": 56, "text": "s(\\rho, \\hat\\rho) = KL(\\rho || \\hat{\\rho}) = \\rho \\log \\frac{\\rho}{\\hat{\\rho}}+(1- \\rho)\\log \\frac{1-\\rho}{1-\\hat{\\rho}}" }, { "math_id": 57, "text": "s(\\rho, \\hat\\rho) = |\\rho- \\hat\\rho|" }, { "math_id": 58, "text": "s(\\rho, \\hat\\rho) = |\\rho- \\hat\\rho|^2" }, { "math_id": 59, "text": "L_{sparsity}(\\theta, \\phi) = \\mathbb \\mathbb E_{x\\sim\\mu_X}\\left[\n\\sum_{k\\in 1:K} w_k \\|h_k\\|\n\\right]" }, { "math_id": 60, "text": "h_k" }, { "math_id": 61, "text": 
"\\|\\cdot\\|" }, { "math_id": 62, "text": "\\mu_T" }, { "math_id": 63, "text": "T:\\mathcal X \\to \\mathcal X" }, { "math_id": 64, "text": "T" }, { "math_id": 65, "text": "T(x)" }, { "math_id": 66, "text": "\\min_{\\theta, \\phi}L(\\theta, \\phi) = \\mathbb \\mathbb E_{x\\sim \\mu_X, T\\sim\\mu_T}[d(x, (D_\\theta\\circ E_\\phi \\circ T)(x))]" }, { "math_id": 67, "text": "\\min_{\\theta, \\phi}L(\\theta, \\phi) + \\lambda L_{contractive} (\\theta, \\phi)" }, { "math_id": 68, "text": "L_{contractive}(\\theta, \\phi) = \\mathbb E_{x\\sim \\mu_{ref}} \\|\\nabla_x E_\\phi(x) \\|_F^2" }, { "math_id": 69, "text": "L_{contractive}" }, { "math_id": 70, "text": "\\|E_\\phi(x + \\delta x) - E_\\phi(x)\\|_2 \\leq \\|\\nabla_x E_\\phi(x) \\|_F \\|\\delta x\\|_2" }, { "math_id": 71, "text": "\\delta x" }, { "math_id": 72, "text": "\\|\\nabla_x E_\\phi(x) \\|_F^2" }, { "math_id": 73, "text": "P(x)" }, { "math_id": 74, "text": "p_\\theta(x)" }, { "math_id": 75, "text": "p_\\theta(x) = \\int_{z}p_\\theta(x,z)dz " }, { "math_id": 76, "text": "p" } ]
https://en.wikipedia.org/wiki?curid=6836612
683726
Dandelin spheres
In geometry, the Dandelin spheres are one or two spheres that are tangent both to a plane and to a cone that intersects the plane. The intersection of the cone and the plane is a conic section, and the point at which either sphere touches the plane is a focus of the conic section, so the Dandelin spheres are also sometimes called focal spheres. The Dandelin spheres were discovered in 1822. They are named in honor of the French mathematician Germinal Pierre Dandelin, though Adolphe Quetelet is sometimes given partial credit as well. The Dandelin spheres can be used to give elegant modern proofs of two classical theorems known to Apollonius of Perga. The first theorem is that a closed conic section (i.e. an ellipse) is the locus of points such that the sum of the distances to two fixed points (the foci) is constant. The second theorem is that for any conic section, the distance from a fixed point (the focus) is proportional to the distance from a fixed line (the directrix), the constant of proportionality being called the eccentricity. A conic section has one Dandelin sphere for each focus. An ellipse has two Dandelin spheres touching the same nappe of the cone, while a hyperbola has two Dandelin spheres touching opposite nappes. A parabola has just one Dandelin sphere. Proof that the intersection curve has constant sum of distances to foci. Consider a cone with apex "S" at the top. A plane "e" intersects the cone in a curve "C". The following proof shows that the curve "C" is an ellipse. The two Dandelin spheres, "G"1 and "G"2, are placed tangent to both the plane and the cone: "G"1 above the plane, "G"2 below. Each sphere touches the cone along a circle, formula_0 and formula_1. Denote the point of tangency of the plane with "G"1 by "F"1, and similarly for "G"2 and "F"2. Let "P" be a typical point on the curve "C". "To Prove:" The sum of distances formula_2 remains constant as the point "P" moves along the intersection curve "C". (This is one definition of "C" being an ellipse, with formula_3 and formula_4 being its foci.) Let the line "SP" (the generator of the cone through "P") meet the circles formula_0 and formula_1 at the points "P"1 and "P"2 respectively. The segments "PF"1 and "PP"1 are both tangent to the same sphere "G"1 from the same point "P", so they have equal length; likewise "PF"2 and "PP"2 are both tangent to "G"2 from "P". Therefore formula_5 and this last quantity, the distance between the two circles measured along any generator of the cone, does not depend on the choice of "P". Hence the sum of distances is constant, as claimed. This gives a different proof of a theorem of Apollonius of Perga. If we define an ellipse to mean the locus of points "P" such that "d"("F"1, "P") + "d"("F"2, "P") = a constant, then the above argument proves that the intersection curve "C" is indeed an ellipse. That the intersection of the plane with the cone is symmetric about the perpendicular bisector of the line through "F"1 and "F"2 may be counterintuitive, but this argument makes it clear. Adaptations of this argument work for hyperbolas and parabolas as intersections of a plane with a cone. Another adaptation works for an ellipse realized as the intersection of a plane with a right circular cylinder. Proof of the focus-directrix property. The directrix of a conic section can be found using Dandelin's construction. Each Dandelin sphere intersects the cone at a circle; let both of these circles define their own planes. The intersections of these two parallel planes with the conic section's plane will be two parallel lines; these lines are the directrices of the conic section. However, a parabola has only one Dandelin sphere, and thus has only one directrix. Using the Dandelin spheres, it can be proved that any conic section is the locus of points for which the distance from a point (focus) is proportional to the distance from the directrix.
Ancient Greek mathematicians such as Pappus of Alexandria were aware of this property, but the Dandelin spheres facilitate the proof. Neither Dandelin nor Quetelet used the Dandelin spheres to prove the focus-directrix property. The first to do so may have been Pierce Morton in 1829, or perhaps Hugh Hamilton who remarked (in 1758) that a sphere touches the cone at a circle which defines a plane whose intersection with the plane of the conic section is a directrix. The focus-directrix property can be used to prove that astronomical objects move along conic sections around the Sun.
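The constant-sum property established above can also be checked numerically. The sketch below is only an illustration under stated assumptions: a cone of half-angle alpha with its apex at the origin and axis along the z-axis, and a cutting plane z = m·x + c with |m| < cot(alpha) so that the section is an ellipse; the two Dandelin sphere centres are found on the axis, and the foci are taken as their orthogonal projections onto the cutting plane.

import numpy as np

alpha = np.radians(30.0)      # half-angle of the cone
m, c = 0.4, 3.0               # cutting plane z = m*x + c, with |m| < cot(alpha)

# Dandelin sphere centres (0, 0, h): inscribed in the cone (radius h*sin(alpha))
# and tangent to the plane (distance |c - h| / sqrt(1 + m^2) from the centre).
s = np.sin(alpha) * np.sqrt(1.0 + m**2)
h1, h2 = c / (1.0 + s), c / (1.0 - s)

def focus(h):
    # Orthogonal projection of the sphere centre onto the plane m*x - z + c = 0,
    # which is the point where the sphere touches the plane.
    n = np.array([m, 0.0, -1.0]) / np.sqrt(1.0 + m**2)
    centre = np.array([0.0, 0.0, h])
    d = (m * 0.0 - h + c) / np.sqrt(1.0 + m**2)   # signed distance to the plane
    return centre - d * n

F1, F2 = focus(h1), focus(h2)

# Points on the intersection curve: on the cone, sqrt(x^2+y^2) = z*tan(alpha);
# on the plane, z = m*x + c.
for phi in np.linspace(0.0, 2.0 * np.pi, 7, endpoint=False):
    r = c / (1.0 / np.tan(alpha) - m * np.cos(phi))
    P = np.array([r * np.cos(phi), r * np.sin(phi), r / np.tan(alpha)])
    total = np.linalg.norm(P - F1) + np.linalg.norm(P - F2)
    print(round(total, 6))    # the same value is printed for every phi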
[ { "math_id": 0, "text": "k_1" }, { "math_id": 1, "text": "k_2" }, { "math_id": 2, "text": " d(P,F_1) + d(P,F_2)" }, { "math_id": 3, "text": "F_1" }, { "math_id": 4, "text": "F_2" }, { "math_id": 5, "text": " d(P,F_1) + d(P,F_2) \\ =\\ d(P,P_1) + d(P,P_2) \\ =\\ d(P_1,P_2), " } ]
https://en.wikipedia.org/wiki?curid=683726
68373613
Tautness (topology)
In mathematics, particularly in algebraic topology, a taut pair is a topological pair for which the direct limit of the cohomology modules of its open neighborhoods, directed downward by inclusion, is isomorphic to the cohomology module of the pair itself. Definition. For a topological pair formula_0 in a topological space formula_1, a "neighborhood" formula_2 of such a pair is defined to be a pair such that formula_3 and formula_4 are neighborhoods of formula_5 and formula_6 respectively. If we collect all neighborhoods of formula_0, we obtain a set directed downward by inclusion. Hence the cohomology modules formula_7 form a direct system, where formula_8 is a module over a ring with unity. If we denote its direct limit by formula_9 then the restriction maps formula_10 define a natural homomorphism formula_11. The pair formula_0 is said to be "tautly embedded" in formula_1 (or a "taut pair" in formula_1) if formula_12 is an isomorphism for all formula_13 and formula_8. Tautness also yields a Mayer–Vietoris exact sequence: for taut pairs formula_0 and formula_17 in formula_1, the Alexander–Spanier cohomology modules fit into the exact sequence formula_18 Properties related to cohomology theory. Note. Since the Čech cohomology and the Alexander–Spanier cohomology are naturally isomorphic on the category of all topological pairs, all of the above properties are valid for Čech cohomology. However, this is not true for singular cohomology (see the example below). Dependence on the cohomology theory. Example. Let formula_1 be the subspace of formula_19 which is the union of the four sets formula_20 formula_21 formula_22 formula_23 The first singular cohomology of formula_1 is formula_24 and, using the Alexander duality theorem on formula_25, formula_26 as formula_3 varies over neighborhoods of formula_1. Therefore, formula_27 is not a monomorphism, so formula_1 is not a taut subspace of formula_28 with respect to singular cohomology. However, since formula_1 is closed in formula_28, it is a taut subspace with respect to Alexander cohomology. References.
[ { "math_id": 0, "text": "(A,B)" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "(U,V)" }, { "math_id": 3, "text": "U" }, { "math_id": 4, "text": "V" }, { "math_id": 5, "text": "A" }, { "math_id": 6, "text": "B" }, { "math_id": 7, "text": "H^q(U,V;G)" }, { "math_id": 8, "text": "G" }, { "math_id": 9, "text": "\\bar{H}^q(A,B;G)=\\varinjlim H^q(U,V;G)" }, { "math_id": 10, "text": "H^q(U,V;G)\\to H^q(A,B;G)" }, { "math_id": 11, "text": "i:\\bar{H}^q(A,B;G)\\to H^q(A,B;G)" }, { "math_id": 12, "text": "i" }, { "math_id": 13, "text": "q" }, { "math_id": 14, "text": "(B,\\emptyset),(A,\\emptyset)" }, { "math_id": 15, "text": "A,B" }, { "math_id": 16, "text": "\\varinjlim \\bar{H}^q(U;G)\\simeq \\bar{H}^q(A;G)" }, { "math_id": 17, "text": "(A',B')" }, { "math_id": 18, "text": "\\cdots\\to\\bar{H}^q(A\\cup A',B\\cup B')\\to\\bar{H}^q(A,B)\\oplus\\bar{H}^q(A',B')\\to \\bar{H}^q(A\\cap A',B\\cap B')\\to\\cdots" }, { "math_id": 19, "text": "\\mathbb R^2\\subset S^2" }, { "math_id": 20, "text": "A_1 = \\{(x,y)\\mid x =0,-2\\leq y\\leq 1\\}" }, { "math_id": 21, "text": "A_2 = \\{(x,y)\\mid 0\\leq x\\leq 1,y =-2\\}" }, { "math_id": 22, "text": "A_3 = \\{(x,y)\\mid x = 1,-2\\leq y\\leq 0\\}" }, { "math_id": 23, "text": "A_4 =\\{(x,y)\\mid 0<x\\leq 1,y =\\sin 2\\pi/x\\}" }, { "math_id": 24, "text": "H^1(X;Z) = 0" }, { "math_id": 25, "text": "S^2-X" }, { "math_id": 26, "text": "\\varinjlim\\{H^q(U;\\mathbb Z)\\} = \\mathbb Z" }, { "math_id": 27, "text": "\\varinjlim\\{H^q(U;\\mathbb Z)\\}\\to H^1(X;\\mathbb Z)" }, { "math_id": 28, "text": "\\mathbb R^2" } ]
https://en.wikipedia.org/wiki?curid=68373613
68374180
Satisfaction equilibrium
Solution Concept for Noncooperative games In game theory, a satisfaction equilibrium is a solution concept for a class of non-cooperative games, namely games in satisfaction form. Games in satisfaction form model situations in which players aim at satisfying a given individual constraint, e.g., a performance metric must be smaller or bigger than a given threshold. When a player satisfies its own constraint, the player is said to be satisfied. A satisfaction equilibrium, if it exists, arises when all players in the game are satisfied. History. The term Satisfaction equilibrium (SE) was first used to refer to the stable point of a dynamic interaction between players that are "learning an equilibrium" by taking actions and observing their own payoffs. The equilibrium lies on the satisfaction principle, which stipulates that an agent that is satisfied with its current payoff does not change its current action. Later, the notion of satisfaction equilibrium was introduced as a solution concept for Games in satisfaction form. Such solution concept was introduced in the realm of electrical engineering for the analysis of quality of service (QoS) in Wireless ad hoc networks. In this context, radio devices (network components) are modelled as players that decide upon their own operating configurations in order to satisfy some targeted QoS. Games in satisfaction form and the notion of satisfaction equilibrium have been used in the context of the fifth generation of cellular communications (5G) for tackling the problem of energy efficiency, spectrum sharing and transmit power control. In the smart grid, games in satisfaction form have been used for modelling the problem of data injection attacks. Games in Satisfaction Form. In static games of complete, perfect information, a satisfaction-form representation of a game is a specification of the set of players, the players' action sets and their preferences. The preferences for a given player are determined by a mapping, often referred to as the preference mapping, from the Cartesian product of all the other players' action sets to the given player's power set of actions. That is, given the actions adopted by all the other players, the preference mapping determines the subset of actions with which the player is satisfied. &lt;hr&gt; Definition [Games in Satisfaction Form]&lt;br&gt; A game in satisfaction form is described by a tuple formula_0 where, the set formula_1, with formula_2, represents the set of players; the set formula_3, with formula_4 and formula_5, represents the set of actions that player formula_6 can play. The preference mapping formula_7 determines the set of actions with which player formula_8 is satisfied given the actions played by all the other players. The set formula_9 is the power set of formula_10. &lt;hr&gt; In contrast to other existing game formulations, e.g., normal form and normal form with constrained action sets, the notion of performance optimization, i.e., utility maximization or cost minimization, is not present. Games in satisfaction-form model the case in which players adopt their actions aiming to satisfy a specific individual constraint given the actions adopted by all the other players. An important remark is that, players are assumed to be careless of whether other players can satisfy or not their individual constraints. Satisfaction Equilibrium. An action profile is a tuple formula_11. The action profile in which all players are satisfied is an equilibrium of the corresponding game in satisfaction form. 
At a satisfaction equilibrium, no player has an incentive to change their current action. Definition [Satisfaction Equilibrium in Pure Strategies]: The action profile formula_12 is a satisfaction equilibrium in pure strategies for the game formula_13 if for all formula_14, formula_15. Satisfaction Equilibrium in Mixed Strategies. For all formula_14, denote the set of all possible probability distributions over the set formula_16 by formula_17, with formula_18. Denote by formula_19 the probability distribution (mixed strategy) adopted by player formula_6 to choose its actions. For all formula_20, formula_21 represents the probability with which player formula_8 chooses action formula_22. The notation formula_23 represents the mixed strategies of all players except that of player formula_6. Definition [Extension to Mixed Strategies of the Satisfaction Form]: The extension in mixed strategies of the game formula_24 is described by the tuple formula_25, where the correspondence formula_26 determines the set of all possible probability distributions that allow player formula_6 to choose an action that satisfies its individual conditions with probability one, that is, formula_27 A satisfaction equilibrium in mixed strategies is defined as follows. Definition [Satisfaction Equilibrium in Mixed Strategies]: The mixed strategy profile formula_28 is an SE in mixed strategies if for all formula_14, formula_29. Let the formula_30-th action of player formula_8, i.e., formula_31, be associated with the unitary vector formula_32, where all the components are zero except the formula_33-th component, which is equal to one. The vector formula_34 represents a degenerate probability distribution, where the action formula_31 is chosen deterministically. From this observation, it becomes clear that every satisfaction equilibrium in pure strategies of the game formula_24 is also a satisfaction equilibrium in mixed strategies of the game formula_25. At an SE of the game formula_35, players choose their actions following a probability distribution such that only action profiles that allow all players to simultaneously satisfy their individual conditions with probability one are played with positive probability. Hence, if no SE in pure strategies exists, then no SE in mixed strategies exists in the game formula_36. ε-Satisfaction Equilibrium. Under certain conditions, it is always possible to build mixed strategies that allow players to be satisfied with probability formula_37, for some formula_38. This observation leads to the definition of a solution concept known as the formula_39-satisfaction equilibrium (formula_39-SE). Definition [ε-Satisfaction Equilibrium]: Let formula_39 satisfy formula_40. The mixed strategy profile formula_41 is an epsilon-satisfaction equilibrium (formula_39-SE) of the game formula_36 if for all formula_14, it follows that formula_42, where formula_43. From the definition above, it follows that if the mixed strategy profile formula_44 is an formula_39-SE, it holds that formula_45 That is, players are unsatisfied with probability at most formula_39. The relevance of the formula_39-SE is that it models the fact that players can tolerate a certain level of unsatisfaction.
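Both the pure-strategy and the ε-satisfaction conditions can be checked by direct enumeration in small finite games. The sketch below is purely illustrative: the two-player game, its action sets, the preference mappings and all names are invented for the example, with each preference mapping encoded as a function returning the set of satisfying actions.

from itertools import product

# A toy two-player game in satisfaction form (invented for illustration).
actions = {1: ["a", "b"], 2: ["x", "y"]}

# Preference mappings f_k: given the other player's action, return the set of
# actions with which player k is satisfied.
def f1(a2):
    return {"a"} if a2 == "x" else {"a", "b"}

def f2(a1):
    return {"y"} if a1 == "b" else {"x", "y"}

preference = {1: f1, 2: f2}

def satisfaction_equilibria(actions, preference):
    """Enumerate all pure-strategy satisfaction equilibria."""
    equilibria = []
    for a1, a2 in product(actions[1], actions[2]):
        if a1 in preference[1](a2) and a2 in preference[2](a1):
            equilibria.append((a1, a2))
    return equilibria

print(satisfaction_equilibria(actions, preference))

def satisfaction_probability(k, mixed, actions, preference):
    """Probability that player k is satisfied under independent mixed strategies
    (the epsilon-SE condition asks for this to be at least 1 - epsilon)."""
    prob = 0.0
    for a1, a2 in product(actions[1], actions[2]):
        p = mixed[1][a1] * mixed[2][a2]
        satisfied = a1 in preference[1](a2) if k == 1 else a2 in preference[2](a1)
        prob += p if satisfied else 0.0
    return prob

mixed = {1: {"a": 0.9, "b": 0.1}, 2: {"x": 0.5, "y": 0.5}}
for k in (1, 2):
    print(k, satisfaction_probability(k, mixed, actions, preference))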
At a given formula_39-SE, none of the players is interested in changing its mixed strategy profile as long as it is satisfied with a probability higher than or equal to formula_46, for some formula_38. In contrast to the conditions for the existence of an SE in either pure or mixed strategies, the conditions for the existence of an formula_39-SE are mild. &lt;hr&gt; Proposition [Existence of an formula_39-SE]&lt;br&gt; Let formula_35 be a finite game in satisfaction form. If for all formula_14 there exists an action profile formula_47 such that formula_48, then there exist a strategy profile formula_49 and a real formula_39, with formula_50, such that formula_51 is an formula_39-SE.&lt;br&gt; &lt;hr&gt; Equilibrium Selection. Games in satisfaction form might exhibit several satisfaction equilibria. In such a case, players might associate with each of their actions a value representing the effort or cost of playing that action. From this perspective, if several SEs exist, players might prefer the one that requires the lowest (global or individual) effort or cost. To model this preference, games in satisfaction form might be equipped with cost functions for each of the players. For all formula_14, let the function formula_52 determine the effort or cost paid by player formula_8 for using each of its actions. More specifically, given a pair of actions formula_53, the action formula_54 is preferred over formula_55 by player formula_8 if formula_56 Note that this preference for player formula_8 is independent of the actions adopted by all the other players. &lt;hr&gt; Definition: [Efficient Satisfaction Equilibrium (ESE)]&lt;br&gt; Let formula_57 be the set of satisfaction equilibria in pure strategies of the game in satisfaction form formula_35. The strategy profile formula_58 is an efficient satisfaction equilibrium if for all formula_47, it follows that formula_59.&lt;br&gt; &lt;hr&gt; In the trivial case in which for all formula_14 the function formula_60 is a constant function, the set of ESEs and the set of SEs are identical. This highlights the importance of the ability of players to differentiate the effort of playing one action or another in order to select one (satisfaction) equilibrium among all the existing equilibria. In games in satisfaction form with nonempty sets of satisfaction equilibria, when every player assigns a different cost to each of its actions, i.e., for all formula_61 and for all formula_62, it holds that formula_63, there always exists an ESE. Nonetheless, it is not necessarily unique, which implies that there is still room for other equilibrium refinements beyond the notion of individual cost functions. Generalizations. Games in satisfaction form for which there does not exist an action profile in which all players are satisfied are said not to possess a satisfaction equilibrium. In this case, an action profile induces a partition of the set formula_64 formed by the sets formula_65 and formula_66: the players in formula_65 are satisfied, whereas the players in formula_66 are unsatisfied. If the players in the set formula_66 cannot be satisfied by any of their actions given the actions of all the other players, these players have no interest in changing their current actions. This implies that action profiles satisfying this condition are also equilibria, because none of the players, not even the unsatisfied ones, is particularly interested in changing its current action. 
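This stability argument can also be checked by enumeration in small finite games: classify each player at a given action profile as satisfied or unsatisfied, and then test whether any unsatisfied player has an alternative action that would satisfy it given what the others play. The Python sketch below is a hypothetical, pure-strategy illustration of this reasoning only; the mixed-strategy formalization, the generalized satisfaction equilibrium, is given below.

# Hypothetical three-player game in satisfaction form (illustration only).
# Each player picks an integer action and is satisfied only with actions
# strictly larger than the sum of the other players' actions (a toy constraint).
actions = {1: [0, 1, 2], 2: [0, 1, 2], 3: [0, 1, 2]}

def satisfied_set(k, others_sum):
    # Preference mapping f_k: the satisfying actions of player k.
    return {a for a in actions[k] if a > others_sum}

def analyze(profile):
    satisfied, unsatisfied = set(), set()
    for k in actions:
        others_sum = sum(a for j, a in profile.items() if j != k)
        (satisfied if profile[k] in satisfied_set(k, others_sum) else unsatisfied).add(k)
    # An unsatisfied player is "stuck" if none of its actions would satisfy it.
    stuck = {k for k in unsatisfied
             if not satisfied_set(k, sum(a for j, a in profile.items() if j != k))}
    stable = unsatisfied == stuck  # no unsatisfied player gains by deviating
    return satisfied, unsatisfied, stable

# Player 1 is satisfied; players 2 and 3 are unsatisfied but cannot do better,
# so the profile is stable in the generalized sense discussed above.
print(analyze({1: 2, 2: 0, 3: 0}))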
This reasoning led to another solution concept known as the generalized satisfaction equilibrium (GSE). This generalization was proposed in the context of a novel game formulation, namely the generalized satisfaction form. &lt;hr&gt; Definition: [Generalized Satisfaction Form]&lt;br&gt; A game in generalized satisfaction form is described by a tuple formula_67, where the set formula_1, with formula_2, represents the set of players; the set formula_3, with formula_4 and formula_5, represents the set of actions that player formula_6 can play; and the preference mapping formula_68 determines the set of probability mass functions (mixed strategies) with support formula_10 that satisfy player formula_8 given the mixed strategies adopted by all the other players. &lt;br&gt; &lt;hr&gt; The generalized satisfaction equilibrium is defined as follows. &lt;hr&gt; Definition: [Generalized Satisfaction Equilibrium (GSE)]&lt;br&gt; The mixed strategy profile formula_28 is a generalized satisfaction equilibrium of the game in generalized satisfaction form formula_67 if there exists a partition of the set formula_64 formed by the sets formula_65 and formula_66 such that the following holds:&lt;br&gt; (i) For all formula_69, formula_70; and &lt;br&gt; (ii) For all formula_71, formula_72 &lt;br&gt; &lt;hr&gt; Note that the GSE boils down to the notion of formula_39-SE of the game in satisfaction form formula_73 when formula_74 and, for all formula_14, the correspondence formula_75 is chosen to be formula_76 with formula_38. Similarly, the GSE boils down to the notion of SE in mixed strategies when formula_77 and formula_74. Finally, note that any SE is a GSE, but the converse is not true. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\left( \\mathcal{K}, \\left\\lbrace \\mathcal{A}_k \\right\\rbrace_{k \\in \\mathcal{K}},\\left\\lbrace f_{k}\\right\\rbrace_{ k \\in\\mathcal{K}} \\right), " }, { "math_id": 1, "text": "\\mathcal{K} = \\lbrace 1, \\ldots, K \\rbrace \\subset \\mathrm{N}" }, { "math_id": 2, "text": " 0 < K < +\\infty " }, { "math_id": 3, "text": "\\mathcal{A}_{k} " }, { "math_id": 4, "text": " k \\in \\mathcal{K} " }, { "math_id": 5, "text": " 0< |\\mathcal{A}_k | < +\\infty" }, { "math_id": 6, "text": " k " }, { "math_id": 7, "text": "f_k: \\mathcal{A}_1 \\times \\ldots \\times \\mathcal{A}_{k-1} \\times \\mathcal{A}_{k+1} \\times \\ldots, \\times\\mathcal{A}_K \\rightarrow 2^{\\mathcal{A}_k}" }, { "math_id": 8, "text": "k" }, { "math_id": 9, "text": "2^{\\mathcal{A}_k}" }, { "math_id": 10, "text": "\\mathcal{A}_k" }, { "math_id": 11, "text": "\\boldsymbol{a} = \\left(a_{1}, \\ldots, a_{K} \\right) \\in \\mathcal{A}_1 \\times \\ldots \\times \\mathcal{A}_K" }, { "math_id": 12, "text": "\\boldsymbol{a}" }, { "math_id": 13, "text": " \\left( \\mathcal{K}, \\left\\lbrace \\mathcal{A}_k \\right\\rbrace_{k \\in \\mathcal{K}},\\left\\lbrace f_{k}\\right\\rbrace_{ k \\in\\mathcal{K}} \\right)," }, { "math_id": 14, "text": "k \\in \\mathcal{K}" }, { "math_id": 15, "text": "a_k \\in f_{k}\\left(a_1, \\ldots, a_{k-1},a_{k+1},\\ldots, a_K\\right)" }, { "math_id": 16, "text": "\\mathcal{A}_k = \\lbrace A_{k,1},A_{k,2}, \\ldots, A_{k,N_k} \\rbrace" }, { "math_id": 17, "text": "\\triangle\\left( \\mathcal{A}_k \\right)" }, { "math_id": 18, "text": "N_k = |\\mathcal{A}_k|" }, { "math_id": 19, "text": "\\boldsymbol{\\pi}_k = \\left(\\pi_{k,1}, \\pi_{k,2},\\ldots, \\pi_{k,N_k} \\right)" }, { "math_id": 20, "text": " j \\in \\lbrace 1, \\ldots, N_k\\rbrace" }, { "math_id": 21, "text": "\\pi_{k,j}" }, { "math_id": 22, "text": " A_{k,j} \\in \\mathcal{A}_k" }, { "math_id": 23, "text": "\\boldsymbol{\\pi}_{-k}" }, { "math_id": 24, "text": "\\left( \\mathcal{K}, \\left\\lbrace \\mathcal{A}_k \\right\\rbrace_{k \\in \\mathcal{K}},\\left\\lbrace f_{k}\\right\\rbrace_{ k \\in\\mathcal{K}} \\right)" }, { "math_id": 25, "text": "\\left( \\mathcal{K}, \\left\\lbrace \\mathcal{A}_k \\right\\rbrace_{k \\in \\mathcal{K}},\\left\\lbrace \\bar{f}_{k}\\right\\rbrace_{ k \\in\\mathcal{K}} \\right)" }, { "math_id": 26, "text": "\\bar{f}_k: \\prod_{j\\in\\mathcal{K}\\setminus\\lbrace k \\rbrace} \\triangle\\left(\\mathcal{A}_j\\right) \\rightarrow 2^{\\triangle\\left(\\mathcal{A}_{k}\\right)}" }, { "math_id": 27, "text": "\\bar{f}_k\\left( \\boldsymbol{\\pi}_{-k} \\right) = \\left\\lbrace \\boldsymbol{\\pi}_k \\in \\triangle\\left( \\mathcal{A}_k \\right): \\mathrm{Pr}\\left(a_k \\in f_k\\left( \\boldsymbol{a}_{-k} \\right)| a_k \\sim \\boldsymbol{\\pi}_k, \\boldsymbol{a}_{-k} \\sim \\boldsymbol{\\pi}_{-k}\\right) = 1\\right\\rbrace." 
}, { "math_id": 28, "text": "\\boldsymbol{\\pi}^* \\in \\triangle\\left(\\mathcal{A}_1\\right)\\times\\ldots\\times\\triangle\\left(\\mathcal{A}_K\\right)" }, { "math_id": 29, "text": "\\boldsymbol{\\pi}_k^* \\in \\bar{f}_k\\left( \\boldsymbol{\\pi}_{-k}^*\\right)" }, { "math_id": 30, "text": " j" }, { "math_id": 31, "text": "A_{k,j}" }, { "math_id": 32, "text": "\\boldsymbol{e}_{j} = \\left(e_{1},e_{2} \\ldots, e_{N_k} \\right) \\in \\mathrm{R}^{N_k}" }, { "math_id": 33, "text": "j" }, { "math_id": 34, "text": "\\boldsymbol{e}_{j}" }, { "math_id": 35, "text": " \\left( \\mathcal{K}, \\left\\lbrace \\mathcal{A}_k \\right\\rbrace_{k \\in \\mathcal{K}},\\left\\lbrace f_{k}\\right\\rbrace_{ k \\in\\mathcal{K}} \\right)" }, { "math_id": 36, "text": " \\left( \\mathcal{K}, \\left\\lbrace \\mathcal{A}_k \\right\\rbrace_{k \\in \\mathcal{K}},\\left\\lbrace \\bar{f}_{k}\\right\\rbrace_{ k \\in\\mathcal{K}} \\right)" }, { "math_id": 37, "text": "1-\\epsilon" }, { "math_id": 38, "text": "\\epsilon > 0" }, { "math_id": 39, "text": "\\epsilon" }, { "math_id": 40, "text": "\\epsilon \\in \\left] 0, 1\\right]" }, { "math_id": 41, "text": "\\boldsymbol{\\pi}^* \\in \\triangle\\left(\\mathcal{A}_1\\right)\\times\\triangle\\left(\\mathcal{A}_2\\right)\\times \\ldots\\times\\triangle\\left(\\mathcal{A}_K\\right)" }, { "math_id": 42, "text": "\\boldsymbol{\\pi}_{k}^* \\in \\bar{\\bar{f}}_k\\left( \\boldsymbol{\\pi}_{-k}^* \\right)" }, { "math_id": 43, "text": "\\bar{\\bar{f}}_k\\left( \\boldsymbol{\\pi}_{-k}^* \\right) = \\left\\lbrace \\boldsymbol{\\pi}_k \\in \\triangle\\left( \\mathcal{A}_k \\right): \\mathrm{Pr}\\left( a_k \\in f_k\\left( \\boldsymbol{a}_{-k} \\right) | a_k \\sim \\boldsymbol{\\pi}_k, \\boldsymbol{a}_{-k} \\sim \\boldsymbol{\\pi}_{-k}^* \\right) \\geqslant 1-\\epsilon \\right\\rbrace.\n" }, { "math_id": 44, "text": "\\boldsymbol{\\pi}^*" }, { "math_id": 45, "text": "\\mathrm{Pr} \\left( a_k \\in f_k\\left( \\boldsymbol{a}_{-k} \\right) | a_k \\sim \\boldsymbol{\\pi}_k^*, \\boldsymbol{a}_{-k} \\sim \\boldsymbol{\\pi}_{-k}^*\\right) \\geqslant 1 - \\epsilon.\n" }, { "math_id": 46, "text": "1 - \\epsilon" }, { "math_id": 47, "text": "\\boldsymbol{a} \\in \\mathcal{A}" }, { "math_id": 48, "text": " a_k \\in f_k\\left( \\boldsymbol{a}_{-k} \\right)" }, { "math_id": 49, "text": "\\boldsymbol{\\pi}^* \\in \\triangle\\left(\\mathcal{A}_1\\right) \\times \\triangle\\left(\\mathcal{A}_2\\right) \\times \\ldots \\times \\triangle\\left(\\mathcal{A}_K\\right)" }, { "math_id": 50, "text": " 1 > \\epsilon > 0" }, { "math_id": 51, "text": "\\boldsymbol{\\pi}^{\\star}" }, { "math_id": 52, "text": "c_k : \\mathcal{A}_k \\rightarrow \\left[ 0, 1 \\right]" }, { "math_id": 53, "text": "(a_k, a_k') \\in \\mathcal{A}_k^2" }, { "math_id": 54, "text": "a_k" }, { "math_id": 55, "text": "a_k'" }, { "math_id": 56, "text": " c_k\\left( a_k \\right) < c_k\\left( a_k' \\right)," }, { "math_id": 57, "text": "\\mathcal{S}" }, { "math_id": 58, "text": "\\boldsymbol{a}^{\\star} = \\left(a_1^{\\star}, a_2^{\\star}, \\ldots, a_K^{\\star} \\right) \\in \\mathcal{A}" }, { "math_id": 59, "text": " \\sum_{k =1}^{K} c_{k}\\left( a_k^{\\star} \\right) \\leqslant \\sum_{k =1}^{K} c_{k}\\left( a_k \\right)" }, { "math_id": 60, "text": " c_k " }, { "math_id": 61, "text": " k \\in \\mathcal{K}" }, { "math_id": 62, "text": " (a, a') \\in \\mathcal{A}_k \\times \\mathcal{A}_k" }, { "math_id": 63, "text": " c_k(a) \\neq c_k(a')" }, { "math_id": 64, "text": "\\mathcal{K}" }, { "math_id": 65, "text": "\\mathcal{K}_{\\mathrm{s}}" }, { 
"math_id": 66, "text": "\\mathcal{K}_{\\mathrm{u}}" }, { "math_id": 67, "text": "\\left( \\mathcal{K}, \\left\\lbrace \\mathcal{A}_k \\right\\rbrace_{k \\in \\mathcal{K}},\\left\\lbrace g_{k}\\right\\rbrace_{ k \\in\\mathcal{K}} \\right)" }, { "math_id": 68, "text": "g_k: \\prod_{j\\in\\mathcal{K}\\setminus\\lbrace k \\rbrace} \\triangle\\left(\\mathcal{A}_j\\right) \\rightarrow 2^{\\triangle\\left(\\mathcal{A}_{k}\\right)}" }, { "math_id": 69, "text": " k \\in \\mathcal{K}_{\\mathrm{s}}" }, { "math_id": 70, "text": " \\boldsymbol{\\pi}_k \\in g_{k}\\left( \\boldsymbol{\\pi}_{-k}\\right) " }, { "math_id": 71, "text": " k \\in \\mathcal{K}_{\\mathrm{u}}" }, { "math_id": 72, "text": "g_{k}\\left( \\boldsymbol{\\pi}_{-k}\\right) = \\empty. " }, { "math_id": 73, "text": " \\left( \\mathcal{K}, \\left\\lbrace \\mathcal{A}_k \\right\\rbrace_{k \\in \\mathcal{K}},\\left\\lbrace \\bar{f}_{k}\\right\\rbrace_{ k \\in\\mathcal{K}} \\right)," }, { "math_id": 74, "text": "\\mathcal{K}_{\\mathrm{u}} = \\emptyset" }, { "math_id": 75, "text": "g_k" }, { "math_id": 76, "text": "g(\\boldsymbol{a}_{-k}) = \\bar{\\bar{f}}_k\\left( \\boldsymbol{\\pi}_{-k}^* \\right)," }, { "math_id": 77, "text": "\\epsilon = 0" } ]
https://en.wikipedia.org/wiki?curid=68374180
6837693
Bass diffusion model
Mathematical marketing model The Bass model or Bass diffusion model was developed by Frank Bass. It consists of a simple differential equation that describes the process of how new products get adopted in a population. The model presents a rationale of how current adopters and potential adopters of a new product interact. The basic premise of the model is that adopters can be classified as innovators or as imitators, and the speed and timing of adoption depend on their degree of innovation and the degree of imitation among adopters. The Bass model has been widely used in forecasting, especially new product sales forecasting and technology forecasting. Mathematically, the basic Bass diffusion model is a Riccati equation with constant coefficients equivalent to Verhulst–Pearl logistic growth. In 1969, Frank Bass published his paper on a new product growth model for consumer durables. Prior to this, Everett Rogers published "Diffusion of Innovations", a highly influential work that described the different stages of product adoption. Bass contributed some mathematical ideas to the concept. While the Rogers model describes all four stages of the product lifecycle (Introduction, Growth, Maturity, Decline), the Bass model focuses on the first two (Introduction and Growth). Some of the Bass model extensions present mathematical models for the last two (Maturity and Decline). Model formulation. formula_0 where: formula_1 is the installed base fraction, formula_2 is the rate of change of the installed base fraction, i.e., formula_3, formula_4 is the coefficient of innovation, and formula_5 is the coefficient of imitation. Expressed as an ordinary differential equation, formula_6 Sales (or new adopters) formula_7 at time formula_8 is the rate of change of installed base, i.e., formula_2 multiplied by the ultimate market potential formula_9. Under the condition formula_10, we have that formula_11 and formula_12 We have the decomposition formula_13 where formula_14 is the number of innovators at time formula_15, and formula_16 is the number of imitators at time formula_8. The time of peak sales formula_17: formula_18 The times of the inflection points of the new adopters' curve formula_19: formula_20 or in another form (related to peak sales): formula_21 The peak time and the inflection points' times must be positive. When formula_17 is negative, sales have no peak (and decline from introduction). There are cases (depending on the values of formula_4 and formula_5) in which the new adopters' curve (which begins at 0) has only one inflection point or none. Explanation. The coefficient formula_4 is called the coefficient of innovation, external influence or advertising effect. The coefficient formula_5 is called the coefficient of imitation, internal influence or word-of-mouth effect. Typical values of formula_4 and formula_5, when time formula_8 is measured in years, have been reported in the marketing literature. Derivation. The Bass diffusion model is derived by assuming that the hazard rate formula_22 for the uptake of a product or service may be defined as: formula_23 where formula_24 is the probability density function and formula_25 is the survival function, with formula_26 being the cumulative distribution function. From these basic definitions in survival analysis, we know that: formula_27 Therefore, the differential equation for the survival function is equivalent to: formula_28 Integration and rearrangement of terms gives us that: formula_29 For any survival function, we must have that formula_30 and this implies that formula_31. With this condition, the survival function is: formula_32 Finally, using the fact that formula_33, we find that the Bass diffusion model for product uptake is: formula_34 
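As a concrete illustration of the closed-form expressions above, the short Python sketch below evaluates the sales curve formula_7 and checks numerically that it peaks at the analytic peak time formula_17. The parameter values (m = 100,000, p = 0.03, q = 0.38) are assumptions chosen only for this example, not estimates taken from any particular data set.

import numpy as np

# Illustrative parameters (assumed for this example, not fitted to any data).
m, p, q = 100_000, 0.03, 0.38   # market potential, innovation and imitation coefficients

def bass_sales(t):
    """New adopters per unit time, s(t) = m*(p+q)^2/p * exp(-(p+q)t) / (1 + (q/p)exp(-(p+q)t))^2."""
    e = np.exp(-(p + q) * t)
    return m * ((p + q) ** 2 / p) * e / (1.0 + (q / p) * e) ** 2

def bass_cdf(t):
    """Cumulative fraction of adopters, F(t) = (1 - exp(-(p+q)t)) / (1 + (q/p)exp(-(p+q)t))."""
    e = np.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

t = np.linspace(0.0, 25.0, 10_001)            # years since introduction
t_peak_analytic = np.log(q / p) / (p + q)     # t* = (ln q - ln p) / (p + q)
t_peak_numeric = t[np.argmax(bass_sales(t))]

print(f"analytic peak time: {t_peak_analytic:.3f} years")
print(f"numeric peak time:  {t_peak_numeric:.3f} years")
print(f"fraction of the market adopted after 25 years: {bass_cdf(25.0):.4f}")

Because the grid spacing is 0.0025 years, the numeric and analytic peak times agree closely; other (p, q) values, including the negative-"p" and negative-"q" cases discussed below, can be explored simply by changing the assumed parameters.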
Extensions to the model. Generalised Bass model (with pricing). Bass found that his model fit the data for almost all product introductions, despite a wide range of managerial decision variables, e.g. pricing and advertising. This means that decision variables can shift the Bass curve in time, but that the shape of the curve is always similar. Although many extensions of the model have been proposed, only one of these reduces to the Bass model under ordinary circumstances. This model was developed in 1994 by Frank Bass, Trichy Krishnan and Dipak Jain: formula_35 where formula_36 is a function of the percentage change in price and of other variables. Unlike the Bass model, which has an analytic solution but can also be solved numerically, the generalized Bass models usually do not have analytic solutions and must be solved numerically. Orbach (2016) notes that the values of p and q are not perfectly identical for the continuous-time and discrete-time forms. For the common cases (where p is within the range of 0.01-0.03 and q within the range of 0.2-0.4) the discrete-time and continuous-time forecasts are very close. For other p, q values the forecasts may diverge significantly. Successive generations. Technology products succeed one another in generations. Norton and Bass extended the model in 1987 for sales of products with continuous repeat purchasing. The formulation for three generations is as follows: formula_37 formula_38 formula_39 where formula_40. It has been found that the p and q terms are generally the same between successive generations. Relationship with other s-curves. There are two special cases of the Bass diffusion model: when "q" = 0 the hazard rate is constant and the model reduces to the exponential distribution, and when "p" = 0 the model reduces to logistic growth. The Bass model is a special case of the Gamma/shifted Gompertz distribution (G/SG): Bemmaor (1994). Use in online social networks. The rapid, recent (as of early 2007) growth in online social networks (and other virtual communities) has led to an increased use of the Bass diffusion model. The Bass diffusion model is used to estimate the size and growth rate of these social networks. The work by Christian Bauckhage and co-authors shows that the Bass model provides a more pessimistic picture of the future than alternative models such as the Weibull distribution and the shifted Gompertz distribution. The ranges of the p, q parameters. Bass (1969) distinguished between a case of "p" &lt; "q", wherein periodic sales grow and then decline (a successful product has a periodic sales peak), and a case of "p" &gt; "q", wherein periodic sales decline from launch (no peak). Jain et al. (1995) explored the impact of seeding. With seeding, diffusion can begin when p + qF(0) &gt; 0 even if "p" is negative, provided the marketer uses a seeding strategy with a seed size of F(0) &gt; -p/q. A negative "p" value does not necessarily mean that the product is useless: there can be cases wherein there are price or effort barriers to adoption when very few others have already adopted. When others adopt, the benefits from the product increase, due to externalities or uncertainty reduction, and the product becomes more and more appealing to many potential customers. Moldovan and Goldenberg (2004) incorporated the effect of negative word of mouth (WOM) on the diffusion, which implies the possibility of a negative "q". A negative "q" does not necessarily mean that adopters are disappointed and dissatisfied with their purchase. It can fit a case wherein the benefit from a product declines as more people adopt. For example, for a certain demand level for train commuting, reserved tickets may be sold to those who want to guarantee a seat. 
Those who do not reserve seating may have to commute while standing. As more reserved seats are sold, the crowding in the non-reserved railroad car is reduced, and the likelihood of finding a seat in the non-reserved car increases, thus reducing the incentive to buy reserved seating. While the non-cumulative sales curve with negative "q" is similar to those with "q" = 0, the cumulative sales curve presents a more interesting situation: when p &gt; -q, the market will eventually reach 100% of its potential, as for a regular positive value of "q". However, if p &lt; -q, in the long run the market will saturate at an equilibrium level of -p/q of its potential. Orbach (2022) summarized the diffusion behavior in each portion of the ("p","q") space and mapped the extended ("p","q") regions beyond the positive right quadrant (where diffusion is spontaneous) to other regions: regions where diffusion faces barriers (negative "p") and requires "stimuli" to start, and regions where resistance of adopters to new members (negative "q") might stabilize the market below full adoption. Adoption of this model. The model is one of the most cited empirical generalizations in marketing; as of August 2023 the paper "A New Product Growth for Model Consumer Durables" published in "Management Science" had approximately 11,352 citations in Google Scholar. This model has been widely influential in marketing and management science. In 2004 it was selected as one of the ten most frequently cited papers in the 50-year history of "Management Science". It was ranked number five and was the only marketing paper in the list. It was subsequently reprinted in the December 2004 issue of "Management Science". The Bass model was developed for consumer durables. However, it has also been used to forecast market acceptance of numerous consumer and industrial products and services, including tangible, non-tangible, medical, and financial products. Sultan et al. (1990) applied the Bass model to 213 product categories, mostly consumer durables (in a wide range of prices), but also services such as motels and industrial/farming products such as hybrid corn seeds. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{f(t)}{1-F(t)} = p + q F(t)" }, { "math_id": 1, "text": "\\ F(t) " }, { "math_id": 2, "text": "\\ f(t) " }, { "math_id": 3, "text": "\\ f(t)= F'(t) " }, { "math_id": 4, "text": "\\ p " }, { "math_id": 5, "text": "\\ q " }, { "math_id": 6, "text": "\\frac{dF}{dt} = p (1-F) + q (1-F) F = (1-F)(p+qF) = p - F(p-q) - qF^2." }, { "math_id": 7, "text": "\\ s(t) " }, { "math_id": 8, "text": "\\ t" }, { "math_id": 9, "text": "\\ m " }, { "math_id": 10, "text": "\\ F(0)=0 " }, { "math_id": 11, "text": "\\ s(t)=mf(t) " }, { "math_id": 12, "text": "\\ s(t)=m{ \\frac{(p+q)^2}{p}} \\frac{e^{-(p+q)t}}{(1+\\frac{q}{p}e^{-(p+q)t})^2} " }, { "math_id": 13, "text": "\\ s(t)=s_n(t)+ s_i(t) " }, { "math_id": 14, "text": "\\ s_n(t):= m p (1-F(t))" }, { "math_id": 15, "text": "\\ t " }, { "math_id": 16, "text": "\\ s_i(t):= m q (1-F(t))F(t) " }, { "math_id": 17, "text": "\\ t^* " }, { "math_id": 18, "text": "\\ t^*=\\frac{\\ln q - \\ln p}{p+q} " }, { "math_id": 19, "text": "\\ t^{**} " }, { "math_id": 20, "text": "\\ t^{**}=\\frac{\\ln (q/p ) - \\ln ( 2 \\pm \\sqrt { 3 }))}{p+q} " }, { "math_id": 21, "text": "\\ t^{**}= t^{*} \\pm \\frac{\\ln ( 2 + \\sqrt { 3 }))}{p+q} " }, { "math_id": 22, "text": "\\lambda(t)" }, { "math_id": 23, "text": "\\lambda(t) = {f(t)\\over{S(t)}} = p + q[1-S(t)]" }, { "math_id": 24, "text": "f(t)" }, { "math_id": 25, "text": "S(t) = 1-F(t)" }, { "math_id": 26, "text": "F(t)" }, { "math_id": 27, "text": "f(t) = -{dS\\over{dt}} \\implies \\lambda(t) = -{1\\over{S}}{dS\\over{dt}}" }, { "math_id": 28, "text": "{dS\\over{S[p + q(1-S)]}} = -dt" }, { "math_id": 29, "text": "{S\\over{p+q(1-S)}} = Ae^{-(p+q)t}" }, { "math_id": 30, "text": "S(0) = 1" }, { "math_id": 31, "text": "A = p^{-1}" }, { "math_id": 32, "text": "S(t) = {e^{-(p+q)t} + {q\\over{p}}e^{-(p+q)t}\\over{1 + {q\\over{p}} e^{-(p+q)t} }}" }, { "math_id": 33, "text": "F(t) = 1-S(t)" }, { "math_id": 34, "text": "F(t) = {1 - e^{-(p+q)t}\\over{1 + {q\\over{p}} e^{-(p+q)t} }}" }, { "math_id": 35, "text": "\\frac{f(t)}{1-F(t)} = (p + {q}F(t)) x(t)" }, { "math_id": 36, "text": "\\ x(t) " }, { "math_id": 37, "text": "\\ S_{1,t} = F(t_1) m_1 (1-F(t_2)) " }, { "math_id": 38, "text": "\\ S_{2,t} = F(t_2) (m_2 + F(t_1) m_1 ) (1-F(t_3)) " }, { "math_id": 39, "text": "\\ S_{3,t} = F(t_3) (m_3 + F(t_2) (m_2 + F(t_1) m_1 )) " }, { "math_id": 40, "text": "\\ m_i = a_i M_i " }, { "math_id": 41, "text": "\\ M_i " }, { "math_id": 42, "text": "\\ a_i " }, { "math_id": 43, "text": "\\ t_i " }, { "math_id": 44, "text": "\\ F(t_i) = \\frac{1-e^{-(p+q)t_i}}{1+\\frac{q}{p} e^{-(p+q)t_i}} " } ]
https://en.wikipedia.org/wiki?curid=6837693
68377413
Near-field radiative heat transfer
Branch of radiative heat transfer Near-field radiative heat transfer (NFRHT) is a branch of radiative heat transfer which deals with situations in which the objects exchanging thermal energy, and/or the distances separating them, are comparable in scale to or smaller than the dominant wavelength of thermal radiation. In this regime, the assumptions of geometrical optics inherent to classical radiative heat transfer are not valid and the effects of diffraction, interference, and tunneling of electromagnetic waves can dominate the net heat transfer. These "near-field effects" can result in heat transfer rates exceeding the blackbody limit of classical radiative heat transfer. History. The origin of the field of NFRHT is commonly traced to the work of Sergei M. Rytov in the Soviet Union. Rytov examined the case of a semi-infinite absorbing body separated by a vacuum gap from a near-perfect mirror at zero temperature. He treated the source of thermal radiation as randomly fluctuating electromagnetic fields. Later in the United States, various groups theoretically examined the effects of wave interference and evanescent wave tunneling. In 1971, Dirk Polder and Michel Van Hove published the first fully correct formulation of NFRHT between arbitrary non-magnetic media. They examined the case of two half-spaces separated by a small vacuum gap. Polder and Van Hove used the fluctuation-dissipation theorem to determine the statistical properties of the randomly fluctuating currents responsible for thermal emission and demonstrated definitively that evanescent waves were responsible for super-Planckian (exceeding the blackbody limit) heat transfer across small gaps. Since the work of Polder and Van Hove, significant progress has been made in predicting NFRHT. Theoretical formalisms involving trace formulas, fluctuating surface currents, and dyadic Green's functions have all been developed. Though identical in result, each formalism can be more or less convenient when applied to different situations. Exact solutions for NFRHT between two spheres, ensembles of spheres, a sphere and a half-space, and concentric cylinders have all been determined using these various formalisms. NFRHT in other geometries has been addressed primarily through finite element methods. Meshed surface and volume methods that handle arbitrary geometries have been developed. Alternatively, curved surfaces can be discretized into pairs of flat surfaces and approximated to exchange energy like two semi-infinite half-spaces using a thermal proximity approximation (sometimes referred to as the Derjaguin approximation). In systems of small particles, the discrete dipole approximation can be applied. Theory. Fundamentals. Most modern works on NFRHT express results in the form of a Landauer formula. Specifically, the net heat power transferred from body 1 to body 2 is given by formula_0, where formula_1 is the reduced Planck constant, formula_2 is the angular frequency, formula_3 is the thermodynamic temperature, formula_4 is the Bose function, formula_5 is the Boltzmann constant, and formula_6. The Landauer approach writes the transmission of heat in terms of discrete thermal radiation channels, formula_7. The individual channel probabilities, formula_8, take values between 0 and 1. NFRHT is sometimes alternatively reported as a linearized conductance, given by formula_9. 
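The frequency integral in the linearized conductance formula_9 can be evaluated numerically once a transmission function is specified. The Python sketch below is only an illustration and does not correspond to any particular NFRHT geometry: it takes the simplest possible case of a single, fully open channel with transmission equal to one at all frequencies, and checks that the integral reproduces the universal quantum of thermal conductance, π² k_B² T / (3h).

import numpy as np

# Physical constants (SI units).
hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
kB = 1.380_649e-23         # Boltzmann constant, J/K
h = 2.0 * np.pi * hbar

T = 300.0                  # temperature, K

def dn_dT(omega):
    """Temperature derivative of the Bose function n = 1/(exp(hbar*omega/(kB*T)) - 1)."""
    x = hbar * omega / (kB * T)
    return (hbar * omega / (kB * T**2)) * np.exp(x) / np.expm1(x) ** 2

# Frequency grid: the integrand is negligible far above ~kB*T/hbar, so truncate there.
omega = np.linspace(1e-6, 100.0, 400_001) * kB * T / hbar

# Single fully open channel: T(omega) = 1. Any other transmission function can be swapped in.
transmission = np.ones_like(omega)

integrand = (hbar * omega / (2.0 * np.pi)) * dn_dT(omega) * transmission
G_numeric = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(omega)))  # trapezoid rule

G_quantum = np.pi**2 * kB**2 * T / (3.0 * h)  # quantum of thermal conductance at T
print(f"numeric  G = {G_numeric:.4e} W/K")
print(f"analytic G = {G_quantum:.4e} W/K")

For a physical near-field calculation, the transmission function would instead be built from the channel sum formula_6, for example from the two-half-space expressions given below, and would depend on the gap size and the optical properties of the materials.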
Two half-spaces. For two half-spaces, the radiation channels, formula_7, are the s- and p-linearly polarized waves. The transmission probabilities are given by formula_10 where formula_11 is the component of the wavevector parallel to the surfaces of the half-spaces. Further, formula_12 where formula_13 is the Fresnel reflection coefficient at the interface between the vacuum gap (medium 0) and half-space formula_15 for polarization formula_14, formula_16 is the component of the wavevector in the vacuum gap perpendicular to the surfaces, formula_17 is the width of the vacuum gap, and formula_18 is the speed of light in vacuum. Contributions to heat transfer for which formula_19 arise from propagating waves, whereas contributions for which formula_20 arise from evanescent waves. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\nP_{\\mathrm{1 \\rightarrow 2,net}} = \\int_{0}^{\\infty}\\left\\{ \\frac{\\hbar \\omega}{2 \\pi} \\left[ n(\\omega,T_{1}) - n(\\omega,T_{2}) \\right] \\mathcal{T}(\\omega) \\right\\} d\\omega\n" }, { "math_id": 1, "text": "\\hbar" }, { "math_id": 2, "text": "\\omega" }, { "math_id": 3, "text": "T" }, { "math_id": 4, "text": "n(\\omega,T)=\\left(1/2\\right) \\left[ \\coth{\\left(\\hbar \\omega / 2 k_{b} T\\right)} - 1 \\right]" }, { "math_id": 5, "text": "k_{b}" }, { "math_id": 6, "text": "\\mathcal{T}(\\omega) = \\sum_{\\alpha}\\tau_{\\alpha}(\\omega) " }, { "math_id": 7, "text": "\\alpha" }, { "math_id": 8, "text": "\\tau_{\\alpha}" }, { "math_id": 9, "text": "\nG_{\\mathrm{1 \\rightarrow 2,net}}(T) = \\lim_{T_{1}, T_{2} \\rightarrow T} \\frac{P_{\\mathrm{1 \\rightarrow 2,net}}}{T_{1}-T_{2}} = \\int_{0}^{\\infty}\\left[ \\frac{\\hbar \\omega}{2 \\pi} \\frac{\\partial n}{\\partial T} \\mathcal{T}(\\omega) \\right] d\\omega\n" }, { "math_id": 10, "text": "\n\\tau_{\\alpha}(\\omega) = \\int_{0}^{\\infty} \\left[ \\frac{k_{\\rho}}{2\\pi} \\widehat{\\tau}_{\\alpha}(\\omega) \\right] dk_{\\rho},\n" }, { "math_id": 11, "text": "k_{\\rho}" }, { "math_id": 12, "text": "\n\\widehat{\\tau}_{\\alpha}(\\omega) = \\begin{cases}\n \\frac{\\left( 1 - \\left| r_{0,1}^{\\alpha} \\right|^{2} \\right)\\left( 1 - \\left| r_{0,2}^{\\alpha} \\right|^{2} \\right)}{\\left| 1 - r_{0,1}^{\\alpha} r_{0,2}^{\\alpha} \\exp{\\left(2 i k_{z,0} l \\right)} \\right|^{2}} , & \\text{if } k_{\\rho} \\le \\omega/c \\\\\n \\frac{4 \\Im{\\left( r_{0,1}^{\\alpha} \\right)} \\Im{\\left( r_{0,2}^{\\alpha} \\right)} \\exp{\\left(-2 \\left| k_{z,0} \\right| l \\right)}}{\\left| 1 - r_{0,1}^{\\alpha} r_{0,2}^{\\alpha} \\exp{\\left(-2 \\left| k_{z,0} \\right| l \\right)} \\right|^{2}}, & \\text{if } k_{\\rho} > \\omega/c,\n\\end{cases}\n" }, { "math_id": 13, "text": "r_{0,j}^{\\alpha}" }, { "math_id": 14, "text": "\\alpha=s,p" }, { "math_id": 15, "text": "j=1,2" }, { "math_id": 16, "text": "k_{z,0} = \\sqrt{(\\omega/c)^2-k_{\\rho}^{2}}" }, { "math_id": 17, "text": "l" }, { "math_id": 18, "text": "c" }, { "math_id": 19, "text": "k_{\\rho} \\le \\omega/c" }, { "math_id": 20, "text": "k_{\\rho} > \\omega/c" } ]
https://en.wikipedia.org/wiki?curid=68377413